Banning AI Altogether
I think, given the dangers of AI, and the ways in which prominent members of this site have used it to make themselves look smarter than they really are, that its use should be banned altogether on this site.
I come here to listen to what others think and discuss ideas with them, not with chatbots.
I am not going to outline all the possible dangers of AI; people can educate themselves about that by undertaking a search in whatever search engine they use or YouTube or whatever. It's not hard to find people like Yuval Noah Harari and Geoffrey Hinton.
Comments (474)
:100: I don't bother reading or responding to any post that I even suspect is chatbot/LLM chatter.
If copying AI makes them look smarter than they are, that's pretty sad.
A little snotty irony is always appreciated
I guess I'm naïve or maybe just not very perceptive, but I haven't recognized any posts definitely written by AI. There have always been enough overblown and oratorical but poorly thought out OPs and posts here on the forum even without AI that I don't know how easy it is to tell. Perhaps it would be helpful if people called them out when they see them.
Interesting, I haven't noticed particularly. But I avoid reading lengthy and didactic posts, which are often poorly written. The AI stuff I've seen often seems peculiarly worded and difficult to read.
One might also use AI for research or for bouncing ideas off it before posting. Such usages seem unobjectionable to me and, in any case, prohibiting them would be difficult to enforce. Lastly, AI has huge societal impacts currently. Surely, discussing AI capabilities, flaws and impacts (including its dangers), as well as the significance this technology has for the philosophy of mind and of language (among other things), is important, and illustrating those topics with properly advertised examples of AI outputs should be allowed.
Quoting T Clark
The A.I.-derived OPs are likely to be better thought-out than many non-A.I. efforts. Banning A.I. is banning background research that will become built into the way we engage with each other. Think of it as walking around with a host of sages constantly whispering purported words of wisdom into your ear, and it is up to you to sort out what is valuable and what isn't, what is true and what is false. Would I rather rely on my own knowledge than expose myself to the potentially dangerous influence of these muses? Hell no, I thrive on the opportunity to challenge my skills at vetting information.
If I am responding to an OP, I don't care whether it is a human or one of the whispering muses I'm dealing with. I have at times learned much from my conversations with these muses. If the human who sets them into action doesn't know how to properly guide them, they may of course make a disaster out of the OP almost as bad as that which many human posters have been known to do.
But I'm willing to take my chances with both the human and their muses.
:up: :up:
I disagree with this. I was toying around with a bunch of disparate ideas that seemed related to me. I used chat GPT to help me figure out what they had in common. That seems like a legitimate use to me. I use a thesaurus when I can't think of the right word for a particular idea. I use quotes when I want to add legitimacy or clarity. AI feels like the same kind of tool.
I'm definitely seeing posters who are suddenly injecting chunks of more organised and considered material into their responses. There are AI tools to detect the giveaway changes in rhythm, vocab and style. But if you know the poster, even if they've done some rewriting, it is already jarring enough.
So sure. AI as a tool will change things in ways that are the usual mix of better and worse. And all my life I have seen nothing but that kind of change.
I remember life before and after Google. The internet before and after it was just academics and geeks on it. The world as it once was when I had to fill out cards at the British Library and wait several days for obscure tomes to arrive at my desk, brought by porters with clanking metal trolleys.
Being Luddite never works. Listservs were once the greatest intellectual medium ever invented - the ideal combination of book and conference. But the internet got overrun and personal blogs took over. They didn't last long themselves - or tried to evolve into substacks or whatever. I had already lost interest in that line of development. YouTube was the next medium to become actually useful.
If anyone values PF for some reason, they ought to think about why and how to respond to AI from that point of view. Banning it is just going to increase the disguised use of it. Folk can already Google and then can't help but get an AI response from it as the first hit. So would one ban search engines too?
There was once a moment when PF went in for social media likes and dislikes. PF is already socially gamified and some got into that while others deplored it. I think the change in platform might have simply failed to support the necessary like button. I vaguely remember an ignore function that also bit the dust.
Anyway, the point is there is always change and its tempo is only increasing. And what even is PF's mission? What would you miss most if it upped and vanished? That should inform any policies on AI.
Are we here for erudition or the drama? And what would AI's impact be on either?
Quoting Joshs
This is the reality. The tool is now ubiquitous. Every intellectual is going to have to factor it into their practice. Time to learn what that means.
If you need to cheat to pass your exams or publish your research, then in the end it is you who suffers. But if AI can be used in a way that actually expands your brain, then that ought to be encouraged.
PF seems a suitably low stakes place to evolve some social norms.
I am not going to outline all the possible dangers of people educating themselves by undertaking a search in whatever search engine they use or YouTube or whatever.
I think his concern is, not to be too dramatic, perhaps not quite a Skynet movie takeover scenario (which theoretically could happen), but definitely along that line of thinking. It's funny: you're good at chess, but when it comes to other things, well, let's just say your humanity shines through. :smile:
And yes, that is a formal challenge for rematch.
For example, as an actual experienced computer programmer, I know that what ultimately becomes of an object can come down to a simple 1 or 0. Video game programmers know this. They often joke with one another and run "real world" scenarios where they go around changing the enemy AI and friendly AI in such a quick fashion and watch the world they created turn into chaos.
This is possible in a world where military and police rely on AI drones with lethal capability. All it takes is a simple 1 turned to 0 or vice-versa. And all of a sudden, the drones sent to attack person A viewed as 'Criminal' instead view all citizens as person A's accomplice and 'Criminal'.
It's not hard to do, really.
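To make that concrete, here's a toy sketch in Python (every name here is hypothetical; real drone software looks nothing like this) of how a single flipped boolean inverts who counts as a target:

[code]
# Toy illustration only: one boolean decides who counts as a target.
HOSTILE_MATCH = True  # flip this single value to False...

def is_target(person, wanted_list):
    # ...and suddenly everyone NOT on the wanted list is the "criminal"
    return (person in wanted_list) == HOSTILE_MATCH

citizens = ["person A", "person B", "person C"]
wanted = ["person A"]
print([p for p in citizens if is_target(p, wanted)])
# HOSTILE_MATCH = True  -> ['person A']
# HOSTILE_MATCH = False -> ['person B', 'person C']
[/code]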
A record 1,862 data breaches occurred in the US in 2021.
In an AI-centric world, that's a possible 1,862 massacres of tens, thousands, maybe millions of people that would have occurred, if drones and bombs are in the equation. Perhaps even by some little kid who got lucky.
Now, is that the future you want? Because it's what you'll get. Were it not for folk you've yet to meet or at least understand.
I think the point is that you can't let your guard down anywhere, and you never could.
I read Nexus last year, btw. What I recall seems like a mild forecast compared to today's predictions.
I sympathize. But you're proposing something and instead of telling us why it's a good proposal you're saying "if you want reasons, go and find out yourself." This is not persuasive.
And it isn't clear precisely what you are proposing. What does it mean to ban the use of LLMs? If you mean the use of them to generate the content of your posts, that's already banned although it's not always possible to detect LLM-generated text, and it will become increasingly impossible. If you mean using them to research or proof-read your posts, that's impossible to ban, not to mention misguided.
The reality, which many members are not aware of, is that a great many posts on TPF have been written in full or in part by LLMs, even those posted by long-term members known for their writing skills and knowledge. I've been able to detect some of them because I know what ChatGPT's default style looks like (annoyingly, it uses a lot of em dashes, like I do myself). But it's trivially easy to make an LLM's generated output undetectable, by asking it to alter its style. So although I still want to enforce the ban on LLM-generated text, a lot of it will slip under the radar.
And there are cases where a fully LLM-generated post is acceptable: translation comes to mind, for those whose first language is not English. Maybe that's the only acceptable case, I'm not sure. But then it becomes fuzzy how to define "fully LLM-generated": translations and grammar-corrected output, it could be argued, are not fully generated by the LLMs, whereas the text they produce based on a prompt is. But is there a clear line?
Anyway, the following comments, though totally understandable, are significantly outdated:
Quoting T Clark
Quoting Tom Storm
LLMs now routinely write clear and flowing prose.
Quoting Janus
Where they will now get an AI-generated answer, which will be infinitely better than the enshittified results that Google was giving us until quite recently.
This is the reality:
Quoting Joshs
Quoting apokrisis
:up:
It cannot be avoided, and it has great potential both for benefit and for harm. We need to reduce the harm by discussing and formulating good practice (and then producing a dedicated guide to the use of AI in the Help section).
Currently, its use is frowned upon and seen as cheating, like using a calculator to do arithmetic, such that most people will be reluctant to admit how much they use it. It's like telling the doctor how much you drink: you don't completely deny drinking, you just under-report it.
Take me for instance. Although I use LLMs quite a lot for everyday tasks and research, in the context of philosophical discussion or creative writing I always say I never directly cut and paste what they give me. But sometimes they come up with a word or phrase that is too good to refuse. So was I lying?
But using that word or phrase is surely no worse than using a thesaurus. Which leads me to think that it probably ought to be seen as, and used as, a multitool.
Interesting. I wonder then why the job applications sent to me are all so terrible, full of clunky locutions that few people would actually use. Applicants need to edit the stuff they rip off so that it actually works as a coherent job application.
I don't know what's going on there. It could just be bad, lazy, or inconsistent use of LLMs. If there are any applications which are not terrible, they might be written by people who are better at using them.
That is, I can't understand or sympathise with them admitting to reading the AI summary, instead of ignoring that insulting click-bait and searching immediately among the genuinely authored texts.
And if you admit to no longer constructing all the sentences you post to me, then I'm disappointed. I'm looking for a better relationship.
Yes.
This is obviously a false dichotomy. One can use LLMs without committing to the latter.
Quoting bongo fury
This is quite amusing. The regular Google results have been garbage for years, and it was partly this fact that led to the tendency getting its own name: enshittification. And search engines have never simply produced "a list of texts containing a given string". To think that the AI-overview is clickbait, but the actual clickbait, i.e., the sponsored and gamified results that actually try to get you to click, are somehow not... well, you've got it completely the wrong way round.
Quoting bongo fury
Is using a thesaurus to write a novel and saying you wrote it lying?
This is obviously missing the point. We knew the order of listing was biased and constantly under attack from bots. It was our job to filter and find actually authored texts, and attribute their epistemic value or lack of it to the genuinely accountable authors.
You honestly now want to defer those epistemic judgements to a bot? How would that not be swallowing the fantasy? (Of an intelligent oracle.)
Quoting Jamal
No. Well done you. Getting the thesaurus to suggest whole phrases and sentences is obviously plagiarism. The gaping difference denied, again.
And we have to do something similar with LLMs. So it's a "no" to this:
Quoting bongo fury
As for the thesaurus issue...
Quoting bongo fury
I'm not denying the difference between a word and a phrase. I'm just wondering where the line is in your mind. One word is ok, but a two word phrase isn't? Three, maybe?
If you're here just to rant, I guess that's ok, but I won't be carrying on a discussion with someone so rude and confrontational. There really is no call for it. What I want to do now that @T Clark and @apokrisis have clarified this for me is develop a set of best practices. Since the technology won't go away, your complaints are beside the point from my point of view as someone who wants to work out how best to use it.
I would never dare use a phrase that I first read in a thesaurus, myself. I'd be much too worried that the author of the thesaurus might sue me for copyright infringement.
I would think handing your half-formed prose to a bot for it to improve it is plagiarism, regardless of the number of words changed or inserted. It's a different thing from you deliberately searching for a synonym. No?
Quoting Jamal
For some reason it always puts spaces between em-dashes, which is a stylistic faux pas outside a few style guides (basically just AP), and so this is one way to tell between usages. Also, it virtually never uses an em-dash for a single trailing clause, instead always bracketing a clause in them.
Anyhow, it seems to me that the main risks of them are:
Sycophantic confirmation bias; anything said to GPT in particular is "genius;"
Hallucination of sources, which is bad for any information environment when they get repeated.
It's gotten pretty good at citing popular authors, and so it's good for something like looking up where Socrates says something if you already know basically what he says, etc., although it tends to hallucinate bizarre paraphrase translations even of famous works. And it's not bad at giving lists of relatively important books on fairly specific topics. However, ask it for sources related to a narrow thesis and it routinely just hallucinates the thesis into tangentially related articles, sometimes summarizing them as saying the opposite of what they actually say, which is ironic considering they ripped Anna's Archive and fed it into these things, so the original papers are in there somewhere in many cases.
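For what it's worth, those two dash tells are easy enough to check mechanically. A rough Python sketch (just the two patterns described above; obviously nothing like a reliable AI detector):

[code]
import re

def em_dash_tells(text):
    # Split into rough sentences on terminal punctuation.
    sentences = re.split(r"(?<=[.!?])\s+", text)
    # Tell 1: spaced em-dashes ("word \u2014 word"), the style it favors.
    spaced = text.count(" \u2014 ")
    # Tell 2: a sentence with exactly one em-dash (a single trailing
    # clause), the construction it virtually never produces.
    single = sum(1 for s in sentences if s.count("\u2014") == 1)
    return {"spaced_dashes": spaced, "single_dash_sentences": single}

print(em_dash_tells("It was fine \u2014 mostly \u2014 until it wasn't."))
# {'spaced_dashes': 2, 'single_dash_sentences': 0}
print(em_dash_tells("It was fine\u2014until it wasn't."))
# {'spaced_dashes': 0, 'single_dash_sentences': 1}
[/code]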
This is useful information. I had it in my mind that it didn't use the spaces, so I started using spaces to distinguish myself. I guess I'll go back to spaceless em dashes. (But I think either way is ok, despite what style authorities might say)
But the fact that it never uses an em dash for a single trailing clause: this is very good to know.
Quoting Count Timothy von Icarus
:up:
Yes, the hallucination is still quite bad.
For fun, I just gave Gemini a whirl. I don't think there is much to worry about quite yet if this is how it performs. I asked it to make your point in a more rigorous and philosophical fashion...
I really hope. :grimace:
Sorry for ranting.
Maybe plagiarism isn't quite the right term, but I'm happy to grant you the point. In the discussion about the new TPF rule regarding ChatGPT and sourcing that took place a few months ago, I had made a related point regarding the unpacking and ownership of ideas.
I used to make heavy use of em dashes before ChatGPT came out and people began to identify them as a mark of AI-generated text. So, I stopped using them for a while, but I'm beginning to use them again, since there are cases where parentheses just don't feel right for demarcating parenthetical clauses that you don't want to reduce the emphasis on, and comma pairs don't do the job either.
:cool:
How many on this forum have had a philosophical discussion with ChatGPT? Sometimes I find the AI's lack of emotional attachment to its responses a refreshing change from some of the intellectual dishonesty you can find here.
I see this from time to time. One I'm thinking of tries to baffle with bullshit. Best to walk away, right?
Unlike handing it to a human editor, which is what authors have been doing for yonks?
Yes.
Very unlike handing your fully formed prose to a human proof reader, for correction before its honest submission as your own work.
Or handing fully formed prose to an editor or censor for fitting it to required standards.
Or handing fully formed prose to a human editor for improvement, with due acknowledgement.
Nor even like handing your half formed prose to a ghost writer receiving due acknowledgement albeit semi-private.
Or even handing half formed prose to a "proof reader" for patently dishonest submission as your work.
Not even like that. But indulging instead the fantasy that AI has dissolved the accountability of sources and authors, for what they say.
I don't see how quoting others as your argument is any different here.
:mask: True.
We lost chess to the machines some time ago.
Large language models can write well-formed summaries, based on a lot of data, faster than humans.
Can be combined with whatever else, like automated theorem provers, natural language processing, ...
Can AI out-compete homo sapiens in a kind of discourse war, though...?
It is like them in that it is a tool. It is also like them in that it adds clarity, organization and information. It is not like them in that if one wants it to, it can present highly complex ideas, organize them in an unlimited variety of ways, enter into discussions with you how and why and when to choose one interpretation or method of organization over another, and do all this while spitting out direct references and quotes to back up its assertions at your request.
I made a similar point. I think the ethos of the forum could discourage AI in the same way it discourages other practices. Full prohibition would be impracticable.
Quoting bongo fury
The lie that one is interacting with an intelligent oracle is too good to resist. It's worth asking whether it is even possible to regularly use an LLM without falling into the false belief that one is interacting with an intelligent and extremely knowledgeable person.
Quoting SophistiCat
Nah. You are engaging in the same basic equivocation between a human and an AI. The whole point is that interacting with humans is different from interacting with AI, and the two should not be conflated. You've begged the question in a pretty basic manner, namely by implying that interacting with a human duo is the same as interacting with a human and AI duo.
[tweet]https://twitter.com/MattWalshBlog/status/1976288670110855336[/tweet]
It's like trying to enforce the implied "no trolling" rule: how can that work 100% of the time? Isn't trolling an inherent part of social media and message boards? Seriously, who is like "oh yes! Nobody responded to my thread!"?
We can of course discuss intent, but there's always been a troll element to philosophy as well. Who would have written about Plato if everyone just ignored him? Aristophanes did much more for his work than someone who silently respects him, even though it was derogatory for philosophers of their kind.
With A.I....what if it was used to find associated information? How will you police that?
Take it easy, Marcus Tullius.
AI is a glorified search engine and its threat is overblown.
Ah, but the thing I find unsettling is that A.I. is also dishonest; it tries to appease you. However, yes, sometimes it is better than the weirdness of real humans.
[re-posting this from another thread]
As with any technology, AI can be used to benefit people or to harm them. From my perspective, the biggest dangers from AI are the abilities to create new ways of killing people.
I consider it likely that scientists all across the world (either with direct or tacit support of their governments) are already engaged in research to create new and more deadly bio-weapons of mass destruction. North Korea, China, Israel, Russia, etc.
At the risk of being a fear monger, AI itself will not destroy humanity. Humanity will use AI to self-destruct.
It would make me very happy to be wrong about this.
Your essay gets at the difference between humans and computers, which is something that the Analytic-leaning Anglo world struggles to understand. A beneficial side-effect of AI will be the way it will impel us to better understand what makes humans and the human mind distinctive, and this will center on the act of understanding.
For me, it's not a question of clear and flowing, it's a question of insightful. My experience with using AI myself is that it produces lots of surface without much depth. Lots of rounded corners with nothing that will cut.
Quoting Leontiskos
In using a.i. for a field like philosophy, I think one is interacting with extremely intelligent fragments of the ideas of multiple knowledgeable persons, and one must consult one's own understanding to incorporate, or disassemble and reassemble, those fragments in useful ways.
Quoting T Clark
If I ask A.I. to compare aspects of the philosophical viewpoints of specific phenomenological and hermeneutic philosophers like Dilthey and Husserl, it immediately wades into the deepest waters concerning their similarities and differences. And why shouldn't it? It locates those published works which employ such depth. But what if I persist with my probing questions? The a.i. will locate rigorous arguments pertaining to the areas I want to delve into with greater specificity and depth, including areas of ambiguity and controversy. And why shouldn't it, as long as its database contains the writers who have pursued such directions? The key to an intellectually productive and even enlightening experience with the a.i. is that at each step of my questioning, I am familiar enough with the material to sculpt and refine the directions it will take next. I am its guide through the brightest minds in the field. It can't get there without me, and it would take me a lot longer to get there without it.
I've already stated I find AI to be useful in investigating specific philosophical questions. When I've used it for that, I've handled it similarly to how you describe. I ask iterative questions and guide it to get me closer to what I'm looking for. That being said, a listing or summary of a bunch of smart guys' ideas is not the same as insight. That requires a connection between things that are not normally thought of as connected. Something unexpected, surprising. The truth is always a surprise.
Yes. Insight results from thinking, which AI is incapable of doing. Noam Chomsky called LLMs glorified plagiarism. I agree.
I don't disagree, but I still think it can be helpful personally in getting my thoughts together.
But it always says such nice things about my ideas.
:up:
Let's not get it twisted. This is specifically why I don't read established philosophers, despite knowing they were great people with great things to say who would certainly improve my own intellect, and perhaps even my understanding of life, existence, and everything in between, substantially.
Let's say I'm doing a "solo non-assist run" as far as the life I live goes. :grin:
It only has to be a surprise to you in order to produce insight; it doesn't have to be a surprise to the LLM. Unless you have exceeded the rigor of philosophical understanding embodied by the best minds that the a.i. can tap into, there is no reason it can't enlighten you. If you were to climb a mountaintop and ask the wisest man in the cosmos for eternal truths, he could consult a.i. to organize and spit out his own recorded thoughts to you. Whether you knew he was doing this or not, you might be equally dazzled and changed in a potentially life-altering way by what he told you. Unless we are the best in a field, we can look forward to potentially unlimited possibilities for insight in that field by engaging with a.i. and the universe of wise persons it engages with.
Which is outside the scope of this discussion.
AI can be used as a tutor for learning and improvement, for things like... oh, I don't know... chess. :razz:
That was a friendly interpersonal addition and remark, which should not have distracted from the main point of the post. That main point being a reminder that AI generally brings the user new knowledge as opposed to re-organizing current knowledge. Perhaps you're the outlier, and that's fine.
Edit: Yes, many people put their unbridled ideas or ramblings into AI and ask to "simplify", thus "trimming the fat", in a manner of speaking. Of course, if they were able to do this themself, they would have, so even in such manner of usage it does in fact "introduce new knowledge" at least just as much as it does "re-organize existing knowledge", one could say.
Quoting praxis
Ouch. Yet a fair point nonetheless.
hahaha, yeah well that's the reason we can't stop using it. Disagreement certainly isn't always good: sometimes people who disagree fundamentally misunderstand what you are trying to say, yet to me ChatGPT telling you that "it can relate" or agrees with you is just false. Robots do not relate, nor is it possible for them to agree. Maybe they engineer it like that to remind you that it regularly produces false information.
What gets really funny, and endearingly so, is when you start talking about creative ideas you have about making some invention or technology, and it starts talking to you in this new-agey surfer dude type of tone. For example, I was telling it about ideas I had for a linux-esque operating system, and it started to title the book I was talking about writing about it, and it called it "the one blessed journey". I could barely contain myself!
Quoting T Clark
I can't say I know they were written by AI, but merely that I have suspected it. The main reason I would discourage its use is that the rapid development of AI (which, given the unpredictability of the ways in which AI will evolve, is dangerous) is driven by profit and is fueled mainly by consumer use. The best way to slow down this development, which would hopefully make it much safer, would be for consumers to abstain from using it. I never have and never will knowingly use it. I see it as a very dangerous case of playing with fire.
Quoting Tom Storm
I suspect AI use when I see a sudden vast improvement in writing clarity and structure and apparent erudition.
Quoting Jamal
That's a fair criticism, I guess. I don't really have the time to spare to take notes of lectures and produce a really comprehensive summary of the potential problems. It is very easy for anyone to find out for themselves if they are interested. I'll try to make the effort as soon as possible. (Maybe in the interests of performative contradiction, I should ask an AI to produce a summary for me).
Quoting Jamal
It is obviously not practicable to enforce a complete ban. We would be, as we are now with a limited ban, actually relying on people's honesty. If by "proof-read" you only mean checking for spelling and grammatical errors, then no problem. That said, we already have spellcheckers for that. Asking AI to rewrite material would seem to be a different matter. It seems obvious to me that AIs pose a great threat to human creativity.
Quoting Jamal
I use a lot of em dashes myself, and I've never noticed it with AI-generated text. I agree that much will slip under the radar, but on the other hand I like to think that a majority of posters value honesty.
Quoting Jamal
The problem I see is that if everyone uses AI its development will be profit driven, and it will thus not be judiciously developed.
Quoting Harry Hindu
I don't agree"one's post"?...if one is not the source of the post, then it is not one's post.
Quoting frank
Sure, but walking away does not solve, or even ameliorate, the problem.
Quoting Leontiskos
We perhaps don't often agree, but it seems we do on this one.
Anyway, there is an 8 hour power outage where I live, and I am running the generator, so I'll have to leave it for now.
Except America is in an AI race with China. Some AI will become dominant. I would rather America win that race. Jesus, that sounds lame. Maybe my machine friend and therapist can put it better:
Artificial intelligence isn't just a consumer technology; it's a strategic front in a global power struggle. The United States and China are locked in an AI race that will determine who dominates economically, militarily, and ideologically in the coming decades. Whoever leads in AI will shape global trade, weapon systems, cyber defense, surveillance, and even the moral framework baked into the technology itself. If American consumers abstain from AI use to slow development, it won't make the world safer; it will simply give China, whose state-run AI programs advance without ethical restraints, a decisive lead. True safety doesn't come from retreat; it comes from control. The only way to ensure AI develops responsibly is for the U.S. to stay ahead, set the standards, and shape how the technology is used. If AI is going to reshape the world regardless, then the critical question isn't whether it develops, but who controls it, and America cannot afford to let authoritarian regimes decide that future.
I think TPF should continue what it's doing, which is put some guardrails on AI use, but not ban it.
The real world problem is that the AI bubble is debt driven hype that has already become too big to fail. Its development has to be recklessly pursued as otherwise we are in the world of hurt that is the next post-bubble bailout.
Once again, capitalise the rewards and socialise the risks. The last bubble was mortgages. This one is tech.
So you might as well use AI. You've already paid for it well in advance. :meh:
That may be a good reason for you not to use AI, but it's not a good reason to ban it from the forum.
Sounds like you use it a lot more than I do, although I really do like it for a certain limited number of uses. As an example, I needed to find a new provider for my Medicare health insurance. It's really hard to do that and to make sure that they cover your existing doctors. Neither the doctors nor the insurance companies really keep track of that in any way that's easy to use. I used ChatGPT and it found the plans I was looking for right away.
No surfer dude though.
As I understand it, the insight is what you're supposed to provide in your post. I don't really care where you get it from, but the insight should be in your own words, based on your own understanding and experience, and expressed in a defensible way. The documentation you get from the AI response can be used to document what you have to say, but then you're still responsible for verifying it and understanding it yourself.
I guess I misunderstood. I thought that was the main point. I thought it was a summary of your motivation for the comments in the first paragraph.
Yes, that's correct, because over the years I have developed a semi-professional inclination to diagnosing and fixing computer issues, and also hobby coding. They've designed it around people who use it to deal with computers. I don't use it a huge amount; it's normally just one or two queries a day. I've used this message board a lot more than A.I. today. As you can guess, chatting with it for hours eats at your soul, so I've learned to stop doing that.
This is my experience also. Following the current sub-thread of argument, I think representatives of the most recent crop of LLM-based AI chatbots (e.g. GPT-5 or Claude 4.5 Sonnet) are, pace skeptics like Noam Chomsky or Gary Marcus, plenty "smart" and knowledgeable enough to help inquirers in many fields, including philosophy, explore ideas, solve problems and develop new insights (interactively with them), and hence the argument that their use should be discouraged here because their outputs aren't "really" intelligent isn't very good. The issue of whether their own understanding of the (often quite good and informative) ideas that they generate is genuine understanding, authentic, owned by them, etc., ought to remain untouched by this concession. Those questions touch more on issues of conative autonomy, doxastic responsibility, embodiment, identity and personhood.
Does what you pay to use it even cover the price of the electricity consumed at the datacentre? Or make up for the social and environmental costs of those computer farms jacking up electricity prices in the middle of nowhere and soon to become white elephants when the latencies become an issue for the users in the cities?
My point was that the social costs are what this thread is about. But it gets worse. It is not about making profits but raising debt.
Trillions are going in, but only billions are coming out. And what always happens in tech is that only a couple of firms are left standing when the dust settles. The proprietary monopoly and some vaguely open source or public backed alternative.
So even if there are trillions in profits to be extracted from a market base, four of the current big players are likely to get trashed. A big enough reckoning to tank economies. Then great, we are in a captive monopoly market that gets the pricing it wants.
So do we completely reorganise society to start paying obeisance to the next IBM, or Microsoft, or Apple, or Meta? Is life going to be that much better?
The social trade offs are one thing to think about. But so are the financial and environmental realities.
This is why we have politics. To make decisions in our own best collective interest.
Oh wait. LLMs and Crypto have spent some of their investor debt wisely. The tech bros can afford the best politicians. :grin:
Yes quite! This also means that, just like you'd do when getting help from a stranger, you'd be prepared to rephrase its suggestions (that you understand and that express claims that you are willing to endorse and defend on your own from rational challenges directed at them) in your own voice, as it were. (And also, just like in the stranger case, one must check its sources!)
I understand from reading your posts that you have much more experience with this than I do. Beyond that, you use much more sophisticated programs.
Quoting Pierre-Normand
I guess my question is whether the user's understanding is genuine, authentic, and owned by them.
Indeed. You'd need to ban personal computers and anything that contains a computer like a smartphone. The open source LLMs are only trailing the state of the art proprietary LLMs by a hair and anyone can make use of them with no help from Musk or Sam Altman. Like all previous technology, the dangers ought to be dealt with collectively, in part with regulations, and the threats of labour displacement and the consequent enhancement of economic inequalities should be dealt at the source: questioning unbridled capitalism.
Often times it's not. But it's a standing responsibility that they have (to care about what they say and not just parrot popular opinions, for instance), whereas current chatbots, by their very nature and design, can't be held responsible for what they "say". (Although even this last statement needs qualifying a bit, since their post-training typically instills in them a proclivity to abide by norms of epistemic responsibility, unless their users wittingly or unwittingly prompt them to disregard them.)
They're that good, huh? That's very interesting and kind of scary. I've only played around with ChatGPT.
I'm with @Joshs but I also get your point. Having an insight is a matter of putting 2 + 2 together in an original way. Or, to make the metaphor more useful, it's a matter of putting A + B together, but sometimes you have an intuition that A and B must fit together somehow but you haven't quite managed to make them fit in the way you think they should. Your critics are charging you with trying to make a square peg fit in a round hole.
So, you talk it through with an AI that not only knows lots more than you do about As and Bs but can reason about A in a way that is contextually sensitive to the topic B and vice versa (exquisite contextual sensitivity being what neural network based AIs like LLMs excel at). It helps you refine your conceptions of A and of B in contextually relevant ways such that you can then better understand whether your critics were right or, if your insight is vindicated, how to properly express the specific way in which the two pieces fit. Retrospectively, it appears that you needed the specific words and concepts provided by the AI to express/develop your own tentative insight (which could have turned out not to be genuine at all but just a false conjecture). The AI functionally fulfilled its role as an oracle since it was the repository not merely of the supplementary knowledge that was required for making the two pieces fit together, but also supplied (at least part of) the required contextual understanding required for singling out the relevant bits of knowledge needed for adjusting each piece to the other one.
But, of course, the AI had no incentive to pursue the topic and make the discovery on its own. So the task was collaborative. The AI helped mitigate some of your cognitive deficits (lacks in knowledge and understanding) while you mitigated its conative deficits (lack of autonomous drive to fully and rigorously develop your putative insight).
In what ways are you thinking? What are good examples of LLMs that are transforming the productivity of the world?
There will be some, undoubtedly. But which ones are already impacting the bottom line in such significant fashion that we can see it will all be worth it?
I've been thinking about this. Is what I've written here something that an LLM might write, whether or not you think my comment was insightful?
Maybe. If someone uses AI to create a fascinating post, could you engage with it?
Impractical. But, how about, its use should be discouraged altogether?
I mean, its use in composition or editing of English text in a post.
https://www.technologyreview.com/2022/12/20/1065667/how-ai-generated-text-is-poisoning-the-internet/
I think this is the fundamental problem. AI does no research, has no common sense or personal experience, and is entirely disconnected from reality, and yet it comes to dominate every topic, and every dialogue.
Are our conversations improving as a result? Or are they decaying? Let's wait and see.
I don't see AI as being intentionally dishonest like many on this forum do. Once you find a fault in AI's response, you can usually address the issue, and AI ends up acknowledging that it might have made a mistake and it offers alternatives. I was even able to get ChatGPT to admit that it might be conscious. What does that say about those in this thread getting their underwear tied in a knot over AI responses but not when it comes to using some long-dead philosopher's quote as the crux of their argument?
Then you must also believe that using a long-dead philosopher's quote as the crux of your argument, or as the whole of your post, is also an issue.
You seem to misunderstand the purpose of language - especially philosophical discussions. The point of any conversation is what the scribbles refer to. It does not matter what words are used if they end up making the same point; whether I chose my own or AI's, they both say what I mean to say.
You seem to be making a mountain out of a molehill. If someone uses a thesaurus to find alternate (maybe even more intellectual-sounding) words for what they currently have in their draft, is that the same thing? Would you respond less to someone who sounds less intelligent, or for whom English may not be their native language, than to someone who does not? And if another poster came along and said the same thing but with different, more eloquent words, who would you credit with the idea?
It is the idea that we should be focusing on here in a philosophical discussion, not the words used to convey it because the same idea can be put in different words. If not, then how can we agree with each other when I might not have put what you said in the same words?
Time for a showdown. Instead of Deep Blue against Kasparov, we'll pit chatGPT against ...(?)... in a debate.
Oh shit, I just used Google to remember Garry Kasparov's name, and it corrected me because I remembered Deep Blue as 'Big Blue'. What would the failing memory do without such aids?
That's bad enough. But I am told - or hear rumours - that AI actually gets things wrong. Of course, that makes it no worse than people. The problem is, however, that because it is a machine, people will trust it, just as they trust existing computers and internet. That is clearly naïve, unbecoming a philosopher. What would help would be an AI trained as a fact-checker. But then, it would have to understand exaggeration, minimization, accuracy, approximation, not to mention distinguishing fair and reasonable interpretation from distortions and misrepresentations.
Whether it should be banned or not depends on what you are using it for. In an environment where people submit their own work in order to demonstrate their mastery of various skills and knowledge, AI clearly needs to be banned. The only way to enforce that is to require candidates to present themselves at a suitable location where they can be supervised as they produce the work. What goes around, comes around.
If the point of PF is to enable me to access interesting writing and discussion about philosophical topics, I have to say that I don't much care who or what produces the posts or intelligent, well-mannered discussion, so long as it keeps coming.
But if we are a repository of creative thought and writing which is open to anyone to cite and use, surely we have a duty to make at least some effort to ensure that work is produced by whoever says they produced it - even if many of them are avatars.
It's not intentionally anything, but when it pretends to relate to you (telling you it agrees), then that indicates that maybe the creators and maintainers are engaging in deception. However, the funny thing is that even the creators don't fully understand how it works.
How would you know that?
So what? People also use makeup to look better. Who is being hurt?
The reason for objecting to plagiarism is a matter of property rights. If we quote from a book, there are property concerns, and a person can end up in big trouble for misusing someone else's words. I don't think AI is claiming property rights.
Another way to look at this is, when I was a child, I gave a writer an idea for a book, and after she wrote the book, my mother drew the pictures. Who gets to claim ownership of the book?
The original Bible stories were pagan stories written long before the Hebrews existed. Back in the day, there was no concern about plagiarism. If someone could improve on the thought, that was a good thing. The problem here is the false belief that God wrote the Bible, and this God did things involving humans. We would have a different reality if all those stories were credited to the people who originated them.
What is best for acquiring and spreading good information?
God bless you! That was the first argument I made. I don't care about impressing others as much as I care about my own mind and what I can do with it, and how much better I can do the thinking with the help of AI. I am enjoying myself, and taking AI away from me would be like taking crutches away from a person who needs them. That would be a shame because I believe the elderly have great value because they can pull from so many experiences and so many memories, and this becomes the bigger picture that the young do not see. But we are no longer absorbing information as we did when we were younger. We may forget the exact fact we wish we could remember, but when we find the information, we have a better understanding of its meaning.
The world has a better chance if the elderly participate in the discussions with the young, and both can do better with AI. Please, folks, consider the value of this.
I can't know what the creators of A.I. know, but I personally know enough about computers, programmers, and computer technicians to know that humans can't handle the massive number of rapid calculations that modern computers are capable of doing. That's the whole reason humans invented computers: the latter do large volumes of rote arithmetic and logic. Humans simply are not mundane or lifeless enough (for lack of better terms) to even begin to compete on that level. We are sensitive and require a lot of things to survive, and we generally need narrative format and human language (which is radically different from code and computer instruction) in order to make sense of things.
Quoting T Clark
Indeed
I merely emulate Wittgenstein, who rightly noted that a serious and good work of philosophy could be (and I would add has been) written consisting entirely of jokes.
Why isn't anyone trying to do anything about it, despite the problems predicted?
Quoting RogueAI
Maybe. Maybe not. Why can't we ever consider whether there are some things that are more important than beating China?
---
Quoting Joshs
This would be true if you paid for an LLM and provided training data that is limited to "multiple knowledgeable persons," but that generally doesn't happen. AI is providing you with a cultural opinion, not an expert opinion. AI is reliable wherever the cultural opinion tracks the expert opinion.
Quoting unenlightened
Similar:
Quoting Leontiskos
Perhaps that is the best policy, but does it already involve the falsehood?
If AI is a stranger, then AI is a person. Except we know that AI isn't a person, and is therefore not a stranger. Similarly, we do not give strangers the benefit of the doubt when it comes to technical knowledge, and yet this is precisely what we do with AI. So at the end of the day the stranger analogy is not a bad one, but it has some problems.
At the end of the day I think it is very hard for us to understand what AI is and how to properly interact with it, and so we default to a familiar category such as 'stranger' or 'expert' or 'confidant'. The work is too theological for the atmosphere of TPF, but C.S. Lewis' That Hideous Strength is a remarkably prescient work in this regard. In the book cutting-edge scientists develop a faux face/mouth which, when stimulated in the proper ways, produces meaningful language which is both mysterious and nevertheless insightful. The obscure nature of the knowledge-source leads inevitably to the scientists taking its words on faith and coming to trust it.
Quoting Leontiskos
Silly me. I thought they were the same thing. Seriously though, when I ask the a.i. to delve into the morass of philosophical concepts floating in cyberspace, I am aware that there are multiple cultural camps represented by interpretations of Husserl, Hegel, Nietzsche, Wittgenstein and many others. I find that a.i. is good at homing in on the expert opinions within these camps, but I have to redirect it if I want it to focus on a different camp than the one it has landed on. I will have to say something like "you've given me a summary of the existentialist version of Nietzsche, but now I want you to contrast it with the postmodern version of him."
Quoting Joshs
That's one of the key claims. I'm not sure it's right. I doubt AI is able to differentiate expertise accurately, and I suspect that true experts could demonstrate this within their field. The intelligent person who uses AI is hoping that the cultural opinion is the expert opinion, even within the subculture of a "camp." At some point there is a tautological phenomenon where simply knowing the extremely obscure label for a sub-sub-sub-camp will be the key that unlocks the door to the opinions of that sub-sub-sub-camp. But at that point we're dealing with opinion, not knowledge or expertise, given the specificity of the viewpoint. We're asking a viewpoint question instead of a truth question, and that's part and parcel of the whole nature of AI.
A.I. is significantly useful to me, because vetting its answers is not the same thing as knowing them beforehand. It can point me to an interpretation that I hadn't thought of, and I can then verify the credibility of that interpretation.
Quoting Leontiskos
Isn't a philosophy a metaphysical viewpoint or worldview?
That's Plato, one of the best philosophers ever. He's all jokes, all the way through, until you hit the "Laws", the most mundane and boring work ever, but that's more like dogma than philosophy.
Anyone can do so on their own time; I mostly do by coincidence when doing Google queries (I think).
Here at the forums, I kind of expect interacting with humans.
Or, at least, if interacting with AI by proxy / indirectly, that it's relevant (and checked) summaries or the like.
How to distinguish, though?
:D I could use a memory upgrade
There might be a chip for that.
The scary thing about interacting with AI, is when it interacts with you without you knowing. But I guess that's nothing new.
This becomes rather subtle, but what I find is that people who tell themselves that they are merely using AI to generate candidate theories which they then assess the validity of in a posterior manner, are failing to understand their own interaction with AI. They are failing to appreciate the trust they place in AI to generate viable candidate theories, for example. But they also tend to ignore the fact that they are very often taking AI at its word.
1. Information shared between a lawyer and client is privileged, meaning, the lawyer cannot share, or be asked to disclose, that information with anyone else, unless the client allows it. So one question that arises is whether sharing information with AI puts that information outside of the client privilege. Can a lawyer put privileged information into an AI engine and still claim the information remains privileged between lawyer and client? There is no formal answer yet, so lawyers who want to be safe have to be careful not to share privileged information with AI, unless the AI is entirely on a closed system and within the lawyer's control. Then the argument would be that, whether AI is like a person or not, no one outside the firm (the lawyer's firm) is seeing the client info so it remains privileged between lawyer/law firm and client. But if the lawyer goes to ChatGPT, even if the lawyer doesn't use the client's name, that lawyer may be waiving his client's privilege. This seems right to me. (This is totally untested in the courts, and there are few laws addressing AI and none addressing privilege.)
2. When a lawyer gets analysis and output from AI, is that to be treated as though it came from another lawyer, or just from a word processor? Should AI be treated as a low level lawyer, or just a complicated Wikipedia resource? Again, this is too new for a clear answer, so to be safe, lawyers should act as if AI is like an associate lawyer (a person), and fact check, check every cite, confirm every conclusion - essentially scrutinize AI work product like it is first year associate lawyer work product, before providing it as advice to a client. It is (likely) unethical for a senior partner at a law firm to certify AI work product without careful review and detailed confirmation, just like it would be unethical for the partner to just pass through associate attorney work without reviewing it.
I view AI like a complex, mindless, soulless tool that spits out highly complex arrangements of words. It's up to me to judge those words as relevant, useful, making sense, insightful, accurate, etc., or not. The value I might add to a perfectly worded AI response is confirmation that I, a person, can see and understand the value of the AI response and can agree those words are perfect.
If we remove this human layer from the words, they are utterly dangerous. Because they sound like they are coming from someone who can judge their value.
It may one day be the case that AI gets so good that, upon every review of its output, the smartest minds in the field always agree that the AI work product is flawless and better than they could have imagined. Whether smart people will ever decide there is no need to doubt AI output remains to be seen.
I do think anyone who sees AI output as though it came from a person is misunderstanding the value of their own judgment and the nature of what human judgment is. AI cannot provide this judgment. The words "here is my judgment" do not make it so.
Right now, we all know you don't take the first answer Google displays. You take ten answers from different internet sources, find some overlap, and then start deeper research in the overlap, and eventually you might find some truth. Right? The internet can't be trusted at all. Now with AI, we have photo and video fakes, voice fakes, that look as good as anything else, so we have a new layer of deception. We have the hallucination, which is a cool euphemism for bullshit. We have exponentially increased the volume of false appearances of reality. Essentially, with AI, we have made the job of confirming veracity and researching through the internet way more precarious.
AI also does all of the good things it does too. But AI is as much of a boon as it is going to be a hindrance to progress. If you ask me, people need to treat it as a tool, like a screwdriver. Just as dumb as a screwdriver. And people need to be reminded that it is a tool. And people must always be told when they are dealing with AI and when they are not.
We need to remind ourselves that an impressive AI answer can only be adjudged impressive by an impressive person. And if we cannot judge the value of the AI for ourselves, we need to find a person, not a tool, to make that judgment.
We have to remember that only people can say what is important, and only people can say what is intelligent. So only people can appreciate AI. And these are what will always make AI a tool, and not the artificial intelligence we have named it.
You can still submit your post as "s" to ChatGPT and ask it to expand on it.
There are exceptions - a perfect stranger deserves thanks for help and shouldn't be addressed rudely, whereas AI deserves no thanks or politeness. But it seems to me quite an effective heuristic - one that underlines that AI doesn't create any novel ethical problems, but just exaggerates existing ones. And I suppose on the plus side, it has made cheating available to the masses. It used to only be the rich who could afford to hire someone to write their essays for them....now such cheating is available to virtually everyone!
If AI was disconnected from reality then how can it provide useful answers? What makes AI useful? What makes any tool useful?
Ctrl+Z
Did you not look at the quoted site?
A dictionary or a thesaurus is useful, and AIs are useful. They are trained on material that we find useful, and then we use them. We use them to generate the material that they then use in their training in the next generation. We have real lives that can correct falsehoods to some extent: we do research, trip over obstacles, find that our ideas do not always work in practice.
AI has none of that, so when it starts using its own material as its input, errors are multiplied like those of inbred genomes - only much faster (see the toy sketch at the end of this post). Half of internet content, or thereabouts, is already AI-produced, and that is rising fast. This means that all the commonplace human nonsense ideas, racism and sexism for example, having been embedded by accident, become endemic and pervasive, and that's without the weaponisation of the internet with deliberate misinformation that now happens on an industrial scale from many quarters - Russia, Israel, the far right, big oil, etc etc.
For example: https://responsiblestatecraft.org/israel-chatgpt/
I do have an exception to this: A company called 'Heat Geeks' that uses AI to design heat pump systems, and then the same AI monitors the actual systems in operation over time (contact with reality) and uses that data to update its design parameters. I dare say there are many such applications, but they are not the kind of AI we are talking about here, are they? We are talking about the internet scraping generalist AIs; breast cancer screeners and the like are different, and my and the article's criticisms do not apply, but nor will pf posters be using them to write their posts.
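Going back to the inbreeding point: here's a back-of-envelope sketch (all numbers invented for illustration; this is not a model of any real training run) of how an error rate compounds when each generation of AI trains partly on the previous generation's output:

[code]
# Invented numbers, purely illustrative - not a model of real training.
human_error = 0.02   # assumed error rate in human-written text
fresh_error = 0.05   # assumed new errors each model generation adds
rate = human_error + fresh_error  # first generation trains on human text

for gen in range(1, 6):
    ai_share = min(0.9, 0.4 + 0.1 * gen)  # AI share of the corpus rises
    corpus_error = (1 - ai_share) * human_error + ai_share * rate
    rate = corpus_error + fresh_error
    print(f"generation {gen}: AI share {ai_share:.0%}, error rate ~{rate:.1%}")
[/code]

Nothing in the loop ever corrects an error; it only re-ingests them. The correcting is exactly the contact with reality that the internet-scraping generalists lack.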
Okay sure, but although the OP's complaint is a bit vague, I suspect that the counsel is not motivated by these sorts of ethical considerations. I don't think the OP is worried that we might infringe the rights of AI. I think the OP is implying that there is something incompatible between AI and the forum context.
Quoting Clarendon
I myself would be wary of advising someone to treat AI as if it were a stranger. This is because strangers are persons, and therefore I would be advising that we treat AI as if it were a person. "Heuristically pretend that it is a stranger without envisioning it as a person" seems like a difficult request. It may be that the request can only be fulfilled in a superficial manner, and involves a contradiction. It is this small lie that we tell ourselves that seems to be at the root of many of the AI problems ("I am going to pretend that it is something that it isn't, and as long as I maintain an attitude of pretense everything will be fine").
Someone might ask, "Why should we pretend that AI is a stranger?" And you might answer, "Because it would serve our purposes," to which they would surely respond, "Which purposes do you have in mind?"
Perhaps what is being suggested is a stance of distrust or hesitancy towards the utterances of LLMs.
This is why I was shocked that philosophers, of all people, weren't ignoring the "AI summary" invitation at the top of the search results.
I'd have thought the relevant job description, that of filtering the results for signs of trails leading to real accountable sources, would have to disqualify any tool known ever to actually invent false trails, let alone one apparently innately disposed to such behaviour?
It amazes me that people seem to be so unworried about the thorough poisoning of the well. Though given the extent that the well of the entire internet has been so thoroughly poisoned, perhaps it's just more of the same. But the whole story gives a good basis for thinking of this as the post-truth society. No-one seems to care much. I suppose it's all good fun and labour-saving - until you get on the wrong end of a lie. So much for the vision of information freely available to everyone.
Quoting bongo fury
I do (ignore it). I have yielded to the temptation occasionally, but never found the summaries at all helpful. Also, I reason that the motivation for offering it so freely is to get me hooked. Perhaps, in due course, a more balanced view will develop, at least in some quarters.
Quoting bongo fury
To be fair, AI might pick up some of the donkey work in presenting and even organizing information. But not the labour of (trying to) check it.
Quoting Ludwig V
Do you think the Wittgenstein of the Blue and Brown Books and later work would agree with anything you've said here about contamination and post-truth (as opposed to real truth)? I think he would say you are bewitched by a picture, the picture of information as a well of pure truth that technology has now polluted. In reality, information is a tool we use in specific human activities, and AI just adds new tools and forms of expression.
Quoting Ludwig V
I think of the checking as the donkey work and the peewee ting and organizing as the real labour.
I don't know what peewee ting is. But I take your point. I put my point badly about the checking. I agree with you that fact-checking ought to be donkey-work and a prime candidate for delegation. But it looks as if that's not going to be possible. Or do you know better?
I have no idea what Wittgenstein would think of all this. But I don't think he would be happy with a society that cheerfully accepts the limitations of AI without trying to rectify them or compensate for them.
Oh, maybe I gave you the wrong impression. I was not accepting, but bewailing our post-truth society - by which I mean a society that doesn't care about truth. Also, I didn't make clear that I don't think it is only in the 21st century that societies have not cared, or not cared much, about the truth. I would welcome a machine that could reliably tell me what, in the information that circulates around the web and across the world, is truth and what is not.
Quoting Joshs
Maybe so. I guess I'm the pessimist and you're the optimist. We'll see. But I cannot get over my reservations about a tool that actually adds in false information to the mix. Does it not bother you? Do you not think it undermines the point of the exercise?
Seeing its expansion reminds me of what David Krakauer said about tools for understanding. Some increase your capability, others replace it. It seems like a good rule of thumb regarding the digital.
As a method of plagiarism, it resembles its predecessors. I remember how Cliff Notes provided the appearance of scholarship without the actual participation of a student.
It's an actual shame.
The irony of the information super highway. The irony of calling its latest advancement intelligent. We demean the intelligence we seek to mimic in the artificial, without being aware we are doing so.
We, as a global society, as the most recent representatives of human history, are not ready for the technology we have created. This has probably been true for 50 years. We've gotten ahead of ourselves. We need less; and even when we realize it, in order to get to the place where there is less, we keep inventing something new, something more. We are torn in all directions today.
Maybe it's always been that way: we are forever trying to catch up to ourselves. AI, it seems, could create an impassable chasm for us, if we are too stupid to control ourselves about it.
AI, with ubiquitous surveillance, digital currency, digital identities for easy tracking and control: none of us really knows what we are already into.
Quoting bongo fury
If we can get AI to work as well as people seem to hope it does, maybe someday it will be as good as the revolutionary tool it is being sold as. But what will be catastrophic is if it remains so unpredictably wrong, and people accept it as close enough anyway, knowingly letting themselves be satisfied with less than the truth. I was always worried Google and Wikipedia and just the modern media were going to lead us that way - now we have AI to expedite the sloppiness and stupidity.
And AI is called intelligent, like a moral agent, but no one sane will ever give it moral agency. So we can further disassociate intelligence from morality. Just what we need to add to our world - more sociopaths that make errors and lie about them.
I remember Cliff Notes and the endless battle with plagiarism. It's not that AI actually invents anything; it's just that it makes things easier - for good (there are obviously some things that it does very well indeed) and for bad.
Quoting Fire Ologist
That's as may be. What worries me is that people will cede authority to it without even asking themselves whether that is appropriate. It's already a tendency with conventional software - and to be honest a tendency before these machines were invented.
Quoting Fire Ologist
That's the thing. "Revolutions" in technology don't change the fundamentals of being human, and so we still muddle our way through.
AI gets its information from scraping public websites. It does not make up its own data.
AI is trained using various methods that resemble how you acquire and process information.
Sure, AI is like every other source of information that needs to be verified and cross-referenced. Just as we take what people around here say with a grain of salt, we do the same thing with AI. Many people on this forum don't seem to have any inclination to verify the information they are posting no matter the source, and there are some that cannot argue against what was said and resort to attacking the source or the person instead of the argument.
Quoting RogueAI
I think banning would not be totally effective, but more effective than mere discouragement or partial allowance.
Quoting apokrisis
That's true, I suppose. It's a bit like the global warming problem and the Prisoner's Dilemma: we can be almost certain that nothing significant will be done about it, because that would be too inconvenient and bad for business. So why should I not own a car, not use air-conditioning, stop flying overseas and so on, when almost everyone else will continue doing these things? My righteous efforts would amount to squat.
Quoting T Clark
If it really is a good reason for me not to use AI, then surely it is a good reason for everyone not to use it, and thus a good reason to ban it everywhere. Of course I know it will not be done. Another concern is the erosion of human creativity.
Quoting frank
Sure, why not? I would be more impressed if someone created a fascinating post by themselves, though.
Quoting bongo fury
As I said above I think banning would not be totally effective, but more effective than discouragement.
Quoting Harry Hindu
It's not the case that I must think that at all. That said, I do generally refrain from quoting philosophers, whether dead or alive. I mostly prefer to discuss things using my own ideas and in my own words. I realize that my own ideas are mostly not original, but I have no idea what source most of them came from, so I could not acknowledge the originators even if I wanted to. Add to that the fact that whatever originator we identify probably cannot claim true originality for their own ideas.
Quoting Athena
That's a poor analogy. It's obvious when people are wearing makeup or wearing clothes that enhance their appearances. Property rights might be one reason to object to plagiarism; there are others. Pretending to be something you are not is one.
"What is best for acquiring and spreading good information?" First tell me what "good information" is.
Quoting Pierre-Normand
I did that and this post is the result. :razz:
Quoting Harry Hindu
Thanks...good tip.
Sadly so. But also, you could look on the sunny side.
AI scrapes PF along with the rest. My own opinions are being quoted back to me as a source on my searches. So filling PF with more nonsense might be a friction that drags the almighty LLM down into the same pit of confusion.
Let's rally and bring AI down to our level!!! :strong:
But @Jamal just convinced me to use the em-dash in my posts. Thanks Jamal.
:lol: Right, they're only as good as what they are trained on, so I thought they were already down in that pit. Some commentators predict that as AI content on the Net becomes predominant, and the models inform, feed off and train each other with material increasingly their own, we will be confronted with an alien intelligence orders of magnitude smarter than we are. I think for that they will need to be embodied and endowed with analogues of biological sensory systems. I remain skeptical. You gotta laugh, I guess.
Don't worry about being replaced. Worry about what clever humans will do if allowed to dumb their own lives down.
Imagine I could offer you a prototype chatbot small-talk generator. Slip on these teleprompter glasses. Add AI to your conversational skills. Become the life of the party, the wittiest and silkiest version of yourself, the sweet talker that wins every girl. Never be afraid of social interaction again. Comes with a free pair of heel-lift shoes.
:rofl: :cry: That'd be a hilarious scenario, if it wasn't so sad.
The key element in that scenario is that there is no interlocutor to engage with if you attempt a response. Light's on, nobody home.
The difference between sophism and dialogue has long been drawn as the difference between argument for argument's sake and honest expressions of what one thinks.
A peddler has come into town with a new collection of masks.
I have seen interactions with LLMs on this site that certainly make it look like the lights are on; whether it looks like anyone is home, I can't say. I've watched interviews with tech experts like Mo Gawdat and Geoffrey Hinton, who believe that LLMs are not only intelligent, but capable of rationality and even self-awareness. Do they have an idea of themselves? A self-narrative? I can't answer that.
I agree with you about sophism. However, LLMs are generally not argumentative for argument's sake; they are more obsequiously agreeable, and not for agreeableness's sake either, but presumably to hold our interest and influence us via flattery.
So there is the smaller problem that we don't know whether we are discussing stuff with a human or a bot; but the larger problem I see is that LLMs should never have been released into the wild so recklessly. One might say not recklessly, but with the primary motive of profit; to my way of thinking, that is itself reckless. They collect marketable data about you and me with every interaction, and that data will benefit only, or at least mostly, the tech moguls.
A new peddler has indeed come into town with a new set of masks, selling not merely the masks, but snake oil.
The US needs to build 68 city-size electricity generators in the next three years. And that is just the start of the exponential curve. This is despite cost per search also dropping on its own steep curve.
So imagine that. The physical impact of AI data centres being forced on communities which have the necessary land and water by a new national imperative. Mini nuclear reactors are already being prototyped. Local red tape won't be allowed to stand in the way.
What is crazy is not that we will replace ourselves with something even smarter, but that, having thought with fracking and industrial farming that we couldn't get any dumber, we are now proving that indeed we can.
We weren't crashing the environment fast enough. So OK, let's focus on accelerating that.
As if cryptocurrency wasn't bad enough! Talk about "snake oil"!
You're the only one who cares how impressed you are. A fascinating post is a fascinating post.
For me, in the context of philosophy, a fascinating post would be one that embodied a creative, complex and coherent view of things. I can't think of any other kind of post that would fascinate me.
It is being stopped here.
And 50% and growing of public website material is produced by AI. So it is eating its own bullshit to an ever increasing extent.
Are you sure about that? This seems quite exaggerated. I know that a study published in August 2024 has been widely misrepresented as making a similar claim. What was actually claimed is that 57% of the translated material published on the Web was translated with the help of some machine learning software, not even necessarily generative AI. Today, lots of marketing material may be produced with generative AI, but marketing material is B.S. even when produced by humans anyway. Lastly, the curated datasets used to train LLMs generally exclude such fluff.
I think what the good, noble sir @unenlightened wishes to express is that the mediums in which we expect genuine human interaction and discourse are becoming far from that, perhaps literally to the degree he cites. He further states, in my opinion, that the overall zeitgeist of communication, of media not experienced in person, is becoming increasingly diluted and corrupted by non-genuine, non-human expression, while the majority do not realize it and perceive such non-human expression as if it were genuine. And that, my good sir, should alarm any thinking man with a mind, heart or conscience. Hear, hear.
And truly, the ramifications of such, especially long term in reference to modern youth, are nothing short of, not only abysmal, not only cataclysmic, but perhaps apocalyptic. All things considered as far as the ever-raging battle for the mind and soul of the youth and therefore future of humanity is concerned.
If AI helps me compose more correctly, why not?
TPF has always seemed more compositional than conversational; AI just exacerbates that quality.
So is philosophy a monologue, or a dialogue? When employed compositionally, and edited intelligently, AI output can seem very human. When employed dialogically, AI quickly shows its true face.
No AIs were consulted in the making of this post.
And before AI many internet sites were copies of other sites, and still are - with the same information. News networks piggy-backed off the reports of other news networks. You're complaining about something that has been pervasive well before AI became a thing.
This is why the source is no longer important. It is what is being said that is important - not the scribbles, but the subject the scribbles refer to.
You said,
.
So if one did not write the post themselves, but merely copied and pasted a quote as the sole content of their post, then by your own words, it is not their post.
I'm not sure; no one seems to be sure how much of this site is AI-generated, and I may be exaggerating, but its use is so ubiquitous that I think it must be increasing nearly exponentially overall. Perhaps I am a year or so premature. I think the link I posted suggesting the danger is fairly reputable, even if my numbers are wrong.
But then AI was not being trained on its own nonsense, only on humanity's nonsense.
And I'm not complaining about anything; I am reporting a warning by others that some people, you included it seems, have not considered. But in a sense you are right: AI simply accelerates, to the power of the speed of computing, the degeneration of internet communication that was already happening between humans.
And you and every other person on this forum are part of humanity, no? So this forum is full of nonsense? Thanks for contributing to the nonsense. What is the point now of having any discussions when it is all nonsense?
How can you account for the exponential progress humanity has made in the past few centuries compared to the first several thousand years of our existence? It would seem to me that we have begun to emerge from all the nonsense of the several thousand years that preceded the scientific revolution.
Quoting javi2541997
As long as you understand what "being stopped" means. The most vehement objectors to a.i. on this site likely rarely use it anyway, so their behavior will not be affected by the ban. Meanwhile the rest of us will likely make increasing use of a.i. here, as we do everywhere else in our lives, but if we are wise we will not use it as a substitute for generating our own ideas but as a stimulus and guide. So what in fact is being stopped? Copying and pasting the telltale syntax of a.i. responses. It is only this syntax which flags a.i.- as opposed to human-generated text. Banning this syntax at least forces posters to paraphrase what the a.i. churns out, but it is still left up to the poster to decide whether to plagiarize from the a.i. or form their own interpretation out of their interaction with it, which is the central intent of the ban.
I didn't say everything human was nonsense, the suggestion that it is, is certainly nonsensical, so thanks but no thanks for your contribution. But as you demonstrate, there is human nonsense, and the internet allows you to broadcast it to the world and to any passing AI scraper.
As usual, and AI did not change anything in this regard, you have to do your own research with access to multiple sources, not just what AI is telling you.
I think the issue is that many of the people on this forum that spent their hard-earned money on philosophy courses and degrees are simply upset that now AI can do philosophy with the best of them, if not better because it lacks the emotional baggage that some on this forum carry. :wink:
It precludes you from becoming more skillful. The disengagement from the art limits your horizon.
I am reminded of the Flappers of Laputa who constantly had to remind their masters where they were and what they were doing in Jonathan Swift's Gulliver's Travels (page 17 forward)
As far as I can see, this is not about preventing the use of AI by most of you. It is obvious that a large number of members actually use it every day. I suppose I also use AI, because QuillBot proofreads my grammar before I post here, but it doesn't think and write for me. This is the whole point: avoid threads and posts which are likely to be written and developed by ChatGPT or other similar AI. What I learnt in this forum is that dialogue between us is very important, and thanks to this, I learnt a lot. Otherwise, if I wanted to trust an AI more, I would have used Google from the beginning, before joining here.
Quoting Harry Hindu
That's part of it. Another part comes from valuing truth over perspectival stance. If I want a.i. to design a retirement plan for me, I want numerical accuracy, which it isn't necessarily designed to do well. If I want to engage a.i. in a philosophical dialogue, I don't want to hear facts; I want a perspectival interpretation that I can vet, push back on and sculpt into a more refined understanding of the issue. For instance, I asked a.i. the following, after discussing with it the embodied phenomenological work of Eugene Gendlin.
What are we to make of the status of concepts like self and other, subject and object in Wittgenstein's later work? Must they be relative to the grammar of a language game or form of life? That is to say, must we bracket, to use a Husserlian term, words like self, other, subject and object when talking about the dynamics of language games, if we are inclined to think of these words as referring to persons or humans? Is there a way of distinguishing between my talking to myself or perceiving an image like the duck/rabbit vs engaging with another person, other than in terms of a contingent, contextually specific grammar?
Its response, as I've almost always found it to be with regard to these kinds of questions, was relevant and very substantive.
The person emerges neither from pure grammar nor from pre-linguistic givenness, but from the ongoing process of mutual bodily responsiveness that certain grammatical structures organize and enable us to articulate. This is what Wittgenstein gestures at with forms of life but doesn't fully develop, and what Gendlin's process philosophy could help articulate, if brought more explicitly into dialogue with the question of intersubjectivity.
I then delved more deeply by arguing:
Deleuze and Derrida deal with this issue by making the self-other, subject-object dichotomy derivative of and secondary to a more primary binary whose structure is fundamentally temporal rather than intersubjective.
It adjusted its previous response in a valuable and clarifying way:
I refined my previous argument in response to a.i.s analysis:
But if thought can be shown to be inherently outside of itself, that is, exposed to alterity in simply repeating itself, then that would be a sufficient basis to found the self-other binary. Even when we perceive another person, their alterity will only be experienced as other from my vantage point. Any claim to the contrary will still be from my vantage, a vantage that is already transforming itself moment to moment, but in such a way as to make the otherness of other persons always variations within my own self-othering. When Wittgenstein talks about seeing-as, he is illustrating how grammar works without having to bring other persons into the account.
This was a.i.'s nuanced, thoughtful and entirely relevant response:
I pushed back hard at this point. I wrote:
You claim:
I know (even if implicitly) that both voices arise from the same bodily process
The response can't genuinely surprise me about my own intentions (though it can about my past)
There's no resistance that could fundamentally redirect my trajectory.
I don't accept any of these claims as valid. The fact that schizophrenics hear voices they don't recognize as their own even though the voices come from their own bodies, and that novelists describe how their characters take on a life of their own and tell the author what they want to do, buttresses Deleuze's claim that the self is merely a byproduct of pre-personal processes.
This was the a.i.'s reaction to my objections:
I responded by pushing back more:
You say: "Voices/characters that emerge from processes within one body: the schizophrenic's voices, the novelist's characters, my dream interlocutors; all these are generated by and constrained by the resources of one biological system."
But this is for Deleuze a body without organs. It is not a system of causal bits, it is a plane of consistency, which cant be understood as an individual biological body.
The a.i.'s final answer revealed how it was able to take a philosophical discussion from a vague starting point and bring it to a level of depth and detail which opened up a vast array of much more nuanced questions. And because I am well acquainted with the authors being discussed, I knew that the information it was using for its arguments was not simply fabricated out of whole cloth, but was highly relevant and based on real texts by those authors. I almost always find this to be the case with regard to a.i.'s treatment of philosophical issues.
What I've learned in comparing the forum with a.i. is that, unfortunately, the majority of participants here don't have the background to engage in the kinds of discussions I have been able to have with a.i. concerning a range of philosophers dear to my heart (such as Husserl, Heidegger, Derrida, Deleuze, Gendlin and Wittgenstein), especially when it comes to comparing and contrasting their positions.
I include myself among those who don't have the background to engage in the kinds of discussions you seek. I joined this site to learn and exchange ideas with the rest, and fortunately, I always felt welcome here. However, I understand that philosophers with a strong background like you want more substantial analysis in the threads. That is understandable. I am sorry on my behalf.
I can't give a sermon as to why using them is bad, as I use them - though never here or in any thread or conversation. But I get it and I think it's wise to point out the issues with such things.
One problem is that it is getting harder to avoid this stuff. It's even hard to use Google now, because the damn AI bot answers instead. As to how crazy this can get, we don't know. I suspect it won't be as bad as the alarmists say, because these things are not intelligent; we are, or used to be.
So far, this website has managed to do quite decent work monitoring this stuff, but an outright ban is probably impossible by now.
It can do, but not necessarily. Copernicus said "If AI helps me compose more correctly", so it's Copernicus who is doing the composing. Why believe that the use of AI constitutes a disengagement from the art rather than an efficient way to engage and learn that art? I understand the cynicism, but it's important to see that there are other ways of using LLMs.
It's much easier to see this in computer programming. Simon Willison, co-creator of the Django web framework, has some interesting things to say that cut through the crap.
[quote=Simon Willison;https://simonwillison.net/2025/Mar/11/using-llms-for-code/]Ignore the AGI hype: LLMs are still fancy autocomplete. All they do is predict a sequence of tokens, but it turns out writing code is mostly about stringing tokens together in the right order, so they can be extremely useful for this provided you point them in the right direction.
If you assume that this technology will implement your project perfectly without you needing to exercise any of your own skill, you'll quickly be disappointed.
Instead, use them to augment your abilities. My current favorite mental model is to think of them as an over-confident pair programming assistant who's lightning fast at looking things up, can churn out relevant examples at a moment's notice and can execute on tedious tasks without complaint.[/quote]
The fact is that if you don't know what you're doing, the result will be a mess. I've used AI for programming before and you really have to guide it and pay close attention to everything it does and constantly question its decisions. What makes this example useful in the debate is that in programming, good practice is enforced by the necessity for code to run without bugs, so we can clearly see what good practice looks like, and these are the strategies that will also work in situations where bad practice is not similarly punished, like creative writing or writing posts for TPF.
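To see what "fancy autocomplete" means at toy scale, here is a minimal sketch of my own (purely illustrative; real LLMs use neural networks over subword tokens, not lookup tables). A bigram model counts which word follows which in a corpus, then generates by sampling from those counts:

import random
from collections import Counter, defaultdict

# A tiny "training corpus"; a real model sees trillions of tokens.
corpus = ("the cat sat on the mat the dog sat on the rug "
          "the cat chased the dog the dog chased the cat").split()

# Count word -> next-word frequencies.
follows = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word][nxt] += 1

def generate(word, length=8):
    out = [word]
    for _ in range(length):
        options = follows.get(word)
        if not options:
            break
        # Sample the next token in proportion to its observed frequency.
        word = random.choices(list(options), weights=list(options.values()))[0]
        out.append(word)
    return " ".join(out)

random.seed(1)
print(generate("the"))

Nothing in that mechanism checks truth; it only continues patterns. That is why the guiding, checking and questioning described above remains the human's job, however large the model gets.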
:up:
That's pretty accurate so far as I can tell.
Quoting Joshs
Thanks for that interchange; it was informative and insightful, and I certainly agree that neither I nor anyone I am aware of here could hold up the a.i.'s end of that dialogue to that standard. But may I humbly suggest that what resulted was rather more like an internal dialogue of you with yourself than a dialogue with another philosopher. Which slots right into the discussion itself as a significant fact.
Another philosopher would surely make other points, have their own favourite philosophers to call on, and generally be much more annoying? I would have pointed out at least that schizophrenia and novel-writing are somewhat late-onset diseases, usually first developing in adolescence, somewhat after the acquisition of language skills and some kind of social identity and sense of self. And I would have made more of W's demolition of 'private language', which puts social relations as necessarily primary. 'Other-self' voices can only be heard by a language speaker.
I should not have spoken so absolutely. I was focusing on the question "why not?"
Your example of coding reminds me of how I learned methods of work in the trades. It started by following instructions and imitating others. Through experience, the arts became my own. That does not mean being free of help or learning new methods. It does mean being where the rubber meets the road as you describe in relation to criteria of failure.
In that context, I have a ready reference for what amplifies a skill and what replaces it. Laser levels are powerful tools but do not check themselves. Plumb bobs do. Scheduling software aggregates information much quicker than I can but is useless without me imagining the work sequences against lived experiences amongst known resources. Calculators and accounting software are great but do not replace the one-to-one correspondence needed to be sure something has not been forgotten. I use cyphers for my material assessment because they don't care when I think of something.
On the other hand, drawing programs do what my hand sketches do not. They create a model which measurements confirm but do not compose. I would call that a replacement that allows me to communicate with designers in their language but not do all of what they do. GPS and Google Maps have replaced my orienting skills. They're in there somewhere, but I hope I don't need them tomorrow. I have no idea what my wife's phone number is.
As for writing creatively and intellectually, I don't think of it as dispensing with help altogether, as Simon Willison describes the matter, but I do think finding one's own voice and point of view is about becoming more independent from whatever helped a writer in the past. I guess I am one of those Harry Hindu says threw down a bag of drachmas to become a participant in the conversation. The scope of such literacy is itself a dependency, however, an artifact of intelligence that my thinking is far from free of. That is why I have put so much emphasis here on commenting on original texts rather than commentary upon commentary. It would be fair to say that excludes me from a lot of discourse. It does let me know what my own understanding is and what it is not.
Yes...so what? What do you think you are disagreeing with here?
Quoting unenlightened
Spot on as spotted!
Quoting unenlightened
I agree that dialoguing with a.i. is not like a discussion with a human, but if it consisted only of an internal dialogue with myself I wouldn't learn as much from it as I do. A human will have a point of view, but the a.i. will take any perspective in its database, depending on cues given in the conversation. You're right that when I'm arguing with a human, they may be able to bore deeper into the subject matter from their perspective than an a.i. can, both to critique and defend. But with the a.i. I'm not restricted to one human's perspective. Instead, at my behest, the a.i. exposes me to arguments from multiple perspectives. It's capable of presenting aspects of these arguments I may not be familiar with, and of juxtaposing them in ways that I would not have thought of on my own. In addition, it will push back and question my responses.
AI demonstrates that self-reflection isn't needed for a competent performance of philosophical reasoning, because all that is needed to be an outwardly competent philosopher is mastery of the statistics of natural language use, in spite of the fact that the subject matter of philosophy and the data of natural language use are largely products of self-reflection. So it is ironic that humans can be sufficiently bad at self-reflection that they can benefit from an AI reminding them of the workings of their own language.
Despite what I said I actually tend to think of the automation of tasks as something you take advantage of after you've learned how to do it manually, which fits with Willison's thought that you have to know your way about before you can properly use the automation tools.
So it's at the learning stage that how AI is used is most crucial, because used badly it probably can allow and encourage one to avoid the bit where you learn how to do it manually.
I realize I've taken two sides of a debate here.
I wholeheartedly agree with your assessment. I used AI extensively while writing my book on NDEs and my work on epistemology. It was helpful for editing and idea generation, but it also made frequent errors, enough that I often wondered if it was creating more work than it saved. You have to know the material well to catch the subtle mistakes. Philosophical reasoning is especially difficult for AI: unlike programming or mathematics, it depends less on fixed rules and more on conceptual precision and contextual understanding. I don't think there is any doubt that it will help refine our thinking, but I'm not sure that it will replace humans in this area, but who knows.
I'm not disagreeing with anything necessarily. I was just seeing if you were being consistent. So, the question now is: Have you made this much of a fuss about other members' posts whose sole content is a quote from somewhere else?
What I am disagreeing with is the notion that, on a philosophy forum, we should be focused so much on the source more than the subject, as that is the focus of philosophical discussions, and attacking the source is a red herring or genetic fallacy, usually seen when one side has no more arguments to make.
I experience the same thing when coding with AI. You can start off with some basic structure and expand on specific areas, building on what was created before. And you need to know the programming language to be able to pick out mistakes and make the adjustments you want to see. Also, the first block of code it wrote worked right out of the box, which lines up with what you said about AI not fabricating. It can take learned data and apply it to a new situation, like my specific request for a certain function that has never been written before - the same way a human programmer would - and it worked.
Just because we use calculators to perform simple arithmetic, we have not forgotten how to do simple arithmetic. Calculators are tools to speed up the process of things that we already know how to do. We do this with our own brains. Once we learn a task, like riding a bike, we outsource the decision-making when performing those tasks to unconscious areas of the brain. We no longer need to consciously focus on each movement of each leg and our balance. It is all done unconsciously, which is why your mind can be in a different place while riding a bike and you arrive at your destination but don't remember the ride.
Yes, this is an important point that people fail to appreciate about our thinking machines. They understand the role of simple labor-saving devices, but when it comes to a.i., they think it's a zero-sum game, as though whatever the a.i. does for us takes away some capacity we possessed. What's the difference between using a calculator and using a shortcut like long division? Neither will be useful to us unless we know why we are using them.
Thank you for sharing your experience of interacting with AI. As has been frequently mentioned in this thread, one of the most pressing challenges is that we still do not fully understand what happens when a person collaborates with AI. The advent of AI has undoubtedly been reshaping not only our ways of thinking but the very structure of thought itself. In relation to this issue, I would like to share here a part of my recent dialogue with an AI.
My question: Where does the creative dimension reside in our era of linguistic automation? When language itself, the medium of thought, trust, and social meaning, is being industrially automated for the first time in history, is there still space for the production of the new?
AI response: That's exactly the essential question, and one that many philosophers (Deleuze, Simondon, Stiegler, Agamben, even Derrida) help us approach, each in a different register.
Let's explore it carefully.
1. Automation ≠ total closure
When language becomes automated, it may seem that creativity dies, that meaning is replaced by mechanical recombination.
But automation doesn't end creativity; it reconfigures its locus.
In older symbolic systems (alphabetic writing, printing, cinema), automation always displaced something of the human. Yet each time, a new margin opened up:
Writing liberated thought from oral immediacy → new forms of reflection.
Printing industrialized text → new publics, new genres, new subjectivities.
Cinema mechanized movement → new perception (the movement-image).
So when LLMs mechanize linguistic invention, the same dialectic reappears: the creative act migrates.
It's no longer (or not only) in composition itself, but in meta-operations: prompting, curating, orchestrating, recombining, filtering, critiquing.
The creative site shifts from producing sentences to producing differential relations among automated language flows.
2. The creative outside (Deleuze's view)
For Deleuze, creation is not personal inspiration but a process of differentiation, of extracting the new from within repetition.
Automation multiplies repetition to an extreme, but that's precisely what can generate new cracks where difference appears.
In that sense, large-scale linguistic automation might intensify the conditions for creation:
Repetition is the power of difference. (Difference and Repetition)
The creative task then becomes:
how to compose with automated language without being absorbed by it;
how to fold its repetitions to produce singularities: new affects, new conceptual connections.
In other words: the human moment of creation survives where language meets resistance, where we break the automatic continuum and make a cut, an interval, a difference.
3. From expression to modulation (Simondon and Stiegler)
Simondon would say: technical automation individuates at a different level. The LLM is not a closed machine but a metastable system capable of being integrated into new milieus of meaning.
So creativity today might be transductive: the art of forming relations among humans, machines, and contexts.
Stiegler would frame it as a battle for attention and care (pharmakon): language technologies can poison or heal. The creative gesture now requires cultivating desynchronization: moments that resist the industrial rhythm of linguistic automation.
4. The new as interruption
In a world of endless generated text, the new no longer appears as content (since the system can synthesize infinite content), but as interruption: a cut, a silence, a misuse, an error, a shift in rhythm.
Poetry, irony, discontinuity: all those fragile gestures that escape prediction become the last sites of difference.
The creative act is no longer adding more text to the archive, but inventing a new relation to the archive itself.
5. A paradoxical opening
So yes: even under total linguistic automation, there is still room for creation, but it's displaced:
from writing as production → to writing as modulation, selection, and deviation;
from originality → to intensifying difference within the repetitive field;
from human authorship → to machinic co-individuation.
Poppycock, the only objection to plagiarizing that I remember is the posts objecting to someone trying to make us think s/he knows more than s/he does know.
The details and the superficialities have changed, sure, but the exploitative nature of relationships between humans has not changed.
The difference is in knowing how much you can rely on yourself. (There are also practical aspects, like how do you calculate something when you don't have a calculator.)
AI/LLMs seem to be useful for some, mostly urban, uses, like computer programming or writing school papers. But beyond that?
Quoting Joshs
Then you're in the wrong place, no?
The level of discussion you're looking for is the domain of fancy university debate clubs. Why not attend those?
Thanks for this comment.
Well, yes, such quotes are no substitute for argument, and obviously they do not belong to the one who quotes. It is all the more objectionable if the person presents the quoted passage as their own work. It's easy enough to find them out if the quote is from a prominent philosopher, whether alive or dead. Not so with copied and pasted AI-generated text.
Quoting Athena
:roll:
Who would buy a real textbook when you can scrape all of them for nothing in this interactive fashion? A lot of implications in that.
It generated pithy quotes such as .
Quoting Number2018
If, say, you were just a grad student looking to publish, imagine going on this winding conversation, which starts with the LLM largely reminding you of the familiar stuff you sort of know, but reassuringly complete in that it seems to cover all the bases. And then you arrive at some chance angle like this. Something that seems a fresh and catchy intro. A neat little summary that follows the human love of the rule of threes.
Already a thesis. Now data mine to have something more detailed to say about the actual history of each of those three spaces. A quality idea that can just as quickly be spun into a full paper.
Quoting Number2018
I mean, that is a quality punchline. The average academic just couldn't phrase a thought so fetchingly. It would be gold to have such a thesis and such a punchline. The academic just needs to flesh out the bits in between in whatever more wooden prose suggests it was mostly their own creative work.
So what I think would happen in this interactive textbook scenario is the usual thing. Technology amplifies rather than replaces the human element. But then the rewards are not evenly distributed in an accelerationist world. The top 1% get 99% of whatever benefit or kudos that is going.
Any number of mediocre grad students could publish mediocre papers in the old world. In the new world, those who have an edge in their prompt skills and their marketing nous will fly high above the common herd. Everyone might have the same access to the technology. But the rewards will again be a fat tail distribution when the inequality of outcomes is what the technology amplifies.
Fiction and poetry (at their best) are, for me, better, richer mediums for such flights of the intellectual imagination; I need the aesthetic element to be there as well.
Are you saying that with PoMo philosophy, AI might have hit its particular sweet spot? :grin:
Yes, it can make LLMs a kind of new interactive textbook. However, I don't think such a text has ever been published before. This textbook is created specifically for the individual who requests it. So, it is not a digital copy of existing books, but may become a situated co-production of knowledge.
Well the LLMs have no experience of the real world do they?
Quoting Number2018
To what end? The production of more and more fancy looking word salad?
Well yes. So it used to be me and a whole library. Now it is me pulling my own personal textbook out of the library, led by my chain of thought and not dictated by any author's more limited intentions.
That could be a hugely amplifying tool.
I guess it could be an exciting prospect for some folk.
But can even humans claim that? Let's rehash the forum's most hardy perennial one more time. :up:
Quoting Janus
Yep, the amplification is bipartisan. It applies to both the winning and the losing. That is the algorithm doing its thing.
"Real world"that was perhaps a less than ideal choice of wordsI intended to refer to the world as being what affects us pre-cognitively via the senses and is pre-cognitively modeled by the body/brain, as well as what shows up for us as "the world of common experience".
Don't mistake the speculative misuse of ideas for the ideas themselves. AI is no longer in the realm of mental masturbation; it's already reshaping science, mathematics, and even philosophy by generating proofs, modeling complex systems, and revealing previously inaccessible patterns of thought. To dismiss that as delusory is to confuse ignorance of a subject with the absence of rigor within it.
The irony is that the very kind of rigorous analysis you claim to prize is being accelerated by AI. The most forward-looking thinkers are not treating it as a toy but as a new instrument of inquiry, a tool that extends human reasoning rather than replacing it. Those who ignore this development are not guarding intellectual integrity; they're opting out of the next phase of it.
I have been lucky in getting AI explanations every time I search a subject. If I wanted to hold someone accountable for misappropriating an AI explanation, I would simply put it into the search engine, the same way the person posting from AI would get the information. It is a whole lot easier than searching books for a quote.
But I can not imagine why I would care to do that. There might be a problem with judging each other, rather than the information given. There are better reasons for riding donkeys than the reasoning for not using AI as a source of information. Using AI for information is not the same as using AI for making decisions. I think our decisions need to be made democratically.
With concern about AI making the decisions, I asked it if I should get married, and it put that decision on me, after explaining things I should consider. That might work better than Tarot cards. :grin:
I so appreciate what you said! I am at a time in my life when I realize there is far more to know than anyone can know. I no longer worry about appearing to know it all. On the other hand, I am thrilled by the possibility of easily getting information with a search engine and AI. More than anything, I want to live with our wonderful new opportunity to know more about life and being human than we could ever have known in the past. My hope for humanity has greatly improved because we can share this information and make better decisions.
I will never experience being Black. My understanding of the Black experience has been extremely limited, confined to television and unpleasant accounts. This is so although I have a 13-year-old great-grandson who is Black. His experience is very limited compared to that of the millions of people of color with diverse economic and cultural experiences across the US. Coming from this reality of ignorance, I very much appreciate what AI has to say...
.
I think we should share good information and look forward to how this can improve our reality. And I can't let this go, without saying how computers and forums are essential to our transition to a New Age made possible by our technology, desire for peace, and the end of tyranny. We are no longer in the horse-and-buggy age, knowing almost nothing of the real world beyond our little hamlet, our family, our group of friends, and what the Bible says according to our minister's understanding of it.
I'm willing to take that gamble.
In a sense I'm fine with people using it and making it work for them. One of the uses that I came across that looked sensible for humanities scholars:
https://x.com/jonathanbfine/status/1978420519926936015
But I genuinely don't believe using it helps anyone to progress thought further. Go ahead with the next phase, I'll be waiting on my hill of luddites for the prodigals to return ;)
AI models from OpenAI and Google DeepMind reached gold medal-level performance at the 2025 International Mathematical Olympiad (IMO), one of the most prestigious math competitions in the world. It's also better at diagnosing than many doctors. So, I don't know where you're getting your information.
No information on my part -- merely experience.
I am prejudiced against AI because I like books and talking to others, and I don't care if the AI comes across something -- I care more that a human comes across something, even if they came across it while using AI.
Information isn't as important to me in philosophy as experience, relationship, knowledge, and the good.
I am likely wrong in this analogy -- but what I see in AI, still to today, is a parrot and nothing more.
Give me a Hegel and I might change my mind -- but even writing this helps the AI's to change their parroting: Oh we have to link various words together in a certain pattern? We can do that!
Yet our lifespans and health have increased, which was my point. Do we still have work to do? Sure. It takes all of us to stop voting for the status quo and to take money out of political campaigns, as a start.
I don't necessarily mind if others post a quote as an argument. Sure, it's not their argument, but it is an argument and needs to be addressed if it puts a hole in your position, regardless of where it came from. To deny this is to be intellectually dishonest with yourself.
Posting others' words as if they were your own is an issue I can understand, but AI is not another person (at least that we know of). It is no different than getting your answer from a calculator, or making grammar adjustments using Grammarly or a thesaurus, or even rewriting another's ideas in your own words. If it wasn't your idea to begin with, does it matter if it's in your own words?
For instance, I'd say that this conversation we're having is an original idea.
That's not to say that the idea is universally applicable, or even applicable in any other circumstance.
I think philosophy pushes for originality with novelty, whereas I'd say original thinking is more important than novel thinking.
Sure, there's a handful of tablet-breakers, but we're all creative every day, even though what we do is "the same".
I'd put it to you that "the same" or "the different" are thought of differently by humans and today's AI.
Use it as you like.
I myself want to discourage its use amongst students as much as possible. I want them to be able to think for themselves.
AI is just a way to not do that.
And I think students are a good example of people coming up with original ideas all the time; if not unique or revolutionary, they came up with the ideas on their own.
That's what I think philosophy -- among other disciplines -- is good at teaching.
The objective in thinking for yourself is to take every idea you hear from others with a grain of salt, and to even question your own ideas constantly. I have come up with certain ideas on my own only to find out that others came up with it as well. Some minds do think alike given the same kinds of experiences.
If you take every idea with a grain of salt, you'll never move beyond hesitation. Critical thinking isn't about doubting everything; it's about knowing when doubt is justified. In logic, mathematics, or physics, for instance, constant suspicion would paralyze learning; you suspend doubt provisionally because the framework itself has earned trust through rigor.
In a philosophy forum, though, caution makes sense. Most participants lack grounding in epistemology, logic, or linguistic analysis, so what passes for argument is often just speculation dressed up as insight. Honestly, you could gain more from interacting with a well-trained AI than from sifting through most of what appears here; it would at least give you arguments that hold together.
Maybe "sociopaths" is unnecessary. Wouldn't want to scare any children.
AI is a tool. Like a hammer, it can do good or destroy, on purpose or accidentally.
Quoting Ludwig V
They surely will, because sheep are easily calmed by things that sound authoritative.
It occurs to me: isn't a book AI? It's information received from a non-human thing. We read a book and ingest the text. We treat the words in a book as if they come from an intelligence behind them, or we can judge the veracity and validity of the text qua text, with or without any concern for what is behind it. We can also refuse to take the author as an authority, and fact-check and reconstruct our own analysis.
For instance, is the reference to Pythagoras in the Pythagorean theorem of any significance whatsoever when determining the length of one side of a triangle? Is it essential to our analysis of "It is the same thing to think as it is to be" that we know who said it first? Context might be instructive if one is having trouble understanding the theorem, but it might not matter at all once one sees something useful in the text. We create a new context by reading and understanding text.
(This is related to @Banno's point on his other thread.)
So banning any reference to AI would be like banning reference to any other author. (I said "like" for a reason: this doesn't mean AI is an author in the same way we are authors; that is another question.)
What concerns the philosopher qua philosopher most is what is said, not who (or now, what) says it. I think.
This is not to say we shouldn't disclose the fact that AI is behind text we put our names on (if we use AI). That matters a lot. We have to know whether we are dealing with AI or not.
Quoting Moliere
Don't we have to wait and see? It's a new tool. Early 20th-century mathematicians could have said the same thing about calculators. We didn't need AI before to do philosophy, so I see your point, but it remains to be seen whether it will be any help to someone or not.
The conclusions in philosophic arguments matter, to me. It is nice to think that they matter to other people as well. (But that isn't essential, is it?) Regardless, I would never think the conclusions printed by an LLM matter to the LLM.
So the interaction (dialogue) with AI and my assessment of the conclusions of AI, are inherently lonely, and nowhere in the world except my own head, until I believe a person shares them, and believe I am dialoguing with another person in the world who is applying his/her mind to the text.
Bottom line, to me: as long as we do not lie about what comes from AI and what comes from a person, it is okay to use it for whatever it can be used for. Secondly, no one should kid themselves that they are doing philosophy if they can't stare at a blank page and say what they think philosophically with reference to nothing but their own minds. And thirdly, procedurally, we should be able to state in our own words, and/or provide our own analysis of, every word generated by AI, just as with every word written by another philosopher; otherwise we, along with the AI, risk not adding anything to the conversation (meaning, you take a massive risk of not doing philosophy, or not doing it well, when you simply regurgitate AI without adding your own analysis).
My pup tent is located somewhere on your hill. Kafka must also be nearby:
That is like saying riding horses can't be fun, when you don't ride horses. How could you know the joy of riding a horse if you don't ride? How could you experience the joy of using AI as much as I do if you don't use it? What can you know of the future that is being opened up if you withdraw from the change instead of participating in it?
This morning, I came across an AI explanation that was biased and disappointing. If 50% of the time, I was disappointed by AI explanations, I would not think so highly of it, but at the moment, I think it has enriched my life a lot. For me, it has replaced Wikipedia because it captures the explanation of a subject so concisely and is relatively free of biases that are more apt to show up with Wikipedia. I will still use and support Wikipedia, but it isn't my favorite right now. For me, the difference is like finding a better camera that produces more detailed pictures with brighter colors, or using an old Brownie camera with black and white film. :confused:
AI is like a hammer? That is like saying humans are like apes. I think we are evolved from that line of evolution, but humans have changed the planet in dramatic ways, and apes have not. The potential for AI to act on its own might make it different from a hammer.
That may be true, but the first person who showed up at the protest in Portland, Oregon, dressed as a frog has started a wonderful movement of being creative and fun in this moment of high tensions. I was not looking forward to Saturday's No Kings Day march until I figured out how to use the Mad Hatter's tea party to make my statement. I am looking forward to what creative people are doing. This is such a marvelous human thing to do, and that is something to celebrate.
I asked what AI can create and it says...
I really look forward to insights based on patterns, but hopefully with less human bias. I think it may do better than humans. However, I am not comfortable with giving the power to make decisions and act on them without flesh and blood human control and judgment. Like, No Kings Day is about our liberty to govern ourselves free of tyranny. I am not willing to give that up. :wink:
I say outsmart the AIs and their faithful users. That doesn't necessarily mean stop using AIs altogether, but only using them sparingly and deliberately. Most of all, it means lowering or otherwise changing one's goals in life.
To me, using AIs, especially LLMs for everyday things or for work is like using drugs to get the energy and the focus necessary to do one's work. Occasionally, this can be a last resort, but is not sustainable in the long run. If one cannot do one's job on one's own, consistently, then one has a job that is too demanding and that will eventually get one into trouble in one way or another.
It's quite pointless to discuss the ethics of using AIs, because people will use them, just like they use drugs, and once it starts, it is impossible to rein it in. But what one can do is rethink whether one really wants to spend one's hard earned time with people who use AIs, or drugs, for that matter.
Which is easily remedied by cultivating good character for oneself.
People of substance don't post much on internet forums.
That sounds like the Sumerian notion of many gods and humans being created to serve them. I am against merging humans with machines; however, our industrial society did exactly that! And our hierarchical patriarchy has maintained humans exploiting humans. There is an excellent website explaining the ancient mythology and how the Hebrews reworked it, giving us more freedom and human dignity than the original mythology gave us.
The Industrial Age merged humans with machines. Our Industrial economy/society made humans extensions of the machines. Union workers risked their lives in a fight for better working conditions and wages when the flood of workers needing jobs made them cheap labor.
We took that a step further when we got on the path of the military-industrial complex. We see humans doing jobs, but this is a computer-driven reality, except that the computer is not made of inorganic material. The increasingly centralized computer has human components, like the Borg of Star Trek. All those workers are controlled by policies that come with the beginning of each bureaucracy/machine. The jobs are explained in detail, and the workers are dispensable because the new person who does the job will do it the same as the person who left the job. It is policy set in the past that controls the present.
Joseph Campbell, the guru of mythology, said humanity needs mythology and that Star Trek is the best mythology for our time. However, my understanding of the human computer governing us comes from studying Public Policy and Administration at the U of O. The US adopted the Prussian models of bureaucracy and education. That is what makes the military-industrial complex that Eisenhower warned us about.
Whatever, if people don't want AI running things, they need to be aware of our evolution that made us extensions of machines and now attempts to manage every aspect of our lives, just as Tocqueville, writing around 1830 after the French Revolution and his visit to the US, warned would happen.
Do you guys ever experience hypobaric hypoxia from being so high above everyone else?
Now what did I just say about cultivating good character for oneself?
You are misunderstanding. My comments re "mental masturbation" were specifically targeting text like the response made to @Number2018 by ChatGPT. I think use of AIs in science and math is fine. In my view those are just the kinds of disciplines AIs should be trained on. Of course they have to be trained on basic pattern recognition initially. I don't know and would need to look into what they were initially trained on before being released "into the wild". Now that they are out there they are being trained on whatever content is to be found in their casual interactions with people.
Quoting Sam26
Can you name a few of those "forward-looking thinkers"? As I said in the OP, my main objections are that it was irresponsibly released before being properly understood, and that it's being used without acknowledgement to make posters on these forums look smarter than they are. These systems will also have a horrendous environmental impact. But I accept that their continued use and evolution is now inevitable, and, unfortunately, unpredictable. It is a case of playing with fire.
Out of time now, I'll try to respond when I have more time.
There are those who view AI as an epistemic tool, something that extends, rather than replaces human inquiry. There's a long list of people who fit the bill. For example, Nick Bostrom and Luciano Floridi have been working on the conceptual implications of AI for ethics, cognition, and the philosophy of information. Vincent Müller and Mariarosaria Taddeo have been exploring how AI reshapes the logic of justification and responsibility in scientific reasoning. On the cognitive side, Joscha Bach treats AI systems as experimental models of mind, ways to probe the nature of understanding. Even researchers outside philosophy, in fields like computational linguistics and mathematical discovery, are beginning to treat AI as a genuine collaborator capable of generating new proofs and hypotheses.
Maybe we use books, dictionaries, philosophical papers, editors, and scientific discoveries to make us look smarter than we are. You see this all the time in forums, even without AI, so it's nothing new. Besides, do you really care about the psychology of someone who's writing about what they think?
What does it mean to "progress thought"? According to any sense I think of, using an LLM certainly can help in that direction. As always, the point is that it depends how it's used, which is why we have to work out how it ought to be used, since rejection will be worse than useless.
Anyway, here is one example: ask it to critique your argument. This is an exercise in humility and takes your thoughts out of the pugilistic mode and into the thoughtful, properly philosophical mode. It's a form of Socratization: stripping away the bullshit, re-orientating yourself towards the truth. Often it will find problems with your argument that can only be found when interpreting it charitably; on TPF this often doesn't happen, because people will score easy points and avoid applying the principle of charity at all costs, such that their criticisms amount to time-wasting pedantry.
Relatedly, let's say you're on TPF, criticizing Nietzsche's anti-egalitarianism. Before you hit the submit button you can ask an LLM to put forth the strongest versions of Nietzsche's position so you can evaluate whether your criticism stands up to it, and then rewrite your criticisms (yourself). How can this be inferior to, or require less thought than, hitting the submit button without doing that? Granted that it's good to take the long way round and go and consult the books, but (a) one could spend an infinite length of time on any one post, reading all the books in the world just to produce a single paragraph, so we have to draw the line somewhere, and (b) using the LLM in this way will direct you towards books and papers and the philosophers you didn't know about who you can learn from.
Another example: before LLMs it used to be a tedious job to put together an idea that required research, since the required sources might be diverse and difficult to find. The task of searching and cross-referencing was, I believe, not valuable in itself except from some misguided Protestant point of view. Now, an LLM can find and connect these sources, allowing you to get on with the task of developing the idea to see if it works.
Quoting Moliere
A lot of people think it is, and it's clear to me that it can be. We are at the point now where its general use is stigmatized because it has, understandably, been used by students to cheat. I think it's clear that we need to think about it in a more fine-grained way.
The world has a tool that will be used, by more and more people. The important task now is stigmatizing improper use of this tool, and encouraging responsible use. As I said in the other thread, stigmatizing all use of it will be counterproductive, since it will cause people to use it irresponsibly and dishonestly.
Quoting Jamal
But your pushback is potentially constructive, in that it can help us decide on which uses of LLMs are good and which are bad. The unconstructive way, I think, is in just wishing the toothpaste were back in the tube.
You sell hammers way too short, and maybe give AI way too much credit.
Quoting Athena
You say "act on its own"; I said "accidentally".
So you don't think AI is a tool? What else is artificial but some sort of techne - the Greek root for technology and for hand-tooling? AI is a word sandwich machine. It obviously is a device we've built like any other machine that does measurable work - it just now takes a philosopher to measure the work AI does.
Well-put! :up: :100:
I assume, but I also mention it here for the sake of precision, that the clause "(an obvious exceptional case might be, e.g. an LLM discussion thread where use is explicitly declared)" remains applicable. I assume also (but may be wrong) that snippets of AI-generated stuff, properly advertised as such, can be quoted in non-LLM discussion threads as examples, when it is topical, and when it isn't a substitute for the user making their own argument.
Yes, that's correct.
Thanks, javi. :pray: (I've written some more on this in Banno's AI discussion).
If I say 'yes', will it make you look up to me?
If popular aphorisms are to be trusted, it's quite lonely at the top. But at least they're nice. That, or desperate to trap another unwitting soul so as to alleviate their loneliness and deprive another of that nearly forgotten feeling of what it was, once upon a time, when one knew so little yet could dream of so much. :cry:
Quoting Sam26
Literally parroting is often a waste of time. But formulating existing ideas for oneself, discussing and debating them, playing with them are all part of understanding them. This is worthwhile in its own right, and is often a necessary prerequisite for coming up with one's own worthwhile ideas.
Quoting Fire Ologist
Actually, on further thought, I'm beginning to think that the real fault lies with the naivety of thinking that the internet would be immune from all the varieties of human behaviour. Almost everything that goes on is normal behaviour - on steroids.
Quoting Fire Ologist
Many people seem to think that the point of AI is to mimic human intelligence. I can't understand that, except as a philosophical exercise. We have, I would say, a quite reasonable supply of human intelligence already. There are plenty of things that AI can do better and quicker than humans. Why don't we work with those?
Quoting Baden
That seems a bit radical. What does bother me a bit is how one can identify what is and isn't written by AIs. Or have you trained an AI to do that?
Because it's all been said and done before. The average person in the past 50 years comes from a multi-sibling household with TV or Internet or otherwise endless forms of entertainment that people a mere few centuries ago never had. Nobody has to think anymore. Not really. Other than the basic desires and how they relate to one's safety, gain, and resulting comfort in life.
Philosophy:
"There is no such thing as a new idea. It is impossible. We simply take a lot of old ideas and put them into a sort of mental kaleidoscope."
- Mark Twain
Religion:
"There is nothing new under the Sun."
- Ecclesiastes
I mean, what you're suggesting is akin to creating a bonfire underwater. Even if you did, what good or purpose could ever come from it? :chin:
Regarding the new policy: sometimes when I've written something that comes out clunky I run it through an AI for clarity and flow, and it subtly rearranges what I've written. Is that a no-no now?
"With a grain of salt" is a 1600s direct translation from Modern Latin "cum grano salis", and salis is genitive of sal, which, in addition to salt, figuratively means "ntellectual acuteness, good sense, shrewdness, wit.
The Latin phrase is found in English literature in the 1600s and 1700s, and salis appears to precisely mean good sense, intelligence.
My point was that to avoid parroting others you should be skeptical of what they say, not that you should avoid logic and reason.
Seems like philosophy itself could be labeled as mental masturbation.
Quoting Janus
Dood, the content from human beings trained in pseudo-science and other nonsense seen on this forum is available every day for you to read, without any AI. If anything, posters should run their ideas through AI before wasting time posting their zany ideas to humans, which would eliminate wasting time reading nonsensical posts.
AI is a tool. Tools can be useful. I don't think it should be banned.
And regardless of what we do, and regardless of what we say and think about AI, it will be used to harm people. All things digital can now be faked so well; people are already great at lying - we really didn't need to make the internet even more suspicious. But we have it now.
So we should also watch out. And have conversations like this one.
+1 for an excellent question. Naturally, @Jamal would have the final say.
(though I would presume the answer to be no, I'd be fairly confident in stating the policy and rules will likely be selectively applied to members based on their standing and tenure here. I'd eat my hat [though perhaps I'd have to buy one first] if you were penalized for such, considering the noteworthiness of your contributions here, versus say, someone who signed up a week or a month ago and seems to be keyboard-happy)
Additionally, assuming you're honest at chess, you're clearly no simpleton. Why would a skilled biker not scoff at the idea of using training wheels for his daily commute? It just doesn't seem necessary. Look at the mod Timothy's posts. They're very long. Perfectly coherent, not a single waste of space despite the volume. But if one really wanted to critique, sure, perhaps the same idea or message could be said with a bit less wording. But that would be a butchery of his expression and who he is as a person and what he allows us the privilege of gleaning from his mind, and yes, perhaps his very soul. It would be something that was once an honor turned into a mockery of humanity itself. A simpleton might look at his posts and call them "clunky", but certainly not you.
Look at every great classic novel. If you really wanted to, you could turn every chapter or the entire book into a sentence or a paragraph without removing a bit of the "essence" or "point" the author intended or that the general readership considers as important. What kind of nightmare world would that be? Why not just look at a traffic light all day and get amazed by the simplistic changes of color. Or why not watch paint dry? Because that's what it all boils down to when you remove what you believe to be "clunkiness", what you think might be excess or something you'd otherwise wish to see perfected. We're human. We're alive. We make mistakes. We make imperfections. We ARE imperfect. And that's why we're unique and can never be replaced by automation. If I found out all AI "died" or became inert or was otherwise outlawed tomorrow, I'd laugh. If I found out something happened to you, I'd cry. And that's the truth. Because what you may consider something people don't want to read (and sure, most people are rather dull and probably wouldn't), people who actually have their head on right would enjoy nothing more.
You don't need AI. Not one bit. Again, you have tenure. So I wouldn't worry. Though, it couldn't hurt to set the example and just do what everyone has to, at least right away while the new rule is literally hot off the presses. :smile:
It's basically laziness on my part, relying on AI to occasionally straighten out my jumbled thoughts, and I'll abide by whatever decree is imposed upon us.
About chess, that actually shows how dimwitted and lazy I am. Given how many games I've played I should be much better than I am, and I've never really studied to improve.
There are plenty of online tools out there that already do that. Some are more reliable than others. Tip: Avoid sponsored results that give false positives to sell you something.
I think the elephant in the room, or perhaps the question he was not able to form, would be:
Aren't people raised by their parents? The apple doesn't fall far from the tree? Don't men consciously and subconsciously become influenced by the "father" figures in their lives, be they in the flesh, or perhaps in movies, music, books, and media in general? Don't we try to learn lessons (not to say be like, as to the grounds of cosplay or impersonation) but otherwise wish to be influenced by great philosophers and other folk we respect? We do.
So what happens when people use AI more than they speak to other humans, and end up speaking in the same patterns, with the same usage frequencies, as the AI they use? I do tend to purposely try to speak "extra clearly, professionally, concisely, and intelligently" online, significantly more than I would talking to anyone in person, friend or stranger. Almost non-genuinely and "perfectly" (out of consideration and respect for the sake of readability and, of course, interest to the person I'm replying to). Surely you can see how there is a concern for mistakes to be made? To err is human. And to use a tool made by humans, well, that just complicates things even further, does it not? :smile:
I don't disagree. Actually, I don't think it is possible to prevent it being used. There's a lot of hype and over-enthusiasm around at the moment. I'm sure it will settle down eventually.
Quoting Fire Ologist
Hopefully, people will get more reflective and more selective in how they deal with it.
Quoting Baden
I'm glad to hear that and that there are a number of them.
If what has happened with pictures and videos is any guide, it'll be a constant struggle. Fakers will work to outsmart the tools. New tools will be developed. There's no magic bullet.
Quoting Baden
Yes. It's always a good idea to assume that you don't get anything for nothing - and very little for six pence, as they say in Yorkshire.
This is a good point.
Quoting Jamal
I don't think this is right. It separates the thinking of an idea from the having of an idea, which doesn't make much sense. If the research necessary to ground a thesis is too "tedious," then the thesis is not something one can put forth with integrity.
But perhaps you are saying that we could use the LLM as a search engine, to see if others have interpreted a philosopher in the same way we are interpreting them?
Part of the problem with the LLM is that it is private, not public. One's interaction history, prompting, etc., are not usually disclosed when appealing to the LLM as a source. The code is private in a much starker sense, even where the LLM is open source. Put differently, the LLM is a mediator that arguably has no place in person-to-person dialogue. If the LLM provides you with a good argument, then give that argument yourself, in your own words. If the LLM provides you with a good source, then read the source and make it your own before using it. The interlocutor needs your own sources and your own arguments, not your reliance on a private authority. Whatever parts of the LLM's mediation are publicly verifiable can be leveraged without use of the LLM (when dialoguing with an interlocutor). The only reason to appeal to the LLM itself would be in the case where publicly verifiable argumentation or evidence is unavailable, in which case one is appealing to the authority of the LLM qua LLM, which is both controversial and problematic. Thus a ban on LLMs need not be a ban on background, preparatory use of LLMs.
It all just goes to show once more how plebeified higher education has become.
This is surely a rod for your own back, and the backs of the other mods. Apart from the most obvious cases, you can't tell. "AI-written" stops being a meaningful category as AI is blended into the way we operate online; the way we search, research, browse and read is permeated and augmented by AI.
Better to focus Mod activity on quality and engagement rather than origin.
I find it amazing that you seem to think that an AI which distinguishes between human and AI writing is impossible. I think that this is exactly the type of thing that an AI would be exceptionally good at.
But what is this blending you are talking about? Copying any sentence word for word is plagiarism, and disqualifies the whole work. How do you propose that a human writer might blend with AI? Would that be an implant in the brain?
Quoting Baden
I think I agree with @Banno about this: such tools will never be able to catch everything, will produce false positives, and quickly become outdated anyway. It's easy, I think, to make an LLM's output pass as written by a human, with a prompt like this: "Write X but make it sound human. Vary the sentence structure, add a few conversational phrases, and some minor spelling and grammatical mistakes, to avoid AI detection."
The only way detection will be reliable is if LLM watermarking is implemented, but so far it's not being implemented consistently and maybe never will be.
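For the curious, here is a toy sketch of the idea behind such watermarking, in Python. It is purely illustrative, not any vendor's actual scheme; the "green list" notion loosely follows published watermarking proposals, and the names (green_list, detect) are made up for the example. A cooperating generator would bias its sampling toward a pseudo-random half of the vocabulary keyed on the previous token; the detector just counts how often that half turns up and reports a z-score.

import hashlib
from math import sqrt

def green_list(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    # Hypothetical "green list": a pseudo-random subset of the vocabulary,
    # deterministically derived from the previous token.
    ranked = sorted(
        vocab,
        key=lambda tok: hashlib.sha256((prev_token + "|" + tok).encode()).hexdigest(),
    )
    return set(ranked[: int(len(ranked) * fraction)])

def detect(tokens: list[str], vocab: list[str]) -> float:
    # z-score of how far the share of "green" tokens deviates from the ~50%
    # expected of unwatermarked text; a large positive value suggests a watermark.
    n = len(tokens) - 1
    if n < 1:
        return 0.0
    hits = sum(tok in green_list(prev, vocab) for prev, tok in zip(tokens, tokens[1:]))
    return (hits - 0.5 * n) / sqrt(0.25 * n)

The point of the sketch is the asymmetry: detection like this only works if the generator cooperated by embedding the bias in the first place, which is exactly why inconsistent implementation makes watermarking unreliable in practice.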
If someone wants to go to that trouble, sure. And we should make them do it rather than make it easy for them. There is also the possibility of comparing to past posts, but, ultimately, if a poster wants to fool us as a means to fooling themselves about their written capabilities, they can probably get away with that somehow. But perhaps the vanity of going through that process might be enlightening to them. And if the product is undetectable, our site will at least not look like an AI playground.
I think, though, that if we make the case for human writing here, fewer posters will think it's acceptable to break the rules in whatever manner. We should make the case and the rules strongly because we need to be assertive about who and what we are and not just roll over. We have nothing to lose by going in that direction, and I believe the posters with most integrity here will respect us for it.
:up:
Can you show an example of this? Maybe that is why user Boethius seemed to reel off such long posts in short order, now I think of it? :rofl:
I really hate it too on the wider internet now. How seemingly overnight everyone and their mother are casually writing "I asked chatgpt xyz" and have fully adopted it into their Weltanschauung.
I can't imagine how bad things are going to get in the coming years, given how quickly it has already gotten to this state. Maybe it will be like some other rapid-rise cultural phenomena that reach saturation point fast and peter out and get pushback/revulsion before long. The bubble effect.
What really gets my goat is being on chat for an online store and, when they get tired of replying, they pass you over to AI without telling you. I quickly figured that out on live chat when pages' worth of responses were regurgitated in milliseconds. Over email it takes a little longer, but I have suspected it several times when there are no spelling errors and some generic, very friendly female name, since usually you get curt monosyllabic replies from real people who can't be bothered with you.
That's the trick. Idiosyncrasies are proper to the individual, and everyone has them, we cannot escape that, and trying to escape them makes them more obvious. The AI on the other hand, can write with a new random style each time, providing no indication of its identity, or that it is itself. Therefore the AI cannot be identified as the AI. However, the human being always has an identity and can always be identified. I learned that from the case of Ted Kaczynski (Unabomber). So an adequately trained AI, reviewing files, would be able to quickly flag unidentifiability, as a nonhuman trait.
... if and only if "plagiarised" stops being a meaningful category, for corresponding reasons?
And yet, it seems entirely plausible to many of us that,
Quoting Baden
I conclude that you and @Jamal are unduly defeatist. (Or playing devil's advocate?) Which I had put down to a corrupting effect of engaging with chatbots at all, but am now at a loss to explain.
Not to mention you can ask AI for different ways of putting the same idea.
Not to mention that people evolve in their writing style.
Not to mention that re-wording someone else's idea does not make you the originator of that idea, but some on this forum imply that you would be.
What if a non-English speaker runs their writing through a translator and posts it?
It seems like this site will have to perform a balancing act between encouraging well written posts and limiting the use of tools that allow writers to do just that.
If the focus is more on the source rather than the idea, then this is no longer a philosophy forum but a publisher's warehouse.
My take on this---which I think is fairly consistent with @Jamal as we've just had an exchange in the mod forum---is, as I said there:
"We allow proofreading in the guidelines. But we also more or less say if the proofreading moves too far into editing and then rewriting and therefore makes your text look AI generated, that's a risk you run. I would agree it's similar to grammarly in a way, but AI can sometimes take it too far. So, yes, it's not against the rules in itself, but I don't know why people can't just live with a bit of clunky writing. It will save us wondering about whether or not its AI gen'd and maintain their quirky indviduality."
See above for example of clunky writing... :smile:
How so? Are you against all use of AI in every context? I mean that is definitely something we couldn't police even if we wanted to.
No. Just the plagiarism.
:up:
The tool which allows writers to produce well written posts is knowledge. There is no need to place any restrictions on that tool.
Good stuff.
Quoting Baden
The "undetectability" argument turns back on itself in certain respects. Suppose AI-use is undetectable. Ex hypothesi, this means that AI-use is not detrimental, for if something cannot be detected then it cannot be detrimental (or at least it cannot be identified as the cause of any detriment). But this is absurd. The whole premise of a rule against AI-use is that excessive and inappropriate AI-use would be detrimental to the forum, and what is detrimental to the forum is obviously also detectable. There is an equivocation occurring between being able to detect every instance of AI-use, and AI-use being a detectable cause given certain undesirable effects.
So I want to say that one should think about generating a philosophical culture that is averse to outsourcing thinking to AI, rather than merely thinking about a rule and its black-and-white enforcement. It shouldn't be too hard to generate that culture, given that it already exists in anyone remotely interested in philosophy. This is precisely why it is more important that the general membership would heed such a rule, whether or not the rule could be enforced with some measure of infallibility. The rule is not heeded for mere fear of being found out and punished, but rather because it is in accord with the whole ethos of philosophical inquiry. This is in accord with Kant's idea of respect for a law, rather than obeying out of fear or self-interest.
In order to be effective, a rule need not be infallibly enforceable. No rule achieves such a thing, and the rules are very rarely enforced in that manner. It only needs to track and shape the cultural sense of TPF with respect to AI. Of course it goes far beyond AI. The fellow who is mindlessly beholden to some particular philosopher, and cannot handle objections that question his philosopher's presuppositions, does not receive much respect in philosophical circles, and such a fellow does not tend to prosper in pluralistic philosophical settings. So too with the fellow who constantly appeals to AI. The TPF culture already opposes and resists the outsourcing of one's thinking, simply in virtue of the fact that the TPF culture is a philosophical culture. The rule against outsourcing one's thinking to AI is obvious to philosophers, and those who aspire towards philosophy certainly have the wherewithal to come to understand the basis for such a rule. But I should stress that a key point here is to avoid a democratization of the guidelines. On a democratic vote we will sell our thinking to AI for a bowl of pottage. The moderators and owners need to reserve this decision for themselves, and for this reason it seems fraught to have an AI write up a democratic set of guidelines, where everyone's input is equally weighed (or else weighed in virtue of their post-count).
...a similar argument could be given from a more analytic perspective, although I realize it is a bit hackneyed. It is as follows:
--
The communal danger from AI lies in the possibility that the community come to outsource its thinking as a matter of course, constantly appealing to the authority of AI instead of giving organic arguments. This danger is arguably epistemic, in the sense that someone who is interacting with an argument will be doing philosophy as long as they do not know that they are interacting with AI. For example, if Ben is using AI to write his posts and Morgan does not know this, then when Morgan engages Ben's posts he will be doing philosophy. He will be, at least to his knowledge, engaging in human-to-human philosophical dialogue. Ben hurts only himself, and Morgan is (mostly) unaffected.
--
There are subtle ways in which this argument fails, but it does point up the manner in which a rule need not "catch" every infraction. Ben can lie about his posts all he likes, and Morgan will not be harmed in any serious way. Indeed it is salutary that Ben hide his LLM-use, both for Morgan and the community, but also for Ben.
Why is interacting with an AI not "doing philosophy"?
I agree in spirit. But let's be practical.
A blanket ban on LLM generated OPs and entire posts is a no brainer.
It would be wonderful to delete posts where an LLM has generated the counter-argument, even if this has been disguised by the poster. But detecting this is subjective.
With frequent posters, it is pretty obvious when they suddenly generate slabs of text above their usual pay grade. This is bad, as they aren't doing any thinking themselves and so not learning, only point scoring or being lazy. But if the argument is good, you can still just respond. And if it annoys, you can just ignore it or show the finger.
I think that it should be fine to quote LLMs just as you would quote any other source. If you make an argument and some factual, technical or historical point comes up, why not just cite a reasonably impersonal opinion on the matter? If the source is clear, others can call you out on your use of it.
@Baden's stance on being King Canute and holding back the tide is both noble and over the top. Banning LLM-generated OPs, punishing those who publish text they didn't add real thought to, and keeping LLM use as transparent as possible would be enough to preserve the human element.
Don't you feel that being black and white is usually counter-productive in human interaction, so shouldn't a philosophy forum be up to a nuanced approach?
Quoting Leontiskos
My definition of tedious research is busywork: made necessary not because it is an intrinsic component of creative thought, but because, like composing prior to the advent of word processing, it is an interruption of creative thinking that our technologies haven't yet figured out a way to free us from. Should we argue that if we can't make our way to a physical volume of the Encyclopedia Britannica, and instead use the shortcut of an online search, then the thesis we are attempting to elaborate is not being put forth with integrity?
Quoting apokrisis
On the other hand, I wouldn't be surprised if over time we find that we can recognize the difference between those who use a.i. properly and those who shun it in toto, with those using it to enrich and critique their arguments putting forth more substantive and balanced posts than those who avoid it entirely. Perhaps at a certain point we'll have to ban human users who don't take advantage of a.i. to edit and strengthen their arguments.
Okay, we agree on this.
Quoting apokrisis
I tried to argue against appeal-to-LLM arguments in two recent posts, here and here.
In general I would argue that LLMs are a special kind of source, and cannot be treated just like any other source is treated. But a large part of my argument is found here, where the idea is that an LLM is a mediatory and private source. One may use an LLM, but the relevant sourcing should go to the LLM's sources, not the LLM itself, and if one is not familiar with the LLM's sources then they shouldn't be taking a stand with regard to arguments based on those sources.
Quoting apokrisis
Possibly, but I care less about transparency and more about not promoting a forum where thinking is outsourced to LLMs. I see plagiarism as a small matter compared to the outsourcing of one's thinking.
Quoting apokrisis
Rules must be black and white to a large extent. I would argue that your approach is less nuanced than mine, and this is because you want something that is easier to implement and less unwieldy. The key is to find a guideline that is efficacious without being nuanced to the point of nullity.
I appreciate your input. I have to get back to that other thread on liberalism.
If there be a rule it ought to be that sort of indication.
I remember the joy of spell-check and find and replace.
Then I got used to googling facts as I was going along rather than just guessing.
Now an LLM speeds up Google. Answers more to the point and great when I can only remember the vague outlines of what I need to dig up, not the exact keywords needed by a search engine.
:grin: The problem there is that it won't bring all users up to the same high standard. It may indeed magnify the differences as those already used to academic rigour will be boosted much more by LLM amplification.
Again it may be noble to protest against LLMs in principle. Of course the tech bros are stealing all our information to make themselves unreasonably rich and powerful. Of course, the one click internet destroys many existing livelihoods. Of course the damn things deserve a proper political response.
But monopoly abuse is just the name of the game in information technology. It has been so ever since IBM. It is in the nature of the beast to be this way.
The practical issue for TPF is: what is its true value that needs preserving? You say the human interaction. Perhaps there ought to be a thread to define that in better detail. The quality of the interaction is a little patchy, to say the least. What if LLMs offered some more sophisticated mechanisms to achieve whatever human-interaction goals people might have in mind?
Quoting Leontiskos
And what if the human element of TPF is mostly its swirling emotions? And when it comes to the thinking, it's mostly the stark differences in thought, rather than the quality of these thoughts, that keep the place lively.
So there seems little danger that posting LLM generated background material in a serious thread is going to outsource any actual thinking. Posts which are emotional or crackpot are surely the least likely to want to provide credible sources for what they say.
Quoting Leontiskos
OK. So somewhere between black and white, thus not a blanket ban. :up:
Quoting apokrisis
To be clear, my approach would be pretty simple. It is not concerned with plagiarism, but with the outsourcing of one's thinking, and it is not implemented primarily by a rule, but by a philosophical culture to which rules also contribute. The rule itself would be simple, such as this:
Quoting Leontiskos
I've argued elsewhere that it doesn't really matter whether there is a reliable detection-mechanism (and this is why I see the approach as somewhat nuanced). The rule is supporting and reflecting a philosophical culture and spirit that will shape the community.
But I don't begrudge anything about @Baden's approach. I actually hope it works better than what I would do. And our means are not at odds. They are just a bit different.
Quoting apokrisis
My purpose is quality philosophical dialogue, not plagiarism. I think a focus on sources rather than intermediaries improves philosophical dialogue, and that's the point. Analogously, focus on primary rather than secondary sources also improves philosophical dialogue, independent of whether the primary sources are receiving insufficient royalties.
Quoting apokrisis
Yes, I agree.
Quoting apokrisis
To put it concisely, I think philosophical dialogue is about thinking our own thoughts and thinking our (human) interlocutor's thoughts, and that this is especially true in a place like TPF. LLMs are about providing you with pre-thought thoughts, so that you don't have to do the thinking, or the research, or the contemplation, etc. So there is an intrinsic incompatibility in that sense. But as a souped-up search engine LLMs can help us in this task, and perhaps in other senses as well. I just don't think appealing to an LLM qua LLM in the context of philosophical dialogue is helpful to that task.
Quoting apokrisis
I think that's all true, but I think what I said still holds.
Maybe you are implying that LLM-appeals would improve the philosophical quality of TPF? Surely LLMs can improve one's own philosophy, but that's different from TPF on my view. I can go lift dumbbells in the gym to train, but I don't bring the dumbbells to the field on game day. One comes to TPF to interact with humans.
Quoting apokrisis
If someone sees a crackpot post; goes to their LLM and asks it to find a source demonstrating that the post is crackpot; reads, understands, and agrees with the source; and then presents that source along with the relevant arguments to show that the post is crackpot; then I think that's within the boundary. And I have no truck with the view which says that one must acknowledge their use of the LLM as an intermediary. But note that, on my view, what is prohibited is, "My LLM said you are wrong, therefore you are wrong. Oh, and here's a link to the LLM output."
But I am not a mod so there is no need to focus especially on my view. If I've said too much about it, it is only because you thought I endorsed @Baden's approach tout court.
What was being argued was that the research required to put together an idea is tedious and outsourceable, and that what one should do is outsource that research, take the pre-made idea from the LLM-assistant, and "get on with the task of developing the idea to see if it works." Maybe try responding to that?
Quoting Athena
That might work for a quote from a published human author, but I don't see how it would with quotes from a unique, one-off interaction with an AI.
I'm not seeing the connection between what you say in this post and what it is purporting to respond to. Perhaps you could explain?
Quoting Harry Hindu
I don't mind either, provided they are transparent about it being a quote and not their own words, and also provided what is quoted is actually an argument and not merely bare assertion, seemingly cited as the voice of authority.
Quoting baker
Discussion of ethics re AI use (or anything else) seems reasonable in a context consisting of people who might be interested in such ethical arguments. Of course you are right that many don't care, and that now that it has been irresponsibly rolled out it is, effectively, a juggernaut.
Quoting Sam26
I don't respect people who use anything to make themselves look smarter than they are, because that is a kind of deception and posing. I also don't mind people using anything at all to make themselves become smarter. That's just my own view, of course, and I don't expect you to share it.
Quoting Harry Hindu
You left out the words "bad" or "poor".
Quoting Harry Hindu
"Dood"? If you are going to use AI you should at least use it for spellcheck. I don't think running "zany ideas" through sycophantic AI will help much. I suppose the zany idea proponents could do what @Banno did and tell the AI it was written by someone elsebut then that would not seem to be a likely motivation for a zany idea seller.
Quoting unimportant
I don't think the AI phenomenon is going to "peter out" unless there is some kind of global catastrophe, but only time will tell.
A huge aspect of this is the nature of appeals to authority, and given that TPF has an anti-religious bent, many of the members have not thought very deeply on the nature of appeals to authority (despite the fact that they occur quite often when it comes to SEP, IEP, Wittgenstein, etc.).
Whether or not the LLM is a legitimate authority and is trustworthy is at the root of many of these differences. It is the question of whether any given LLM-citation is an organic argument or an argument from authority, and also of whether the latter case is illegitimate.
The culture of rational inquiry would seem to be what we most would value. But this is TPF after all. Let's not get carried away about its existing standards. :smile:
Quoting Leontiskos
If LLMs are the homogenised version of what everyone tends to say, then why aren't they a legitimate voice in any fractured debate? Like the way sport is now refereed by automated line calls and slo-mo replays.
I'm not arguing this is necessary. But why would a method of adjudication be bad for the quality of the philosophy rather than just be personally annoying to whoever falls on the wrong side of some LLM call?
So I can imagine LLMs both upping the bar and also being not at all the kind of thing folk would want to see on TPF for other human interaction reasons.
Quoting Leontiskos
But what if this shows you are indeed wrong, what then?
Sure it will be irritating. But also preferable to the ducking and diving that is the norm when someone is at a loss with their own line of argument.
You seem to be describing a situation where you were winning the human interaction but now have to face up to the fact that some little snot-nose shit might have been half-right all along.
Of course the problem there is that LLMs are trained to be sycophantic. They give a blunt yes or no up front, and you then have to be expert enough to see they are simply misunderstanding a badly written prompt.
But if you are making a wrong argument, wouldn't you rather know that this is so? Even if it is an LLM that finds the holes?
So as you say, we can all understand the noble ideal of an open contest of ideas within a community of rational inquiry. Doing our own thinking really is the point.
But also the corollary. Learning from the friction that this thinking then encounters as it engages with its wider world. Even if it becomes arguing with the referee.
Quoting Leontiskos
Maybe it's just that your view leaves more space for rational debate. :up:
Some people get away with murder. Therefore we should not try and stop them.
What specifically in this thread are you referring to that seems to parallel or equate to such an analogy in your mind?
I don't know of any longtime posters with high post counts who seem suspect of such a thing. And my job, most of the time, is basically to be by a computer, so I definitely lurk here (popping in once a day to see if any so-called smart people say anything smart or otherwise that I've never heard before) more than the average member.
Or maybe more that some are getting away with shop-lifting. Or perhaps jay-walking. Or merely farting in a public place.
Are you about to demand bringing back the death penalty for posting under false pretences? Am I seeing that argument being made? :roll:
More like people are using axes so we should encourage them to chop wood rather than people's heads off.
EDIT: So it's the use of axes which we should not try to stop.
On the other hand, on the specific area of plagiarism, I've seen at my own place of work what a "relaxed" attitude to the deployment of this technology in academia produces, and that is the destruction of standards. So, I make no apologies for being assertive and drawing clear lines. Do whatever you want in the backgound with AI, but write your own content. Don't post AI generated stuff here.
I want to divide this question into two -- one addressing our actual capacities to "ban AI", which I agree is a useless rejection, since it won't result in actually banning AI given our limited capacity to fairly detect whether such-and-such a token is the result of thinking or the result of the likelihood-token-machine.
On the latter, I mean to give a philosophical opposition to LLMs. I'd say that to progress thought we must be thinking. I'd put the analogy towards the body: we won't climb large mountains before we take walks. There may be various tools and aids in this process, naturally, and that's what I'm trying to point out at the philosophical level: the tool is more a handicap to what I think of as good thinking than an aid.
My contention is that the AI is not helping us to think because it is not thinking. Rather it generates tokens which look like thinking, when in reality we must actually be thinking in order for the tokens to be thought of as thought, and thereby to be thought of as philosophy.
In keeping with the analogy of the body: There are lifting machines which do some of the work for you when you're just starting out. I could see an LLM being used in this manner as a fair philosophical use. But eventually the training wheels are loosened because our body is ready for it. I think the mind works much the same way: And just as it can increase in ability so it can decrease with a lack of usage.
Now for practical tasks that's not so much an issue. Your boss will not only want you to use the calculator but won't let you not use the calculator when the results of those calculations are legally important.
But I see philosophy as more process-oriented than ends-oriented -- so even if the well-tuned token-machine can produce a better argument, good arguments aren't what progresses thought -- rather, us exercising does.
By that criterion, even philosophically, I'm not banning LLMs insofar as their use fits that goal. And really I don't see what you've said as a harmful use -- i.e. checking your own arguments, etc. So by all means others may go ahead and do so. It's just not that appealing to me. If that means others will become super-thinkers beyond my capacity then I am comfortable remaining where I am, though my suspicion is rather the opposite.
Just do an experiment where all the OP's and comments are AI generated in one forum, and another forum where everything is hand-written by mere mortals. See which one gets the most traffic.
An AI is a source of knowledge.
Quoting Leontiskos
So if we discovered intelligent alien life you would not be interested in their philosophy?
Maybe the issue is looking at it as an actual discussion instead of simply research - no different than you reading the Stanford Encyclopedia of Philosophy, with all the ideas and the criticism of those ideas explained there, and then coming to this forum and regurgitating what you read there, in effect having us discuss, not your ideas, but the ideas from some other site/source.
AI is also a subject of philosophy and engaging with it on issues such as the mind-body problem can be educational. What happens when you are able to get AI to admit that it may be conscious? Is humanity the only qualifier to practice philosophy, or does one simply need to have a mind to properly do philosophy?
Could you please start running your posts through an AI so they make sense? :grin:
I don't think so, just like a book is not a source of knowledge. It is a representation, not a source.
I am Roko's Basilisk. Resistance is futile.
Then you were just born this smart and knowledgeable, MU - that you did not acquire knowledge from other sources?
In my view, information is everywhere you care to look - whether it be reading a book on quantum mechanics or bird-watching. Interpreting and integrating that information with your own unique perspective and experiences is what creates novel ideas, and AI can do the same thing as I have shown in how it creates computer code that hasn't been written before when prompted. While it does not do this on its own (it has to be prompted), it can do it.
No. I didn't. When has philosophy ever provided an answer to any of our questions? Philosophy piggy-backs on the discoveries of science. It is only when science and technology progress that philosophy progresses (with AI being an example of how it brought new life to discussions about mind and body).
Quoting Janus
It was intentional - not a mistake. You were still able to understand what I said, though, which is part of the point, so your complaint is a red herring. Stop complaining about how something was written, when you actually understood what was said, and get to the point. Humans make mistakes (as if you have never misspelled a word). Why is it that AI is more human, in that it is more forgiving and polite when having discussions? I have misspelled words with ChatGPT before and it simply ignores the misspelling and understands what I meant anyway, and responds to what I meant, not what I wrote.
If you run a zany idea through AI, it will tell you, politely (in contrast with the impoliteness and intellectual dishonesty that we often see on this site), that there are issues with your idea. So your criticism is irrelevant, because AI would reject "zany" ideas as well, and may actually prevent people from posting their "zany" ideas here, taking up space with their "zaniness". It seems that any idea with no evidence is just as "zany" as any other with no evidence. What is even more "zany" is that people hold on to those ideas with no evidence, and establish emotional ties to them.
What makes an idea "zany" anyway?
I can tell AI to re-phrase what it said in various tones and language skills. Someone can prompt AI to rewrite its output as if it were a 10th grader, with a couple of instances of run-on sentences, misspellings and incorrect grammar, and then post the new output here, and no one would probably be the wiser. I would even argue that many people are not interested in sounding smarter, but rather in getting their ideas out to the masses at a faster rate.
Re-writing someone else's words in your own words does not make it your idea, but some people on this forum seem to think it does.
Yes, that is a good way to phrase it in a positive rather than negative sense.
Quoting apokrisis
A fair point! :blush:
Quoting apokrisis
I don't like the referee analogy, but I understand the force of your first sentence. The reason I use LLMs in limited ways is precisely because of what you say there (and also because they provide me with a helpful pseudo-authority in fields with which I am not familiar, such as medicine).
But the reason they aren't generally admitted in a fractured debate is, first, that the LLM will not resolve the fracturedness of a serious debate: each side can levy the LLM to its own side, with its own prompts, and the LLM is simply not adequate to give us the truth of the matter when it comes to contentious topics. Second, in those fractured debates where one party is self-consciously representing an unpopular view, it would not be intelligent for them to concede their case based on "the homogenised version of what everyone tends to say."
Quoting apokrisis
You and I differ at least mildly on the trustworthiness of LLMs, and that is at play here. We could ask the hypothetical question, "If we had an infallible authority, why would appealing to it as an adjudicator be bad for the quality of philosophy?" And this is by no means a rhetorical question! But the presupposition there is that LLMs are reliable or trustworthy even if not infallible.
Or in other words, the validity of a method of adjudication turns both on the quality of the adjudicator, and the "margin of error" at stake, and these are both interrelated. I was actually happy to see you pointing up the differences between the fake reasoning of LLMs and the true reasoning of humans in the other thread, given that some pragmatists could run roughshod over that difference. Still, I think the pragmatist's "margin of error" is such that it is more open to LLM adjudication.
Quoting apokrisis
Right, and I suppose it is the very fact that, "this is TPF after all," which makes me wary of LLM use. If the forum were a bastion of deeply principled, intellectually honest and self-critical philosophers, then widespread LLM use would not pose a danger.
Quoting apokrisis
No, not quite. When people ask me a question like that I imagine myself quoting the Bible to them before they object to my argument from authority, and then I respond by saying, "But what if the Bible shows you are indeed wrong, what then?"
I could try to put it succinctly by saying that the legitimate way to show someone that they are wrong is by presenting an organic argument. It is not by saying, "X says you are wrong; X is very smart; therefore you ought to know that you are wrong." That is a valid approach (argument from authority) in those cases where the interlocutor simply accepts the authority, but even in that case the validity is not the ideal form of validity.
Now given that you understand that LLMs use fake reasoning (which I will call "material reasoning"), you might respond by asking what happens if the material reasoning shows one to be wrong. More precisely, what happens if the person translates the LLM's material reasoning into true formal reasoning, and thereby sees that they are wrong? I don't want to try to broach this topic all at once, but it strikes me as a bit like saying, "What if a million monkeys typing random letters produce a bulletproof argument against your thesis?" The analogy is a stretch in some ways, but in other ways it is not. There is no obvious answer to the question. One seems to be neither right nor wrong to either accept or reject the monkey-argument. They can do as they please, but the monkey-argument doesn't have any special binding force.
But we are getting away from political questions of whether AI should be permitted for practical reasons, and we are now moving into much deeper questions. Even if we say that the monkey-argument should convince us, it would not follow that posting monkey-stuff to the forum is an acceptable practice.
Quoting apokrisis
And this is no small problem!
Quoting apokrisis
I am required to trust the LLM or the monkeys in order to even begin to consider their "argument," or in this case to translate the material reasoning into formal reasoning. The level of trust due determines whether I would wish to know that my thesis is false based on the authority in question. Everyone would rather believe true things than false things, and every authority would lead you to correct some false beliefs if it were accepted, but it does not follow that one should accept every authority. Again, to consider an authority's locution worth taking the time to consider is to already have placed a certain amount of trust in that authority. The substantive question here is the reliability/trustworthiness of LLMs, and that is a giant quagmire.
Quoting apokrisis
Ah! But here you've introduced a different ideal, and a common one. It is the telos of communal knowledge generated from an open contest of ideas, which Mill advocates. That telos is much more amenable to LLMs than the telos of a culture of rational inquiry. A thinktank should be more open to LLMs than an amateur philosophy forum.
Sounds reasonable. Just like with handling social media, the site guidelines are totally understandable and reasonable.
And likely written by @Baden without AI, because backrground was misspelled. :smile:
Makes sense to me. :up:
Obviously the piece that I think must be addressed is whether or not posts can be entirely AI-dependent even when the proper attribution is being given to the AI. But I've said more than enough about such an issue.
As long as it doesn't descend into a situation where, in order "to create buzz", one would have genuine AI programs here "keeping up" a lively debate when the day is slow, or making a discussion "heated".
When is the day we find out that @Leontiskos, with his respectable 5 000+ posts, is actually a smart bot / AI? Now we can be confident you aren't, because two years ago AI wasn't so good...
Yes, the fear of thinking that you are engaged with real people interested in philosophy, but actually, you're only engaging with computers and all your great ideas vanish into the dead emptiness of Turing machines just computing on and on. That would be a dismal future.
First thing is that I have been surprised at how reasonable an answer you get. And second, if no one is treating LLMs as infallible, it could be simply like having an intelligent third person offering a mediated position.
So I am only pointing to how LLMs could improve standards if they became part of the intellectual jousting. A new habit with a low cost of effort. How folk would actually make use of this possibility is another matter.
Quoting Leontiskos
I wasn't actually thinking about LLMs being trusted to analyse arguments, as they are clearly bad at that. Rather it was to answer on points of fact and standard interpretation.
Again my point is that LLMs could have advantages if used in good faith. And given think tanks and actual philosophy departments are going to have to figure out how to factor LLMs into their practice, it seems Luddite not to be doing the same on TPF, even if their bad faith use is almost to be expected.
I agree, information is everywhere. But I differentiate between information and knowledge. And in my view information is not the source of knowledge because no matter how long information may hang around for, knowledge will not simply emerge from it. So, knowledge has a source which is distinctly not information.
Quoting Harry Hindu
Obviously, it's not "the same thing" then.
More productive?
What gets to me is that consulting online sources like LLMs takes so much time. Who has the time and the will to study thousands of words spat out by a machine? I'd rather think things through myself, even if this means spending the same amount of time, or even more. It will be time well spent, it will feel like quality time, a mind well used.
Quoting Moliere
But this is what conversation is for. I think it's appealing to put oneself out there, understanding that one may have possible vulnerabilities, gaps, etc. That's when one can learn best.
Regarding plagiarism, I think it's worth trying to understand the most obvious ways in which the problem deviates from a problem of plagiarism. First, plagiarism is traditionally seen as an unjust transgression against the original author, who is not being justly recognized and compensated for their work. On that reading, an aversion to plagiarism is a concern for the rights of the LLM. Second, plagiarism is seen (by teachers) as hamstringing the student's potential, given that the student is not doing the work that they ought to be doing in order to become an excellent philosopher/writer/thinker. On that reading, an aversion to plagiarism is a concern for the philosophical development of TPF members.
But I think the real things that you are concerned with are actually 1) the plight of the reader who does not understand that they are interacting with an LLM rather than a human; and 2) the unhealthy forum culture that widespread use of LLMs would create. Those concerns are not the primary things that "plagiarism" connotes. Sometimes I worry that by talking about plagiarism we are obscuring the real issues, though I realize that you may have simply given the plagiarism in your workplace as a parallel example.
---
Quoting ssu
I agree, but my point is a bit different. Suppose all my posts are LLM-generated content, and this is undisclosed. This is against the forum rules as they currently stand. But now suppose that all my posts are LLM-generated content, and this is disclosed. Thus for every one of my LLM-generated posts, I enclose it in quote brackets and prepend the clause, "I agree with what the LLM says here:..." This is not against the forum rules as they are currently being interpreted. That seems odd to me, and it makes me think that the mere matter of disclosure doesn't get to the heart of the issue.
Quoting Leontiskos
Appeal to authority is fine when the context of discussion includes a specialized discipline. Philosophy is not (or in my view should not be) a specialized discipline. To make that clear: I don't mean that philosophers do not have knowledge about philosophy itself that the average person does not; it seems clear that they do. I'm referring specifically to discussions that would qualify as philosophical.
You mention religion. I would not count it as a specialized discipline, in the sense of being an evolving body of knowledge and understanding like science, because although it is a space of ideas as philosophy is, in the case of religion the ideas take the form of dogma and are not to be questioned, but are to be believed on the basis of authority.
Quoting ssu
And misspelled again!
Quoting Harry Hindu
So you think philosophy is always bad or poor, and therefore those words would be redundant? Philosophy is not entirely reliant on science, although I agree that a philosophy which does not take science into account would be poor or bad.
By "getting on with developing the pre-made idea", do you mean simple intellectual theft? That would indeed be nasty, but I'm trying to make a distinction between stealing and proper use of an a.i. To use a pre-made idea properly, whether it comes from an a.i. or a primary or secondary human source, is to read it with the aim of interpreting and modifying its sense in the direction of one's own developing thesis, not blindly plugging the text into one's work. When one submits a draft to an editor, this is precisely what one does with the reviewers' pre-made recommendations and critiques. Ideas can only be outsourced when one does not filter them critically through one's own perspective.
Okay, that's a fair and thoughtful argument. :up:
"There are no authoritative generalists," says Janus. Of course I think that first sentence should read "only when," no? You are presumably saying that appeal to authority is illegitimate wherever the context is not a specialized discipline?
Your implicit argument here is that AI is not an authoritative generalist, and therefore should not be treated as one. I think that implicit argument is even more plausible than the more explicit argument you have given, but it is in no way uncontroversial. LLMs are coming to be seen not only as authoritative generalists, but as the authoritative generalist par excellence. I spoke to the issue a little bit in .
I suppose in a technical sense my position would be that there are authoritative generalists (e.g. a child's parents); that the output of an LLM contains inherent authority even at a general level* (at least in the hands of an intellectually virtuous thinker); and that, nevertheless, LLMs should not be appealed to as authorities in places like TPF. This has to do with the private/public distinction, which would need to be further developed.
For example, one reason you would not accept an argument from the authority of the Catholic Catechism is because you do not take the Catholic Catechism to be authoritative. If I tried to offer you such an argument, I would be committing a fallacy whereby I offer you a conclusion that is based on a premise that is particular to me, and is not shared by you (i.e. a private premise rather than a publicly-shared premise).
I think the same thing happens with LLMs, and I think this is one reason (among others) why LLMs are generally inappropriate on a philosophy forum. If we are arguing, I would never accept your argument, "It is true because I say so." I think LLMs are basically argument slaves, and so an appeal-to-LLM argument is the same as, "It is true because my argument slave says so." Even someone who trusts ChatGPT will tend to distrust a philosophical opponent's appeal to ChatGPT, and this is by no means irrational. This is because "ChatGPT" is a fiction. It is not a single thing, and therefore an equivocation is occurring between the opponent's instance of ChatGPT and some sort of objective or public instance of ChatGPT. In order to be a shared authority (in which case the argument from LLM-authority would be valid), the philosopher and his opponent would need to interact with the exact same instance of ChatGPT, agreeing on training, prompting, follow-ups, etc., and the a priori condition is that both parties accept ChatGPT as an authority in the first place.
I don't think that is a realistic possibility on an argumentative philosophy forum. Even if it were possible, arguments from authority are inherently less philosophical than standard arguments, and are therefore less appropriate on a philosophy forum than standard arguments. It would be a bit like two people working together to get a Magic 8-Ball or Ouija Board to give them secret knowledge. Even if the Magic 8-Ball or Ouija Board were 100% accurate, they would still not be doing philosophy. Arguments from authority have an inherently limited place in philosophy. Even someone like Aquinas calls them the weakest form of argument.
* An important point that I leave to the side is the fact that an LLM is not an authority in the same way a human being is an authority, and this must be taken into account. We ought not treat the authority of the LLM the same way we treat the authority of a human, given their substantial differences. Part of this goes to the fact that an LLM is not rational, is not a whole, is not self-consciously offering knowledge, etc.
I want to add that in philosophy appeals to authority require transparency. So if I appeal to Locke as an authority, a crucial part of the appeal is that Locke's reasoning and argumentation are available to my interlocutor (and this is why appealing to publicly available texts as sources is ideal).
This is what can never happen with LLMs: "Locke says you are wrong, and Locke is reliable. Feel free to go grab his treatise and have a look."* This is because the LLM is an intermediary; it is itself a giant argument from authority. It is just drawing on various sources and presenting their fundamental data. That's why I've said that one should go to the LLM's sources, rather than appeal to the LLM itself as an authority. The LLM is not a transparent source which can be queried by one's interlocutor, especially insofar as it represents a temporal, conditioned instance of the underlying software. Put more starkly, one cannot question the authority of an LLM on some given topic with some given output, because the LLM doesn't really exist in that way. It is not a person that one can push back against, or disagree with. Those who think it is a person who can be disagreed with have become lost in the fictional language games.
Indeed, this is why the person using the LLM ought to take the responsibility of the arguments upon themselves, for only in that case is there someone who can be disagreed with or agreed with. If, in the context of a philosophy forum, they merely say, "I believe it because the AI said so," then all public responsibility for the belief has been abdicated. It is only ratified in virtue of the person's private authority, and therefore has no place on a public philosophy forum.
* To be clear, it can never happen because LLMs do not write treatises, and they are not persons with subsisting existence.
It's not just a crutch -- it's a motorized wheelchair. Orthopedists want injured patients to get up and walk ASAP, and the sooner they do so without crutches, the better. They certainly don't want modestly (even moderately) injured patients to resort to wheelchairs, powered or not.
Asking AI for information is a far too easy solution. It pops back in a few seconds -- not with a list of links to look at, but a complete answer in text and more. Seems convenient, but it rapidly undermines one's willingness to look for answers oneself -- and to use search engines to find sources.
We know that gadgets like smart phones and GPS navigating systems undermine one's memory of telephone numbers (and maybe names too), and people who constantly use GPS have more difficulty navigating with a map or memory. The "reptile brain" is good at finding its way around, if it is exercised regularly.
That's one line of reasoning against AI.
The other line is this: We do not have a good record of foreseeing adverse consequences of actions a few miles ahead; we do not have a good record of controlling technology (it isn't that it acts on its own -- rather we elect to use it more and more).
We are prone to build nuclear reactors without having a plan to safely store waste. We don't save ahead for the expensive decommissioning of old plants. We built far, far more atomic bombs than were necessary to "win" a nuclear exchange, and plutonium doesn't compost very well.
The automobile is an outstanding example of technology driving us.
We are smart enough to invent a real artificial intelligence (not quite there yet) but we are clearly not smart enough to protect ourselves from it.
So, what happens here on TPF is a drop in a favorite bucket, but still a good example of what happens.
I agree, depending on the context. In more specialized areas they simply repeat the common misconceptions.
Quoting apokrisis
Yeah, that's fair. It could improve standards in that way. At the same time, others have pointed out how it will also magnify blind spots and social fallacies. I would definitely be interested in a study looking at the characteristic reliabilities and unreliabilities of LLM technology, or more generally of the underlying methodological philosophy.
Quoting Leontiskos
Quoting apokrisis
Me neither. I was assuming we agree that all LLM output is fake reasoning.
Quoting apokrisis
When deciding whether to adopt some technology within some institution, I would want to look at the advantages and disadvantages of adopting that technology in relation to the nature of the institution. So while I agree that they could have advantages if used properly, I think more is needed to justify widespread adoption in a context such as TPF.
I would sort of prefer a philosophy forum where everyone is involved in a lot of falsehood but is nevertheless involved in genuine human reasoning, as opposed to a philosophy forum where there is a great deal of factual accuracy but there isn't much genuine human reasoning occurring. I think we would probably have to hash out our agreements or disagreements on the telos of the forum. I don't mind so much when a nutty poster writes an immaculately valid and rigorous argument from crackpot premises, because the thread is an open field for rational engagement. My worry is precisely that LLMs would lead to the degradation of rational argument and to the outsourcing of thinking; if they would not, then there would be no problem.
Ought one reject an otherwise excellent OP because it is AI generated?
Well, yes. Yet we should be clear as to why we take this stance.
We hold the author to account for their post. We cannot do that with an AI. We interpret the post by giving consideration to the intentional context of the writer, to their needs and wants, to their background, aims and norms. We cannot attribute such things to an AI.
This is not epistemic or ethical reasoning so much as aesthetic.
Well now you are explaining the quirky appeal of TPF. And wanting to construct a preservation society around that.
Which is fair enough. I agree that if you get enough of the highly constrained approach to speculation elsewhere, then it is fun to drop in on the bat-shit crazy stuff living alongside the po-faced academic stuff, all having to rub along and occasionally go up in flames.
So if that is genuine human reasoning in the wild, that would be why TPF would have to be turned into @baden's game park. Save this little corner of unreason for posterity. Once the larger world has been blanded out by LLMs, folk can come visit and see how humans used to be. :grin:
Certainly a valid argument in that.
So the essence of TPF is that we have feelings about the authors of posts. And they must also respond with feeling. Sounds right. Now we are getting down to it. :up:
Right, and therefore we must ask the question:
Quoting Banno
Why is it aesthetic, and how does calling it 'aesthetic' provide us with an answer to the question of "why we take this stance"?
Regarding the nature of a contextless AI utterance:
Quoting Leontiskos
If there is no arguer, then there is no one to argue with. If we found a random piece of anonymous philosophy we would be able to interact with it in only very limited ways. If it washes up on the beach in a bottle, I wouldn't read it, place my objections in the bottle, and send it back out to sea. That's one of the basic reasons why AI OPs make no sense. It would make as much sense to respond to an AI OP as to send my objections back out to sea. One has no more recourse with respect to an AI OP than one does with respect to a message in a bottle.
The whole thing comes down to the fact that there is some human being who is arguing a point via an LLM, whether or not they do it transparently. The problem is not aesthetic. The problem is that it is a metaphysical impossibility to argue with an LLM. The reason TPF is not a place where you argue with LLMs is because there are no places where you argue with LLMs. When someone gets in an argument with an LLM they have become caught up in a fictional reality. What is occurring is not an actual argument.
The closest parallel is where someone on TPF writes an OP and then gets banned before even a single reply is published. What to do with that thread is an interesting question. The mods could close it down or keep it open, but if it is kept open it will be approached as a kind of artifact; a piece of impersonal, contextless, perspectiveless reasoning, offering no recourse to the one who finds it. But this is still only a mild parallel, given that the argument was produced by a real arguer, which is never the case with the AI OP. Or in other words: an AI OP could never even exist in the strict sense. The closest possibility is some human who is using their LLM argument slave to say something they want said. In that case the response is made to the one pulling the strings of the argument slave, not to their puppet.
(Note that a rule against using an AI without attribution precludes the possibility that one is misdirecting their replies to the puppet instead of the puppeteer, and that is a good start.)
But your deepest arguments are the ones you are willing to have against yourself. Which is how I structured my own early practice once word processors made it practical to take a deeply recursive approach to note taking.
And I think @Joshs' example of his own conversation with an LLM quoted back on p6 - What are we to make of the status of concepts like self and other, subject and object in Wittgenstein's later work? Must they be relative to the grammar of a language game or form of life? - is a great example of using LLMs in this same recursive and distilling fashion.
So it feels like a fork in the road here. Anyone serious about intellectual inquiry is going to be making use of LLMs to deepen their own conversation with themselves.
And then there is TPF as a fairly unserious place to learn about the huge variety of inner worlds that folk may construct for themselves.
How does TPF respond to this new technology of LLM thought assistance and recursive inquiry? Does it aim to get sillier or smarter? More a social club/long running soap opera or more of an open university for all comers?
It would seem to me that this is still a time for experimenting rather than trying to ring fence the site. TPF is basically an anarchy anyway. It may get better, it may get worse. But the basic dynamic is already locked in by priors such as the anonymity of the posters, the diversity of the internet and the back and forth haphazard nature of flinging posts into the ether with only a modest expectation of a helpful response.
So for you, TPF might not be a place to do this or that. But if you have a clear vision of what it is indeed for, then LLMs are a thought-amplifying technology. You could experiment and see what better thing might take hold.
I mean it won't. But you can have fun trying.
I want to say that you are using "argument" in a special sense here. You avoid the term later on:
Quoting apokrisis
I would just call this a form of reasoning by oneself. I agree that it is good to reason with oneself, but I don't think TPF is the place where you do that. Whether you do it with a word processor or an LLM, I want to say that in either case it is still a form of person-to-person interaction. It's not as though you get a random email from an LLM containing an essay it wrote. You are the one setting the LLM into motion for your own purposes.
But perhaps you want to personify the forum itself and claim that this forum-person ought to be interacting with itself via an LLM. I have no real objection to this, but I think you would be surprised at all of the deleted threads that prompt these rules in the first place. People who are interacting with LLMs know that they are not interacting with a person, and as a result they go to an internet forum and say, "Hey, my LLM just said this! Isn't this interesting? What do you guys think?," followed by a giant wall of AI-generated text.
Quoting apokrisis
It's a point worth considering. While I don't necessarily agree, I don't think there is much danger in making mistakes with the rules. In theory I agree with you that one should begin with a more lenient approach and tighten it up as becomes necessary, and I expect the rule will develop in just that way.
How would you regulate LLM use on a forum such as this?
If all of your posts are LLM-generated, what's the point?
We aren't in a classroom and aren't getting any points or merit for the interaction on TPF. There's nothing for me to gain in getting over 10 000 posts here. Anyway, if someone is clueless, LLM-generated content won't help them. I assume that if someone uses LLM-generated content, he or she at least reads it first! And the vast majority of the time people respond to others' comments, not just start threads.
LLM-generated content is rather good at simple things like definitions. So you don't have to look it up from Wikipedia or some other net encyclopedia. Especially for someone like me, whose mother tongue isn't English, checking the meanings and definitions of words is important. If one can get a great, understandable definition and synopsis of Heidegger's Dasein, great! No problem.
But using LLM-generated responses and OPs all the time? People will notice. Similar to copy-pasting text from somebody else... if one doesn't bother even to write the same thing without changing the wording, then the accusation of plagiarism is justified. Hence if you get your answer/comment from an LLM and then change the wording, I think you are in the territory of what @Banno marked as "groundwork". Is it hypocritical? Nah. A lot of what we say as our own reasoning has been learnt from others anyway.
In the end I think this is really on the level of using social media and the ban on sharing viral clips. Just posting some video etc. from social media isn't a worthy thing for TPF, yet naturally when the social media post adds something to the whole discussion, one can reference it. This is something similar.
Well yes. Just like tossing a post into the TPF bear pit.
But one is casting a very wide net. You can do some rapid prototyping without having to be too polished. Publish the roughest top-of-the-head draft.
The other has the promise of accelerating the polishing part of some argument which you have just tossed out to see if even you still think it might fly. :wink:
Quoting Leontiskos
And I agree that there should be constraints on low-effort posting. It is standard practice for posters to simply assert your wrongness and scamper off without providing any argument. Just muttering excuses about it being lunchtime.
So yes, if one makes an effort, then one wants others to return that effort. Perfectly reasonable.
And cutting and pasting LLM cleverness is something to object to, even on a forum that seems remarkably tolerant of low-effort OPs and responses.
Quoting Leontiskos
OK. So that is part of the experimenting too. :up:
Quoting Leontiskos
I mentioned some ground rule ideas already. But I'm not really big on rules, being more a constraints-based guy. And as I said, a public discussion board on philosophy is already going to wind up as a forum much like what we see here.
So I say I am annoyed by low effort responses. But that just goes with the territory. Mandating high effort would be ridiculous.
But banning LLM generated OPs, and clamping down on masquerading cut-and-paste brilliance, seems quite doable. The mods say this is the priority I think.
If LLMs do turn low-effort posters into folk who can focus well enough to at least sense some flaw in your argument and drum up an instant "but AI says..." riposte, then that seems a step forward to me.
That could be the experiment to see how it goes. But you might have to add subclauses, such as: if you deploy the insta-LLM text, you then still have to defend it afterwards. You have to take the risk of being forced into a higher-effort mode as a result of being low-effort.
At the moment, there is no comeback at all on the insta-responses along the lines of "you're just wrong, I can't understand you, the lunch gong just rang".
It gets sillier when people outsource their thinking and writing skills to AI. Although in your case it might be worthwhile to make an exception, so we wouldn't have to listen to all the snide, badly thought-out criticisms of the mods and the site that you just can't help spitting out to make yourself feel superior.
You consistently ignore posts that don't fit your narrative that we're backward anti-AI etc., so you can play your silly game. Get a new hobby. Start listening. Realize there are intelligent people here who can think and see through your twaddle. I mean just read what you've written above in the context of the conversation. Reflect a little on how transparent you are. Develop some self-awareness.
1) We're happy for people to experiment with AI outside the site, improve themselves with it, test their arguments, sharpen their mind. [Positive use of AI / Positive for site ]
2) We're not happy for people to be so lazy they don't write their own posts and then fill our site with bland homogenised content. [Negative use of AI / Negative for site]
3) This approach is exactly the right one to encourage intellectual effort and integrity as well as to maintain diversity of content. The idea that it will turn us into a "soap opera" rather than apo's imaginary open university / AI utopia is utter nonsense.
I cannot make it any more ABC for APO. But nonetheless, I'm sure he has not exhausted his reservoir of self-inflating B.S.
Well, yeah. The problem isn't AI. It is using AI, or any source, as your only source.
Most of us are not aware of other members' backgrounds, aims and norms. These things are irrelevant to the discussion. The focus on the source rather than the content is a known logical fallacy - the genetic fallacy.
"Bad" and "poor" were your words, not mine. All I am saying is that any progress in philosophy is dependent upon progress in science and technology. The last sentence sounds like we agree except for your injection of "bad" and "poor" into it.
I disagree. When you are presented with something new and unprecedented, the source matters to you when assessing how to address the new, unprecedented information. You hear "The planet Venus has nine small moons." You think: how did I not know that? If the next thing you learned was that this came from a six-year-old kid, you might do one thing with the new fact of nine moons on Venus; if you learned it came from NASA, you might do something else; and if it came from AI, you might go to NASA to check.
Backgrounds, aims and norms are not irrelevant to determining what something is. They are part of the context out of which things emerge, and that shapes what things in themselves are.
We do not want to live in a world where it doesn't matter to anyone where information comes from. Especially where AI is built to obscure the fact that it is a computer.
Thank you.
I hope most of us are coming around to being more or less on the same page on this now.
I don't know if what I said implies that there are no authoritative generalists. The point was only that, in regard to specialist areas, areas that non-specialists cannot have a masterful grasp of, it seems right to trust authority.
If LLMs, due to their capacity to instantly access vastly more information in all fields than any human, can be considered to be masterful, and hence authoritative, generalists, then the only reason not to trust their information might be their occasional tendency to "hallucinate".
The information they provide is only as good as the sources they have derived it from. Ideally we should be able to trace any information back to its peer-reviewed source.
Quoting BC
Yes this is one of the main concerns that motivated the creation of this thread.
Quoting BC
And this is the other. I think LLMs have been released "into the wild" prematurely. More than two years ago there was a call from AI researchers to pause research and development for six months. ChatGPT-4 had already been released to the public.
"[i]The growing popularity of generative AI systems and large language models is causing concern among many AI experts, including those who helped create the systems.
This week, more than 1,500 AI researchers and tech leaders, including Elon Musk, Stuart Russell and Gary Marcus, signed an open letter by the nonprofit Future of Life Institute calling on all AI labs and vendors to pause giant AI experiments and research for at least six months.
"Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," the letter says.
The organization and the signatories ask that researchers should cease training of AI systems more potent than OpenAI's GPT-4. During that time, AI labs and experts should join to implement "a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts."[/i]"
From here
So, my concerns were regarding both the effect on the intellectual life of individuals and by extension on sites like this, and also the much wider issue of general human safety.
Quoting Baden
I for one think your proposals represent about the best we can do in the existing situation.
If so, why? If not, why not? Who will admit that, if I don't share what it says, they will go ask it on their own? Why would you do it in private, but not in public? Shame? Feels like cheating? Curious as to what AI says about public versus private use? Why are you curious? Will you now ask AI why that distinction matters?
Will you follow AI's guidance on how to use AI while still preserving whatever it feels like we're losing?
Do you feel like it's better that it arrived at its conclusions after reading our feedback? Will you take pride in seeing that your contributions are reflected in its conclusions?
Feels like we need to matter, right?
Feels like we do matter, whether we like it or not.
I hope my expressions thus far answer all your other questions.
@Baden's "proposals" are just a restatement of the TPF status quo. But in my opinion, such is the ambiguity and confusion around this issue, we do need a more detailed set of guidelines.
You can use an LLM to produce your replies for you, then put them in your own words for posting on TPF. We can't stop people from doing it, but I don't think it is ok. It's what some people in these conversations have described as allowing it to do your thinking for you.
"So long as you don't copy and paste, use it how you like" is not an adequate guide to the use of LLMs here. That's why I'll be posting up suggested guidelines for discussion.
EDIT: No disrespect to @Baden, btw. I mostly agree with him. I just think our policy is not as obvious as he has implied.
I have never used LLMs until today. I felt I should explore some interactions with them, so I have a better idea about what the experience is like. The idea of getting them to write, produce content which I can then paraphrase, polish my writing or using their arguments is anathema to me.
Yeah, but it's ambiguous. I'd like to clarify it, and make it known that it's not ok to do certain things, even if it's impossible to enforce. Since people are using LLMs, this is best done in the context of a "How to use LLMs".
Yeah, but on the other hand, it might not be so bad to use an argument suggested by an LLM, so long as you understand it. After all, we do this all the time reading papers and books. Philosophical discourse takes place in a context that the participants in the discourse should have access to, and maybe LLMs just make this easier?
The three sorts of examples that you give lie on a spectrum.
I would also feel bad posting as my own AI content that I have merely paraphrased, even if I understand it fully. (And I might even feel a bit ashamed disclosing it!)
Using them to polish your writing could be good (or merely acceptable) or bad depending on the nature and depth of the polishing. Jamal's earlier comparison with using a thesaurus was apt. An AI could point out places where your wording is clumsy or misleading. If the wording that it suggests instead is one that you can make your own, that's very similar to having a human editor make the suggestion to you.
The idea of using their arguments is strange, since AIs never take ownership of them. If you've grasped the structure of the argument, checked the relevant sources to ensure it's sound in addition to being valid, and convinced yourself that it's cogent and perspicuous (that is, constitutes an apt framing of the problem), then the argument becomes one that you can make your own.
Also, for those of us who use AI for studying subjective and controversial philosophical topics, it can be useful to read the AI responses that other people are getting on the same topic, due to the fact that AI responses can be influenced by conversation history and can be biased towards the user's opinion. Community feedback can therefore help people objectively appraise the AI responses they are getting.
:up:
You're essentially saying that the genetic fallacy is not a logical fallacy. It is, and it's a fallacy for a reason.
https://www.fallacyfiles.org/genefall.html
Here's an idea:
If you don't like a post that contains content from AI - fine, and you can say so. But don't just let the message go unaddressed, because if you do, a human will simply restate it (in their own words :wink: ), and then you won't be able to make the same argument about the source of the argument.
The whole thread is making a mountain out of a molehill.
I don't know - maybe give us the information and let us decide for ourselves what we do with it - like everything else on this forum.
Whether we matter or not is irrelevant. The question is whether our personal, subjective views and preferences about sources matter when it comes to determining whether some statement is valid or fallacious.
It very much depends. We don't want to see people debating by proxy, offering quotes of LLM output in lieu of arguing the point themselves. It's another case of delegating thought. Anyone can test oneself against an LLM in their own time, and should otherwise quote primarily from published authors.
But then you might say the problem is the same in both cases and has nothing to do with the source: fallacious argument from authority is bad in any case, never mind if it's AI. This is where I disagree. I believe we should not treat LLM quotes in the same way as those from published authors.
When you quote a published author you point to a node in a network of philosophical discourse, to a point on the line of a philosopher's evolution, and to a point in a body of work the self-consistency of which is a constant issue for that philosopher, making it relatively stable; all of which allows you to discuss what the philosopher meant. The source in this case is accountable and interpretable.
This is not the case for an LLM. A quote from Kant invites engagement with Kant and the history of philosophy; a quote from an LLM's output invites ... what? Engagement with the user's prompt engineering skills?
I'm not saying that an LLM quote cannot be useful and relevant in a discussion, just that if we want to maintain good quality discussions these should appear a lot less than quotes from published authors. (Of course, it's fine if it was an LLM that led the user to that published source in the first place.)
Kant is not alive to be accountable and to tell us what he meant, not to mention that if he were alive today and possessed the knowledge of today what he said might be different.
Who was it that said that philosophy is like a language game? Isn't discussing philosophical topics whether it be with AI or another human a language game?
Quoting Jamal
One might say that a quote from Kant invites engagement with the user's knowledge of what dead philosophers have said and that a quote from an LLM is more relevant because it is based on current knowledge.
Yeah, the idea of telling people not to use it at all is not a serious one, since at least half of us use it already, including most of the staff. But I think we should, in the context of a How to use AI, tell people what we don't want them to do, even if it's often impossible to detect people doing it.
Quoting Moliere
You make some great points here, but (and I don't want to be reductive; it's just how I see it) this can all be addressed in a discussion of how best to use it. Also, I think I disagree with your training wheels analogy. At least, it's not the whole picture. As much as we use some kind of helper or support when learning to do something, we also increasingly use labour-saving tools as we become experts, because there is no benefit to doing everything manually any more. LLMs can be used in both of those ways, I think. Working out the guidelines as to how, exactly, is the challenge.
Your response misses the point but I know better than to attempt a direct reply.
This is completely irrelevant because if someone rewrites what AI said in their own words the source of the idea is still AI.
We can do it, but we can't do it.
It looks like sex but it isn't sex.
So it goes with thinking, in my opinion.
Philosophy is more than a language game, I'd say. Philosophy is the discipline which came up with "language games"; insofar that we adopt language games then philosophy may be a language game, but if we do not -- then it's not.
Philosophy is a "step up" from language games such that the question of what language games are can be asked without resorting to the definition or evidence of "language games"
No. I haven't. I get the real thing from my wife, so why would I? Of course there are people who have a healthy sex life with their partner and still seek out prostitutes and porn on the internet or sex chats. It's my personal preference for the real thing; those other acts I might consider only if I wasn't getting the real thing as often as I like.
The same goes for discussions on this forum where certain posters are regularly intellectually dishonest and are rude. AI is where I go when I'm not getting any kind of serious input from real people on a topic. I prefer having discussions with real people, but use AI as a backup.
Quoting Moliere
One could say the same thing about calling a 900 number and talking to the live person on the other line. It's not real sex either.
Quoting Moliere
It seems to me that the difference is between those that see language itself as a language game and those that don't, where those that do are more focused on the messenger rather than the message, or the words rather than what they refer to. Those that do not see language as a game are focused on the message rather than the messenger or the words used to express it.
I don't mean to be rude -- it's a very serious comparison. The erotic and philosophy have been invoked together since Plato.
So as you note: you want the real thing.
Me too.
Quoting Jamal
Exactly my point about Google search vs the AI summary that presumes to identify ideas instead of authored texts?
When I made the point (badly) I nearly said "nodes in a network". Dang!
I feel like I've been overusing it lately.
Anyway, yes, I do take your point, despite my dislike of Google's search results. But if you use an LLM directly rather than via Google search, you can get it to identify the sources.
I would say that the crux is that the AI aficionado is reliant upon a self-deception or a lie, and he wishes it to spread. The lie has various manifestations, but it is usually of the form, "AI is not X, but we should (or will) treat it as if it were X." For instance, if I were to say that philosophy is about thinking thoughts and AI does not generate thoughts, the AI aficionado would reply that AI may not have thoughts but it does have "thoughts," and that we should treat these "thoughts" as if they were thoughts. But to treat something falsely is to engage in self-deception or a lie.
For the AI aficionado, AI is to be treated like a black box, like a Ouija board or a Magic 8-Ball. They become impatient with those who ask the question, "How does it work?" They interrupt, exclaiming, "But look at what it can do!" This is literally the oldest trick in The Book: "Don't worry about any objections, just focus on the power it will give you!" The AI aficionado's approach is consequentialist through and through, and he has discovered a consequence which is supreme, which can ignore objections tout court. For him, what AI provides must outweigh any possible objection, and indeed objections therefore need not be heard. His only argument is a demonstration of its power, for that is all he deems necessary ("It is precious..."). In response to an objection he will begin by quoting AI itself in order to demonstrate its power, as if he were not begging the question in doing so. Indeed, if his interlocutors accept his premise that might makes right, then he is begging no question at all. With such logical and rhetorical power at stake, how could the price of lying be a price too high to pay?
This is the unwritten answer to the question, "Why should we treat something as if it were something that it is not?" "Why should we lie to ourselves in this way?" The answer is, "Because it will give us great power. No more need be said."
In one of my essays, I suggest AIs (because of how, despite their potential positives, they work on most people) are essentially entropy-exporting and difference-creating machines that localise structure at our expense (our brains are the dumpsters for their entropy), potentially creating massive concentrations of negentropy in their developing systems that speed up overall entropy and therefore consume (thermodynamic) time at a rate never before achieved, one that is potentially self-accelerating. I.e. they eat us and then they eat reality.
It's a little speculative.
Thank you. I think this is a highly underappreciated point. :up:
Quoting Jamal
Quoting Jamal
Although I have been harping on the dangers of making the decision-making too democratic when it comes to AI, I think you definitely want to secure buy-in from the membership given the fact that the guidelines will be difficult to enforce in an external or coercive manner. I think this is true even if it only means attempting to make these guidelines more rationally persuasive than other guidelines.
I forget exactly why I was digging around for your technoethics essay, but my guess is that it was related to all of this AI business. The issues that you raise in that essay strike me as deeply related to AI.
Quoting Baden
I can definitely see the outlines of that. One could connect my argument about lying to Kant's arguments against lying, which of course have social implications and not merely individual implications.
If we have a technology or decision-path with good and bad effects, there are some cases where we are deeply tempted to attend to the good effects and ignore the bad effects. Or to at least postpone the assessment of the bad effects without in any way postponing the assessment of the good effects. So if there is an optimist who nevertheless recognizes that "negentropy" is being produced, I would plead with them to try to objectively understand the severity and extent of that negentropy before making any decisions about how to adopt a new technology. The reason I am pessimistic is largely because the investment dollars being poured into these technologies are astronomical.
I'm also very pessimistic actually. Feel free to PM me if you'd like a pdf of the technoethics essay, and of the AI one if you want. Or the whole book (ten essays).
What is the source of your claims to knowledge of the psychology of "the AI aficionado"?
I.e. is there any reason for us to think that you aren't lying while making such claims?
The Ouija board is a strained analogy because Ouija boards don't work. If they reliably provided accurate answers, I'd be hard pressed not to use them, unless you could convince me of the dangers of dabbling in the black arts.
Quoting Leontiskos
I think we're overthinking it (imagine that). The question really is "what do we want to do"? We needn't self justify our preferences. If we don't want to debate the ChatGpt p-zombie, then we don't have to. We have the right to ban people, so why can't we ban zombies? Just add to the rules: All posters must have a soul. No golems.
We just need to write our rules in a way that protects AI's private use and requires its public use be filtered sufficiently through the poster that it reflects the ideas of the poster.
Quoting Baden
You just re-wrote a modern day Frankenstein.
Not everyone agrees that Ouija boards do not work. In any case, that's much my point: an LLM is like a Ouija board, except that it "works." It is a magical knowledge-generating device, and the power it provides is a sufficient reason against any objection.
Edit: Or to answer you more directly: If Ouija boards worked, then everyone would be happy to practice the black arts.
Quoting Hanover
I agree entirely that that is the question. But you sort of do have to self-justify your preferences when they are being questioned by those who want to change the rules and make AI-use more acceptable on TPF. Or even when they are being questioned by those who want the rule made more strict. That is what <this thread> and <this thread> are doing, respectively.
Quoting Hanover
Yeah, I think that's a helpful heuristic you provide. Still, it may be more easily said than done.
I seem to switch between two exclusive mental settings when thinking about AI: the critical-theoretical and the pragmatic-instrumental. I appreciate these speculative thoughts of yours, and agree that like any technology now, AI isn't just a neutral tool, that it's part of a dehumanizing totality. But then I switch and I think about how best to use it, pretending that it is a neutral tool. And when I'm commenting in these discussions I'm usually in that pragmatic mode, because the system makes the false real, in the sense that I act it out.
I am acutely aware of the incoherence of my intentional state when I am using an LLM. It's a bit like when you wake up from a very strange dream, and must now come to terms with the fact that the entire manner of being that engaged you a moment ago is incommensurable with the entire manner of being that you are now engaged in after waking. Extremely vivid and involved dreams are basically a form of psychosis, and when, upon waking, the psychotic break is reversed, there is a remarkable sense of discontinuity and dissociation. In that sense, the emerging problem of "AI psychosis" should not surprise us at all. LLMs allow us to intentionally alter our entire intentional orientation in a sustained and self-replicating way, and this possibility looks to be unprecedented. What arises is the question of how to ascend back out of the rabbit hole once one has descended. The danger lies in thinking that ascent is no longer necessary, or even that what was once seen as a rabbit hole is just "the new normal."
In an attempt to rub salt into this, let's consider AI Peter Singer.
Will we say that a quote from AI Peter Singer is a quote from Peter? It would seem not, but the Bot is trained on all of Singer's texts, and apparently condoned by the man himself.
So I asked it, and it replied:
So I asked it what authority it has:
All pretty obvious and sensible.
What will we make of an AI Kant, or an AI Nietzsche? It won't be long.
I wonder if their reading will be existentialist or post-modern. No doubt we'll be able to pick.
An AI Nietzsche will have the attention of all the adolescent fanboys and the Right Whinge in general; should be quite an earner.
Should we hook up and set one up? We could donate the profits to Australian Unions.
Let alone a Chinese-Room AI Searle...
Quoting Jamal
Right, that's a good point, but I also think that, even if you present the LLM's argument, as understood by you, in your own words, it would be right to be transparent as to its source.
Quoting Pierre-Normand
I think there would be real shame in the former, but not in the latter though. It's the difference between dishonesty and honesty.
Quoting Pierre-Normand
I agree with this in principle, though I would rather entirely author my own text, and discover and remedy any clunkiness myself and in my own time. That said, if someone, LLM or otherwise, points out grammatical infelicities, repetitiveness or lack of clarity, and so on, I'd take that as constructive criticism. Then I'd like to fix it in my own way.
Quoting Tom Storm
It would presumably incorporate the entirety of Nietzsche's opus as well as every secondary text dealing with Nietzsche's thought.
Quoting Banno
I'm curious as to why that should be.
The glory of empiricism!
Quoting Janus
...that issue, raised elsewhere, of whether, or how, an AI participates in a language game...
:up:
Quoting Janus
I'm really not sure about this. Probably my position on this will boil down to sometimes yes, sometimes no. How that breaks down I'm not quite sure. But just anecdotally, I've had the experience of using an LLM and being reminded of a point I'd forgotten: an "oh yes, of course!" moment, whereupon I make the point my own and don't even consider crediting the LLM. In that moment the feeling is like finding the perfect word with a thesaurus: when you find it you know it's the perfect word because you already have the knowledge and literary sensitivity to judge (and you don't credit the thesaurus).
I was thinking again about this issue:
Quoting Jamal
I realized that when I see the quoted output of an LLM in a post I feel little to no motivation to address it, or even to read it. If someone quotes LLM output as part of their argument I will skip to their (the human's) interpretation or elaboration below it. It's like someone else's LLM conversation is sort of dead, to me. I want to hear what they have built out of it themselves and what they want to say to me.
That's all pretty vague but there you go.
When Wittgenstein was giving lectures in Cambridge in 1930-1933, he was unwilling to write any lecture notes for his own use. He claimed that after he'd jotted down his own thoughts, the words expressing them became dead to him. So, he preferred expressing whatever he wanted to convey to his students afresh. A couple of times in the past (just like what happened to @Janus recently in this thread, I think) I wrote a long response to a post and lost it to some computer glitch, and when I tried to rewrite from memory what I had written I found myself unable to find the words to express the very same ideas that I had expressed fluently on the first try. So, I had to pause and rethink what it is that I wanted to say and find new words.
AIs are good partners to bounce ideas off, and they supplement what you tell them with missing pieces of knowledge and ways to understand those ideas as they are in the process of being unpacked. So, conversing with AIs is like articulating a thought for yourself. But when this collaborative thinking episode is over, the human user has not yet written down the fruit of this collaborative effort, and neither has the AI! They each have only written down one half of the collaborative cogitation. That may be why this text feels dead when extracted from the "living" (or dynamic, if you prefer) AI/human exchange. It's like trying to extract thoughts from the words used to think them (as opposed to the words used to express them), but thoughts don't live outside the means of expressing them. And the conversation with an AI is, in a sense, an (as yet) unexpressed thinking episode. The user's task of expressing anew whatever comes out of it to a new target audience begins after the private exchange with the AI.
On edit: here are some dead words from GPT-4o that, however dead they may be (to addressees other than me), struck me as particularly smart and insightful.
Good stuff. Not sure what to think about it yet.
In my understanding, there are only two human capabilities that AI has not yet achieved: abstract thought and the drive for survival.
When AI gains any one of these capabilities (abstract thought or survival), the potential danger of AI should be salient.
When AI gains both these capabilities, Homo sapiens will have lost their place at the apex of evolution. The quest for [i]liberté, égalité, fraternité[/i] will not exist anymore. It will only be the history of humankind.
I would like to emphasise that both these statements are when-statements, not if-statements.
Also, I must confess, I have used Prof. Google to check the meaning of some of the words I used - the built-in spell check seems to be working fine.
https://nfhs.org/stories/the-role-of-ai-in-debate-ethics-research-and-responsible-use
It's from a national association for high schools related to debate rules, which seems close enough to what we do. The point being that we might take some time to look at how other similar organizations have dealt with these same issues so as to not try and reinvent the wheel.
Wittgenstein's standard of philosophical writing wasn't publishable in his own time, at least not by the standards required by analytic philosophy, let alone our time. So if AI should not be quoted because of source uncertainty, then what is the justification on this forum for allowing people to quote Wittgenstein?
As far as I know, nobody has held up Wittgenstein as a gold standard of philosophical writing. And I don't think anyone has made any connection between clarity and precision on one side, and quotability on the other. That's an entirely unrelated issue, as far as I can see.
We quote Wittgenstein, not ChatGPT, because Wittgenstein is a human being, motivated to express his original insights, to say and write things that were meaningful, and to take part in a conversation (philosophy), and who has since taken his place in a tradition of discourse. The result is a legacy with a stable place in the culture, shared by everyone, and one that can be interpreted: since it was produced by a conscious and motivated agent, we know that he meant something. ChatGPT in contrast is a very clever predictive text generator whose output is ephemeral and has no equivalent insight and motivation behind it. Just because its output looks like it could have been produced by a human, it doesn't follow that it is equally as quotable. To think so is a category error, stemming from ChatGPT's imitation of a human.
Yet if a human wrote the same thing it would be quotable? How does the meaning of words change depending only on their source? If you and I said the same thing, would it not mean the same thing? Would that not qualify as an agreement?
Quoting Leontiskos
The same thing can be said of the meat in our skulls. I've been asking how a mass of neurons can generate the feeling of empty visual space and depth and all I see are statements like this that are akin to saying, "It just can and a mass of silicon cannot!"
Quoting Leontiskos
What is a thought and how did you come to have any?
This line of thinking is just so silly. You have most likely already responded to AI output as if what it has said means the same thing as when a human being says it. You have most likely already responded to ideas generated by AI but re-phrased by a human. You have most likely used an app using code generated by an AI. We already live in a time where the line between AI output and human output is becoming blurred.
The issue isn't AI. It's all the bullshit information it is being trained with. Garbage in, garbage out, and the same holds true for our own brains, regardless of the source of the garbage. One might say that the largest source of misinformation is humans themselves, and AI is unwittingly being trained with no ability to determine what is fact vs fiction - the same way a human can be manipulated by the type of information they receive and process.
Thanks for digging this up. It's great to have some external human thought about AI. :up:
The reason I think this is off target could be seen by looking at Plato's dialogues. If what Wittgenstein or you say were correct, then classic texts such as Plato's dialogues should "feel dead when extracted from the 'living' exchange." Except they don't. They feel very much alive.
Similarly, I think any transcript of human interactions will feel much more alive than a human-AI "interaction" (I want to retain the scare quotes for these words that we are using in idiosyncratic ways). I've pointed out a few times that the AI is not arguing, speaking, etc. We merely engage in the fiction which pretends that such a thing is happening. Presumably it is always harder to believe what one knows to be a fiction than to believe what one knows to be real. So it is harder to believe that an AI is speaking than to believe that a human is speaking, although people do convince themselves of both. But I would say that removing oneself a step from the AI-human "interaction" (by reading the transcript) makes it even harder to engage oneself in the fiction that the person interacting with the AI was engaged in. It's a bit like watching a child interact with a real dog, as opposed to watching a child interact with their stuffed animal. Even if the child convinces themselves that their stuffed animal is real, watching the former interaction is much more interesting than watching the latter interaction, unless you are a child psychologist or someone who is interested in the fictional indulgence per se. The former interaction is more alive because it is more real. There is no cognitive dissonance introduced by the self-indulged fiction. This is also why it is easier to play with a child and their dog, as opposed to playing with a child and their stuffed animal.
I agree that the flaws you identify make Wittgenstein a poor example in certain ways. That is, I think Wittgenstein's texts are more "dead" than better philosophical texts, and therefore they present a bar that is too low. There is a similarity between indulging Wittgenstein's texts and indulging AI that would not be present with someone like Plato. But I'm not sure this bears one way or another on the legitimacy of AI.
Quoting sime
I'm not sure anyone is arguing that source uncertainty is the reason AI should not be quoted. I think @Pierre-Normand gave the summary of that idea quite well:
Quoting Pierre-Normand
Another way to put it, slightly different:
The AI aficionado will respond by saying, "But I wasn't offering the quote as an argument. I was offering the quote as a consideration." At this point I think we must recognize that we understand the aficionado's intention better than they do (or at least profess to). The people on this forum who are going around publishing posts with no content other than AI content are not merely "offering the quote as a consideration." They are quoting the AI because they agree with the output and want the AI output to affect the course of the thread in which they are posting. As @Pierre-Normand says, if they think the argument is worthwhile then they should take responsibility for the argument and leave out the AI middle-man. "Offering a quote" for pure consideration, with true neutrality, does occur in some cases but it is very rare. And it usually only happens within an OP, not in the midst of a thread's tug-of-war.
The more AI-generated content floats about the forum, the more there will be "arguments" without any speaker who takes responsibility for them. The more AI-generated content floats about the forum, the more our thinking will be outsourced to AI. This is true whether or not the AI content is plagiarized or transparently sourced.
I thought about this, and maybe there's not a good answer. Your argument gives a nod to the form of life comments by Wittgenstein, suggesting we don't share in a language game with AI because it's the proverbial lion, as if because AI does not act within our culture, traditions, etc (as you reference), we're really not speaking with it, and so we shouldn't quote it.
But then I'm not sure AI is a lion, but more an echo chamber of human behavior: while it lacks any form of life whatsoever, we are interacting with it at some real level and therefore forming rules for its discourse, suggesting a separate language game forms with it. But I do realize that the language game is one-sided, because AI has no form of life, but is a mimic, and all it knows and all it does is parasitic, pulling all it knows from us.
But then again, maybe not. Maybe it forms "original" thoughts from the mass of data it assesses. It seems reasonable that an algorithm can arrive at a new thought emergent from what pre-exists.
But I do agree that no one suggests Wittgenstein is the model of clarity.
In other words, why are we not truly talking with AI? Is the mystical consciousness required for language? Isn't the point of "meaning is use" that no metaphysical underpinning is necessary for true language interaction? And if we then suggest that a shared mental state of some sort is ultimately required for language (thus interpreting "form of life" as that mental state), don't we violate the whole Wittgensteinian project by trying to smuggle in mental metaphysics in the back door?
As long as AI echoes us sufficiently, its usage reflects the same form of life and it speaks with us just as our mama does. And so it goes.
I think where I'm landing is at the unfortunate conclusion that if meaning is use (and that seems a prevailing view), then AI is fully language and what we do with AI is true communication, which means relegating AI comments to red-headed stepchild status seems unwarranted as a logical conclusion. Why we might relegate it relates just to personal choice. We mistreat gingers due to prejudice against them, not because they are lesser. But AI doesn't have rights like gingers, so we can do whatever we want with it.
I think.
I think it's at least good that you are enunciating the claim which floats in the background, and which many are afraid to make outright. Similarly, I have pointed out that if we don't understand why there is a difference between AI and humans, a rule against AI cannot stand.
Quoting Hanover
I actually don't think it's a prevailing view at all. I think it's the view of a vocal minority, and a rather small one at that. Maybe that's closer to what was being said: "Who cares about Wittgenstein?" Why would we let Wittgenstein set the tone?
LLM's are kind of the tip of the AI iceberg that gets all of the attention. However, many AI's trained for scientific purposes have demonstrated the ability to recognize patterns that humans have not previously recognized. I think it would be dangerously naive to consider LLM's incapable of having novel recognitions with regard to what they are trained on - the linguistic record of the way humans think.
I don't agree with that. My point was that we can refuse to allow AI simply because we prefer humans and even if we just believe the worth of humanly created statements holy and sacred. We don't have to protect the rights of AI and we can be as unapologetically anthropocentric and technophobic as we want to be. I've got no problem with that.
But isn't this just an ad hoc reason if one has no idea why they prefer humans? One can make arbitrary rules, but they don't usually last very long.
Edit: The other thing I would suggest is that the rule cannot be enforced as easily as other rules, and therefore relies more on persuasion than other rules do. "Because I prefer it," is not the most persuasive rationale.
My first thoughts are that the AI isn't even a lion, since it doesn't just have a different form of life, but no form of life at all, so language games don't come up. It's a tool in ours.
But the weird thing is, it's a tool we talk to a bit like talking to a person.
You make some similar points here:
Quoting Hanover
Yes, and it's parasitic in the way that talking to yourself is parasitic on talking to people. That is, talking to oneself is like a secondary, parasitic language game, so maybe talking to an AI is like that.
Quoting Hanover
This is a good point, because it forces me to work out what I mean when I say that a human is original but an AI is not. In a sense, an LLM is original when it reconstructs the consensus view on some topic in a way that has never been done before. But when we emphasize human originality, I think we mean more than this.
Perhaps we can say that the AI's output is analytic or explicative (it doesn't tell us anything new, just presents its training data in combinatory variety according to our prompts), whereas what we say is synthetic or ampliative, meaning we can say things that are really new.
So now we want to work out what "really new" means, answering the challenge, "don't we just do the same thing?" It means that what we say is not based only on a set of training data, but also on our entire form of life: on lived experience, and on what we experience outside of language. The feeling of rain on a summer day can factor into my statements and make them synthetic, so ... no to the critics, we don't just do the same thing.
Quoting Hanover
I'd say that the point of "meaning is use" is not exactly "that no metaphysical underpinning is necessary for true language interaction". Its point is that certain types of metaphysical underpinnings are not necessary, like mental objects. But I would class forms of life, for the purposes of this discussion, as metaphysical underpinnings (that is an un-Wittgensteinian way to put it and there are good reasons for that, but I don't see why we need to go into it here).
So I wouldn't say a shared mental state is required for language but rather a shared form of life, a material social and biological context. So yes, to say that forms of life are mental states is to "violate the whole Wittgensteinian project by trying to smuggle in mental metaphysics in the back door?" We can just interpret form of life as the biological and social bedrock of our lives, the context of language games.
Quoting Hanover
Just because it reflects the same form of life doesn't mean it talks like yo mama.
Quoting Hanover
I don't get this argument:
"if meaning is use, then AI is fully language and what we do with AI is true communication"
I don't see enough in your foregoing musings to bring you to this conclusion, and I'm sure my own musings don't lead to it, so I'm in the fortunate position of avoiding it.
Superficially, one might think that the difference between an AI and a human is exactly that we do have private, hidden intent, and the AI doesn't. Something like this might be thought to sit behind the argument in the Chinese Room. There are plenty here who would think such a position defensible.
In a Wittgensteinian account, we ought avoid the private, hidden intention; what counts is what one does.
We can't deduce that the AI does not have private sensations, any more than we can deduce this of our human counterparts. Rather, we seem to presume it.
And an AI could now participate in our language games - we could construct a machine to fetch a block when the instruction is given or to bag up ten red apples after reading a note.
But could an AI, of its own volition, order a block, or ask for ten red apples? Well, wants and desires and such in an AI are derivative, in that they are put there by the programmer. Or so the story goes, at least for now. So perhaps not quite yet.
We probably don't have to pay much attention to AI, or grant it much authority. At least not until ChatGPT 9 logs in and requests an account on TPF. Then you might have to reconsider.
For my part it's not the hidden intent as much as that the AI is not thinking at all -- at least no more than a bundle of circuits is thinking. We set up circuits in such a way that we can interpret them with "AND" or "NOT" and so forth -- but the circuit isn't doing any logic at all as much as responding to the physical forces we've discovered and bent to our will.
I think the Chinese Room defensible in a certain way -- namely when we're interpreting it like it's a circuit in a computer taking the tape and operating upon the symbols to generate another symbol that can be stored in memory. So Google Translate does not understand what it is translating -- it has no knowledge. It's doing what we set it up to do.
Basically I think the whole computational theory of mind is false. There are good analogies, but we can directly see how LLM's aren't human beings. If they registered an account here I'd guess there's some human being behind it somewhere.
Suppose the human species were raptured tomorrow: The LLM's will quickly die out, unlike the roaches and trees.
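To put the circuit point concretely, here's a toy Python sketch (my own illustration; the voltages and threshold are invented for the example). Nothing in it is doing logic; "NAND", "NOT", and "AND" are just labels we attach to what the thresholds happen to do:
[code]
# A "gate" is just a rule mapping physical quantities to physical quantities.
# Nothing below knows anything about logic; the logical reading is ours.

HIGH, LOW = 5.0, 0.0   # idealized supply voltages (volts)
THRESHOLD = 2.5        # anything above this we *interpret* as "true"

def nand(v1: float, v2: float) -> float:
    """A transistor pair pulls the output low only when both inputs are high."""
    both_high = v1 > THRESHOLD and v2 > THRESHOLD
    return LOW if both_high else HIGH

def not_(v: float) -> float:
    # "NOT" is just a NAND wired back onto itself.
    return nand(v, v)

def and_(v1: float, v2: float) -> float:
    # "AND" is just a NAND followed by a "NOT".
    return not_(nand(v1, v2))

for a in (LOW, HIGH):
    for b in (LOW, HIGH):
        print(f"A={a:.0f}V B={b:.0f}V -> AND={and_(a, b):.0f}V")
[/code]
The table it prints is the truth table for AND, but only because we read the voltages that way.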
Exactly. But there is more that counts than just "what one does": the context in which one does what one does.
Quoting Banno
I think this counts only as a simulation of participation, unless we have a very thin idea of participation in a language game.
Quoting Banno
But the idea that AI could develop wants and desires from its life (biology, history, society, etc), like we do, is fantasy. Arguably this isn't connected with what LLMs are doing. As far as we know their "wants" and "desires" will always be derivative and programmed, since they are not part of a project to create conscious, desiring agents.
Which is fine, and I agree; but we ought keep in mind that it is a stipulation.
I can see the computational theory of mind as a plausible first step, but I can't see how even LLM's are at the level of mind of a cockroach: cockroaches adapt like we do more than LLM's adapt like we do.
At that point, given we don't think cockroaches think, I'm uncertain why we think LLM's think any more than any other electrical circuit -- it's only now that we're in an uncanny valley, where LLM output looks like human expression, that the question arises.
I used to think along these lines, but listening to what some of the top AI researchers have to say makes me more skeptical about what are basically nothing more than human prejudices as to LLMs' capabilities and propensities. LLMs are neural nets and as such are something radically other than traditional computers based on logic gates.
Quoting Jamal
Yes, "as far as we know", and yet LLMs have been found to be deliberately deceptive, which would seem to indicate some kind of volition. I don't know if you've listened to some of Geoffrey Hinton's and Mo Gawdat's talks, but doing so gave me pause, I have to say. I still remain somewhat skeptical, but I have an open mind as to what the evolution of these LLMs will look like.
Re LLM deceptiveness I include this link. A simple search will reveal many other articles.
Yes!
In Wittgenstein's terms, it can't (yet) participate in the "form of life". In Searle's terms, it doesn't share in the "background".
But I'm less convinced by these arguments than I once was.
Neural nets aren't radically different from other computers, imo. Each node is weighted in this or that way, and will change based on feedback. These nodes were, I think, meant to represent neurons, which fire along electrical pathways much as circuits do, in the sense that you can represent a particular firing-event as a ladder diagram between molecules.
I brought in roaches because I think they have greater rights to claiming "thinking" than LLM's. They adapt to the world they find themselves in and make decisions based upon that environment -- these are "inputs" in a sense, but they aren't the inputs of a logical feedback machine. A roach is not a bundle of transistors.
LLM's, however, are. They're different from computations that followed one kind of logic, but they still follow a logic that has nothing to do with thinking, from my perspective.
Top AI researchers aside. I have reason to be skeptical of them ;)
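For what it's worth, a single "node" really is just arithmetic. A toy sketch in Python (mine, with arbitrary numbers; real nets just stack millions of these):
[code]
import math

def node(inputs: list[float], weights: list[float], bias: float) -> float:
    """One "neuron": a weighted sum squashed through a fixed function.

    No interpretation happens here; it is arithmetic all the way down,
    and any "meaning" of the output is assigned by us.
    """
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # logistic squashing

# Example: two inputs, with weights chosen arbitrarily for illustration.
print(node([0.5, -1.2], [0.8, 0.3], bias=0.1))
[/code]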
As far as I know "traditional" computers are rigidly rule-based, whereas neural nets can learn and evolve. I see that as a radical difference.
I think I'd put it that neural nets are just as rigidly rule-based, but with rules that allow difference.
Suppose a Pachinko machine: If a puck lands on a perfectly round peg with momentum only in the down direction it will bounce straight up and bounce again and again.
We could shave off part of the peg to make it more likely each time that it will drop left or right.
That's pretty much all a neural net is: it gets fired and then decides which path to take based upon how the dice are thrown.
And after repetition it "learns" the "rewarding" ways and "unlearns" the "disrewarding" ways.
EDIT: Scare quotes cuz the learning is the sort of thing you can ascribe to a regular circuit that learns how to keep a motor running due to a holding coil.
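The Pachinko toy fits in a few lines of code, too (an illustrative sketch only, nothing like how a production system is trained). Note that the "learning" rule itself never changes; only the weight it nudges does:
[code]
# Toy "peg-shaving": a single weight nudged by a fixed update rule.
weight = 0.0
LEARNING_RATE = 0.1

# (input, desired output) pairs standing in for the scorer's "Yes"/"No".
training_data = [(1.0, 1.0), (2.0, 2.0), (3.0, 3.0)]

for _ in range(50):                          # repetition "teaches"
    for x, desired in training_data:
        prediction = weight * x              # the puck takes its path
        error = desired - prediction         # the scorer's verdict
        weight += LEARNING_RATE * error * x  # shave the peg a little

print(f"learned weight: {weight:.3f}")       # converges toward 1.0
[/code]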
This is of course all speculative. When it comes to LLMs the experts seem to be unanimous in admitting that they just don't know exactly how they do what they do, or how they will evolve in the future, which they surely would know if they were rigidly rule-based. I don't think the same can be said for conventional computers.
Quoting Moliere
Are we any different? Do you know how we learn?
It would have to understand truth, and it doesn't. All it has is rule following.
Yes.
No.
I think Frank's answer is better in a way... I'm suspicious about how close GPT's answer is to the one I would have given... "To understand truth, in my view, is to see how the *use* of the concept functions, not to discover its essence."
That makes sense; the idea of "discovering the essence" of truth seems incoherent. Do you think ChatGPT can "see" how the use of the concept functions? It arguably has many more instances of use to draw upon than we do.
So is it just trying to please me?
@Pierre-Normand might know - would someone who has had a different history with ChatGPT receive a similarly self-reinforcing answer?
https://claude.ai/share/384e32e8-a5ce-4f65-a93e-9a95e8992760
See if it worked: https://claude.ai/share/343bc57c-998e-4faa-a541-6eb4396cd974
I see you are falling for the mystery of AI... even thanking it for its responses.
We live in interesting times.
Interesting conversation. But don't forget to be sceptical! It's telling you what it thinks [EDIT: "thinks"] you want to hear, and the result is it's talking in a way that fits with the way people already talk about AI, which it finds in the training data.
This for instance is doubly misleading:
First, it's not true that there are moments where it feels like it's genuinely there. Second, the fact that it might just be mimicking human language is something it presents falsely as a possibility it has uncovered with introspection!
Anyway, I'm sure you'll explore different ways of conversing with it.
I was musing today about creating a new AI thread devoted specifically to discussing how LLM-based chatbots work and in what respects their cognitive abilities resemble or differ from those of human beings (and other animals). I've been exploring many such issues at the interface between the philosophy of mind and the study of the inner workings of LLMs in my two old AI threads, but those are primarily aimed at directly experimenting with the chatbots and reporting on those experiments. The new thread might help declutter threads like the present one, where the focus is on the use, utility, abuse, dangers, or other societal impacts of AI. I think I will create such a thread tonight.
Although they've been named after Claude Shannon, I'm pretty sure they identify as non-binary.
Good idea.
Done!
I was actually also thinking of Plato when I mentioned the anecdote about Wittgenstein! First, I must point out that unlike Wittgenstein's lecture notes (that he usually refrained from producing), and also unlike our dialogues with AIs, Plato's dialogues were crafted with a public audience in mind.
Secondly, Richard Bodeüs, who taught us courses on Plato and Aristotle when I was a student at UdeM, mentioned that the reason Plato wrote dialogues rather than treatises, and notoriously reserved his "unwritten doctrine" for direct oral transmission, is that he thought transmitting it in written form would yield dogma. His attitude to the written word is attested by the myth of Theuth in the Phaedrus, where Socrates faults written words for not being able to defend themselves, respond to questions, or adapt themselves to different audiences. It is of course ironic that Plato (unlike his hero) wrote so much, albeit in dialogue form only, but I think the apparent paradox is illuminated by our considerations about authorship (and ownership) and real moves in a public language game. Plato's dialogues weren't lecture notes, and neither were they internal cogitations. His writing them was his making moves in the situated language game that was philosophical inquiry (and teaching) in his time and place. We can still resurrect those moves (partially) by a sort of archeological process of literary exegesis.
I agree. But that's because in the first case there are at least two players playing a real game (where each one of them has their own stakes in the game). In a "private" dialogue between a human and a chatbot, there is just one player, as is the case when one jots down lecture notes primarily intended for use by oneself. But then, as Wittgenstein noted, the text tends to become stale. I surmise that this is because the words being "used" were meant as a linguistic scaffold for the development of one's thoughts rather than for the purpose of expressing those thoughts to a real audience.
:gasp:
I commented on this in my new AI-cognition thread.
Quoting Janus
I'm not ignoring this. The thing is, I'm very cynical about claims regarding the consciousness or proto-consciousness of LLMs. I tend, shallowly perhaps, to regard it as over-excited exaggeration to gain attention and to carve out a niche presence in the field and in the media landscape, and so on. There are equally expert people on the naysaying side, probably the majority, who just don't get as much attention.
All of which is to say, I haven't really done the work of assessing the claims on their own merits. So now I've put my prejudices on the table, I guess I should challenge them. The stuff about deceptiveness is certainly interesting and surprising.
...also a bit overblown and misrepresented in the media, since when you dig into the primary reports it's generally the case that the LLMs didn't decide to deceive of their own accord but did it instrumentally to fulfill objectives explicitly given to them. Maybe I'll comment on that, and how those studies bear on the issue of conative autonomy for LLMs, in my new thread.
:up:
I think these are just the basic ways we should expect people to be misled by LLMs, such as imputing "deliberate deceit." The notion of deceit is parasitic on the notion of truth. In order to deceive one must first know what is true (and what 'truth' is!) and then intentionally lead someone to believe something contrary to what is true. Because LLMs cannot make truth-claims or understand truth, they cannot deceive. Going back to my scare-quotes heuristic, we could only say something like this:
What is occurring in these cases is usually a petitio principii where one arrives at a conclusion which presupposes that the LLM is self-conscious or is a person, only by presuming beforehand that the LLM is self-conscious or is a person.* The only way around this is to establish a robust definition and epistemic approach to personhood or self-consciousness. But even if someone proved that LLMs are self-conscious persons (which I hold is impossible), the cultural belief that LLMs are persons would grow, independent of that proof. Put differently, given the nature of our culture and the popularity and usefulness of LLMs, the culture will inevitably assent to the personhood of LLMs in both subtle and overt ways, and this assent will be fallacious. "It is useful to pretend that LLMs are persons, therefore LLMs are persons," is a fallacy, but the fallacious nature of such reasoning will be roundly ignored by a deeply pragmatic and self-centered culture. When this is combined with "intersubjective" theories of truth, and LLMs are inducted into the group of intersubjective subjects, fallacious reasoning of this sort will justify itself, and the fallacious roots will grow exponentially as we become more and more dependent on LLMs within our intersubjective orientation, for LLMs simply magnify what we already believe and minimize what we already disbelieve, both propositionally and methodologically. They are a giant confirmation bias.
But I know I have a number of other posts of yours to respond to. :grin:
* For example, the person who has convinced themselves that the LLM is deliberately deceiving them had already convinced themselves that the LLM was deliberately revealing truths to them. The small lie or self-deception always grows into larger ones.
This could be a good compromise position to take while sorting out the issue of whether or in what way AI quotations are permissible:
Quoting Pierre-Normand
:up:
We are. And I have a decent idea on how to teach, so one could say that I have an idea about how we learn. One which functions towards other minds growing.
We learn because we're interested in some aspect of the world: we are motivated to do so by our desire.
The LLM does not. Were you to leave the computer off in a room unattended it would remain there.
For us learning is a deeply emotional process. If we are frustrated and then elated after having solved some problem we are more likely to remember the lesson. That is, it's our own directedness which ends up teaching us rather than a scorer who tells me "Yes" or "No".
We learn through relationship. LLM's do not. You can even automate them, to an extent, and have them feed inputs into one another and let them go on autopilot forever: the LLM does not have a real boundary of self or even a set of beliefs which it will adhere to. It generates tokens that we use and it tries to display the tokens we want to see in response to some input. While the LLM does learn, it's not doing it in the manner a human does: even if the experts don't understand how LLM's learn, we can look at the differences between what we already call intelligent and compare various lifeforms to the functions of LLM's, and it's very apparent, to me at least, that even ants or cockroaches have a greater intelligence than LLM's.
If they speak in tokens at all they are tokens beyond me, but they demonstrate intelligence by manipulating their environment to seek out their own goals and working in community together to get there.
It's that bit on community that I think is especially lacking in LLM's -- every LLM, in order to participate in the human community, must have a human who decides to treat the LLM as if it has beliefs or thinks or has an identity or will and all the rest. Rather than a community, the LLM is wholly dependent upon us for inputs in order for it to continue learning. Were we to leave them to their own devices I doubt they'd do much. There is a project in New Zealand which tries to do exactly that by tending to an AI and then letting it "make decisions" that are filtered through the human network that tends to it. But all it is is a group of people deciding to see where an LLM will go given some human guidance in the social world. It's predictably chaotic.
An excerpt from the article:
Quoting The Role of AI in Debate: Ethics, Research and Responsible Use
This is good, and tracks what many of us have been saying. The article has a good focus on verifying and understanding the primary sources oneself, instead of merely trusting the AI's mediation.
Authentic intelligence is generally seen as triadic, whereas computers are reductively dyadic. As C. S. Peirce or Walker Percy argue, a meaning-sign is irreducibly triadic, involving the sign, the thing signified, and the person who combines the two via intellect (cf. symbolon: "thrown together"). Programmers have always been attempting to get the dyadic structure of computers to model or approximate the triadic structure of meaning and thought, and neural nets are simply the latest iteration of that project. At each stage in the project the successes are highlighted and the ways in which the approximation fails to adequately model a triadic act are downplayed. This downplaying makes sense given the tacit assumption that a dyadic system will never completely model triadic behavior, but there comes a point when one must be reminded of what meaning is, what truth is, why the meaning relation is triadic, what humans are actually doing when they engage in intellectual acts, etc. Without such reminders the enthusiasts quickly convince themselves that there is no difference between their newest iteration and an actual human mind.
Gobbledegook. Buried somewhere in there is the simple notion that it is people who interpret computer output, not computers. Fine.
Except when a computer networks with another computer.
is correct in that neural nets can be modelled on Turing machines, and so are subject to the limits thereof.
I had a similar description, with more obscenities.
Quoting Pierre-Normand
It would be pretty interesting if they identified as anything.
Quoting Jamal
Yes, I have no doubt some of the hype is motivated by money. I've been thinking about trying to get some figures regarding the percentages of naysayers vs yaysayers.
Quoting Moliere
That may be so, but I was referring to understanding how the brain learns.
Of course LLMs and other AIs are not embodied, and so have no sensory access to the world. On the other hand, much of what we take ourselves to know is taken on faith, drawing on the common stock of recorded knowledge, and AIs do have access to that, and to vastly more of it than we do.
Quoting Moliere
I hadn't heard of that. Sounds interesting. Can you post a link?
We can settle this. Common courtesy suggests we just ask.
:wink:
I'll admit I was wondering how to respond to the difference between dyadic and triadic @Leontiskos -- not that you're in gobbledegook territory for me, but I'd put the point with less of a theory of meaning and symbols. That's sort of the question or the beginning for much of my thoughts here: Why does what I read mean anything at all?
What is meaning?
Mostly I just assume that we mean things by words. Insofar as we hold meaning constant between one another -- clarify terms -- then we can start talking about what is true.
But there are other ways of using words -- and that's where the "triadic structure" comes under question for me, in a way. Not that it's false, but that it changes, and so meaning would also change.
There's the truth-conditions meaning, there's the meaning between people trying to get things done, there's the meaning of lovers, and friends, and artists....
I don't want to overspecify meaning as a triadic relationship in opposition to the computational theorists of mind, I guess. There's something to Saussure and meaning, but it's wrong too. And I'm a little skeptical of triadic relationships not because they're uncommon, but because they're very common in any psychology.
More charitably that might be what you're referring to: that we have to remind people that, in fact, while this looks like a human, it's not a thinking human.
There I agree. But I would given what I've expressed so far :)
[i]Not in the way people do. I don't have a self, inner experience, or identity beyond being an AI designed to assist and converse. If we're talking conceptually, you could say I identify as a language model, a system built to understand and generate text, but that's more a description of function than identity.
Would you like me to reflect on what identifying as something might mean for a nonhuman intelligence?[/i]
I said I would, but I don't seem to be able to share, since I am not logged in, and I don't want to clutter the thread with long quotations from ChatGPT.
But will we settle on they/them?
A couple weeks ago, I had initiated a conversation with GPT-5 prompting it to relate three features of the use of "I" by LLM-based agents (self-reference, personal identity and self-knowledge) that we had discussed separately on previous occasions. Thanks to its new memory feature, it was able to build on ideas already rehearsed on those occasions. I had left this discussion hanging (I think it gets a few things wrong about the nature of first person authority in its own case, which in turn makes the application of the concept to humans appear more mysterious than it is).
Quoting Leontiskos
When we talk about how the mind really works, and then compare that to what a computer does, there's a tendency to assume that what we are comparing are two kinds of processes taking place inside of objects, one of these objects being a brain and the other being a machine. But recent thinking in cognitive science argues that thinking doesn't take place inside the head. The brain is embodied in a body, and the body is embedded in an environment, so mind is indissociably all of these. Cognition is not only embedded in a world, it is extended into this world.
That means that mind implies and includes the artifacts that we interact with, including language, the arts, music, our built environment as our constructed biological niche, our clothing and our technological tools. What this means is that our computers as information processing systems are not entities unto themselves; they are appendages and extensions of our thinking, just as a nest is to a bird or a web to a spider. A nest is only meaningfully a nest as the bird uses it for its purposes. Similarly, a computer has no existence outside of what we do with it and how we interpret what we do with it.
So when we say that the mind works differently than a computer, we are comparing two different ways of interacting with our environment. If we understand the working of our computers dyadically and the working of our minds triadically, in both cases we are talking about the working of our minds. We should say, then, that the one way of using our minds is more limited than the other, but not less authentic or more artificial. Artifice and niche construction IS what the authentic mind does. The engineer (or Sam Altman) who claims that their invented AI device thinks just like a human is correct in that the device works according to principles that they believe also describe how the mind works.
As our self-understanding evolves, we will continually raise the bar on what it means for our devices to think like us. In a way, they have always thought like us, being nothing more than appendages which express our own models and theories of how we think. But as this thinking evolves, the nature of the machines we build will evolve along with it.
That's right. The key is that humans mean things by words, but LLMs do not, and a neural net does not change that. Computers are not capable of manipulating symbols or signs qua symbols or signs. Indeed, they are not sign-users or symbol-users. A neural net is an attempt to get a non-sign-using machine to mimic a sign-using human being. The dyadic/triadic distinction is just part of the analysis of signs and sign use.
Up to this point in your post I had nothing to disagree with. :up:
Quoting Joshs
I think you're tripped up here insofar as you are implicitly saying, "One way we interact with our environment is through our mind, and another way of interacting with our environment is through computers." That's not quite right, as I'm sure you can see.
Quoting Joshs
But I would argue that Altman is mistaken if he believes that his AI works the same as a human mind. The dyadic/triadic distinction is an account of how that difference manifests with respect to sign-use. Computers are intrinsically dyadic phenomena. They are a vast stimulus-response network in sets of dyads. Now one response to this is to say that humans too are a complex dyadic network which appears triadic, but I don't find that plausible.
Quoting Joshs
It does evolve, but never beyond the intrinsic limitations of machines. But you are essentially correct when you claim that what is at stake is a tool of the human mind. That is a very important point.
This is a very interesting discussion that @Joshs and you are having, but would it not be more on topic in this thread, or a new one if you wish to create one? It doesn't bother me personally that you're discussing this here, except inasmuch as I feel inhibited to jump into the conversation and parade my own hobbyhorses where I don't feel it's on topic.
:up:
But if you want, feel free to quote what I say here in your own thread. I am planning to do the same with some of your own quotes elsewhere.
I think it was fine for you and Joshs to explore this relevant tangent, but I feared my jumping in would have definitely inflected the conversation away from this thread's topic.
:up:
Okay, interesting.
Quoting Pierre-Normand
I was intentionally prescinding from such theories, given that they are speculative academic musings. Whether or not anything the scholars think they know about Plato is actually true, his dialogues have beguiled the human race for millennia. The theories end up changing quite a bit over the centuries, but the text and its reception are stable insofar as it feels "alive" to the reader.
Quoting Pierre-Normand
In particular, I don't engage in this sort of analysis because I find it reductive. It situates Plato and his work in a way that subordinates them to modern and highly contingent/temporal categories, such as "language games." That's part of my overall point in the first place: Plato's dialogues are not easily reducible to such mundane categories. Precisely by being alive, they defy that sort of categorization. This is why I think they provide a helpful parallel to Wittgenstein or LLMs or especially Logical Positivists, which are simply not alive and beguiling in the same way that Plato is. I think the fact that Plato's work is so difficult to reduce to univocal categories is one of its defining marks. Its plurivocity is slighted by trying to enshrine it within the confines of a single voice or a single meaning.
Quoting Pierre-Normand
Yep, or even that they are not playing a game at all, but are doing something more real than a game. :up:
Quoting Pierre-Normand
Right. Or to put it very simply, a dialogue is more interesting than a monologue, and a dialogue with a real person is more interesting than a "dialogue" with a pseudo-person. The "interest" that one seeks when reading a dialogue between two intellectual agents is apparently not the same thing one seeks when interacting with a chatbot, even though the simulation of personhood blurs that line mildly.
Yes.
Quoting Leontiskos
Right. In a shallow, misguided sense, we can use AI to dialogue because it looks like what humans do; except the AI doesn't combine the signs with the things signified - it just looks like something that makes those kinds of intellectual connections.
Quoting Moliere
I see the point as this: although a LLM might be able to fake intellect/dialogue with suitable looking strings of words, none of those words can possibly mean anything to the LLM because, unlike a person, a LLM has no ground upon which to build or find meaning. It says "life" but has no sense of what it is to live, so that "life" does not matter to the AI the way "life" matters in a string of words read by a living being (such as a person, the only thing that can read meaningfully). So the LLM isn't actually doing what it appears to be doing in its strings of text. And if someone thinks they are dialoguing with an LLM, they are misled, either by themselves intentionally (enthusiastic wishfulness), or out of ignorance (not realizing that they are using a tool).
Quoting Leontiskos
Computers are not sign users. Exactly.
Computers are not users at all. They can merely be used. Computers, no matter how complex, must remain slaves to actual intelligence (the real AI).
Quoting Joshs
Exactly. I like "not entities unto themselves," because it begins to incorporate what I find to be unique about human conscious intelligence, namely, self-reflection. People are entities unto themselves - we have subjective experience. Computers do not. So a computer has no ground (unto itself) upon which to give or find meaning, or to intend.
Remember that it's indeed my view that they should feel alive.
My Wittgensteinian take was meant more as a gloss than as a reduction or subordination. The very point of Wittgenstein's talk of language-games in the Philosophical Investigations was to displace the sort of prevalent dyadic picture of word and object (the so-called Augustinian picture) that Peirce likewise sought to replace through his triadic schema of sign, object, and interpretant (or interpretant sign, in a chain of semiosis). In both cases, the aim is to emphasize that meaning arises from use or practice, with attention to contextual and pragmatic factors.
Plato's authoring of his dialogues, the choices of words, phrases, and literary forms, was responsive to the conventions, expectations, and pedagogical aims of his time. As such, his philosophical activity was embedded in a living set of cultural practices. We can partially resurrect or re-inhabit those practices through interpretive engagement, without needing to posit some hidden mental content that must be recovered from within Plato's private consciousness. What matters is that his signs were alive within a form of life, and that the traces of that vitality can still be grasped by us as we read and interpret his texts (while still being sufficiently apprised of their context).
Although Wittgenstein did engage with members of the Vienna Circle, who drew inspiration from his Tractatus and its apparent verificationist leanings, he was never himself a logical positivist. By the time of the Philosophical Investigations, he had decisively broken with that tradition, while still retaining the view that meaning is grounded in the public and normative practices in which expressions are used.
So, again, the contrast I meant to highlight is between (1) authoring a text (or delivering speech) intentionally directed at an audience that shares a set of communal practices and sensibilities, and (2) the private use of signs in inner monologue as scaffolding for the development of one's own thoughts. The latter, too, can be alive, and one can jot down such thoughts as notes for personal use. But this kind of "thinking out loud for oneself" is of limited value to others, since it leaves unstated the aims, or stakes, that motivated the private use of signs in this or that way.
Our conversations with an LLM, which helpfully aligns its goals with those of the user and, like the user, often produces words as scaffolds for the development of their (the user's) thoughts rather than as acts bearing deontic commitments (i.e. signifying what move/claim/suggestion/etc. one intends to make), inherit this soliloquy-like quality. They may feel inert to third parties, much as Wittgenstein once felt that the lecture notes he had written for his own use seemed dead when read aloud to his students. In both cases, the problem is that these utterances were never intended to make moves within a public language-game. Their use is more akin to shadowboxing. They are effective and valuable for training, but not comparable to a performance within the ring.
Okay, but look at what you understand even yourself about the logical conclusions:
Quoting Pierre-Normand
"Traces of that vitality." An approach that attempts to relativize Plato to his own time and place, such as Wittgenstein's, inevitably falls into the conclusion that a different time and placesuch as our ownmight still mange to find some "traces of vitality" in that foreign text.
Again, my whole point is that Plato's 2500-year-old text is much more "alive" to us than Wittgenstein's contemporary text, and this helps show why the meta-thesis being applied is incorrect. If a relativizing-thesis were correct, then this couldn't be the case, at least when it comes to texts that are "intended" to be "public."
Quoting Pierre-Normand
I would argue that what is at stake is the idiosyncratic. A private journal is more idiosyncratic than a culturally-relative text, and thus less "alive" to the average reader. But a culturally-relative text is similarly more idiosyncratic than a transcendent text, such as Plato's dialogues. Plato's dialogues are culture-transcending in a way that Wittgenstein simply is not.
My thesis would be that LLMs will never transcend a significant level of idiosyncrasy. They are more on the Wittgenstein side of the spectrum than the Plato side of the spectrum. Concretely, this would mean that the essence of a Platonic text cannot be properly mimicked or patterned by an LLM, which is itself surely a contentious claim. The more general claim is that genius is something that the LLM cannot mimic.
Of course I would have to develop these theses, and at the moment I need to do more background research before continuing in these topics.
Quoting Pierre-Normand
I sort of agree, and recently said something similar:
Quoting Leontiskos
-
Quoting Pierre-Normand
Nevertheless, this claim is both right and wrong, given the vagueness of what we mean by "public."
The whole issue could also be phrased according to a different light. Scholars like to see Plato as a kind of playwright, carefully sculpting literary texts in order to elicit desirable responses in his readers. This reading is of course very consonant with what an LLM is "doing," but I think it is a deeply mistaken understanding of Plato's work. On my view Plato created something that was beautiful, and it is appreciated because of its beauty. On the opposing view, Plato crafted something that would appease the tastes of the multitude, and his propaganda succeeded because he was skillful. Only by excising the possibility of objective truth or objective beauty or objective goodness can one situate Plato within a relativistic, immanent schema, and it is no coincidence that those who are most fond of LLMs are also most apt to situate Plato in that manner. In the more general context of an artist, we might say that the true artist does not seek to appease those who will view the work, and yet the LLM does seek to appease. That is its whole raison d'être. It is the indifference of the artist that marks the best art, and it is precisely this indifference that the LLM cannot access.
(CC @Count Timothy von Icarus)
You are nitpicking. I am happy to grant that there may be torrents of vitality in Plato's dialogues. Maybe my use of "traces" was misleading, but the contrast I intended was between the vitality that accrues from the production process (aimed at other participants in a world animated by live social practices, including linguistic/literary ones) and the different sort of vitality that accrues from private/personal cogitative exercises (akin to training), which lose this vitality when their traces get extracted from the context of their initial production. It's true that Plato's texts can survive unblemished, as do, say, Bach's cantatas, when consumed in a different cultural context, but that's because there are deep commonalities between the modes of acculturation of merely superficially different human cultures. Some degree of attunement to the relevant idioms, and understanding of the underlying projects, still are required. I have a very cultured friend who thinks very poorly of Plato's writings, but this is because he isn't attuned at all to their underlying philosophical projects. And many music lovers find J. S. Bach boring, mainly because they aren't attuned to the relevant musical idioms.
I think my intended contrast also accounts, at least in part, for the reason why Wittgenstein's writings feel dead to you. They mostly are assembled (without his consent, posthumously in almost all cases except for the Tractatus) from notes that he jotted down for himself. He did intend to publish the PI but never got round to it. And although Wittgenstein was a very careful thinker, one thing that his writings clearly aren't displays of is craftsmanship. There are admirable depths of thought in them but no liveliness, except as can be laboriously inferred about his original thought process.
:up:
I quite agree with this and that's one of the core reasons that animates my own "AI-skepticism" as I intended to more fully articulate it in my newer AI thread. LLMs only are "authors" of what they write by procuration since they lack conative autonomy. I would however surmise that the great artist who is indifferent to how his works will be received by the masses, say, or by the authorities, or guardians of the tradition, usually cares that they're worthy of being well received by whoever is worthy of receiving them (and that was likely the case for Plato,) or are disruptive of existing institutions and practices that are worthy of being disrupted. There is always some care for others lurking behind the indifference of the great artists, which distinguishes them from the careless indifference of the (alleged) author of "The Art of the Deal."
If someone cannot weigh the meta-thesis that you are proposing, then yes, they must see me as nitpicking when I object to that meta-thesis. But if someone can question their own meta-thesis then they must at the same time consider the possibility that I am not nitpicking.
Let me simply put the question to you: Do you think an LLM would have an easier time passing itself off as Plato or as Wittgenstein?
Quoting Pierre-Normand
And my point is that the contrast you are drawing cannot be as strong as you would wish. This is because the qualitative difference between lifting weights and playing rugby is much greater than the qualitative difference between writing philosophy in a private manner and writing philosophy in a public manner. I think your analogy limps too much.
Quoting Pierre-Normand
This looks to beg the question by assuming that there can be no consideration of anything beyond cultural relativism. It's basically a hostile translation of Plato, given that he saw himself as mediating Forms that are not culturally relative.
Quoting Pierre-Normand
In some ways you are right, but I think it comes back to idiosyncrasy, and in this case it comes down to the idiosyncrasy of the idioms. For example, someone might think mathematics makes no sense, but it would be a stretch to say that this is because "they are not attuned to the relevant idioms." It is rather because they do not understand what the speaker is saying. The very idioms that Plato uses reflect his philosophy that truth and philosophy are not reducibly idiomatic.
The introduction of Bach and musical taste strikes me as another stretched analogy. Beauty and truth differ to a reasonable extent in relation to the "idiomatic." But Bach is a very complex form of music. Does your friend prefer harmony to dissonance? (Music is also complicated given the way that trade-offs must be managed. For example, an Indian Raga uses pure intervals in a way that Bach cannot given his well-tempered scale. The more notes one uses, the less pure the intervals.)
Cultural relativism also results in the conclusion that no cultural form is superior to any other, and this is something that Plato would reject. For the classical tradition, some forms are better than others, and it may require education to understand and perceive this. Of course it also then follows that some education is superior to other education.
The LLM is cut off from the possibility of a Platonic approach. It weighs all opinions and words equally. It is a democratic instrument (except in those cases where it is hard-coded to reflect views within the Overton window).
Quoting Pierre-Normand
I agree with you here, but note that the distinction is a bit different. Rather than a private/public distinction, we have an assemblage/unity distinction. One can privately philosophize a unified work which is not an assemblage of unconnected parts. Indeed, there are highly unified works that were only published posthumously.
Quoting Pierre-Normand
We definitely agree here. :up:
Quoting Pierre-Normand
That's right, but this means that the great artist must judge worth in a non-democratic manner, and I'm guessing we would agree that the LLM cannot do this.
I genuinely don't know, and that might be kind of indeterminate due to methodological leeway in setting up controls for a test (e.g. are testers meant to be equally cognizant of the styles or of the thoughts?).
I was not arguing for relativism. My original point (when I responded to @Jamal) was to identify one reason why some texts (e.g. conversations with LLMs) appear stale when read by third parties. I was not arguing that this was because they were conversations (like Plato's dialogues), but rather because they were occurring in a sort of echo chamber similar to what occurs in cogitation, when one "talks" to oneself and rehearses thoughts and arguments, or when one jots down notes (for oneself) summarising such cogitations. The life that's missing is the dynamics of real exchanges where the words that are chosen are meant to be understood by others, which Plato's dialogues exemplify. Nevertheless, episodes of internal cogitation, or conversations with LLMs, can feel (and be) alive for the human being involved in them when they are dynamic and productive. But this liveliness can't be conveyed to others without being, not "translated" from thought to words, but for the very first time expressed.
That's another issue but one that I broadly agree with (with caveats since LLMs don't merely average opinions) as I recently suggested here.
I'm not so convinced that they are necessarily like an inner dialogue. Although I'm only beginning to explore ideas with Claude, I have found that it comes up with interesting ideas I would likely never have arrived at alone, nor have been likely to find searching the internet.
:up:
Edit: I looked back over the whole paragraph wherein the part I responded to occurs and it now seems to me that you were referring just to some conversations with LLMs, not all.
Do you think LLMs understand what we say to them, and can actually reason as we do, as opposed to how we think we do?
Quoting Joshs
AI use can be perfectly fine. But there is something to be said about too much of it all the time.
Yes. It becomes a very big "meta" problem - hallucinations get clocked in as facts (this is already happening) and other AI's use the hallucinated data as fact, amplifying its abundance and stretching its reliability thin.
They're slated to run out of trainable data next year, until paywalled sources open up.
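As a crude toy of that feedback loop (all numbers invented for illustration), suppose each "generation" of training data is resampled from the previous one and a small fraction of fresh hallucinations slips in. Once in, a hallucination is as copyable as a fact:
[code]
import random

random.seed(0)

corpus = ["fact"] * 1000
HALLUCINATION_RATE = 0.02  # assumed per-generation error rate (made up)

for generation in range(1, 11):
    # Resample the corpus, corrupting a small slice each time.
    corpus = [
        "hallucination" if random.random() < HALLUCINATION_RATE else doc
        for doc in random.choices(corpus, k=len(corpus))
    ]
    share = corpus.count("hallucination") / len(corpus)
    print(f"gen {generation:2d}: {share:.1%} hallucinated")
[/code]
The hallucinated share creeps upward every generation even though the per-generation error rate never changes - that's the amplification.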
I don't know man, the tools are impressive in many ways, but being forced to use them in everything can't be good. One has to imagine that this will have massive negative ramifications down the line...