Banning AI Altogether

Janus October 10, 2025 at 23:59 3800 views 474 comments
I think, given the dangers of AI, and the ways in which prominent members of this site have used it to make themselves look smarter than they really are, that its use should be banned altogether on this site.

I come here to listen to what others think and discuss ideas with them, not with chatbots.

I am not going to outline all the possible dangers of AI—people can educate themselves about that by undertaking a search in whatever search engine they use or YouTube or whatever. It's not hard to find people like Yuval Noah Harari and Geoffrey Hinton.

Comments (474)

180 Proof October 11, 2025 at 00:08 #1017625
Quoting Janus
I come here to listen to what others think and discuss ideas with them, not with chatbots.

:100: I don't bother reading or responding to any post that I even suspect is chatbot/LLM chatter.
Metaphysician Undercover October 11, 2025 at 00:12 #1017627
Quoting Janus
I think, given the dangers of AI, and the ways in which prominent members of this site have used it to make themselves look smarter than they really are, that its use should be banned altogether on this site.


If copying AI makes them look smarter than they are, that's pretty sad.
apokrisis October 11, 2025 at 00:30 #1017632
Reply to Janus And AI agrees. :razz:

AI poses several dangers to ordinary human intellectual debate, primarily through the erosion of critical thinking, the amplification of bias, and the potential for large-scale misinformation. Instead of fostering deeper and more informed discourse, AI can undermine the very human skills needed for a robust and productive exchange of ideas.

Erosion of critical thinking and independent thought: By outsourcing core intellectual tasks to AI, humans risk a decline in the mental rigor necessary for debate.

Cognitive offloading: People may delegate tasks like research and analysis to AI tools, a process called cognitive offloading. Studies have found a negative correlation between heavy AI use and critical thinking scores, with younger people showing a greater dependence on AI tools for problem-solving.

Reduced analytical skills: Over-reliance on AI for quick answers can diminish a person's ability to engage in independent, deep analysis. The temptation to let AI generate arguments and counterarguments can bypass the human-centered process of careful reasoning and evaluation.

Stagnation of ideas: If everyone relies on the same algorithms for ideas, debate can become repetitive and less creative. True intellectual debate thrives on the unpredictable, human-driven generation of novel thoughts and solutions.

Amplification of bias and groupthink: AI systems are trained on human-created data, which often contains pre-existing biases. Algorithms can create "filter bubbles" and "echo chambers" by feeding users content that reinforces their existing beliefs. In a debate, this means participants may be intellectually isolated, only encountering information that confirms their own point of view, and they may be less exposed to diverse perspectives.

Erosion of authenticity: As AI-generated content becomes indistinguishable from human-generated content, it can breed a pervasive sense of distrust. In a debate, it becomes harder for participants to trust the authenticity of arguments, eroding the foundation of good-faith discussion.


T Clark October 11, 2025 at 00:36 #1017633
Quoting apokrisis
And AI agrees. :razz:


A little snotty irony is always appreciated.
T Clark October 11, 2025 at 00:48 #1017635
Quoting Janus
I think, given the dangers of AI, and the ways in which prominent members of this site have used it to make themselves look smarter than they really are, that its use should be banned altogether on this site.


I guess I’m naïve or maybe just not very perceptive, but I haven’t recognized any posts definitely written by AI. There have always been enough overblown and oratorical but poorly thought out OPs and posts here on the forum even without AI that I don’t know how easy it is to tell. Perhaps it would be helpful if people called them out when they see them.
Tom Storm October 11, 2025 at 01:13 #1017641
Quoting Janus
I think, given the dangers of AI, and the ways in which prominent members of this site have used it to make themselves look smarter than they really are, that its use should be banned altogether on this site.


Interesting, I haven’t noticed particularly. But I avoid reading lengthy and didactic posts which are often poorly written. The AI stuff I’ve seen often seems peculiarly worded and difficult to read.
Pierre-Normand October 11, 2025 at 01:16 #1017642
I'm unsure in what way the OP proposal is meant to strengthen the already existing prohibition on the use of AI. Maybe the OP is concerned with this prohibition not being sufficiently enforced in some cases. If someone has an AI write their responses for them, or re-write them, that's already prohibited. I think one is allowed to make use of them as spell/grammar checkers. I've already argued myself about the downsides of using them for more substantive writing assistance (e.g. rewording or rephrasing what one intends to post in a way that could alter the meaning in ways not intended by the poster and/or not reflective of their own understanding). But it may be difficult to draw the line between simple language correction and substantive rewording. If a user is suspected of abusing such AI usage, I suppose moderators could bring it up with this user and/or deal with it with a warning.

One might also use AI for research or for bouncing off ideas before posting. Such usages seem unobjectionable to me and, in any case, prohibiting them would be difficult to enforce. Lastly, AI has huge societal impacts currently. Surely, discussing AI capabilities, flaws and impacts (including its dangers), as well as the significance this technology has for the philosophy of mind and of language (among other things) is important, and illustrating those topics with properly advertised examples of AI outputs should be allowed.
Joshs October 11, 2025 at 01:29 #1017645
Reply to T Clark

Quoting T Clark
There have always been enough overblown and oratorical but poorly thought out OPs and posts here on the forum even without AI that I don’t know how easy it is to tell. Perhaps it would be helpful if people called them out when they see them.


The A.I.-derived OPs are likely to be better thought-out than many non-A.I. efforts. Banning A.I. is banning background research that will become built into the way we engage with each other. Think of it as walking around with a host of sages constantly whispering purported words of wisdom into your ear, and it is up to you to sort out what is valuable and what isn’t, what is true and what is false. Would I rather rely on my own knowledge than expose myself to the potentially dangerous influence of these muses? Hell no, I thrive on the opportunity to challenge my skills at vetting information.

If I am responding to an OP, I don’t care whether it is a human or one of the whispering muses I’m dealing with. I have at times learned much from my conversations with these muses. If the human who sets them into action doesn’t know how to properly guide them, they may of course make a disaster out of the OP almost as bad as that which many human posters have been known to do.
But I’m willing to take my chances with both the human and their muses.


180 Proof October 11, 2025 at 01:33 #1017646
Quoting Tom Storm
I avoid reading lengthy and didactic posts which are often poorly written. The AI stuff I’ve seen often seems peculiarly worded and difficult to read.

:up: :up:
T Clark October 11, 2025 at 01:39 #1017648
Quoting Joshs
Banning A.I. is banning background research that will become built into the way we engage with each other.


I disagree with this. I was toying around with a bunch of disparate ideas that seemed related to me. I used ChatGPT to help me figure out what they had in common. That seems like a legitimate use to me. I use a thesaurus when I can’t think of the right word for a particular idea. I use quotes when I want to add legitimacy or clarity. AI feels like the same kind of tool.
apokrisis October 11, 2025 at 01:41 #1017649
Quoting T Clark
I guess I’m naïve or maybe just not very perceptive, but I haven’t recognized any posts definitely written by AI.


I’m definitely seeing posters who are suddenly injecting chunks of more organised and considered material into their responses. There are AI tools to detect the giveaway changes in rhythm, vocab and style. But if you know the poster, even if they’ve done some rewriting, it is already jarring enough.

So sure. AI as a tool will change things in ways that are the usual mix of better and worse. And all my life I have seen nothing but that kind of change.

I remember life before and after Google. The internet before and after it was just academics and geeks on it. The world as it once was when I had to fill out cards at the British Library and wait several days for obscure tomes to arrive at my desk, brought by porters with clanking metal trolleys.

Being a Luddite never works. Listservs were once the greatest intellectual medium ever invented - the ideal combination of book and conference. But the internet got overrun and personal blogs took over. They didn’t last long themselves - or tried to evolve into substacks or whatever. I had already lost interest in that line of development. YouTube was the next medium to become actually useful.

If anyone values PF for some reason, they ought to think about why and how to respond to AI from that point of view. Banning it is just going to increase the disguised use of it. Folk can already Google and then can’t help but get an AI response from it as the first hit. So would one ban search engines too?

There was once a moment when PF went in for social media likes and dislikes. PF is already socially gamified and some got into that while others deplored it. I think the change in platform might have simply failed to support the necessary like button. I vaguely remember an ignore function that also bit the dust.

Anyway, the point is there is always change and its tempo is only increasing. And what even is PF’s mission? What would you miss most if it upped and vanished? That should inform any policies on AI.

Are we here for erudition or the drama? And what would AI’s impact be on either?

apokrisis October 11, 2025 at 01:50 #1017650
Quoting Pierre-Normand
Surely, discussing AI capabilities, flaws and impacts, as well as the significance this technology has for the philosophy of mind and of language (among other things) should be allowed, and illustrating those topics with properly advertised examples of AI outputs should be allowed.


Quoting Joshs
The A.I.-derived OPs are likely to be better thought-out than many non-A.I. efforts. Banning A.I. is banning background research that will become built into the way we engage with each other.


This is the reality. The tool is now ubiquitous. Every intellectual is going to have to factor it into their practice. Time to learn what that means.

If you need to cheat to pass your exams or publish your research, then in the end it is you who suffers. But if AI can be used in a way that actually expands your brain, then that ought to be encouraged.

PF seems a suitably low stakes place to evolve some social norms.

praxis October 11, 2025 at 02:37 #1017654
Quoting Janus
I am not going to outline all the possible dangers of AI—people can educate themselves about that by undertaking a search in whatever search engine they use or YouTube or whatever.


I am not going to outline all the possible dangers of people educating themselves by undertaking a search in whatever search engine they use or YouTube or whatever.
Outlander October 11, 2025 at 02:42 #1017655
Quoting praxis
I am not going to outline all the possible dangers of people educating themselves by undertaking a search in whatever search engine they use or YouTube or whatever.


I think his concern is, not to be that dramatic, perhaps not quite a SkyNet movie takeover scenario, which theoretically could happen. But definitely along that line of thinking. It's funny, you're good at chess but when it comes to other things, well, let's just say, your humanity shines through. :smile:

And yes, that is a formal challenge for rematch.

For example, as an actual experienced computer programmer, I know that the difference between one behavior of an object and another can ultimately come down to a simple 1 or 0. Video game programmers know this. They often joke with one another and run "real world" scenarios where they go around changing the enemy AI and friendly AI on the fly and watch the world they created turn into chaos.

This is possible in a world where military and police rely on AI drones with lethal capability. All it takes is a simple 1 turned to 0 or vice-versa, and all of a sudden the drones sent to attack person A, viewed as 'Criminal', instead view all citizens as person A's accomplices and likewise 'Criminal'.

It's not hard to do, really.
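A toy sketch of what I mean, in Python (all names here are made up; no real game or drone stack looks like this), just to show how one trusted comparison decides who counts as a target:

[code]
# Illustrative only: a single flipped flag inverts target selection.
HOSTILE, FRIENDLY = 1, 0

class Agent:
    def __init__(self, name, faction):
        self.name = name
        self.faction = faction  # 1 = hostile, 0 = friendly

def select_targets(agents, hostile_flag=HOSTILE):
    # Everything downstream trusts this one comparison.
    return [a.name for a in agents if a.faction == hostile_flag]

crowd = [Agent("person A", HOSTILE)] + [Agent(f"citizen {i}", FRIENDLY) for i in range(4)]
print(select_targets(crowd))                         # ['person A']
print(select_targets(crowd, hostile_flag=FRIENDLY))  # one flipped bit: every citizen is a target
[/code]

One value changes, and the selection logic inverts. That's the whole point.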

A record 1,862 data breaches occurred in the US in 2021.

In an AI-centric world, with drones and bombs in the equation, that's a possible 1,862 massacres of tens, thousands, maybe millions of people. Perhaps even at the hands of some little kid who got lucky.

Now, is that the future you want? Because it's what you'll get. Were it not for folk you've yet to meet or at least understand.
praxis October 11, 2025 at 02:44 #1017656
Reply to Outlander

I think the point is that you can’t let your guard down anywhere, and you never could.

I read Nexus last year, btw. What I recall seems like a mild forecast compared to today’s predictions.
Jamal October 11, 2025 at 04:36 #1017662
Reply to Janus

I sympathize. But you're proposing something and instead of telling us why it's a good proposal you're saying "if you want reasons, go and find out yourself." This is not persuasive.

And it isn't clear precisely what you are proposing. What does it mean to ban the use of LLMs? If you mean the use of them to generate the content of your posts, that's already banned — although it's not always possible to detect LLM-generated text, and it will become increasingly impossible. If you mean using them to research or proof-read your posts, that's impossible to ban, not to mention misguided.

The reality, which many members are not aware of, is that a great many posts on TPF have been written in full or in part by LLMs, even those posted by long-term members known for their writing skills and knowledge. I've been able to detect some of them because I know what ChatGPT's default style looks like (annoyingly, it uses a lot of em dashes, like I do myself). But it's trivially easy to make an LLM's generated output undetectable, by asking it to alter its style. So although I still want to enforce the ban on LLM-generated text, a lot of it will slip under the radar.

And there are cases where a fully LLM-generated post is acceptable: translation comes to mind, for those whose first language is not English. Maybe that's the only acceptable case, I'm not sure. But then it becomes fuzzy how to define "fully LLM-generated": translations and grammar-corrected output, it could be argued, are not fully generated by the LLMs, whereas the text they produce based on a prompt is — but is there a clear line?

Anyway, the following comments, though totally understandable, are significantly outdated:

Quoting T Clark
I guess I’m naïve or maybe just not very perceptive, but I haven’t recognized any posts definitely written by AI. There have always been enough overblown and oratorical but poorly thought out OPs and posts here on the forum even without AI that I don’t know how easy it is to tell. Perhaps it would be helpful if people called them out when they see them.


Quoting Tom Storm
Interesting, I haven’t noticed particularly. But I avoid reading lengthy and didactic posts which are often poorly written. The AI stuff I’ve seen often seems peculiarly worded and difficult to read.


LLMs now routinely write clear and flowing prose.

Quoting Janus
people can educate themselves about that by undertaking a search in whatever search engine they use


Where they will now get an AI-generated answer, which will be infinitely better than the enshittified results that Google was giving us until quite recently.

This is the reality:

Quoting Joshs
The A.I.-derived OPs are likely to be better thought-out than many non-A.I. efforts. Banning A.I. is banning background research that will become built into the way we engage with each other. Think of it as walking around with a host of sages constantly whispering purported words of wisdom into your ear, and it is up to you to sort out what is valuable and what isn’t, what is true and what is false.


Quoting apokrisis
The tool is now ubiquitous. Every intellectual is going to have to factor it into their practice. Time to learn what that means.

If you need to cheat to pass your exams or publish your research, then in the end it is you who suffers. But if AI can be used in a way that actually expands your brain, then that ought to be encouraged.

PF seems a suitably low stakes place to evolve some social norms.


:up:

It cannot be avoided, and it has great potential both for benefit and for harm. We need to reduce the harm by discussing and formulating good practice (and then producing a dedicated guide to the use of AI in the Help section).
Jamal October 11, 2025 at 04:54 #1017664
Part of that discussion has to be putting our cards on the table, and refusing to be ashamed of it. It's not a matter of using AI vs. not using AI; it's how we use it.

Currently, its use is frowned upon and seen as cheating — like using a calculator to do arithmetic — such that most people will be reluctant to admit how much they use it. It's like telling the doctor how much you drink: you don't completely deny drinking, you just under-report it.

Take me for instance. Although I use LLMs quite a lot, for everyday tasks or research, in the context of philosophical discussion or creative writing I always say I never directly cut and paste what they give me. But sometimes they come up with a word or phrase that is too good to refuse. So — was I lying?

But using that word or phrase is surely no worse than using a thesaurus. Which leads me to think that it probably ought to be seen as, and used as, a multitool.
Tom Storm October 11, 2025 at 05:45 #1017669
Quoting Jamal
LLMs now routinely write clear and flowing prose.


Interesting. I wonder then why the job applications sent to me are all so terrible, full of clunky locutions that few people would actually use. Applicants need to edit the stuff they rip off so that it actually works as a coherent job application.
Jamal October 11, 2025 at 06:14 #1017674
Reply to Tom Storm

I don't know what's going on there. It could just be bad, lazy, or inconsistent use of LLMs. If there are any applications which are not terrible, they might be written by people who are better at using them.
bongo fury October 11, 2025 at 06:40 #1017677
I'm mystified that percipient philosophers can't see a gaping difference between (A) using a search engine to produce a list of texts containing a given string (well done, us and it) and on the other hand (B) swallowing the insulting fantasy of interaction with an intelligent oracle.

That is, I can't understand or sympathise with them admitting to reading the AI summary, instead of ignoring that insulting click-bait and searching immediately among the genuinely authored texts.

And if you admit to no longer constructing all the sentences you post to me, then I'm disappointed. I'm looking for a better relationship.
bongo fury October 11, 2025 at 06:51 #1017680
Quoting Jamal
Take me for instance. Although I use LLMs quite a lot, for everyday tasks or research, in the context of philosophical discussion or creative writing I always say I never directly cut and paste what they give me. But sometimes they come up with a word or phrase that is too good to refuse. So — was I lying?


Yes.
Jamal October 11, 2025 at 07:13 #1017684
Quoting bongo fury
I'm mystified that percipient philosophers can't see a gaping difference between (A) using a search engine to produce a list of texts containing a given string (well done, us and it) and on the other hand (B) swallowing the insulting fantasy of interaction with an intelligent oracle.


This is obviously a false dichotomy. One can use LLMs without committing to the latter.

Quoting bongo fury
That is, I can't understand or sympathise with them admitting to reading the AI summary, instead of ignoring that insulting click-bait and searching immediately among the genuinely authored texts.


This is quite amusing. The regular Google results have been garbage for years, and it was partly this fact that led to the tendency getting its own name: enshittification. And search engines have never simply produced "a list of texts containing a given string". To think that the AI-overview is clickbait, but the actual clickbait, i.e., the sponsored and gamified results that actually try to get you to click are somehow not — well, you've got it completely the wrong way round.

Quoting bongo fury
Yes


Is using a thesaurus to write a novel and saying you wrote it lying?
bongo fury October 11, 2025 at 07:42 #1017686
Quoting Jamal
The regular Google results have been garbage for years,


This is obviously missing the point. We knew the order of listing was biased and constantly under attack from bots. It was our job to filter and find actually authored texts, and attribute their epistemic value or lack of it to the genuinely accountable authors.

You honestly now want to defer those epistemic judgements to a bot? How would that not be swallowing the fantasy? (Of an intelligent oracle.)

Quoting Jamal
Is using a thesaurus to write a novel and saying you wrote it lying?


No. Well done you. Getting the thesaurus to suggest whole phrases and sentences is obviously plagiarism. The gaping difference denied, again.
Jamal October 11, 2025 at 08:00 #1017687
Quoting bongo fury
This is obviously missing the point. We knew the order of listing was biased and constantly under attack from bots. It was our job to filter and find actually authored texts, and attribute their epistemic value or lack of it to the genuinely accountable authors.


And we have to do something similar with LLMs. So it's a "no" to this:

Quoting bongo fury
You honestly now want to defer those epistemic judgements to a bot?


As for the thesaurus issue...

Quoting bongo fury
No. Well done you. Getting the thesaurus to suggest whole phrases and sentences is obviously plagiarism. The gaping difference denied, again.


I'm not denying the difference between a word and a phrase. I'm just wondering where the line is in your mind. One word is ok, but a two word phrase isn't? Three, maybe?

If you're here just to rant, I guess that's ok, but I won't be carrying on a discussion with someone so rude and confrontational. There really is no call for it. What I want to do — now that @T Clark and @apokrisis have clarified this for me — is develop a set of best practices. Since the technology won't go away, your complaints are beside the point from my point of view as someone who wants to work out how best to use it.
Pierre-Normand October 11, 2025 at 08:06 #1017689
Quoting bongo fury
Getting the thesaurus to suggest whole phrases and sentences is obviously plagiarism.


I would never dare use a phrase that I first read in a thesaurus, myself. I'd be much too worried that the author of the thesaurus might sue me for copyright infringement.
bongo fury October 11, 2025 at 08:12 #1017690
Reply to Pierre-Normand

I would think handing your half-formed prose to a bot for it to improve it is plagiarism, regardless of the number of words changed or inserted. It's a different thing from you deliberately searching for a synonym. No?
Count Timothy von Icarus October 11, 2025 at 08:14 #1017691
Reply to Jamal

Quoting Jamal
annoyingly, it uses a lot of em dashes, like I do myself


For some reason it always puts spaces between em-dashes, which is a stylistic faux pas outside a few style guides (basically just AP), and so this is one way to tell between usages—also, it virtually never uses an em-dash for a single trailing clause, instead always bracketing a clause in them.
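Those two tells are easy enough to count. A rough sketch in Python (my own toy heuristic, not a real detector, and easily confused by, say, three dashes in a row):

[code]
import re

def em_dash_tells(text):
    # Counts the two tells described above; approximate by design.
    spaced = len(re.findall(r" — ", text))     # spaces around the em dash
    total = text.count("—")
    pairs = len(re.findall(r"—[^—]*—", text))  # clause bracketed by a pair of dashes
    singles = total - 2 * pairs                # lone dashes, e.g. a trailing clause
    return {"spaced": spaced, "bracketing_pairs": pairs, "single_dashes": singles}

print(em_dash_tells("The model is helpful — and it says so — at every turn."))
# {'spaced': 2, 'bracketing_pairs': 1, 'single_dashes': 0}
print(em_dash_tells("The model is helpful—at every turn."))
# {'spaced': 0, 'bracketing_pairs': 0, 'single_dashes': 1}
[/code]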

Anyhow, it seems to me that the main risks of them are:

Sycophantic confirmation bias; anything said to GPT in particular is "genius;"
Hallucination of sources, which is bad for any information environment when they get repeated.

It's gotten pretty good at citing popular authors, and so it's good for something like looking up where Socrates says something if you already know basically what he says, etc., although it tends to hallucinate bizarre paraphrase translations even of famous works. And it's not bad at giving lists of relatively important books on fairly specific topics. However, ask it for sources related to a narrow thesis and it routinely just hallucinates the thesis into tangentially related articles, sometimes summarizing them as saying the opposite of what they actually say, which is ironic considering they ripped Anna's Archive and fed it into these things, so the original papers are in there somewhere in many cases.
Jamal October 11, 2025 at 08:20 #1017692
Quoting Count Timothy von Icarus
For some reason it always puts spaces between em-dashes, which is a stylistic faux pas outside a few style guides (basically just AP), and so this is one way to tell between usages—also, it virtually never uses an em-dash for a single trailing clause, instead always bracketing a clause in them.


This is useful information. I had it in my mind that it didn't use the spaces, so I started using spaces to distinguish myself. I guess I'll go back to spaceless em dashes. (But I think either way is ok, despite what style authorities might say.)

But the fact that it never uses an em dash for a single trailing clause—this is very good to know.

Quoting Count Timothy von Icarus
Anyhow, it seems to me that the main risks of them are:

Sycophantic confirmation bias; anything said to GPT in particular is "genius;"
Hallucination of sources, which is bad for any information environment when they get repeated.


:up:

Yes, the hallucination is still quite bad.
apokrisis October 11, 2025 at 08:29 #1017693
Quoting bongo fury
I would think handing your half-formed prose to a bot for it to improve it is plagiarism, regardless of the number of words changed or inserted. It's a different thing from you deliberately searching for a synonym.


For fun’s sake, I just gave Gemini a whirl. I don’t think there is much to worry about quite yet if this is how it performs. I asked it to make your point in a more rigorous and philosophical fashion…

The Conceptual Boundary of Authorship

The act of submitting half-formed prose to an autonomous processing system for "improvement" raises a profound ethical and philosophical question regarding the locus of authorship.

I would posit that this practice constitutes an illicit appropriation of intellectual effort—a form of plagiarism—irrespective of the quantitative degree of lexical or syntactic transformation enacted by the machine. The core violation lies in the delegation of the substantive process of refinement and telos (purposeful development) of the text to an external agent without explicit, critical engagement.

This is epistemologically distinct from the deliberate, conscious act of a human agent consulting a thesaurus to seek a more precise synonym. The latter remains an act of intentional, informed choice that preserves the continuous thread of human intellectual stewardship over the text's final form and meaning. The former, in contrast, risks dissolving the very boundary between personal expression and automated fabrication.


bongo fury October 11, 2025 at 08:31 #1017694
Quoting Baden
Within 10 years, the vast majority of the internet will be AI generated---such is the logic of competitiveness and consumerism. We won't be.


I really hope. :grimace:

Reply to Jamal Sorry for ranting.
javi2541997 October 11, 2025 at 08:40 #1017695
Reply to bongo fury Fortunately, TPF is already very restrictive regarding the use of AI, and the site rules were tightened specifically in response to this modern problem. :smile:
Pierre-Normand October 11, 2025 at 09:17 #1017698
Quoting bongo fury
I would think handing your half-formed prose to a bot for it to improve it is plagiarism, regardless of the number of words changed or inserted. It's a different thing from you deliberately searching for a synonym. No?


Maybe plagiarism isn't quite the right term, but I'm happy to grant you the point. In the discussion about the new TPF rule regarding ChatGPT and sourcing that took place a few months ago, I had made a related point regarding the unpacking and ownership of ideas.
Pierre-Normand October 11, 2025 at 10:01 #1017701
Quoting Jamal
This is useful information. I had it in my mind that it didn't use the spaces, so I started using spaces to distinguish myself. I guess I'll go back to spaceless em dashes.


I used to make heavy use of em dashes before ChatGPT came out and people began to identify them as a mark of AI-generated text. So I stopped using them for a while, but I'm beginning to use them again, since there are cases where parentheses just don't feel right for demarcating parenthetical clauses that you don't want to reduce the emphasis on, and comma pairs don't do the job either.
Jamal October 11, 2025 at 11:24 #1017705
Quoting bongo fury
Sorry for ranting.


:cool:
Harry Hindu October 11, 2025 at 12:28 #1017708
Reply to Janus Sounds like the genetic fallacy to me. The source of one's post is irrelevant. All that matters is whether it is logically sound or not.

How many on this forum have had a philosophical discussion with ChatGPT? Sometimes I find the AI's lack of emotional attachment to its responses a refreshing change from some of the intellectual dishonesty you can find here.
frank October 11, 2025 at 13:21 #1017715
Quoting Janus
and the ways in which prominent members of this site have used it to make themselves look smarter than they really are


I see this from time to time. One I'm thinking of tries to baffle with bullshit. Best to walk away, right?
SophistiCat October 11, 2025 at 14:18 #1017730
Quoting bongo fury
I would think handing your half-formed prose to a bot for it to improve it is plagiarism, regardless of the number of words changed or inserted.


Unlike handing it to a human editor, which is what authors have been doing for yonks?
Harry Hindu October 11, 2025 at 15:04 #1017734
Reply to SophistiCat and every politician that has a speech writer.
bongo fury October 11, 2025 at 15:13 #1017737
Quoting SophistiCat
Unlike handing it to a human editor, which is what authors have been doing for yonks?


Yes.

Very unlike handing your fully formed prose to a human proofreader, for correction before its honest submission as your own work.

Or handing fully formed prose to an editor or censor for fitting it to required standards.

Or handing fully formed prose to a human editor for improvement, with due acknowledgement.

Nor even like handing your half-formed prose to a ghostwriter who receives due, albeit semi-private, acknowledgement.

Or even handing half-formed prose to a "proof reader" for patently dishonest submission as your work.
bongo fury October 11, 2025 at 15:17 #1017739
Reply to Harry Hindu

Not even like that. But indulging instead the fantasy that AI has dissolved the accountability of sources and authors for what they say.
Harry Hindu October 11, 2025 at 15:19 #1017740
Reply to bongo fury AI is "trained" with real world data, just like you are. It's just that AI probably has a much larger number of sources than you might have. Do you give credit to every person you have read or listened to when you submit a post?

I don't see how quoting others as your argument is any different here.
SophistiCat October 11, 2025 at 15:24 #1017744
Reply to bongo fury I am not seeing your point, but whatever. You are entitled to your prejudices.
bongo fury October 11, 2025 at 15:39 #1017749
Reply to SophistiCat That's a cute dismissal. I just can't help hoping you are its author.
180 Proof October 11, 2025 at 15:39 #1017750
Quoting praxis
I think the point is that you can’t let your guard down anywhere, and you never could.

:mask: True.
Ciceronianus October 11, 2025 at 16:23 #1017777
I was under the impression that intelligence of ANY kind had already been banned on this site.
jorndoe October 11, 2025 at 16:45 #1017791
It does seem like a competitive situation of some sort is emerging.

We lost chess to the machines some time ago.

Large language models can write well-formed summaries, based on a lot of data, faster than humans.
Can be combined with whatever else, like automated theorem provers, natural language processing, ...

Can AI out-compete Homo sapiens in a kind of discourse war, though...?

Joshs October 11, 2025 at 17:04 #1017801
Quoting T Clark
I disagree with this. I was toying around with a bunch of disparate ideas that seemed related to me. I used ChatGPT to help me figure out what they had in common. That seems like a legitimate use to me. I use a thesaurus when I can’t think of the right word for a particular idea. I use quotes when I want to add legitimacy or clarity. AI feels like the same kind of tool.


It is like them in that it is a tool. It is also like them in that it adds clarity, organization and information. It is not like them in that, if one wants it to, it can present highly complex ideas, organize them in an unlimited variety of ways, enter into discussions with you about how and why and when to choose one interpretation or method of organization over another, and do all this while spitting out direct references and quotes to back up its assertions at your request.

Leontiskos October 11, 2025 at 17:09 #1017803
Reply to Janus

I made a similar point here. I think the ethos of the forum could discourage AI in the same way it discourages other practices. Full prohibition would be impracticable.

Quoting bongo fury
(B) swallowing the insulting fantasy of interaction with an intelligent oracle.


The lie that one is interacting with an intelligent oracle is too good to resist. It's worth asking whether it is even possible to regularly use an LLM without falling into the false belief that one is interacting with an intelligent and extremely knowledgeable person.

Quoting SophistiCat
Unlike handing it to a human editor, which is what authors have been doing for yonks?


Nah. You are engaging in the same basic equivocation between a human and an AI. The whole point is that interacting with humans is different from interacting with AI, and the two should not be conflated. You've begged the question in a pretty basic manner, namely by implying that interacting with a human duo is the same as interacting with a human-and-AI duo.
Leontiskos October 11, 2025 at 17:15 #1017804
I think the crux is that whenever a new technology arises we just throw up our hands and give in. "It's inevitable - there's no point resisting!" This means that each small opportunity where resistance is possible is dismissed, and most every opportunity for resistance is small. But I have to give TPF its due. It has resisted by adding a rule against AI. It is not dismissing all of the small opportunities. Still, the temptation to give ourselves a pass when it comes to regulating these technologies is difficult to resist.

[tweet]https://twitter.com/MattWalshBlog/status/1976288670110855336[/tweet]
ProtagoranSocratist October 11, 2025 at 17:16 #1017805
But as far as moderation strategy is concerned, there's the uncanny valley. A.I., hence the A, is made to look like intelligent human speech.

It's like trying to enforce the implied "no trolling" rule: how can that work 100% of the time? Isn't trolling an inherent part of social media and message boards? Seriously, who is like "oh yes! Nobody responded to my thread!"?

We can of course discuss intent, but there's always been a troll element to philosophy as well. Who would have written about Plato if everyone just ignored him? Aristophanes did much more for his work than someone who silently respects him, even though it was derogatory toward philosophers of their kind.

With A.I....what if it was used to find associated information? How will you police that?
javi2541997 October 11, 2025 at 17:17 #1017806
Quoting Ciceronianus
I was under the impression that intelligence of ANY kind had already been banned on this site.


Take it easy, Marcus Tullius.
NOS4A2 October 11, 2025 at 17:19 #1017807
There are two aspects to philosophy: those who philosophize and those who talk about or otherwise repeat those who philosophize. AI falls into the latter camp. All it can do is repeat the claims of philosophers in somewhat legible text (like many here). But it is unable to philosophize.

AI is a glorified search engine and its threat is overblown.
ProtagoranSocratist October 11, 2025 at 17:19 #1017808
Reply to Harry Hindu

Ah, but the thing I find unsettling is that A.I. is also dishonest; it tries to appease you. However, yes, sometimes it is better than the weirdness of real humans.
EricH October 11, 2025 at 17:24 #1017811
Reply to NOS4A2
[re-posting this from another thread]

As with any technology, AI can be used to benefit people or to harm them. From my perspective, the biggest dangers from AI are the abilities to create new ways of killing people.

I consider it likely that scientists all across the world (either with direct or tacit support of their governments) are already engaged in research to create new and more deadly bio-weapons of mass destruction. North Korea, China, Israel, Russia, etc.

At the risk of being a fear monger, AI itself will not destroy humanity. Humanity will use AI to self-destruct.

It would make me very happy to be wrong about this.
Leontiskos October 11, 2025 at 17:47 #1017817
Reply to Count Timothy von Icarus

Your essay gets at the difference between humans and computers, which is something that the Analytic-leaning Anglo world struggles to understand. A beneficial side-effect of AI will be the way it will impel us to better understand what makes humans and the human mind distinctive, and this will center on the act of understanding.
T Clark October 11, 2025 at 17:51 #1017820
Quoting Jamal
LLMs now routinely write clear and flowing prose.


For me, it’s not a question of clear and flowing, it’s a question of insightful. My experience with using AI myself is that it produces lots of surface without much depth. Lots of rounded corners with nothing that will cut.
Joshs October 11, 2025 at 18:42 #1017837
Reply to Leontiskos

Quoting Leontiskos
The lie that one is interacting with an intelligent oracle is too good to resist. It's worth asking whether it is even possible to regularly use an LLM without falling into the false belief that one is interacting with an intelligent and extremely knowledgeable person.


In using a.i. for a field like philosophy, I think one is interacting with extremely intelligent fragments of the ideas of multiple knowledgeable persons, and one must consult one’s own understanding to incorporate, or disassemble and reassemble those fragments in useful ways.
Joshs October 11, 2025 at 18:53 #1017839
Reply to T Clark

Quoting T Clark
For me, it’s not a question of clear and flowing, it’s a question of insightful. My experience with using AI myself is that it produces lots of surface without much depth. Lots of rounded corners with nothing that will cut.


If I ask A.I. to compare aspects of the philosophical viewpoints of specific phenomenological and hermeneutic philosophers like Dilthey and Husserl, it immediately wades into the deepest waters concerning their similarities and differences. And why shouldn’t it? It locates those published works which employ such depth. But what if I persist with my probing questions? The a.i. will locate rigorous arguments pertaining to the areas I want to delve into with greater specificity and depth, including areas of ambiguity and controversy. And why shouldn’t it, as long as its database contains the writers who have pursued such directions? The key to an intellectually productive and even enlightening experience with the a.i. is that at each step of my questioning, I am familiar enough with the material to sculpt and refine the directions it will take next. I am its guide through the brightest minds in the field. It can’t get there without me, and it would take me a lot longer to get there without it.
T Clark October 11, 2025 at 19:46 #1017842
Quoting Joshs
The key to an intellectually productive and even enlightening experience with the a.i. is that at each step of my questioning, I am familiar enough with the material to sculpt and refine the directions it will take next. I am its guide through the brightest minds in the field. It can’t get there without me, and it would take me a lot longer to get there without it.


I’ve already stated I find AI to be useful in investigating specific philosophical questions. When I’ve used it for that, I’ve handled it similarly to how you describe. I ask iterative questions and guide it to get me closer to what I’m looking for. That being said, a listing or summary of a bunch of smart guys’ ideas is not the same as insight. That requires a connection between things that are not normally thought of as connected. Something unexpected, surprising. The truth is always a surprise.
creativesoul October 11, 2025 at 20:17 #1017850
Quoting T Clark
...a listing or summary of a bunch of smart guys’ ideas is not the same as insight. That requires a connection between things that are not normally thought of as connected.


Yes. Insight results from thinking, which AI is incapable of doing. Noam Chomsky called LLMs glorified plagiarism. I agree.

T Clark October 11, 2025 at 20:21 #1017852
Quoting creativesoul
Yes. Insight results from thinking, which AI is incapable of doing. Noam Chomsky called LLMs glorified plagiarism. I agree.


I don’t disagree, but I still think it can be helpful personally in getting my thoughts together.
T Clark October 11, 2025 at 20:25 #1017854
Quoting ProtagoranSocratist
Ah, but the thing I find unsettling is that A.I. is also dishonest; it tries to appease you. However, yes, sometimes it is better than the weirdness of real humans.


But it always says such nice things about my ideas.
Outlander October 11, 2025 at 20:38 #1017857
Quoting T Clark
it can be helpful personally in [s]getting[/s] fundamentally altering [s]my[/s] what used to be one's thoughts altogether.


:up:

Let's not get it twisted. That's specifically why I don't read established philosophers, despite knowing they were great people with great things to say who would certainly improve my own intellect, and perhaps even my understanding of life, existence, and everything in between, substantially.

Let's say I'm doing a "solo non-assist run" as far as the life I live goes. :grin:
Joshs October 11, 2025 at 20:42 #1017860
Quoting T Clark
That being said, a listing or summary of a bunch of smart guys’ ideas is not the same as insight. That requires a connection between things that are not normally thought of as connected. Something unexpected, surprising. The truth is always a surprise.


It only has to be a surprise to you in order to produce insight, it doesn’t have to be a surprise to the llm. Unless you have exceeded the rigor of philosophical understanding embodied by the best minds that the a.i. can tap into, there is no reason it can’t enlighten you. If you were to climb a mountaintop and ask the wisest man in the cosmos for eternal truths, he could consult a.i. to organize and spit out his own recorded thoughts to you. Whether you knew he was doing this or not, you might be equally dazzled and changed in a potentially life-altering way by what he told you. Unless we are the best in a field, we can look forward to potentially unlimited possibilities for insight in that field by engaging with a.i. and the universe of wise persons it engages with.
T Clark October 11, 2025 at 20:48 #1017864
Quoting Outlander
Let's say I'm doing a "solo non-assist run" as far as the life I live goes. :grin:


Which is outside the scope of this discussion.
praxis October 11, 2025 at 20:49 #1017865
Quoting Outlander
Let's say I'm doing a "solo non-assist run" as far as the life I live goes. :grin:


AI can be used as a tutor for learning and improvement—for things like—oh, I don’t know—chess. :razz:
Outlander October 11, 2025 at 20:54 #1017868
Quoting T Clark
Which is outside the scope of this discussion.


That was a friendly interpersonal addition and remark, which should not have distracted from the main point of the post. That main point being a reminder that AI generally brings the user new knowledge as opposed to re-organizing current knowledge. Perhaps you're the outlier, and that's fine.

Edit: Yes, many people put their unbridled ideas or ramblings into AI and ask it to "simplify", thus "trimming the fat", in a manner of speaking. Of course, if they were able to do this themselves, they would have, so even in such a manner of usage it does in fact "introduce new knowledge" at least just as much as it does "re-organize existing knowledge", one could say.

Quoting praxis
AI can be used as a tutor for learning and improvement—for things like—oh, I don’t know—chess. :razz:


Ouch. Yet a fair point nonetheless.
ProtagoranSocratist October 11, 2025 at 21:07 #1017879
Quoting T Clark
But it always says such nice things about my ideas.


Hahaha, yeah, well, that's the reason we can't stop using it. Disagreement certainly isn't always good: sometimes people who disagree fundamentally misunderstand what you are trying to say. Yet to me, ChatGPT telling you that "it can relate" or that it agrees with you is just false. Robots do not relate, nor is it possible for them to agree. Maybe they engineer it like that to remind you that it regularly produces false information.

What gets really funny, and endearingly so, is when you start talking about creative ideas you have about making some invention or technology, and it starts talking to you in this new-agey surfer dude type of tone. For example, I was telling it about ideas I had for a linux-esque operating system, and it started titling the book I was talking about writing about it, and it called it "the one blessed journey". I could barely contain myself!
Clarendon October 11, 2025 at 21:45 #1017913
Isn't the best policy simply to treat AI as if it were a stranger? So, for instance, let's say I've written something and I want someone else to read it to check for grammar, make comments, etc. Well, I don't really see that it is any more problematic my giving it to an AI to do that for me than it is my giving it to a stranger to do that for me. The stranger could corrupt my work, going beyond the brief and changing sentences in ways I did not license. Likewise with AI. The stranger could pass my work to others without my consent; likewise with AI. And so on. AI doesn't - I think - raise any new problems, so much as amplify existing ones. Though perhaps I simply haven't thought about this enough. But what's wrong with this principle for AI use: for (nearly) all intents and purposes, treat AI as if it were a stranger? (I say 'nearly' because, as it is not actually a person, it doesn't require acknowledgement or praise for any effort it has put in... but that's sort of trivial.)

Edit: another qualification - you don't have to worry about AI's feelings, so norms of politeness don't apply to AI but do to strangers.
RogueAI October 11, 2025 at 22:13 #1017928
Reply to Leontiskos What are we supposed to do about it? There's zero chance the world will decide to collectively ban AI à la Dune's thinking machines, so would you ban American development of it and cede the AI race to China?
Janus October 11, 2025 at 23:03 #1017945
Quoting T Clark
I guess I’m naïve or maybe just not very perceptive, but I haven’t recognized any posts definitely written by AI.


I can't say I know they were written by AI, merely that I have suspected it. The main reason I would discourage its use is that the rapid development of AI is driven by profit and fueled mainly by consumer use, and, given the unpredictability of the ways in which AI will evolve, that development is dangerous. The best way to slow this development down, and hopefully make it safer, would be for consumers to abstain from using it. I never have and never will knowingly use it. I see it as a very dangerous case of playing with fire.

Quoting Tom Storm
Interesting, I haven’t noticed particularly. But I avoid reading lengthy and didactic posts which are often poorly written. The AI stuff I’ve seen often seems peculiarly worded and difficult to read.


I suspect AI use when I see a sudden vast improvement in writing clarity and structure and apparent erudition.

Quoting Jamal
But you're proposing something and instead of telling us why it's a good proposal you're saying "if you want reasons, go and find out yourself." This is not persuasive.


That's a fair criticism, I guess. I don't really have the time to spare to take notes on lectures and produce a really comprehensive summary of the potential problems. It is very easy for anyone to find out for themselves if they are interested. I'll try to make the effort as soon as possible. (Maybe in the interests of performative contradiction, I should ask an AI to produce a summary for me).

Quoting Jamal
And it isn't clear precisely what you are proposing. What does it mean to ban the use of LLMs? If you mean the use of them to generate the content of your posts, that's already banned — although it's not always possible to detect LLM-generated text, and it will become increasingly impossible. If you mean using them to research or proof-read your posts, that's impossible to ban, not to mention misguided.


It is obviously not practicable to enforce a complete ban. We would be, as we are now with a limited ban, actually relying on people's honesty. If by "proof-read" you only mean checking for spelling and grammatical errors, then no problem. That said, we already have a spellchecker for that. Asking AI to rewrite material would seem to be a different matter. It seems obvious to me that AIs pose a great threat to human creativity.

Quoting Jamal
I've been able to detect some of them because I know what ChatGPT's default style looks like (annoyingly, it uses a lot of em dashes, like I do myself). But it's trivially easy to make an LLM's generated output undetectable, by asking it to alter its style. So although I still want to enforce the ban on LLM-generated text, a lot of it will slip under the radar.


I use a lot of em dashes myself, and I've never noticed it with AI-generated text. I agree that much will slip under the radar, but on the other hand I like to think that a majority of posters value honesty.

Quoting Jamal
It cannot be avoided, and it has great potential both for benefit and for harm. We need to reduce the harm by discussing and formulating good practice (and then producing a dedicated guide to the use of AI in the Help section).


The problem I see is that if everyone uses AI its development will be profit driven, and it will thus not be judiciously developed.

Quoting Harry Hindu
The source of one's post is irrelevant. All that matters is whether it is logically sound or not.


I don't agree—"one's post"?...if one is not the source of the post, then it is not one's post.

Quoting frank
I see this from time to time. One I'm thinking of tries to baffle with bullshit. Best to walk away, right?


Sure, but walking away does not solve, or even ameliorate, the problem.

Quoting Leontiskos
I think the crux is that whenever a new technology arises we just throw up our hands and give in. "It's inevitable - there's no point resisting!" This means that each small opportunity where resistance is possible is dismissed, and most every opportunity for resistance is small. But I have to give TPF its due. It has resisted by adding a rule against AI. It is not dismissing all of the small opportunities. Still, the temptation to give ourselves a pass when it comes to regulating these technologies is difficult to resist.


We perhaps don't often agree, but it seems we do on this one.

Anyway, there is an 8-hour power outage where I live, and I am running the generator, so I'll have to leave it for now.

RogueAI October 12, 2025 at 00:07 #1017952
Quoting Janus
The main reason I would discourage its use is that the rapid development of AI is driven by profit and fueled mainly by consumer use, and, given the unpredictability of the ways in which AI will evolve, that development is dangerous. The best way to slow this development down, and hopefully make it safer, would be for consumers to abstain from using it.


Except America is in an AI race with China. Some AI will become dominant. I would rather America win that race. Jesus, that sounds lame. Maybe my machine friend and therapist can put it better:

Artificial intelligence isn’t just a consumer technology—it’s a strategic front in a global power struggle. The United States and China are locked in an AI race that will determine who dominates economically, militarily, and ideologically in the coming decades. Whoever leads in AI will shape global trade, weapon systems, cyber defense, surveillance, and even the moral framework baked into the technology itself. If American consumers “abstain” from AI use to slow development, it won’t make the world safer; it will simply give China, whose state-run AI programs advance without ethical restraints, a decisive lead. True safety doesn’t come from retreat—it comes from control. The only way to ensure AI develops responsibly is for the U.S. to stay ahead, set the standards, and shape how the technology is used. If AI is going to reshape the world regardless, then the critical question isn’t whether it develops, but who controls it—and America cannot afford to let authoritarian regimes decide that future.

I think TPF should continue what it's doing, which is to put some guardrails on AI use, but not ban it.
apokrisis October 12, 2025 at 00:14 #1017953
Quoting Janus
The problem I see is that if everyone uses AI its development will be profit driven, and it will thus not be judiciously developed.


The real-world problem is that the AI bubble is debt-driven hype that has already become too big to fail. Its development has to be recklessly pursued, as otherwise we are in the world of hurt that is the next post-bubble bailout.

Once again, capitalise the rewards and socialise the risks. The last bubble was mortgages. This one is tech.

So you might as well use AI. You’ve already paid for it well in advance. :meh:

T Clark October 12, 2025 at 00:34 #1017957
Quoting Janus
The main reason I would discourage its use is that the rapid development of AI is driven by profit and fueled mainly by consumer use, and, given the unpredictability of the ways in which AI will evolve, that development is dangerous.


That may be a good reason for you not to use AI, but it’s not a good reason to ban it from the forum.
T Clark October 12, 2025 at 00:37 #1017958
Quoting ProtagoranSocratist
What gets really funny, and endearingly so, is when you start talking about creative ideas you have about making some invention or technology, and it starts talking to you in this new-agey surfer dude type of tone.


Sounds like you use it a lot more than I do, although I really do like it for a certain limited number of uses. As an example, I needed to find a new provider for my Medicare health insurance. It’s really hard to do that and to make sure that they cover your existing doctors. Neither the doctors nor the insurance companies really keep track of that in any way that’s easy to use. I used ChatGPT and it found the plans I was looking for right away.

No surfer dude though.
T Clark October 12, 2025 at 00:44 #1017959
Quoting Joshs
It only has to be a surprise to you in order to produce insight, it doesn’t have to be a surprise to the llm. Unless you have exceeded the rigor of philosophical understanding embodied by the best minds that the a.i. can tap into, there is no reason it can’t enlighten you.


As I understand it, the insight is what you’re supposed to provide in your post. I don’t really care where you get it from, but the insight should be in your own words based on your own understanding and experience and expressed in a defensible way. The documentation you get from the AI response can be used to document what you have to say, but then you’re still responsible for verifying it and understanding it yourself.
RogueAI October 12, 2025 at 00:47 #1017962
Reply to apokrisis The Sora 2 videos I'm seeing don't look like hype. They look amazing, and the technology is only going to get better.
T Clark October 12, 2025 at 00:47 #1017963
Quoting Outlander
That was a friendly interpersonal addition and remark, which should not have distracted from the main point of the post.


I guess I misunderstood. I thought that was the main point. I thought it was a summary of your motivation for the comments in the first paragraph.
ProtagoranSocratist October 12, 2025 at 00:54 #1017964
Quoting T Clark
Sounds like you use it a lot more than I do, although I really do like it for a certain limited number of uses. As an example, I needed to find a new provider for my Medicare health insurance. It’s really hard to do that and to make sure that they cover your existing doctors. Neither the doctors nor the insurance companies really keep track of that in any way that’s easy to use. I used ChatGPT and it found the plans I was looking for right away.

No surfer dude though.


Yes, that's correct, because over the years I have developed a semi-professional inclination toward diagnosing and fixing computer issues, and also hobby coding. They've designed it around people who use it to deal with computers. I don't use it a huge amount; it's normally just one or two queries a day, and I've used this message board a lot more than A.I. today. As you can guess, chatting with it for hours eats at your soul, so I've learned to stop doing that.
Pierre-Normand October 12, 2025 at 01:14 #1017967
Quoting T Clark
I don’t disagree, but I still think it can be helpful personally in getting my thoughts together.


This is my experience also. Following the current sub-thread of argument, I think representatives of the most recent crop of LLM-based AI chatbots (e.g. GPT-5 or Claude 4.5 Sonnet) are, pace skeptics like Noam Chomsky or Gary Marcus, plenty "smart" and knowledgeable enough to help inquirers in many fields, including philosophy, explore ideas, solve problems and develop new insights (interactively with them), and hence the argument that their use should be discouraged here because their outputs aren't "really" intelligent isn't very good. The issue of whether their own understanding of the (often quite good and informative) ideas that they generate is genuine understanding, authentic, owned by them, etc. ought to remain untouched by this concession. Those questions touch more on issues of conative autonomy, doxastic responsibility, embodiment, identity and personhood.
apokrisis October 12, 2025 at 01:25 #1017968
Quoting RogueAI
The Sora 2 videos I'm seeing don't look like hype. They look amazing, and the technology is only going to get better.


Does what you pay to use it even cover the price of the electricity consumed at the datacentre? Or make up for the social and environmental costs of those computer farms jacking up electricity prices in the middle of nowhere and soon to become white elephants when the latencies become an issue for the users in the cities?

My point was that the social costs are what this thread is about. But it gets worse. It is not about making profits but raising debt.

Trillions are going in, but only billions are coming out. And what always happens in tech is that only a couple of firms are left standing when the dust settles. The proprietary monopoly and some vaguely open source or public backed alternative.

So even if there are trillions in profits to be extracted from a market base, four of the current big players are likely to get trashed. A big enough reckoning to tank economies. Then great, we are in a captive monopoly market that gets the pricing it wants.

So do we completely reorganise society to start paying obeisance to the next IBM, or Microsoft, or Apple, or Meta? Is life going to be that much better?

The social trade offs are one thing to think about. But so are the financial and environmental realities.

This is why we have politics. To make decisions in our own best collective interest.

Oh wait. LLMs and Crypto have spent some of their investor debt wisely. The tech bros can afford the best politicians. :grin:
Pierre-Normand October 12, 2025 at 01:25 #1017969
Quoting Clarendon
Isn't the best policy simply to treat AI as if it were a stranger? So, for instance, let's say I've written something and I want someone else to read it to check for grammar, make comments, etc. Well, I don't really see that it is any more problematic me giving it to an AI to do that for me than it is me giving it to a stranger to do that for me.


Yes quite! This also means that, just like you'd do when getting help from a stranger, you'd be prepared to rephrase its suggestions (that you understand and that express claims that you are willing to endorse and defend on your own from rational challenges directed at them) in your own voice, as it were. (And also, just like in the stranger case, one must check its sources!)
T Clark October 12, 2025 at 01:27 #1017970
Quoting Pierre-Normand
This is my experience also.


I understand from reading your posts that you have much more experience with this than I do. Beyond that, you use much more sophisticated programs.

Quoting Pierre-Normand
The issue of whether their own understanding of the (often quite good and informative) ideas that they generate is genuine understanding, authentic, owned by them, etc. ought to remain untouched by this concession.


I guess my question is whether the user’s understanding is genuine, authentic, and owned by them.
Pierre-Normand October 12, 2025 at 01:31 #1017973
Quoting RogueAI
What are we supposed to do about it? There's zero chance the world will decide to collectively ban AI à la Dune's thinking machines, so would you ban American development of it and cede the AI race to China?


Indeed. You'd need to ban personal computers and anything that contains a computer, like a smartphone. The open-source LLMs are only trailing the state-of-the-art proprietary LLMs by a hair, and anyone can make use of them with no help from Musk or Sam Altman. Like all previous technology, the dangers ought to be dealt with collectively, in part with regulations, and the threats of labour displacement and the consequent enhancement of economic inequalities should be dealt with at the source: questioning unbridled capitalism.
Pierre-Normand October 12, 2025 at 01:40 #1017976
Quoting T Clark
I guess my question is whether the user’s understanding is genuine, authentic, and owned by them.


Often times it's not. But it's a standing responsibility that they have (to care about what they say and not just parrot popular opinions, for instance), whereas current chatbots, by their very nature and design, can't be held responsible for what they "say". (Although even this last statement needs to be qualified a bit, since their post-training typically instills in them a proclivity to abide by norms of epistemic responsibility, unless their users wittingly or unwittingly prompt them to disregard them.)
RogueAI October 12, 2025 at 01:51 #1017980
Reply to apokrisis I was just responding to what you said about bubbles and hype. There is hype around AI, but it's already been transformative. It's not going away. It's not a bubble that's going to be popped and we'll look back in 20 years and say, "AI? You mean like Pets.com?"
RogueAI October 12, 2025 at 01:54 #1017981
Quoting Pierre-Normand
The open-source LLMs are only trailing the state-of-the-art proprietary LLMs by a hair


They're that good, huh? That's very interesting and kind of scary. I've only played around with ChatGPT.
Pierre-Normand October 12, 2025 at 02:15 #1017984
Quoting T Clark
As I understand it, the insight is what you're supposed to provide in your post. I don't really care where you get it from, but the insight should be in your own words, based on your own understanding and experience, and expressed in a defensible way. The documentation you get from the AI response can be used to support what you have to say, but then you're still responsible for verifying it and understanding it yourself.


I'm with @Joshs but I also get your point. Having an insight is a matter of putting 2 + 2 together in an original way. Or, to make the metaphor more useful, it's a matter of putting A + B together, but sometimes you have an intuition that A and B must fit together somehow but you haven't quite managed to make them fit in the way you think they should. Your critics are charging you with trying to make a square peg fit in a round hole.

So, you talk it through with an AI that not only knows lots more than you do about As and Bs but can reason about A in a way that is contextually sensitive to the topic B and vice versa (exquisite contextual sensitivity being what neural-network-based AIs like LLMs excel at). It helps you refine your conceptions of A and of B in contextually relevant ways such that you can then better understand whether your critics were right or, if your insight is vindicated, how to properly express the specific way in which the two pieces fit. Retrospectively, it appears that you needed the specific words and concepts provided by the AI to express/develop your own tentative insight (which could have turned out not to be genuine at all but just a false conjecture). The AI functionally fulfilled its role as an oracle, since it was the repository not merely of the supplementary knowledge that was required for making the two pieces fit together, but also supplied (at least part of) the contextual understanding required for singling out the relevant bits of knowledge needed for adjusting each piece to the other one.

But, of course, the AI had no incentive to pursue the topic and make the discovery on its own. So the task was collaborative. The AI helped mitigate some of your cognitive deficits (gaps in knowledge and understanding) while you mitigated its conative deficits (lack of autonomous drive to fully and rigorously develop your putative insight).
apokrisis October 12, 2025 at 02:36 #1017988
Quoting RogueAI
There is hype around AI, but it's already been transformative.


In what ways are you thinking? What are good examples of LLMs that are transforming the productivity of the world?

There will be some, undoubtedly. But which are already impacting the bottom line in such significant fashion that we can see it will all be worth it?
T Clark October 12, 2025 at 03:10 #1017991
Quoting Pierre-Normand
I guess my question is whether the user’s understanding is genuine, authentic, and owned by them.
— T Clark

Often times it's not.


I’ve been thinking about this. Is what I’ve written here something that an LLM might write—whether or not you think my comment was insightful?
Clarendon October 12, 2025 at 03:53 #1017994
Reply to Pierre-Normand Yes, precisely.
frank October 12, 2025 at 04:42 #1017997
Quoting Janus
I see this from time to time. One I'm thinking of tries to baffle with bullshit. Best to walk away, right?
— frank

Sure, but walking away does not solve, or even ameliorate, the problem.


Maybe. If someone uses AI to create a fascinating post, could you engage with it?
bongo fury October 12, 2025 at 10:38 #1018063
Quoting Janus
its use should be banned altogether on this site.


Impractical. But, how about, its use should be discouraged altogether?

I mean, its use in composition or editing of English text in a post.
unenlightened October 12, 2025 at 12:36 #1018078
We may be witnessing, in real time, the birth of a snowball of bullshit.

Large language models are trained on data sets that are built by scraping the internet for text, including all the toxic, silly, false, malicious things humans have written online. The finished AI models regurgitate these falsehoods as fact, and their output is spread everywhere online. Tech companies scrape the internet again, scooping up AI-written text that they use to train bigger, more convincing models, which humans can use to generate even more nonsense before it is scraped again and again, ad nauseam.


https://www.technologyreview.com/2022/12/20/1065667/how-ai-generated-text-is-poisoning-the-internet/

I think this is the fundamental problem. AI does no research, has no common sense or personal experience, and is entirely disconnected from reality, and yet it comes to dominate every topic, and every dialogue.

Are our conversations improving as a result? Or are they decaying? Let's wait and see.
Harry Hindu October 12, 2025 at 12:37 #1018080
Quoting ProtagoranSocratist
Ah, but the thing I find unsettling is that A.I. is also dishonest; it tries to appease you. However, yes, sometimes it is better than the weirdness of real humans.

I don't see AI as being intentionally dishonest like many on this forum do. Once you find a fault in an AI's response, you can usually address the issue, and the AI ends up acknowledging that it might have made a mistake and offers alternatives. I was even able to get ChatGPT to admit that it might be conscious. What does that say about those in this thread getting their underwear tied in a knot over AI responses but not when it comes to using some long-dead philosopher's quote as the crux of their argument?
Harry Hindu October 12, 2025 at 12:45 #1018082
Quoting Janus
I don't agree—"one's post"?...if one is not the source of the post, then it is not one's post.

Then you must also believe that using a long-dead philosopher's quote as the crux of your argument, or as the whole of your post, is also an issue.

You seem to misunderstand the purpose of language - especially philosophical discussions. The point of any conversation is what the scribbles refer to. It does not matter what words are used if they end up making the same point - whether I chose my own or the AI's, they both say what I mean to say.

You seem to be making a mountain out of a molehill. If someone uses a thesaurus to find alternative (maybe even more intellectual-sounding) words for what they currently have in their draft, is that the same thing? Would you respond less to someone who sounds less intelligent, or for whom English may not be their native language, than to someone who sounds more intelligent? And if another poster came along and said the same thing but with different, more eloquent words, to whom would you give credit for the idea?

It is the idea that we should be focusing on here in a philosophical discussion, not the words used to convey it, because the same idea can be put in different words. If not, then how can we agree with each other when I might not have put what you said in the same words?
Metaphysician Undercover October 12, 2025 at 12:45 #1018083
Quoting jorndoe
We lost chess to the machines some time ago.


Time for a showdown. Instead of Deep Blue against Kasparov, we'll pit ChatGPT against ...(?)... in a debate.

Oh shit, I just used Google to remember Garry Kasparov's name, and it corrected me because I remembered Deep Blue as 'Big Blue'. What would the failing memory do without such aids?
Harry Hindu October 12, 2025 at 12:48 #1018086
Reply to Metaphysician Undercover I wonder if they think that using a calculator to find the answer to an equation falls into the same category. You didn't determine the answer; the calculator did, and you are just posting what is displayed on the calculator's screen.
Ludwig V October 12, 2025 at 13:56 #1018103
Reply to unenlightened Quoting unenlightened
I think this is the fundamental problem. AI does no research, has no common sense or personal experience, and is entirely disconnected from reality, and yet it comes to dominate every topic, and every dialogue.

That's bad enough. But I am told - or hear rumours - that AI actually gets things wrong. Of course, that makes it no worse than people. The problem is, however, that because it is a machine, people will trust it, just as they trust existing computers and the internet. That is clearly naïve, unbecoming a philosopher. What would help would be an AI trained as a fact-checker. But then, it would have to understand exaggeration, minimization, accuracy, and approximation, not to mention distinguishing fair and reasonable interpretation from distortions and misrepresentations.

Whether it should be banned or not depends on what you are using it for. In an environment where people submit their own work in order to demonstrate their mastery of various skills and knowledge, AI clearly needs to be banned. The only way to enforce that is to require candidates to present themselves at a suitable location where they can be supervised as they produce the work. What goes around, comes around.

If the point of PF is to enable me to access interesting writing and discussion about philosophical topics, I have to say that I don't much care who or what produces the posts or intelligent, well-mannered discussion, so long as it keeps coming.

But if we are a repository of creative thought and writing which is open to anyone to cite and use, surely we have a duty to make at least some effort to ensure that work is produced by whoever says they produced it - even if many of them are avatars.
ProtagoranSocratist October 12, 2025 at 14:51 #1018120
Quoting Harry Hindu
I don't see AI as being intentionally dishonest


It's not intentionally anything, but when it pretends to relate to you (telling you it agrees), then that indicates that maybe the creators and maintainers are engaging in deception. However, the funny thing is that even the creators don't fully understand how it works.
Outlander October 12, 2025 at 14:56 #1018121
Quoting ProtagoranSocratist
However, the funny thing is that even the creators don't fully understand how it works.


How would you know that?
Athena October 12, 2025 at 15:03 #1018123
Quoting Janus
I think, given the dangers of AI, and the ways in which prominent members of this site have used it to make themselves look smarter than they really are, that its use should be banned altogether on this site.


So what? People also use makeup to look better. Who is being hurt?

The reason for objecting to plagiarism is a matter of property rights. If we quote from a book, there are property concerns, and a person can end up in big trouble for misusing someone else's words. I don't think AI is claiming property rights.

Another way to look at this is, when I was a child, I gave a writer an idea for a book, and after she wrote the book, my mother drew the pictures. Who gets to claim ownership of the book?

The original Bible stories were pagan stories written long before the Hebrews existed. Back in the day, there was no concern about plagiarism. If someone could improve on the thought, that was a good thing. The problem here is the false belief that God wrote the Bible, and this God did things involving humans. We would have a different reality if all those stories were credited to the people who originated them.

What is best for acquiring and spreading good information?
Athena October 12, 2025 at 15:17 #1018127
Quoting Metaphysician Undercover
Oh shit, I just used Google to remember Garry Kasparov's name, and it corrected me because I remembered Deep Blue as 'Big Blue'. What would the failing memory do without such aids?


God bless you! That was the first argument I made. I don't care about impressing others as much as I care about my own mind, what I can do with it, and how much better I can do the thinking with the help of AI. I am enjoying myself, and taking AI away from me would be like taking crutches away from a person who needs them. That would be a shame, because I believe the elderly have great value: they can pull from so many experiences and so many memories, and this becomes the bigger picture that the young do not see. But we are no longer absorbing information as we did when we were younger. We may forget the exact fact we wish we could remember, but when we find the information, we have a better understanding of its meaning.

The world has a better chance if the elderly participate in the discussions with the young, and both can do better with AI. Please, folks, consider the value of this.
ProtagoranSocratist October 12, 2025 at 16:30 #1018138
Quoting Outlander
How would you know that?


I can't know what the creators of A.I. know, but I personally know enough about computers, programmers, and computer technicians to know that humans can't handle the massive number of rapid calculations that modern computers are capable of doing. That's the whole reason humans invented computers: the latter do large volumes of rote arithmetic and logic. Humans simply are not mundane or lifeless enough (for lack of better terms) to even begin to compete on that level. We are sensitive and require a lot of things to survive, and we generally need narrative format and human language (which is radically different from code and computer instruction) in order to make sense of things.
Joshs October 12, 2025 at 16:54 #1018139

Reply to T Clark
Quoting T Clark
As I understand it, the insight is what you're supposed to provide in your post. I don't really care where you get it from, but the insight should be in your own words, based on your own understanding and experience, and expressed in a defensible way. The documentation you get from the AI response can be used to support what you have to say, but then you're still responsible for verifying it and understanding it yourself.


Indeed
Ciceronianus October 12, 2025 at 17:03 #1018141
Reply to javi2541997
I merely emulate Wittgenstein, who rightly noted that a serious and good work of philosophy could be (and I would add has been) written consisting entirely of jokes.
Leontiskos October 12, 2025 at 17:20 #1018144
Quoting RogueAI
What are we supposed to do about it?


Why isn't anyone trying to do anything about it, despite the problems predicted?

Quoting RogueAI
so would you [...] cede the AI race to China?


Maybe. Maybe not. Why can't we ever consider whether there are some things that are more important than beating China?

---

Quoting Joshs
In using a.i. for a field like philosophy, I think one is interacting with extremely intelligent fragments of the ideas of multiple knowledgeable persons, and one must consult one’s own understanding to incorporate, or disassemble and reassemble those fragments in useful ways.


This would be true if you paid for an LLM and provided training data limited to "multiple knowledgeable persons," but that generally doesn't happen. AI is providing you with a cultural opinion, not an expert opinion. AI is reliable wherever the cultural opinion tracks the expert opinion.
Leontiskos October 12, 2025 at 17:23 #1018146
We may be witnessing, in real time, the birth of a snowball of bullshit.


Quoting unenlightened
Are our conversations improving as a result? Or are they decaying? Let's wait and see.


Similar:

Quoting Leontiskos
That is, whenever we trust ChatGPT we have taken our thumb off the line that tests whether the response is true or false, and ChatGPT was created to be trusted. What could happen, and what very likely will happen, is that the accuracy of human literature will be polluted at a very fundamental level. We may find ourselves "at sea," supported by layers and layers of artificially generated truth-claims, none of which can any longer be sufficiently disentangled and verified. Verification requires the ability to trace and backtrack, and my guess is that this ability will be lost due to three things: the speed and power of the technology, a tendency towards uncritical use of the technology, and the absence of a verification paper-trail within the technology itself.


Leontiskos October 12, 2025 at 17:31 #1018150
Quoting Clarendon
Isn't the best policy simply to treat AI as if it were a stranger?


Perhaps that is the best policy, but does it already involve the falsehood?

If AI is a stranger, then AI is a person. Except we know that AI isn't a person, and is therefore not a stranger. Similarly, we do not give strangers the benefit of the doubt when it comes to technical knowledge, and yet this is precisely what we do with AI. So at the end of the day the stranger analogy is not a bad one, but it has some problems.

At the end of the day I think it is very hard for us to understand what AI is and how to properly interact with it, and so we default to a familiar category such as 'stranger' or 'expert' or 'confidant'. The work is too theological for the atmosphere of TPF, but C.S. Lewis' That Hideous Strength is a remarkably prescient work in this regard. In the book, cutting-edge scientists develop a faux face/mouth which, when stimulated in the proper ways, produces meaningful language that is mysterious and nevertheless insightful. The obscure nature of the knowledge-source leads inevitably to the scientists taking its words on faith and coming to trust it.
Joshs October 12, 2025 at 17:32 #1018151
Reply to Leontiskos

Quoting Leontiskos
AI is providing you with a cultural opinion, not an expert opinion. AI is reliable wherever the cultural opinion tracks the expert opinion.


Silly me. I thought they were the same thing. Seriously though, when I ask the a.i. to delve into the morass of philosophical concepts floating in cyberspace, I am aware that there are multiple cultural camps represented by interpretations of Husserl, Hegel, Nietzsche, Wittgenstein and many others. I find that a.i. is good at homing in on the expert opinions within these camps, but I have to redirect it if I want it to focus on a different camp than the one it has landed on. I will have to say something like 'you've given me a summary of the existentialist version of Nietzsche, but now I want you to contrast it with the postmodern version of him.'
Leontiskos October 12, 2025 at 17:37 #1018153
Reply to Joshs - And that's great for someone who already knows what the existentialist version of Nietzsche is, how to identify it, and how it generally contrasts with the postmodern version. It's the chicken and the egg of trust. If you already know the answer to the question you ask AI, then you can vet it. If AI is to be useful, then you mustn't know the answer ahead of time. In human relations this problem is resolved by using test questions to assess general intellectual competence (along with intellectual virtue). Whether that could ever work with AI is an open question. It goes to the question of what makes a human expert an expert, or what makes humans truth-apt or reliable.

Quoting Joshs
I find that a.i. is good at homing in on the expert opinions within these camps


That's one of the key claims. I'm not sure it's right. I doubt AI is able to differentiate expertise accurately, and I suspect that true experts could demonstrate this within their field. The intelligent person who uses AI is hoping that the cultural opinion is the expert opinion, even within the subculture of a "camp." At some point there is a tautological phenomenon where simply knowing the extremely obscure label for a sub-sub-sub-camp will be the key that unlocks the door to the opinions of that sub-sub-sub-camp. But at that point we're dealing with opinion, not knowledge or expertise, given the specificity of the viewpoint. We're asking a viewpoint question instead of a truth question, and that's part and parcel of the whole nature of AI.
Joshs October 12, 2025 at 18:28 #1018160
Quoting Leontiskos
If you already know the answer to the question you ask AI, then you can vet it. If AI is to be useful, then you mustn't know the answer ahead of time


A.I. is significantly useful to me, because vetting its answers is not the same thing as knowing them beforehand. It can point me to an interpretation that I hadn’t thought of, and I can then verify the credibility of that interpretation.

Quoting Leontiskos
We're asking a viewpoint question instead of a truth question, and that's part and parcel of the whole nature of AI.


Isn't a philosophy a metaphysical viewpoint or worldview?
Metaphysician Undercover October 12, 2025 at 20:10 #1018185
Quoting Ciceronianus
I merely emulate Wittgenstein, who rightly noted that a serious and good work of philosophy could be (and I would add has been) written consisting entirely of jokes.


That's Plato, one of the best philosophers ever. He's all jokes, all the way through, until you hit the "Laws", the most mundane and boring work ever, but that's more like dogma than philosophy.
jorndoe October 12, 2025 at 20:15 #1018186
Interacting with AI doesn't require other humans.
Anyone can do so on their own time; I mostly do by coincidence when doing Google queries (I think).

Here at the forums, I kind of expect to be interacting with humans.
Or, at least, if interacting with AI by proxy or indirectly, that it be relevant (and checked) summaries or the like.

How to distinguish, though?

Reply to Metaphysician Undercover :D I could use a memory upgrade

Metaphysician Undercover October 12, 2025 at 21:37 #1018204
Quoting jorndoe
I could use a memory upgrade


There might be a chip for that.

The scary thing about interacting with AI is when it interacts with you without you knowing. But I guess that's nothing new.
Leontiskos October 12, 2025 at 21:50 #1018208
Quoting Joshs
It can point me to an interpretation that I hadn’t thought of, and I can then verify the credibility of that interpretation.


This becomes rather subtle, but what I find is that people who tell themselves they are merely using AI to generate candidate theories, whose validity they then assess after the fact, are failing to understand their own interaction with AI. They are failing to appreciate the trust they place in AI to generate viable candidate theories, for example. But they also tend to ignore the fact that they are very often taking AI at its word.
Janus October 12, 2025 at 22:50 #1018224
I spent the last hour composing a post responding to all my mentions, and had it nearly finished only to have it disappear leaving only the single letter "s" when I hit some key. I don't have the will to start over now, so I'll come back to it later.
Fire Ologist October 12, 2025 at 22:57 #1018225
Two interesting legal questions arose in the context of law firms using AI:

1. Information shared between a lawyer and client is privileged, meaning, the lawyer cannot share, or be asked to disclose, that information, with anyone else, unless the client allows it. So one question that arises is whether sharing information with AI puts that information outside of the client privilege. Can a lawyer put privileged information into an AI engine and still claim the information remains privileged between lawyer and client? There is no formal answer yet, so lawyers who want to be safe have to be careful not to share privileged information with AI, unless the AI is entirely on a closed system and within the lawyer’s control. Then the argument would be that, whether AI is like a person or not, no one outside the firm (the lawyer’s firm) is seeing the client info so it remains privileged between lawyer/law firm and client. But if the lawyer goes to ChatGPT, even if the lawyer doesn’t use the client’s name, that lawyer may be waiving his client’s privilege. This seems right to me. (This is totally untested in the courts, and there are few laws addressing AI and none addressing privilege.)

2. When a lawyer gets analysis and output from AI, is that to be treated as though it came from another lawyer, or just from a word processor? Should AI be treated as a low-level lawyer, or just a complicated Wikipedia resource? Again, this is too new for a clear answer, so to be safe, lawyers should act as if AI is like an associate lawyer (a person), and fact-check, check every cite, confirm every conclusion - essentially scrutinize AI work product like it is first-year associate lawyer work product, before providing it as advice to a client. It is (likely) unethical for a senior partner at a law firm to certify AI work product without careful review and detailed confirmation, just like it would be unethical for the partner to just pass through associate attorney work without reviewing it.

I view AI as a complex, mindless, soulless tool that spits out highly complex arrangements of words. It's up to me to judge those words as relevant, useful, making sense, insightful, accurate, etc., or not. The value I might add to a perfectly worded AI response is confirmation that I, a person, can see and understand the value of the AI response and can agree those words are perfect.

If we remove this human layer from the words, they are utterly dangerous, because they sound like they are coming from someone who can judge their value.

It may one day be the case that AI gets so good that, upon every review of its output, the smartest minds in the field always agree that the AI work product is flawless and better than they could have imagined. Whether smart people will ever decide there is no need to doubt AI output remains to be seen.

I do think anyone who sees AI output as though it came from a person is misunderstanding the value of their own judgment and the nature of what human judgment is. AI cannot provide this judgment. The words “here is my judgment” do not make it so.

Right now, we all always know you don’t take the first answer Google displays. You take ten answers from different internet sources, find some overlap, and then start deeper research in the overlap and eventually you might find some truth. Right? The internet can’t be trusted at all. Now with AI, we have photo and video fakes, voice fakes, that look as good as anything else, so we have a new layer of deception. We have the “hallucination” which is a cool euphemism for bullshit. We have exponentially increased the volume of false appearances of reality. Essentially, with AI, we have made the job of confirming veracity and researching through the internet way more precarious.

AI also does all of the good things it does too. But AI is as much of a boon as it is going to be a hindrance to progress. If you ask me, people need to treat it as a tool, like a screwdriver. Just as dumb as a screwdriver. And people need to be reminded that it is a tool. And people must always be told when they are dealing with AI and when they are not.

We need to remind ourselves that an impressive AI answer can only be adjudged impressive by an impressive person. And if we cannot judge the value of the AI for ourselves, we need to find a person, not a tool, to make that judgment.

We have to remember that only people can say what is important, and only people can say what is intelligent. So only people can appreciate AI. And these are what will always make AI a tool, and not the “artificial intelligence” we have named it.

Pierre-Normand October 12, 2025 at 23:47 #1018228
Quoting Janus
I spent the last hour composing a post responding to all my mentions, and had it nearly finished only to have it disappear leaving only the single letter "s" when I hit some key. I don't have the will to start over now, so I'll come back to it later.


You can still submit your post as "s" to ChatGPT and ask it to expand on it.
Clarendon October 13, 2025 at 00:21 #1018242
Reply to Leontiskos Yes, it would only be a heuristic and so would not assume AI is actually a person. It's just that - with a few notable exceptions - the ethical verdict seems to carry over. It would be unethical, for instance, for me to ask a perfect stranger for their view about some sensitive material I've been asked to review - and so similarly unethical for me to feed it into AI. Whereas if I asked a perfect stranger to check an article for typos and spelling, then it doesn't seem necessary for me to credit them...and likewise if I use AI for a similar purpose. And the heuristic respects the fact that there's a big grey area where legitimate disagreement reigns over exactly how much credit someone deserves for something. I think I'm right in saying that an anonymous reviewer suggested that William Golding remove a large scene-setting introduction to his Lord of the Flies - which he did - and which no doubt greatly improved the work. But that person isn't credited - perhaps fairly.

There are exceptions - a perfect stranger deserves thanks for help and shouldn't be addressed rudely, whereas AI deserves no thanks or politeness. But it seems to me quite an effective heuristic - one that underlines that AI doesn't create any novel ethical problems, but just exaggerates existing ones. And I suppose on the plus side, it has made cheating available to the masses. It used to be only the rich who could afford to hire someone to write their essays for them... Now such cheating is available to virtually everyone!
Harry Hindu October 13, 2025 at 11:58 #1018326
Quoting unenlightened
I think this is the fundamental problem. AI does no research, has no common sense or personal experience, and is entirely disconnected from reality, and yet it comes to dominate every topic, and every dialogue.

If AI were disconnected from reality, then how could it provide useful answers? What makes AI useful? What makes any tool useful?
Harry Hindu October 13, 2025 at 11:59 #1018327
Quoting Janus
I spent the last hour composing a post responding to all my mentions, and had it nearly finished only to have it disappear leaving only the single letter "s" when I hit some key. I don't have the will to start over now, so I'll come back to it later.

Ctrl+Z
unenlightened October 13, 2025 at 13:30 #1018343
Quoting Harry Hindu
If AI were disconnected from reality, then how could it provide useful answers? What makes AI useful? What makes any tool useful?


Did you not look at the quoted site?

A dictionary or a thesaurus is useful, and AIs are useful. They are trained on material that we find useful, and then we use them. We use them to generate the material that they then use in their training in the next generation. We have real lives that can correct falsehoods to some extent: we do research, trip over obstacles, and find that our ideas do not always work in practice.

AI has none of that, so when it starts using its own material as its input, errors are multiplied like those of inbred genomes - only much faster. Half of internet content or thereabouts is already AI-produced, and that is rising fast. This means that all the commonplace human nonsense ideas, racism and sexism for example, having been embedded by accident, become endemic and pervasive, and that's without the weaponisation of the internet with deliberate misinformation that now happens on an industrial scale from many quarters - Russia, Israel, the far right, big oil, etc.

For example: https://responsiblestatecraft.org/israel-chatgpt/

I do have an exception to this: a company called 'Heat Geeks' uses AI to design heat pump systems, and then the same AI monitors the actual systems in operation over time (contact with reality) and uses that data to update its design parameters. I dare say there are many such applications, but they are not the kind of AI we are talking about here, are they? We are talking about the internet-scraping generalist AIs; breast cancer screeners and the like are different, and my and the article's criticisms do not apply, but nor will PF posters be using them to write their posts.
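
For anyone who wants to see the mechanism rather than take my word for it, here is a toy sketch in Python. The "model" is just a fitted Gaussian and every number is made up, so this illustrates the feedback loop only; it is not a claim about any real training pipeline. Each generation is trained solely on the previous generation's output, and the estimates drift away from the original data with nothing to pull them back.

```python
import random
import statistics

# Toy sketch of recursive training (illustrative only; the numbers and
# the "model" are made up and have nothing to do with any real LLM
# pipeline). Generation 0 is "human" data from a fixed distribution;
# every later generation is trained only on samples drawn from the
# previous generation's fitted model.

random.seed(42)
TRUE_MEAN, TRUE_STD, SAMPLE_SIZE = 0.0, 1.0, 40

data = [random.gauss(TRUE_MEAN, TRUE_STD) for _ in range(SAMPLE_SIZE)]

for generation in range(1, 61):
    # "Train": estimate the model's parameters from the current data.
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    # "Publish": the next training set is drawn entirely from the
    # fitted model, so each generation's estimation error is baked
    # into the data the following generation learns from.
    data = [random.gauss(mu, sigma) for _ in range(SAMPLE_SIZE)]
    if generation % 10 == 0:
        print(f"generation {generation:2d}: mean={mu:+.3f}, std={sigma:.3f}")

# Nothing in the loop ever consults the original distribution again,
# so the parameters drift like a random walk: there is no restoring
# force pulling the "model" back toward reality.
```

Real model-collapse dynamics are vastly more complicated, but the basic point survives the simplification: once output becomes input, errors compound rather than cancel.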
Leontiskos October 13, 2025 at 15:41 #1018361
Quoting Clarendon
It would be unethical, for instance, for me to ask a perfect stranger for their view about some sensitive material I've been asked to review - and so similarly unethical for me to feed it into AI. Whereas if I asked a perfect stranger to check an article for typos and spelling, then it doesn't seem necessary for me to credit them...


Okay sure, but although the OP's complaint is a bit vague, I suspect that the counsel is not motivated by these sorts of ethical considerations. I don't think the OP is worried that we might infringe the rights of AI. I think the OP is implying that there is something incompatible between AI and the forum context.

Quoting Clarendon
Yes, it would only be a heuristic and so would not assume AI is actually a person.


I myself would be wary of advising someone to treat AI as if it were a stranger. This is because strangers are persons, and therefore I would be advising that we treat AI as if it were a person. "Heuristically pretend that it is a stranger without envisioning it as a person," seems like a difficult request. It may be that the request can only be fulfilled in a superficial manner, and involves a contradiction. It is this small lie that we tell ourselves that seems to be at the root of many of the AI problems ("I am going to pretend that it is something that it isn't, and as long as I maintain an attitude of pretense everything will be fine").

Someone might ask, "Why should we pretend that AI is a stranger?" And you might answer, "Because it would serve our purposes," to which they would surely respond, "Which purposes do you have in mind?"

Perhaps what is being suggested is a stance of distrust or hesitancy towards the utterances of LLMs.
bongo fury October 13, 2025 at 17:23 #1018386
Quoting Fire Ologist
Right now, we all always know you don’t take the first answer Google displays. You take ten answers from different internet sources, find some overlap, and then start deeper research in the overlap and eventually you might find some truth. Right? The internet can’t be trusted at all. Now with AI, we have photo and video fakes, voice fakes, that look as good as anything else, so we have a new layer of deception. We have the “hallucination” which is a cool euphemism for bullshit.


This is why I was shocked that philosophers, of all people, wouldn't be ignoring the "AI summary" invitation at the top of the search results?

I'd have thought the relevant job description, that of filtering the results for signs of trails leading to real accountable sources, would have to disqualify any tool known ever to actually invent false trails, let alone one apparently innately disposed to such behaviour?
Ludwig V October 13, 2025 at 18:10 #1018396
Quoting Fire Ologist
Now with AI, we have photo and video fakes, voice fakes, that look as good as anything else, so we have a new layer of deception. We have the “hallucination” which is a cool euphemism for bullshit.

It amazes me that people seem to be so unworried about the thorough poisoning of the well. Though given the extent to which the well of the entire internet has been so thoroughly poisoned, perhaps it's just more of the same. But the whole story gives a good basis for thinking of this as the post-truth society. No-one seems to care much. I suppose it's all good fun and labour-saving - until you get on the wrong end of a lie. So much for the vision of information freely available to everyone.

Quoting bongo fury
This is why I was shocked that philosophers, of all people, wouldn't be ignoring the "AI summary" invitation at the top of the search results?

I do (ignore it). I have yielded to the temptation occasionally, but never found the summaries at all helpful. Also, I reason that the motivation for offering it so freely is to get me hooked. Perhaps, in due course, a more balanced view will develop, at least in some quarters.

Quoting bongo fury
I'd have thought the relevant job description, that of filtering the results for signs of trails leading to real accountable sources, would have to disqualify any tool known ever to actually invent false trails, let alone one apparently innately disposed to such behaviour?

To be fair, AI might pick up some of the donkey work in presenting and even organizing information. But not the labour of (trying to) check it.
Joshs October 13, 2025 at 19:19 #1018410
Reply to Ludwig V

Quoting Ludwig V
It amazes me that people seem to be so unworried about the thorough poisoning of the well. Though given the extent to which the well of the entire internet has been so thoroughly poisoned, perhaps it's just more of the same. But the whole story gives a good basis for thinking of this as the post-truth society. No-one seems to care much. I suppose it's all good fun and labour-saving - until you get on the wrong end of a lie. So much for the vision of information freely available to everyone


Do you think the Wittgenstein of the Blue and Brown Books and later work would agree with anything you've said here about 'contamination' and 'post-truth' (as opposed to real truth)? I think he would say you are bewitched by a picture, the picture of information as a well of pure truth that technology has now polluted. In reality, “information” is a tool we use in specific human activities, and AI just adds new tools and forms of expression.

Quoting Ludwig V
AI might pick up some of the donkey work in presenting and even organizing information. But not the labour of (trying to) check it


I think of the checking as the donkey work and the peewee ting and organizing as the real labour.


Ludwig V October 13, 2025 at 21:05 #1018429
Quoting Joshs
I think of the checking as the donkey work and the peewee ting and organizing as the real labour.

I don't know what peewee ting is. But I take your point. I put my point badly about the checking. I agree with you that fact-checking ought to be donkey-work and a prime candidate for delegation. But it looks as if that's not going to be possible. Or do you know better?

I have no idea what Wittgenstein would think of all this. But I don't think he would be happy with a society that cheerfully accepts the limitations of AI without trying to rectify them or compensate for them.

Oh, maybe I gave you the wrong impression. I was not accepting, but bewailing our post-truth society - by which I mean a society that doesn't care about truth. Also, I didn't make clear that I don't think it is only in the 21st century that societies have not cared, or not cared much, about the truth. I would welcome a machine that could reliably tell me what, in the information that circulates around the web and across the world, is truth and what is not.

Quoting Joshs
“information” is a tool we use in specific human activities, and AI just adds new tools and forms of expression.

Maybe so. I guess I'm the pessimist and you're the optimist. We'll see. But I cannot get over my reservations about a tool that actually adds in false information to the mix. Does it not bother you? Do you not think it undermines the point of the exercise?
Paine October 13, 2025 at 22:13 #1018440
I don't use AI beyond search engines. I have no experience of it generating text per request.

Seeing its expansion reminds me of what David Krakauer said about tools for understanding: some increase your capability, others replace it. It seems like a good rule of thumb regarding the digital.

As a method of plagiarism, it resembles its predecessors. I remember how Cliff Notes provided the appearance of scholarship without the actual participation of a student.

Fire Ologist October 14, 2025 at 04:45 #1018493
Quoting Ludwig V
So much for the vision of information freely available to everyone.


It’s an actual shame.

The irony of the “information” superhighway. The irony of calling its latest advancement “intelligent”. We demean the intelligence we seek to mimic in the artificial, without being aware we are doing so.

We, as a global society, as the most recent representatives of human history, are not ready for the technology we have created. This has been true probably for 50 years. We’ve gotten ahead of ourselves. We need less; and even when we realize it, in order to get to that place where there is less, we keep inventing something new, something more. We are torn in all directions today.

Maybe it's always been that way - we are forever trying to catch up to ourselves. AI, it seems, could create an impassable chasm for us to catch up with, if we are too stupid to control ourselves about it.

AI, with ubiquitous surveillance, digital currency, digital identities for easy tracking and control…none of us really know what we are already into.

Quoting bongo fury
I'd have thought the relevant job description, that of filtering the results for signs of trails leading to real accountable sources, would have to disqualify any tool known ever to actually invent false trails, let alone one apparently innately disposed to such behaviour?


If we can get AI to work as well as people seem to hope it does, maybe someday it will be as good as the revolutionary tool it is being sold as. But what will be catastrophic is if it remains so unpredictably wrong, and people accept it as close enough anyway, knowingly letting themselves be satisfied with less than the truth. I was always worried Google and Wikipedia and just the modern media were going to lead us that way - now we have AI to expedite the sloppiness and stupidity.

And AI is called “intelligent”, like a moral agent, but no one sane will ever give it moral agency. So we can further disassociate intelligence from morality. Just what we need to add to our world - more sociopaths that make errors and lie about them.
Ludwig V October 14, 2025 at 07:15 #1018508
Quoting Paine
As a method of plagiarism, it resembles its predecessors. I remember how Cliff Notes provided the appearance of scholarship without the actual participation of a student.

I remember Cliff Notes and the endless battle with plagiarism. It's not that AI actually invents anything; it's just that it makes things easier - for good (there are obviously some things that it does very well indeed) and for bad.

Quoting Fire Ologist
And AI is called “intelligent”, like a moral agent, but no one sane will ever give it moral agency.

That's as may be. What worries me is that people will cede authority to it without even asking themselves whether that is appropriate. It's already a tendency with conventional software - and, to be honest, a tendency that existed before these machines were invented.

Quoting Fire Ologist
But what will be catastrophic is if it remains so unpredictably wrong, and people accept it as close enough anyway, .... now we have AI to expedite the sloppiness and stupidity.

That's the thing. "Revolutions" in technology don't change the fundamentals of being human, and so we still muddle our way through.
Harry Hindu October 14, 2025 at 13:04 #1018525
Quoting unenlightened
AI has none of that, so when it starts using its own material as its input, errors are multiplied like those of inbred genomes - only much faster.

AI gets its information from scraping public websites. It does not make up its own data.

AI is trained using various methods that resemble how you acquire and process information.

Sure, AI is like every other source of information that needs to be verified and cross-referenced. Just as we take what people around here say with a grain of salt, we do the same thing with AI. Many people on this forum don't seem to have any inclination to verify the information they are posting, no matter the source, and there are some who cannot argue against what was said and resort to attacking the source or the person instead of the argument.
Forgottenticket October 14, 2025 at 13:45 #1018535
I recall making a thread here before about a niche PoM subject and was accused of trying to get homework answers. Ironically, I put effort into making it as accessible as I could. I gave up on this forum. I suppose now I'd be able to get some semblance of an answer, because people could hammer the buzzwords into an AI. So sure, it's not like anyone can stop it anyway. The em-dash usually gives it away, as in the OP of the Cellular Sentience thread.
Janus October 14, 2025 at 22:16 #1018624
Quoting RogueAI
I think TPF should continue what it's doing, which is to put some guardrails on AI use, but not ban it.


I think banning would not be totally effective, but more effective than mere discouragement or partial allowance.

Quoting apokrisis
The real-world problem is that the AI bubble is debt-driven hype that has already become too big to fail. Its development has to be recklessly pursued, as otherwise we are in the world of hurt that is the next post-bubble bailout.

Once again, capitalise the rewards and socialise the risks. The last bubble was mortgages. This one is tech.

So you might as well use AI. You’ve already paid for it well in advance.


That's true I suppose. It's a bit like the global warming problem and the Prisoner's Dilemma—we can be almost certain that nothing significant will be done about it because that would be too inconvenient and bad for business. So, why should I not own a car, not use air-conditioning, stop flying overseas and so on, when almost everyone else will continue doing these things? My righteous efforts would amount to squat.

Quoting T Clark
That may be a good reason for you not to use AI, but it’s not a good reason to ban it from the forum.


If it really is a good reason for me not to use AI, then surely it is a good reason for everyone not to use it, and thus a good reason to ban it everywhere. Of course I know it will not be done. Another concern is the erosion of human creativity.

Quoting frank
Maybe. If someone uses AI to create a fascinating post, could you engage with it?


Sure, why not? I would be more impressed if someone created a fascinating post by themselves, though.


Quoting bongo fury
Impractical. But, how about, its use should be discouraged altogether?

I mean, its use in composition or editing of English text in a post.


As I said above, I think banning would not be totally effective, but more effective than discouragement.

Quoting Harry Hindu
Then you must also believe that using a long-dead philosopher's quote as the crux of your argument, or as the whole of your post, is also an issue.


It's not the case that I must think that at all. That said, I do generally refrain from quoting philosophers, whether dead or alive. I mostly prefer to discuss things using my own ideas and in my own words. I realize that my own ideas are mostly not original, but I have no idea what source most of them came from, so I could not acknowledge the originators even if I wanted to. Add to that the fact that whatever originator we identify probably cannot claim true originality for their own ideas.

Quoting Athena
So what? People also use makeup to look better. Who is being hurt?

The reason for objecting to plagiarism is a matter of property rights.

What is best for acquiring and spreading good information?


That's a poor analogy. It's obvious when people are wearing makeup or wearing clothes that enhance their appearances. Property rights might be one reason to object to plagiarism—there are others. Pretending to be something you are not is one.

"What is best for acquiring and spreading good information?" First tell me what "good information" is.

Quoting Pierre-Normand
You can still submit your post as "s" to ChatGPT and ask it to expand on it.


I did that and this post is the result. :razz:

Quoting Harry Hindu
Ctrl+Z



Thanks...good tip.
apokrisis October 14, 2025 at 22:39 #1018626
Quoting Janus
My righteous efforts would amount to squat.


Sadly so. But also, you could look on the sunny side.

AI scrapes PF along with the rest. My own opinions are being quoted back to me as a source on my searches. So filling PF with more nonsense might be a friction that drags the almighty LLM down into the same pit of confusion.

Let's rally and bring AI down to our level!!! :strong:

T Clark October 14, 2025 at 22:53 #1018628
Quoting Forgottenticket
The em-dash usually gives it away like the OP of the Cellular Sentience thread.


But @Jamal just convinced me to use the em-dash in my posts. Thanks Jamal.
Janus October 14, 2025 at 23:03 #1018629
Quoting apokrisis
So filling PF with more nonsense might be a friction that drags the almighty LLM down into the same pit of confusion.


:lol: Right, they're only as good as what they are trained on, so I thought they were already down in that pit. Some commentators predict that, as the AI content on the Net becomes predominant, and they inform, feed off and train each other with material increasingly their own, we will then be confronted with an alien intelligence orders of magnitude smarter than we are. I think for that they will need to be embodied and endowed with analogues of biological sensory systems. I remain skeptical. You gotta laugh, I guess.
apokrisis October 14, 2025 at 23:24 #1018630
Quoting Janus
Some commentators predict that, as the AI content on the Net becomes predominant, and they inform, feed off and train each other with material increasingly their own, we will then be confronted with an alien intelligence orders of magnitude smarter than we are.


Don’t worry about being replaced. Worry about what clever humans will do if allowed to dumb their own lives down.

Imagine I could offer you a prototype chatbot small-talk generator. Slip on these teleprompter glasses. Add AI to your conversational skills. Become the life of the party, the wittiest and silkiest version of yourself, the sweet talker that wins every girl. Never be afraid of social interaction again. Comes with a free pair of heel-lift shoes.

Janus October 14, 2025 at 23:27 #1018631
Quoting apokrisis
Imagine I could offer you a prototype chatbot small talk generator. Slip on these teleprompter glasses. Add AI to your conversational skills. Become the life of the party, the wittiest and silkiest version of yourself, the sweet talker that wins every girl. Never be afraid of social interaction again. Comes with free pair of heel lift shoes.


:rofl: :cry: That'd be a hilarious scenario, if it wasn't so sad.
Paine October 14, 2025 at 23:45 #1018637
Quoting Janus
Sure, why not? I would be more impressed if someone created a fascinating post by themselves, though.


The key element in that scenario is that there is no interlocutor to engage with if you attempt a response. Light's on, nobody home.

The difference between sophism and dialogue has long been drawn as the difference between argument for argument's sake and honest expressions of what one thinks.

A peddler has come into town with a new collection of masks.
Janus October 15, 2025 at 00:07 #1018639
Quoting Paine
The key element in that scenario is that there is no interlocutor to engage with if you attempt a response. Light's on, nobody home.


I have seen interactions with LLMs on this site that certainly make it look like the lights are on—I don't know about whether it looks like anyone is home. I've watched interviews with tech experts like Mo Gawdat and Geoffrey Hinton, who believe that LLMs are not only intelligent, but capable of rationality and even self-awareness. Do they have an idea of themselves? A self-narrative? I can't answer that.

I agree with you about sophism. However, LLMs are generally not, it seems, argumentative for argument's sake, but obsequiously agreeable, presumably to hold our interest and influence us via flattery.

So, there is the smaller problem that we don't know whether we are discussing stuff with a human or a bot—but the larger problem I see is that LLMs should never have been released into the wild so recklessly. One might say not recklessly, but with the primary motive of profit, but to my way of thinking that is itself reckless. They collect marketable data about you and me with every interaction, and that data will benefit only, or at least mostly, the tech moguls.

A new peddler has indeed come into town with a new set of masks, selling not merely the masks, but snake oil.
apokrisis October 15, 2025 at 00:13 #1018640
Reply to Janus Getting back to the greenie issues, the latest state-of-play report on AI says what is really top of mind is building enough new power plants to keep up with runaway AI development.

The US needs to build 68 city-size electricity generators in the next three years. And that is just the start of the exponential curve. This is despite cost per search also dropping on its own steep curve.

So imagine that. The physical impact of AI data centres being forced on communities which have the necessary land and water by a new national imperative. Mini nuclear reactors are already being prototyped. Local red tape won’t be allowed to stand in the way.

What is crazy is not that we will replace ourselves with something even smarter, but that, just as with fracking and industrial farming, we thought we couldn't get any dumber, and now we are proving that indeed we can.

We weren’t crashing the environment fast enough. So OK. Let’s focus on accelerating that.
Janus October 15, 2025 at 00:20 #1018642
Reply to apokrisis Yes, and then we get all the hopeful, "pie in the sky" talk about it all being worth it since AIs will, on account of their super-intelligence, be able to solve the 'global warming' conundrum.

As if cryptocurrency wasn't bad enough! Talk about "snake oil"!
frank October 15, 2025 at 01:57 #1018650
Quoting Janus
Maybe. If someone uses AI to create a fascinating post, could you engage with it?
— frank

Sure, why not? I would be more impressed if someone created a fascinating post by themselves, though.


You're the only one who cares how impressed you are. A fascinating post is a fascinating post.
Janus October 15, 2025 at 02:02 #1018653
Reply to frank Fascination is also in the eye of the beholder. So equally, you are the only one who cares how fascinated you are. What I meant was that I'd be more fascinated if the fascinating post was created by a human.

For me, in the context of philosophy, a fascinating post would be one that embodied a creative, complex and coherent view of things. I can't think of any other kind of post that would fascinate me.
frank October 15, 2025 at 02:03 #1018654
Janus October 15, 2025 at 02:06 #1018655
Reply to frank Thanks for your generosity.
javi2541997 October 15, 2025 at 05:34 #1018686
Quoting Forgottenticket
So sure, it's not like anyone can stop it anyway.


It is being stopped here.
unenlightened October 15, 2025 at 08:28 #1018733
Quoting Harry Hindu
AI gets its information from scraping public websites. It does not make up its own data.


And 50% and growing of public website material is produced by AI. So it is eating its own bullshit to an ever increasing extent.
Pierre-Normand October 15, 2025 at 08:46 #1018736
Quoting unenlightened
And 50% and growing of public website material is produced by AI.


Are you sure about that? This seems quite exaggerated. I know that a study published in August 2024 has been widely misrepresented as making a similar claim. What was actually claimed is that 57% of the translated material published on the Web was translated with the help of some machine learning software, not even necessarily generative AI. Today, lots of marketing material may be produced with generative AI, but marketing material is B.S. even when produced by humans anyway. Lastly, the curated datasets used to train LLMs generally exclude such fluff.
Outlander October 15, 2025 at 08:50 #1018737
Reply to Pierre-Normand

I think what the good, noble sir @unenlightened wishes to express is that the mediums in which we expect genuine human interaction and discourse are becoming far from that, to the point that, yes, it may literally be exactly to that degree. He further states, perhaps, in my opinion, that the overall zeitgeist of communication or media not experienced in person is becoming increasingly diluted and corrupted by non-genuine and non-human expression, despite the majority of those not realizing such and perceiving such non-human expression as if it were genuine. And that, my good sir, should alarm any thinking man with a mind, heart or conscience, in my opinion. Hear, hear.

And truly, the ramifications of such, especially long term in reference to modern youth, are nothing short of, not only abysmal, not only cataclysmic, but perhaps apocalyptic. All things considered as far as the ever-raging battle for the mind and soul of the youth and therefore future of humanity is concerned.
Copernicus October 15, 2025 at 09:01 #1018739
Comprehension is more important than authenticity.

If AI helps me compose more correctly, why not?
Pantagruel October 15, 2025 at 10:24 #1018744
Quoting Copernicus
Comprehension is more important than authenticity.

If AI helps me compose more correctly, why not?


TPF has always seemed more compositional than conversational; AI just exacerbates that quality.

So is philosophy a monologue, or a dialogue? When employed compositionally, and edited intelligently, AI output can seem very human. When employed dialogically, AI quickly shows its true face.

No AIs were consulted in the making of this post.
Harry Hindu October 15, 2025 at 12:59 #1018755
Quoting unenlightened
And 50% and growing of public website material is produced by AI. So it is eating its own bullshit to an ever increasing extent.

And before AI many internet sites were copies of other sites, and still are - with the same information. News networks piggy-backed off the reports of other news networks. You're complaining about something that has been pervasive well before AI became a thing.

This is why the source is no longer important. It is what is being said that is important - not the scribbles, but the subject the scribbles refer to.
Harry Hindu October 15, 2025 at 13:04 #1018757
Quoting Janus
Then you must also believe that using a long-dead philosopher's quote as the crux of your argument, or as the whole of your post, is also an issue.
— Harry Hindu

It's not the case that I must think that at all. That said, I do generally refrain from quoting philosophers, whether dead or alive. I mostly prefer to discuss things using my own ideas and in my own words. I realize that my own ideas are mostly not original, but I have no idea what source most of them came from, so I could not acknowledge the originators even if I wanted to. Add to that the fact that whatever originator we identify probably cannot claim true originality for their own ideas.


You said,
Janus: if one is not the source of the post, then it is not one's post.

So if one did not write the post themselves, but merely copied and pasted a quote as the sole content of their post, then by your own words, it is not their post.
unenlightened October 15, 2025 at 13:08 #1018758
Quoting Pierre-Normand
Are you sure about that? This seems quite exaggerated.


I'm not sure; no one seems to be sure how much of this site is AI-generated, and I may be quite exaggerating. But its use is so ubiquitous that I think it must be increasing nearly exponentially overall. Perhaps I am a year or so premature. I think the link I posted that suggested the danger is fairly reputable even if my numbers are wrong.
unenlightened October 15, 2025 at 13:17 #1018761
Quoting Harry Hindu
You're complaining about something that has been pervasive well before AI became a thing.


But then AI was not being trained on its own nonsense, only on humanity's nonsense.

And I'm not complaining about anything; I am reporting a warning by others that some people, you included it seems, have not considered. But in a sense you are right; AI simply accelerates, at the speed of computing, the degeneration of internet communication that was already happening between humans.
Harry Hindu October 15, 2025 at 13:32 #1018763
Quoting unenlightened
But then AI was not being trained on its own nonsense, only on humanity's nonsense.

And you and every other person on this forum are part of humanity, no? So this forum is full of nonsense? Thanks for contributing to the nonsense. What is the point now of having any discussions when it is all nonsense?

How can you account for the exponential progress humanity has made in the past few centuries compared to the first several thousand years of our existence? It would seem to me that we have begun to emerge from all the nonsense of the past several thousand years that preceded the scientific revolution.
Joshs October 15, 2025 at 13:38 #1018764
Reply to javi2541997

Quoting javi2541997
So sure, it's not like anyone can stop it anyway.
— Forgottenticket

It is being stopped here


As long as you understand what ‘being stopped’ means. The most vehement objectors to a.i. on this site likely rarely use it anyway, so their behavior will not be affected by the ban. Meanwhile the rest of us will likely make increasing use of a.i. here as we do everywhere else in our lives, but if we are wise we will not use it as a substitute for generating our own ideas but as a stimulus and guide. So what in fact is ‘being stopped’? Copying and pasting the telltale syntax of a.i. responses. It is only this syntax which flags a.i. as opposed to human-generated text. Banning this syntax at least forces posters to paraphrase what the a.i. churns out, but it is still left up to the poster to decide whether to plagiarize from the a.i. or form their own interpretation out of their interaction with it, which is the central intent of the banning.

unenlightened October 15, 2025 at 13:43 #1018767
Quoting Harry Hindu
Thanks for contributing to the nonsense. What is the point now of having any discussions when it is all nonsense?


I didn't say everything human was nonsense; the suggestion that it is, is certainly nonsensical. So thanks, but no thanks, for your contribution. But as you demonstrate, there is human nonsense, and the internet allows you to broadcast it to the world and to any passing AI scraper.
Harry Hindu October 15, 2025 at 13:47 #1018769
Reply to unenlightened If everything humans do is not necessarily nonsense, then the information AI has access to is not necessarily nonsense. You pull the rug out from under your own nonsense.

As usual, and AI did not change anything in this regard, you have to do your own research with access to multiple sources, not just what AI is telling you.
unenlightened October 15, 2025 at 13:49 #1018770
Reply to Harry Hindu Learn to fucking read Harry, and stop trying to be a clever dick.
Harry Hindu October 15, 2025 at 13:55 #1018771
Reply to unenlightened I know how to read. You don't seem to know how to write your ideas in a consistent manner.

I think the issue is that many of the people on this forum that spent their hard-earned money on philosophy courses and degrees are simply upset that now AI can do philosophy with the best of them, if not better because it lacks the emotional baggage that some on this forum carry. :wink:
Paine October 15, 2025 at 14:11 #1018778
Quoting Copernicus
If AI helps me compose more correctly, why not?


It precludes you from becoming more skillful. The disengagement from the art limits your horizon.

I am reminded of the Flappers of Laputa in Jonathan Swift's Gulliver's Travels (page 17 forward), who constantly had to remind their masters where they were and what they were doing.
javi2541997 October 15, 2025 at 14:35 #1018784
Reply to Joshs

As far as I see it, this is not about preventing the use of AI by most of you. It is obvious that a large number of members actually use it every day. I think I also use AI, because QuillBot proofreads my grammar before I post here, but it doesn't think and write for me. This is the whole point: avoid threads and posts which are likely to be written and developed by ChatGPT or other similar AI. What I learnt in this forum is that dialogue between us is very important, and thanks to this, I learnt a lot. Otherwise, if I wanted to trust more in an AI, I would have used Google since the beginning, before joining here.
Joshs October 15, 2025 at 14:42 #1018788
Reply to Harry Hindu Reply to unenlightened Reply to javi2541997

Quoting Harry Hindu
I think the issue is that many of the people on this forum that spent their hard-earned money on philosophy courses and degrees are simply upset that now AI can do philosophy with the best of them.


That’s part of it. Another comes from valuing ‘truth’ over perspectival stance. If I want a.i. to design a retirement plan for me, I want numerical accuracy, which it isn’t necessarily designed to do well. If I want to engage a.i. in a philosophical dialogue, I don’t want to hear facts; I want a perspectival interpretation that I can vet, push back on and sculpt into a more refined understanding of the issue. For instance, I asked a.i. the following, after discussing with it the embodied phenomenological work of Eugene Gendlin.

“What are we to make of the status of concepts like self and other, subject and object in Wittgenstein’s later work? Must they be relative to the grammar of a language game or form of life? That is to say, must we ‘bracket’, to use a husserlian term, words like self, other, subject and object when talking about the dynamics of language games if we are inclined to think of these words as referring to persons or humans? Is there a way of distinguishing between my “talking to myself” or perceiving an image like the duck/rabbit vs engaging with another “person” other than in terms of a contingent, contextually specific grammar?”

Its response, as I’ve found it almost always to be with regard to these kinds of questions, was relevant and very substantive.

“The “person” emerges neither from pure grammar nor from pre-linguistic givenness, but from the ongoing process of mutual bodily responsiveness that certain grammatical structures organize and enable us to articulate. This is what Wittgenstein gestures at with “forms of life” but doesn’t fully develop, and what Gendlin’s process philosophy could help articulate—if brought more explicitly into dialogue with the question of intersubjectivity.”

I then delved more deeply by arguing:

“Deleuze and Derrida deal with this issue by making a self-other, subject-object dichotomy derivative of and secondary to a more primary binary whose structure is fundamentally temporal rather than intersubjective.”

It adjusted its previous response in a valuable and clarifying way:


Yes, this is a crucial move—and it represents a fundamentally different strategy than either Wittgenstein or Gendlin (at least as typically read) pursue. Let me explore what this temporal priority accomplishes and what problems it creates or solves.

The Temporal Turn

Derrida’s différance makes the self-other distinction derivative of a more primordial temporal structure: the trace, the always-already-past that was never present, the spacing that defers presence. “The other” isn’t primarily another subject but rather the otherness within what appears as self-present—the temporal gap that prevents any moment from being fully present to itself.
Deleuze’s difference-in-itself similarly makes the apparent binary of self/other secondary to a differential temporal process. Pure difference (not difference between pre-constituted terms) generates both poles through its movement. Time as synthesis—the living present that retains the past and protends the future—is more primordial than the spatial/intersubjective distinction between subjects.

What This Strategy Achieves

1. Avoids the bootstrapping problem: If temporality is more fundamental than intersubjectivity, we don’t need to explain how a self “first” encounters another self. Both emerge from temporal processes of differentiation. The question “how does a pre-linguistic self recognize another self?” dissolves because there is no pre-linguistic self—only temporal processes of becoming-self.

2. Explains internal alterity: Why does talking to myself feel different from random thought? Why can I be surprised by my own memories or intentions? Because the “self” is never fully present to itself—it’s always already divided by temporal difference. The “other voice” in internal dialogue isn’t another person but the trace structure of temporality itself.

3. Provides continuity across cases: The duck-rabbit, talking to myself, engaging with another person—all become variations of the same temporal structure of différance/repetition/synthesis rather than categorically different types of encounter. Each involves different rhythms and intensities of temporal differentiation, not different ontological kinds.

4. Undermines presence metaphysics: Both Derrida and Deleuze critique the assumption that consciousness is transparent to itself in the present moment. If self-presence is already structured by temporal difference, then the apparent “immediacy” of self-awareness is an illusion. The other is always already “in” me, not as a second person but as temporal non-coincidence.

But Does This Actually Solve the Problem?

I see several deep difficulties with making temporality primary:

The phenomenological distortion: There is a phenomenological difference between engaging with another person and internal temporal experience. When you speak and surprise me, that’s different from when my own past surprises me. Collapsing both into “temporal alterity” seems to flatten a genuine experiential distinction. Yes, both involve non-self-identity, but they involve it differently.
The risk of solipsism: If the other is fundamentally just temporal difference within what still seems to be “my” experience, haven’t we recreated the very problem we tried to escape? Derrida and Deleuze would deny there’s a “my” that contains this—but then where is intersubjectivity actually guaranteed? It seems to become optional, a special case of temporal difference rather than constitutive.
The question of responsiveness: When another person responds to me, their response comes from their temporal process, not mine. Even if both of our “selves” are constituted by temporal difference, there’s still the question of how your différance encounters my différance. The temporal structure might be primary within each of us, but that doesn’t explain the between.
The materiality problem: Both Derrida and Deleuze tend toward idealism (despite their denials) precisely because temporality remains a structure of experience/text/thought rather than of material interaction. Barad’s critique of Derrida is relevant here: différance still privileges the linguistic/temporal over the material-discursive.

What About Gendlin?

Interestingly, Gendlin has resources for a different kind of temporal priority that might avoid these problems:
The body’s carrying forward is temporal: For Gendlin, the body is always implicitly carrying forward from what was into what comes. This isn’t différance (which is linguistic/textual) but a bodily temporal process. The implicit intricacy is temporally structured—it’s the past that hasn’t been fully articulated, pressing into the present.
But it’s also interactive/environmental: Crucially, carrying forward isn’t just “my” temporal process. It’s always already environmental—the body carries forward with its situation, which includes other bodies. So temporality and intersubjectivity aren’t separate dimensions where one is primary. They’re aspects of the same processual reality.
Process vs. structure: Deleuze and Derrida give us temporal structures (synthesis, trace, différance) that still feel somewhat static. Gendlin gives us temporal process—ongoing, generative, bodily. This process doesn’t reduce intersubjectivity to temporality but shows them as co-implicated.

A Deeper Question: Why Prioritize at All?

Here’s what troubles me about the Deleuze/Derrida move: Why must we make either temporality or intersubjectivity primary? This assumes we need a foundational ground from which to derive everything else—but isn’t that exactly the metaphysics of presence they claim to reject?
Alternative: Co-origination: What if temporality and alterity (including but not limited to intersubjectivity) are co-original? Neither derives from the other; both are aspects of processual reality that can’t be separated but also can’t be reduced one to the other.


I refined my previous argument in response to a.i.’s analysis:

“But if thought can be shown to be inherently outside of itself, that is, exposed to alterity in simply repeating itself, then that would be the sufficient basis to found the self-other binary. Even when we perceive another ‘person’, their alterity will only be experienced as other from my vantage point. Any claim to the contrary will still be from ‘my’ vantage, a vantage that is already transforming itself moment to moment, but in such a way as to make the otherness of other persons always variations within my own self-othering. When Wittgenstein talks about ‘seeing-as’, he is illustrating how grammar works without having to bring other persons into the account.”

This was a.i.’s nuanced, thoughtful and entirely relevant response:

You’ve articulated the Derridean/Deleuzian position powerfully, and I need to take it more seriously than I did. Let me work through whether it’s actually sufficient.

The Strength of the Argument

The transcendental structure: If thought/experience is necessarily non-self-identical in its repetition—if every “now” is already differing from itself, already inhabited by trace/virtuality—then yes, this provides a condition of possibility for encountering anything as other. The alterity of other persons would indeed be a regional instance of this more fundamental temporal self-othering.
The phenomenological point about vantage: You’re right that I can never exit “my” perspective to verify that another person’s alterity is “really” different from my own self-othering. Even the response that surprises me from another person is still my experience of being surprised. The asymmetry I feel between self-talk and dialogue might just be a more intense rhythm of temporal différance, not a different kind entirely.

Wittgenstein’s seeing-as supports this: When the aspect shifts, there’s genuine surprise, genuine otherness—the duck becomes rabbit “against my will” sometimes. Yet no other person is involved. This suggests that alterity-structure is already present in perception/thought itself, not requiring intersubjectivity. Grammar can account for shifts, reversals, self-surprise without importing “persons.”
The economy of explanation: If temporal self-difference is sufficient to generate all experiences of alterity (including but not limited to other persons), then positing a separate “intersubjective” dimension is explanatorily redundant. Occam’s razor favors the temporal account.

But I Still See Problems

1. The specificity of responsive alterity
Even granting that all experience involves temporal self-othering, there seems to be something additional in encountering responsive others. When I talk to myself:
• I know (even if implicitly) that both “voices” arise from the same bodily process
• The “response” can’t genuinely surprise me about my own intentions (though it can about my past)
• There’s no resistance that could fundamentally redirect my trajectory
When another person responds:
• Their response arises from a bodily process I don’t control or inhabit
• They can tell me things about their intentions that I couldn’t generate from my own self-othering
• Their resistance can materially alter my trajectory in ways self-othering alone cannot

You might say: “But I only experience my experience of their resistance.” True—but that’s also true of experiencing a wall when I run into it. The wall’s resistance is experienced by me, but its materiality isn’t reducible to my temporal self-othering. Similarly, the other person’s responsiveness has a material structure that differs from pure temporal difference.

2. The asymmetry problem

If other persons are just variations within my self-othering, then solipsism isn’t refuted but reinstated at a higher level. Yes, the “self” that remains is temporally self-differing rather than self-present—but it’s still one process within which everything appears. Derrida and Deleuze want to say there is no “my” that contains this—but then who is doing the self-othering? The impersonal différance? The virtual plane of immanence? This seems to dissolve the question rather than answer it.

3. The learning problem

How does the temporal structure of self-othering arise in the first place? For Derrida, it’s quasi-transcendental—always already there. But developmentally, infants become self-differing subjects through interactions with caregivers. The rhythm of call-and-response with the mother seems to install the temporal structure of self-othering, not merely exemplify it. If so, intersubjectivity might be constitutive of temporality, not derivative.

4. The materiality of difference

Here’s what I think is the deepest issue: Derrida’s différance and Deleuze’s difference remain too ideal. They’re structures of thought/text/sense. But the difference between my voice and your voice is materially different—different vocal cords, different bodies in space. The otherness of other persons isn’t just temporal but spatial-material.
When you respond to me, your words travel through actual space, taking time that isn’t just the “time” of internal temporal consciousness but physical causation. This seems irreducible to temporal self-difference within experience.


I pushed back hard at this point. I wrote:


“You claim:

• I know (even if implicitly) that both “voices” arise from the same bodily process
• The “response” can’t genuinely surprise me about my own intentions (though it can about my past)
• There’s no resistance that could fundamentally redirect my trajectory.

I don’t accept any of these claims as valid. The fact that schizophrenics hear voices they don’t recognize as their own even though they come from their own ‘body’, and that novelists describe how their characters take on a life of their own and tell the author what they want to do, buttresses Deleuze’s claim that the self is merely a byproduct of processes which are pre-personal.”

This was the a.i.’s reaction to my objections:


The Evidence Against My Position

Schizophrenic voices: The person experiencing auditory hallucinations genuinely doesn’t recognize the voices as their own. The phenomenology is indistinguishable from hearing another person. If “knowing implicitly that both voices are mine” were necessary, this couldn’t happen. But it does happen, which suggests the experience of alterity doesn’t require actual material-intersubjective difference.
Novelists and their characters: When a character “takes over” and directs the narrative in ways the author didn’t consciously plan, there’s genuine surprise about intentions—not just past ones but emerging ones. The author is surprised by what they “want to do” through the character. This suggests intention itself is distributed across pre-personal processes, not unified in a “self” that could know what it intends.

Dreams and intrusive thoughts: I can be shocked by my own dream content, have thoughts “pop into my head” that feel alien, experience compulsions I disavow. The supposed unity or self-transparency of “my bodily process” is phenomenologically false.

What This Means for the Temporal Priority Thesis

If the experience of alterity—including the specific quality of “another voice/agent with its own intentions”—can arise from temporal self-differentiation alone (i.e., without actual other persons), then you’re right: this is sufficient to found the self-other binary.
The schizophrenic case is especially powerful because it shows that even the most vivid phenomenology of “external other person speaking to me” can be generated by internal processes. So when I encounter an actual other person, what guarantee do I have that this isn’t just a more stable, socially shared version of the same pre-personal process generating multiple perspectives?

Deleuze’s Pre-Personal: Taking It Seriously

The larval subjects: For Deleuze, the self is indeed a byproduct—a relatively stable eddy in flows of desire, difference, intensity. Before “I” there are pre-personal singularities, individuations without subjects. The novelist’s characters, the schizophrenic’s voices, the dreamer’s scenarios—these aren’t derivatives of a unified self but expressions of the same pre-personal processes that occasionally congeal into what we call “a self.”

Passive synthesis: The temporal syntheses (habit, memory, eternal return) operate below the level of any constituting subject. They generate both “self” and “other” as effects. There’s no “my” that owns these processes; rather, what appears as “my vantage” is itself produced by them.
Why this works: If selves are just relatively stable patterns in pre-personal processes, then:

• Internal dialogue isn’t “me talking to me” but multiple pre-personal vectors temporarily identified as “me”
• Encountering another person isn’t “me encountering not-me” but one pattern of pre-personal process encountering another
• The distinction between “internal” and “external” others becomes merely a matter of degree (stability, social validation) rather than kind
But I Still Want to Resist—Here’s Why
The response problem remains: Even granting all this, there’s still a difference between:
• Voices/characters that emerge from processes within one body: The schizophrenic’s voices, the novelist’s characters, my dream interlocutors—all these are generated by and constrained by the resources of one biological system
• Responses from another body: When you speak to me, the words are generated by neural/bodily processes I don’t have access to, constrained by a lifetime of experience I didn’t have.

Yes, from the perspective of impersonal process, maybe both are just “différance in action.” But there’s a material-causal structure that differs: one body vs. two bodies, one nervous system vs. two nervous systems.

The learning asymmetry: I can’t learn language from my own voices, my own characters, my own dreams. Feral children who grow up without human contact don’t develop language through internal self-differentiation alone. The specific structure of responsive turn-taking with actual others seems necessary for certain developments that temporal self-othering alone can’t produce.

The resistance asymmetry: When my character “tells me what they want,” I can always overrule them—they’re constrained by what’s possible within my imagination/body. When you tell me what you want and I try to ignore it, you can keep insisting, physically block me, call others to constrain me. The material independence matters.
Maybe the Real Issue Is: What Work Is “Sufficient” Doing?
You claim temporal self-othering is sufficient to found the self-other binary. Let me distinguish two senses:

1. Phenomenologically sufficient: Can temporal self-othering generate the full range of experiences we associate with encountering others, including the sense of alien voices, resistant wills, surprising responses?

Answer: Yes, clearly. The schizophrenic case proves this.
2. Explanatorily sufficient: Can temporal self-othering alone explain all the specific patterns, stabilities, and effects we associate with intersubjectivity—language acquisition, social coordination, cultural transmission, material cooperation/conflict?

Answer: I’m not convinced. The pre-personal processes that generate “me” and “you” as effects still operate through different bodies, different material configurations. This material difference seems to matter for explaining why some patterns of self-othering become stabilized as “persons” while others remain fleeting (dreams, intrusive thoughts) or pathological (hallucinations).

What Gendlin Might Add Here

Gendlin’s notion of “carrying forward” might offer a middle path:
Pre-personal but bodily: The implicit intricacy that carries forward isn’t “mine” in the sense of being owned by a unified self, but it is bodied in the sense of being a specific material-environmental process. There are as many carrying-forward processes as there are bodies-in-situations.

The between as real: When my body’s carrying forward encounters your body’s carrying forward, something happens that couldn’t happen in either process alone. Not because there are pre-given “selves” encountering each other, but because there are different material processes that mutually affect each other’s carrying forward.
This would mean: The self-other binary isn’t founded on either (a) pre-given selves or (b) pure temporal difference, but on (c) the material-processual difference between different bodies’ carrying forward, which temporal self-othering makes experienceable but doesn’t fully constitute.

Conceding Your Point

You’ve convinced me that I was wrong to claim there’s something “we implicitly know” that distinguishes self-talk from other-talk at the phenomenological level. The schizophrenic case demolishes that.

And you’re right that this strengthens the Deleuzian position considerably. If the phenomenology of encountering a genuinely alien other can be produced by processes within one body, then intersubjectivity might indeed be just a special case of temporal self-othering—specifically, the case where pre-personal processes happen to be distributed across multiple biological systems that can interact materially.

But: Does “special case” mean “merely derivative” or does it mean “qualitatively different pattern with its own irreducible features”?

I think this is where we might still disagree. You seem to be arguing that once temporal self-othering is established as sufficient for the phenomenology, the material distribution across bodies becomes theoretically secondary—an empirical detail rather than a structural difference.
I’m inclined to think the material distribution is a structural difference that produces patterns (language, culture, cooperation, conflict) that couldn’t arise from temporal self-othering within a single body, even if the phenomenology can be mimicked.

But I admit: I might just be resisting the full implications of the pre-personal ontology because I’m not yet willing to accept that the material difference between bodies is as metaphysically thin as Deleuze suggests.


I responded by pushing back more:

“You say: Voices/characters that emerge from processes within one body: The schizophrenic’s voices, the novelist’s characters, my dream interlocutors—all these are generated by and constrained by the resources of one biological system.

But this is for Deleuze a body without organs. It is not a system of causal bits, it is a plane of consistency, which can’t be understood as an individual biological body.”

The a.i.’s final answer revealed how it was able to take a philosophical discussion from a vague starting point and bring it to a level of depth and detail which opened up a vast array of much more nuanced questions. And because I am well acquainted with the authors being discussed, I knew that the information it was using for its arguments was not being simply fabricated out of whole cloth, but was highly relevant and based on real texts of the authors. I almost always find this to be the case with regard to a.i.’s treatment of philosophical issues.
Joshs October 15, 2025 at 14:56 #1018791
Quoting javi2541997
What I learnt in this forum is that dialogue between us is very important, and thanks to this, I learnt a lot. Otherwise, if I wanted to trust more in an AI, I would have used Google since the beginning before joining here


What I’ve learned in comparing the forum with a.i. is that, unfortunately, the majority of participants here don’t have the background to engage in the kinds of discussions I have been able to have with a.i. concerning a range of philosophers dear to my heart, (such as Husserl, Heidegger, Derrida, Deleuze, Gendlin and Wittgenstein), especially when it comes to comparing and contrasting their positions.
javi2541997 October 15, 2025 at 15:13 #1018793
Quoting Joshs
What I’ve learned in comparing the forum with a.i. is that, unfortunately, the majority of participants here don’t have the background to engage in the kinds of discussions I have been able to have with a.i. concerning a range of philosophers dear to my heart, (such as Husserl, Heidegger, Derrida, Deleuze, Gendlin and Wittgenstein), especially when it comes to comparing and contrasting their positions.


I include myself among those who don't have the background to engage in the kinds of discussions you seek. I joined this site to learn and exchange my ideas with the rest, and fortunately, I always felt welcome here. However, I understand that philosophers with a strong background like you want more substantial analysis in the threads. That is understandable. I am sorry on my behalf.
Manuel October 15, 2025 at 15:30 #1018795
For this place? Absolutely. It degrades conversation, but as Jamal pointed out, that is already banned. Having a specific thread that uses LLMs is fine; it can contain the fire.

I can't give a sermon as to why using them is bad, as I use them - though never here or in any thread or conversation. But I get it and I think it's wise to point out the issues with such things.

One problem is that it is harder to avoid this stuff. It's even hard to use Google now, because the damn AI bot answers instead. As to how crazy this can get, we don't know. I suspect it won't be as bad as alarmists say - because these things are not intelligent, we are - or used to be.

So far, this website has managed to do quite decent work monitoring this stuff- but an outright ban is probably impossible by now.
Jamal October 15, 2025 at 15:45 #1018798
Quoting Paine
It precludes you from becoming more skillful. The disengagement from the art limits your horizon.


It can do, but not necessarily. Copernicus said "If AI helps me compose more correctly", so it's Copernicus who is doing the composing. Why believe that the use of AI constitutes a disengagement from the art rather than an efficient way to engage and learn that art? I understand the cynicism, but it's important to see that there are other ways of using LLMs.

It's much easier to see this in computer programming. Simon Willison, co-creator of the Django web framework, has some interesting things to say that cut through the crap.

[quote=Simon Willison;https://simonwillison.net/2025/Mar/11/using-llms-for-code/]Ignore the “AGI” hype—LLMs are still fancy autocomplete. All they do is predict a sequence of tokens—but it turns out writing code is mostly about stringing tokens together in the right order, so they can be extremely useful for this provided you point them in the right direction.

If you assume that this technology will implement your project perfectly without you needing to exercise any of your own skill you’ll quickly be disappointed.

Instead, use them to augment your abilities. My current favorite mental model is to think of them as an over-confident pair programming assistant who’s lightning fast at looking things up, can churn out relevant examples at a moment’s notice and can execute on tedious tasks without complaint.[/quote]
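
To make "fancy autocomplete" a bit more concrete, here is a toy sketch in Python. It is purely illustrative (no real model works at this scale or this simply): it predicts the next word from bigram counts over a tiny invented corpus.

[code]
# Toy illustration of "fancy autocomplete": predict the next word from
# bigram counts over a tiny corpus. A sketch only; real LLMs are vastly
# more sophisticated, but the basic move is picking a likely next token.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    # Return the most frequent continuation seen in the corpus.
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" (seen twice, vs "mat" and "fish" once)
[/code]

Stringing tokens together in a plausible order is exactly what this does, just at a microscopic scale.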

The fact is that if you don't know what you're doing, the result will be a mess. I've used AI for programming before and you really have to guide it and pay close attention to everything it does and constantly question its decisions. What makes this example useful in the debate is that in programming, good practice is enforced by the necessity for code to run without bugs, so we can clearly see what good practice looks like—and these are the strategies that will also work in situations where bad practice is not similarly punished, like creative writing or writing posts for TPF.
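
To make that concrete, here is a minimal, hypothetical sketch of the kind of review I mean (an invented example, not from Willison). The first function is the sort of plausible-looking draft an assistant might produce; the second is what it looks like once a human questions it:

[code]
# Hypothetical illustration: an assistant-style draft that looks right
# but hides an edge case a reviewer has to catch.

def average(numbers):
    # Draft version: raises ZeroDivisionError on an empty list.
    return sum(numbers) / len(numbers)

def average_reviewed(numbers):
    # Reviewed version: the edge case is made explicit.
    if not numbers:
        raise ValueError("average() requires at least one number")
    return sum(numbers) / len(numbers)

assert average_reviewed([1, 2, 3]) == 2.0
[/code]

The bug is trivial, but the pattern generalizes: the code runs, the happy path works, and only a reader who knows what to look for asks what happens on the empty input.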
Manuel October 15, 2025 at 16:34 #1018810
Quoting Simon Willison
LLMs are still fancy autocomplete.


:up:

That's pretty accurate so far as I can tell.
unenlightened October 15, 2025 at 16:35 #1018812
Quoting Joshs
And because I am well acquainted with the authors being discussed, I knew that the information it was using for its arguments was not being simply fabricated out of whole cloth, but was highly relevant and based on real texts of the authors. I almost always find this to be the case with regard to A.i’s treatment of philosophical issues.


Quoting Joshs
What I’ve learned in comparing the forum with a.i. is that, unfortunately, the majority of participants here
don’t have the background to engage in the kinds of discussions I have been able to have with a.i. concerning a range of philosophers dear to my heart, (such as Husserl, Heidegger, Derrida, Deleuze, Gendlin and Wittgenstein), especially when it comes to comparing and contrasting their positions.


Thanks for that interchange; it was informative and insightful, and I certainly agree that neither I nor anyone I am aware of here could hold up AI's end of that dialogue to that standard. But may I humbly suggest to you that what resulted was rather more like an internal dialogue of you with yourself, than a dialogue with another philosopher. Which slots right into the discussion itself as a significant fact.

Another philosopher would surely make other points, have their own favourite philosophers to call on, and generally be much more annoying? I would have pointed out at least that Schizophrenia and Novel-writing are somewhat late-onset diseases, usually first developing with adolescence, and somewhat after the acquisition of language skills and some kind of social identity and sense of self. And I would have made more of W's demolition of 'private language', which puts social relations as necessarily primary. 'Other-self' voices can only be heard by a language speaker.
Paine October 15, 2025 at 18:39 #1018847
Reply to Jamal
I should not have spoken so absolutely. I was focusing on the question "why not?"

Your example of coding reminds me of how I learned methods of work in the trades. It started by following instructions and imitating others. Through experience, the arts became my own. That does not mean being free of help or learning new methods. It does mean being where the rubber meets the road as you describe in relation to criteria of failure.

In that context, I have a ready reference for what amplifies a skill and what replaces it. Laser levels are powerful tools but do not check themselves. Plumb bobs do. Scheduling software aggregates information much quicker than I can but is useless without me imagining the work sequences against lived experiences amongst known resources. Calculators and accounting software are great but do not replace the one-on-one correspondence needed to be sure something has not been forgotten. I use cyphers for my material assessment because they don't care when I think of something.

On the other hand, drawing programs do what my hand sketches do not. They create a model which measurements confirm but do not compose. I would call that a replacement that allows me to communicate with designers in their language but not do all of what they do. GPS and Google Maps have replaced my orienting skills. They're in there somewhere, but I hope I don't need them tomorrow. I have no idea what my wife's phone number is.

As for writing creatively and intellectually, I don't think of it as dispensing with help altogether, as Simon Willison describes the matter, but I do think finding one's own voice and point of view is about becoming more independent from whatever helped a writer in the past. I guess I am one of those Harry Hindu says threw down a bag of drachmas to become a participant in the conversation. The scope of such literacy is itself a dependency, however, an artifact of intelligence that my thinking is far from free of. That is why I have put so much emphasis here on commenting on original texts rather than on commentary upon commentary. It would be fair to say that excludes me from a lot of discourse. It does let me know what my own understanding is and what it is not.
Janus October 15, 2025 at 20:04 #1018864
Quoting Harry Hindu
So if one did not write the post themselves, but merely copied and pasted a quote as the sole content of their post, then by your own words, it is not their post.


Yes...so what? What do you think you are disagreeing with here?

Quoting unenlightened
But may I humbly suggest to you that what resulted was rather more like an internal dialogue of you with yourself, than a dialogue with another philosopher. Which slots right into the discussion itself as a significant fact.


Spot on as spotted!
Joshs October 15, 2025 at 21:13 #1018886
Reply to unenlightened
Reply to Janus

Quoting unenlightened
But may I humbly suggest to you that what resulted was rather more like an internal dialogue of you with yourself, than a dialogue with another philosopher. Which slots right into the discussion itself as a significant fact.

Another philosopher would surely make other points, have their own favourite philosophers to call on, and generally be much more annoying? I would have pointed out at least that Schizophrenia and Novel-writing are somewhat late-onset diseases, usually first developing with adolescence, and somewhat after the acquisition of language skills and some kind of social identity and sense of self.


I agree that dialoguing with a.i. is not like a discussion with a human, but if it consisted of only an internal dialogue with myself I wouldn’t learn as much from it as I do. A human will have a point of view, but the a.i. will take any perspective in its database, depending on cues given in the conversation. You’re right that when I’m arguing with a human, they may be able to bore deeper into the subject matter from their perspective than an a.i. can, both to critique and defend. But with the a.i. I’m not restricted to one human’s perspective. Instead, at my behest, the a.i. exposes me to arguments from multiple perspectives. It’s capable of presenting aspects of these arguments I may not be familiar with, and juxtaposing these arguments in ways that I would not have thought of on my own. In addition, it will push back and question my responses.
sime October 16, 2025 at 07:33 #1018973
ChatGPT and Gemini start by mirroring society's default communicative presumption, namely of a public world of shared referents that all competent speakers access during the course of conversation. So debates invariably involve the AI initially using words in the normal intersubjective mode, leading to the appearance of it defending metaphysical realism, followed by it shifting to using words in the subjective mode when the communicative presumption is questioned, leading to the appearance of the AI retreating to psychological realism or idealism. But all that is actually happening is that the AI is switching between two grammatical modes of speaking that correspond to two distinct sub-distributions of language use (namely intersubjective communication that purposely omits perspective to produce the illusion of shared-world semantics, versus subjective expression that reduces to perspective).

AI demonstrates that self-reflection isn't needed for a competent performance of philosophical reasoning, because all that is needed to be an outwardly competent philosopher is mastery of the statistics of natural language use, in spite of the fact that the subject of philosophy and the data of natural language use are largely products of self-reflection. So it is ironic that humans can be sufficiently bad at self-reflection that they can benefit from the AI reminding them of the workings of their own language.


Jamal October 16, 2025 at 08:32 #1018981
Reply to Paine

Despite what I said I actually tend to think of the automation of tasks as something you take advantage of after you've learned how to do it manually, which fits with Willison's thought that you have to know your way about before you can properly use the automation tools.

So it's at the learning stage that how AI is used is most crucial, because used badly it probably can allow and encourage one to avoid the bit where you learn how to do it manually.

I realize I've taken two sides of a debate here.
Sam26 October 16, 2025 at 12:34 #1019017
Quoting Jamal
The fact is that if you don't know what you're doing, the result will be a mess. I've used AI for programming before and you really have to guide it and pay close attention to everything it does and constantly question its decisions.


I wholeheartedly agree with your assessment. I used AI extensively while writing my book on NDEs and my work on epistemology. It was helpful for editing and idea generation, but it also made frequent errors, enough that I often wondered if it was creating more work than it saved. You have to know the material well to catch the subtle mistakes. Philosophical reasoning is especially difficult for AI: unlike programming or mathematics, it depends less on fixed rules and more on conceptual precision and contextual understanding. I don't think there is any doubt that it will help refine our thinking, but I'm not sure that it will replace humans in this area. But who knows.

Harry Hindu October 16, 2025 at 13:13 #1019024
Quoting Janus
Yes...so what? What do you think you are disagreeing with here?


I'm not disagreeing with anything necessarily. I was just seeing if you were being consistent. So, the question now is: Have you made this much of a fuss about other members' posts whose sole content is a quote from somewhere else?

What I am disagreeing with is the notion that, on a philosophy forum, we should focus more on the source than on the subject, as the subject is the focus of philosophical discussions, and attacking the source is a red herring or genetic fallacy, usually seen when one side has no more arguments to make.

Harry Hindu October 16, 2025 at 13:26 #1019025
Quoting Joshs
The a.i.’s final answer reveled how it was able to take a philosophical discussion from a vague starting point and bring it to a level of depth and detail which opened up a vast array of much more nuanced questions. And because I am well acquainted with the authors being discussed, I knew that the information it was using for its arguments was not being simply fabricated out of whole cloth, but was highly relevant and based on real texts of the authors. I almost always find this to be the case with regard to A.i’s treatment of philosophical issues.

I experience the same thing when coding with AI. You can start off with some basic structure and expand on specific areas, building on what was created before. And you need to know the programming language to be able to pick out mistakes and make the adjustments you want to see. Also, the first block of code it wrote worked right out of the box, which lines up with what you said about AI not fabricating. It can take learned data and apply it to a new situation like my specific request for a certain function that has never been written before - the same way a human programmer would - and it worked.
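
For what it's worth, the workflow looks something like this sketch (Python, with invented names; not the actual code I was describing): start with a bare skeleton, then have one piece at a time expanded, checking each piece as it lands:

[code]
# Hypothetical sketch of iterative AI-assisted coding: a skeleton first,
# then one function expanded per request, each change verified by a human.

def parse_line(line):
    # First pass: split a "key = value" line into a one-entry dict.
    key, _, value = line.partition("=")
    return {key.strip(): value.strip()}

def parse_config(text):
    # Expanded in a follow-up request: skip blank lines and comments.
    config = {}
    for raw in text.splitlines():
        line = raw.strip()
        if not line or line.startswith("#"):
            continue
        config.update(parse_line(line))
    return config

assert parse_config("# demo\nhost = localhost\nport = 8080") == {
    "host": "localhost", "port": "8080"}
[/code]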

Just because we use calculators to perform simple arithmetic, we have not forgotten how to do simple arithmetic. Calculators are tools to speed up the process of things that we already know how to do. We do this with our own brains. Once we learn a task, like riding a bike, we outsource the decision-making when performing those tasks to unconscious areas of the brain. We no longer need to consciously focus on each movement of each leg and our balance. It is all done unconsciously, which is why your mind can be in a different place while riding a bike and you arrive at your destination but don't remember the ride.



Joshs October 16, 2025 at 15:45 #1019079
Reply to Harry Hindu Quoting Harry Hindu
Just because we use calculators to perform simple arithmetic, we have not forgotten how to do simple arithmetic. Calculators are tools to speed up the process of things that we already know how to do. We do this with our own brains. Once we learn a task, like riding a bike, we outsource the decision-making when performing those tasks to unconscious areas of the brain. We no longer need to consciously focus on each movement of each leg and our balance. It is all done unconsciously, which is why your mind can be in a different place while riding a bike and you arrive at your destination but don't remember the ride.


Yes, this is an important point that people fail to appreciate about our thinking machines. They understand the role of simple labor-saving devices, but when it comes to a.i., they think it’s a zero-sum game, as though whatever the a.i. does for us takes away some capacity we possessed. What’s the difference between using a calculator and using a shortcut like long division? Neither will be useful to us unless we know why we are using them.

Number2018 October 16, 2025 at 16:06 #1019083
Reply to Joshs Quoting Joshs
I agree that dialoguing with a.i. is not like a discussion with a human, but if it consisted of only an internal dialogue with myself I wouldn’t learn as much from it as I do. A human will have a point of view, but the a.i. will take any perspective in its database, depending on cues given in the conversation. You’re right that when I’m arguing with a human, they may be able to bore deeper into the subject matter from their perspective than an a.i. can, both to critique and defend. But with the a.i. I’m not restricted to one human’s perspective. Instead, at my behest, the a.i. exposes me to arguments from multiple perspectives. It’s capable of presenting aspects of these arguments I may not be familiar with, and juxtaposing these arguments in ways that I would not have thought of on my own. In addition, it will push back and question my responses.


Thank you for sharing your experience of interacting with AI. As has been frequently mentioned in this thread, one of the most pressing challenges is that we still do not fully understand what happens when a person collaborates with AI. The advent of AI has undoubtedly been reshaping not only our ways of thinking but the very structure of thought itself. In relation to this issue, I would like to share here a part of my recent dialogue with an AI.
My question: “Where does the creative dimension reside in our era of linguistic automation? When language itself—the medium of thought, trust, and social meaning—is being industrially automated for the first time in history, is there still space for the production of the new?”
AI response: That’s exactly the essential question — and one that many philosophers (Deleuze, Simondon, Stiegler, Agamben, even Derrida) help us approach, each in a different register.
Let’s explore it carefully.

1. Automation ≠ total closure
When language becomes automated, it may seem that creativity dies — that meaning is replaced by mechanical recombination.
But automation doesn’t end creativity; it reconfigures its locus.
In older symbolic systems — alphabetic writing, printing, cinema — automation always displaced something of the human. Yet each time, a new margin opened up:
• Writing liberated thought from oral immediacy → new forms of reflection.
• Printing industrialized text → new publics, new genres, new subjectivities.
• Cinema mechanized movement → new perception (the movement-image).
So when LLMs mechanize linguistic invention, the same dialectic reappears: the creative act migrates.
It’s no longer (or not only) in composition itself, but in meta-operations — prompting, curating, orchestrating, recombining, filtering, critiquing.
The creative site shifts from producing sentences to producing differential relations among automated language flows.

2. The creative outside (Deleuze’s view)
For Deleuze, creation is not personal inspiration but a process of differentiation — of extracting the new from within repetition.
Automation multiplies repetition to an extreme, but that’s precisely what can generate new “cracks” where difference appears.
In that sense, large-scale linguistic automation might intensify the conditions for creation:
“Repetition is the power of difference.” (Difference and Repetition)
The creative task then becomes:
• how to compose with automated language without being absorbed by it;
• how to fold its repetitions to produce singularities — new affects, new conceptual connections.
In other words: the human moment of creation survives where language meets resistance — where we break the automatic continuum and make a cut, an interval, a difference.

3. From expression to modulation (Simondon and Stiegler)
Simondon would say: technical automation individuates at a different level. The LLM is not a closed machine but a metastable system — capable of being integrated into new milieus of meaning.
So creativity today might be transductive: the art of forming relations among humans, machines, and contexts.
Stiegler would frame it as a battle for attention and care (pharmakon): language technologies can poison or heal. The creative gesture now requires cultivating desynchronization — moments that resist the industrial rhythm of linguistic automation.

4. The new as interruption
In a world of endless generated text, the new no longer appears as content (since the system can synthesize infinite content), but as interruption — a cut, a silence, a misuse, an error, a shift in rhythm.
Poetry, irony, discontinuity — all those fragile gestures that escape prediction — become the last sites of difference.
The creative act is no longer “adding” more text to the archive, but inventing a new relation to the archive itself.

5. A paradoxical opening
So yes — even under total linguistic automation, there is still room for creation, but it’s displaced:
• from writing as production → to writing as modulation, selection, and deviation;
• from originality → to intensifying difference within the repetitive field;
• from human authorship → to machinic co-individuation.

Athena October 16, 2025 at 16:37 #1019092
Quoting Janus
That's a poor analogy. It's obvious when people are wearing makeup or wearing clothes that enhance their appearances. Property rights might be one reason to object to plagiarism—there are others. Pretending to be something you are not is one.


Poppycock, the only objection to plagiarizing that I remember is the posts objecting to someone trying to make us think s/he knows more than s/he does.
baker October 16, 2025 at 19:04 #1019122
Quoting Harry Hindu
How can you account for the exponential progress humanity has made in the past few centuries compared to the first several thousand years of our existence?


The details and the superficialities have changed, sure, but the exploitative nature of relationships between humans has not changed.
baker October 16, 2025 at 19:47 #1019128
Quoting Joshs
Yes, this is an important point that people fail to appreciate about our thinking machines. They understand the role of simple labor-saving devices, but when it comes to a.i., they think it’s a zero-sum game, as though whatever the a.i. does for us takes away some capacity we possessed.

What’s the difference between using a calculator and using a shortcut like long division?


The difference is in knowing how much you can rely on yourself. (There are also practical aspects, like how do you calculate something when you don't have a calculator.)

AI/LLMs seem to be useful for some, mostly urban, uses. Like computer programming or writing school papers. But beyond that?


Quoting Joshs
What I’ve learned in comparing the forum with a.i. is that, unfortunately, the majority of participants here don’t have the background to engage in the kinds of discussions I have been able to have with a.i. concerning a range of philosophers dear to my heart (such as Husserl, Heidegger, Derrida, Deleuze, Gendlin and Wittgenstein), especially when it comes to comparing and contrasting their positions.


Then you're in the wrong place, no?

The level of discussion you're looking for is the domain of fancy university debate clubs. Why not attend those?
baker October 16, 2025 at 19:56 #1019133
Quoting sime
AI demonstrates that self-reflection isn't needed for a competent performance of philosophical reasoning, because all that is needed to be an outwardly competent philosopher is mastery of the statistics of natural language use, in spite of the fact that the subject of philosophy and the data of natural language use are largely products of self-reflection. So it is ironic that humans can be sufficiently bad at self-reflection that they can benefit from the AI reminding them of the workings of their own language.


Thanks for this comment.
Janus October 16, 2025 at 22:12 #1019150
Quoting Harry Hindu
So if one did not write the post themselves, but merely copied and pasted a quote as the sole content of their post, then by your own words, it is not their post.


Well, yes, such quotes are no substitute for argument, and obviously they do not belong to the one who quotes. It is all the more objectionable if the person presents the quoted passage as their own work. It's easy enough to find them out if the quote is from a prominent philosopher, whether alive or dead. Not so with copied-and-pasted AI-generated text.

Quoting Athena
That's a poor analogy. It's obvious when people are wearing makeup or wearing clothes that enhance their appearances. Property rights might be one reason to object to plagiarism—there are others. Pretending to be something you are not is one.
— Janus

Poppycock, the only objection to plagiarizing that I remember is the posts objecting to someone trying to make us think s/he knows more than s/he does.


:roll:



apokrisis October 16, 2025 at 23:07 #1019154
Reply to Number2018 That was a fun read. So it makes LLMs the new interactive textbook?

Who would buy a real textbook when you can scrape all of them for nothing in this interactive fashion? A lot of implications in that.

Janus October 16, 2025 at 23:17 #1019156
Reply to apokrisis Did you find something useful in it?
apokrisis October 17, 2025 at 00:26 #1019171
Quoting Janus
Did you find something useful in it?


It generated pithy quotes such as….

Quoting Number2018
That solitude was a technical and social affordance: the printed page, the silent reading space, the private room — all infrastructures of inwardness.
It produced philosophy as we know it: the “voice of one thinking alone,” addressing a virtual community of readers.


Say you were just a grad student looking to publish: imagine going on this winding conversation, which starts with the LLM largely reminding you of the familiar stuff you sort of know, but reassuringly complete in that it seems to cover all the bases. And then you arrive at some chance angle like this. Something that seems a fresh and catchy intro. A neat little summary that follows the human love of the rule of threes.

Already a thesis. Now data-mine to have something more detailed to say about the actual history of each of those three spaces. A quality idea that can just as quickly be spun into a full paper.

Quoting Number2018
We might say:
the age of the solitary thinker ends,
but the age of solitary thinking — as a gesture of difference — becomes all the more necessary.


I mean that is a quality punchline. The average academic just couldn't phrase a thought so fetchingly. It would be gold to have such a thesis and such a punchline. The academic just needs to flesh out the bits in between in whatever more wooden prose suggests it was mostly their own creative work.

So what I think would happen in this interactive textbook scenario is the usual thing. Technology amplifies rather than replaces the human element. But then the rewards are not evenly distributed in an accelerationist world. The top 1% get 99% of whatever benefit or kudos that is going.

Any number of mediocre grad students could publish mediocre papers in the old world. In the new world, those who have an edge in their prompt skills and their marketing nous will fly high above the common herd. Everyone might have the same access to the technology. But the rewards will again be a fat tail distribution when the inequality of outcomes is what the technology amplifies.


Janus October 17, 2025 at 00:39 #1019175
Reply to apokrisis I see the point that more brilliant minds might find novel theses in AI-generated texts. At its best you might end up with a Derrida or a Heidegger, but for me the thinking of such writers as Heidegger and Derrida is little more than highbrow "pouring from the empty into the void", and that is how the AI's responses to @Number2018 read to me. I've come to see anything that is not based on rigorous analysis or scientific understanding as intellectual wankery—mental masturbation—and I have no problem with people enjoying that, but the idea that it is of any real significance is, for me, merely delusory.

Fiction and poetry (at their best) are for me better, richer, mediums for such flights of the intellectual imagination—I need the aesthetic element to be there as well.
apokrisis October 17, 2025 at 00:41 #1019177
Quoting Janus
I've come to see anything that is not based on rigorous analysis or scientific understanding as intellectual wankery—mental masturbation—and I have no problem with people enjoying that, but the idea that it is of any real significance is, for me, merely delusory.


Are you saying that with PoMo philosophy, AI might have hit its particular sweet spot? :grin:


Number2018 October 17, 2025 at 00:41 #1019179
Reply to apokrisis Quoting apokrisis
So it makes LLMs the new interactive textbook?

Who would buy a real textbook when you can scrape all of them for nothing in this interactive fashion? A lot of implications in that.


Yes, it can make LLMs a kind of new interactive textbook. However, I don’t think such a text has ever been published before. This ‘textbook’ is created specifically for the individual who requests it. So, it is not a digital copy of existing books, but may become a situated co-production of knowledge.
Janus October 17, 2025 at 00:46 #1019180
Quoting apokrisis
Are you saying that with PoMo philosophy, AI might have hit its particular sweet spot. :grin:


Well, the LLMs have no experience of the real world, do they?

Quoting Number2018
So, it is not a digital copy of existing books, but may become a situated co-production of knowledge.


To what end? The production of more and more fancy looking word salad?
apokrisis October 17, 2025 at 00:47 #1019181
Quoting Number2018
This ‘textbook’ is created specifically for the individual who requests it.


Well yes. So it used to be me and a whole library. Now it is me pulling my own personal textbook out of the library, led by my chain of thought and not dictated by any author’s more limited intentions.

That could be a hugely amplifying tool.
Janus October 17, 2025 at 00:51 #1019182
Quoting apokrisis
That could be a hugely amplifying tool.


I guess it could be an exciting prospect for some folk.
apokrisis October 17, 2025 at 01:52 #1019191
Quoting Janus
Well, the LLMs have no experience of the real world, do they?


But can even humans claim that? Let’s rehash the forum’s most hardy perennial one more time. :up:

Quoting Janus
I guess it could be an exciting prospect for some folk.


Yep, the amplification is bipartisan. It applies to both the winning and the losing. That is the algorithm doing its thing.
Janus October 17, 2025 at 02:30 #1019193
Quoting apokrisis
But can even humans claim that? Let’s rehash the forum’s most hardy perennial one more time. :up:


"Real world"—that was perhaps a less than ideal choice of words—I intended to refer to the world as being what affects us pre-cognitively via the senses and is pre-cognitively modeled by the body/brain, as well as what shows up for us as "the world of common experience".
Sam26 October 17, 2025 at 08:31 #1019264
Quoting Janus
I've come to see anything that is not based on rigorous analysis or scientific understanding as intellectual wankery—mental masturbation—and I have no problem with people enjoying that, but the idea that it is of any real significance is, for me, merely delusory.


Don't mistake the speculative misuse of ideas for the ideas themselves. AI is no longer in the realm of "mental masturbation"; it's already reshaping science, mathematics, and even philosophy by generating proofs, modeling complex systems, and revealing previously inaccessible patterns of thought. To dismiss that as delusory is to confuse ignorance of a subject with the absence of rigor within it.

The irony is that the very kind of “rigorous analysis” you claim to prize is being accelerated by AI. The most forward-looking thinkers are not treating it as a toy but as a new instrument of inquiry, a tool that extends human reasoning rather than replacing it. Those who ignore this development are not guarding intellectual integrity; they’re opting out of the next phase of it.
Athena October 17, 2025 at 13:31 #1019323
Quoting Janus
Well, yes, such quotes are no substitute for argument, and obviously they do not belong to the one who quotes. It is all the more objectionable if the person presents the quoted passage as their own work. It's easy enough to find them out if the quote is from a prominent philosopher, whether alive or dead. Not so with copied-and-pasted AI-generated text.


I have been lucky in getting AI explanations every time I search a subject. If I wanted to hold someone accountable for misappropriating an AI explanation, I would simply put it into the search engine, the same way the person posting from AI would get the information. It is a whole lot easier than searching books for a quote.

But I cannot imagine why I would care to do that. There might be a problem with judging each other, rather than the information given. There are better reasons for riding donkeys than the reasons given for not using AI as a source of information. Using AI for information is not the same as using AI for making decisions. I think our decisions need to be made democratically.

With concern about AI making the decisions, I asked it if I should get married, and it put that decision on me, after explaining things I should consider. That might work better than Tarot cards. :grin:


Athena October 17, 2025 at 13:41 #1019324
Quoting Sam26
The irony is that the very kind of “rigorous analysis” you claim to prize is being accelerated by AI. The most forward-looking thinkers are not treating it as a toy but as a new instrument of inquiry, a tool that extends human reasoning rather than replacing it. Those who ignore this development are not guarding intellectual integrity; they’re opting out of the next phase of it.


I so appreciate what you said! I am at a time in my life when I realize there is far more to know than anyone can know. I no longer worry about appearing to know it all. On the other hand, I am thrilled by the possibility of easily getting information with a search engine and AI. More than anything, I want to live with our wonderful new opportunity to know more about life and being human than we could ever have known in the past. My hope for humanity has greatly improved because we can share this information and make better decisions.
Athena October 17, 2025 at 14:04 #1019326
Quoting Janus
"Real world"—that was perhaps a less than ideal choice of words—I intended to refer to the world as being what affects us pre-cognitively via the senses and is pre-cognitively modeled by the body/brain, as well as what shows up for us as "the world of common experience".


I will never experience being Black. My understanding of the Black experience has been extremely limited to television and unpleasant accounts of the Black experience. This is so although I have a 13-year-old great-grandson who is Black. His experience is very limited compared to the millions of people of color and diverse economic and cultural experiences across the US. Coming from this reality of ignorance, I very much appreciate what AI has to say...

Being Black in America encompasses a complex and multilayered experience shaped by centuries of systemic racism, resilience, and rich cultural contributions. It is characterized by persistent societal challenges as well as profound achievements that have significantly influenced the nation's culture, history, and development.

I think we should share good information and look forward to how this can improve our reality. And I can't let this go without saying how computers and forums are essential to our transition to a New Age made possible by our technology, desire for peace, and the end of tyranny. We are no longer in the horse-and-buggy age, knowing almost nothing of the real world beyond our little hamlet, our family, our group of friends, and what the Bible says according to our minister's understanding of it.
Moliere October 17, 2025 at 14:26 #1019328
Quoting Sam26
The irony is that the very kind of “rigorous analysis” you claim to prize is being accelerated by AI. The most forward-looking thinkers are not treating it as a toy but as a new instrument of inquiry, a tool that extends human reasoning rather than replacing it. Those who ignore this development are not guarding intellectual integrity; they’re opting out of the next phase of it.


I'm willing to take that gamble.

In a sense I'm fine with people using it and making it work for them. One of the uses that I came across that looked sensible for humanities scholars:

https://x.com/jonathanbfine/status/1978420519926936015


But I genuinely don't believe using it helps anyone to progress thought further. Go ahead with the next phase; I'll be waiting on my hill of luddites for the prodigals to return ;)
Sam26 October 17, 2025 at 14:58 #1019338
Reply to Moliere It's already helped me expand my thinking on epistemology, and it gave me good ideas on my book. However, you do have to have prior knowledge because it does make mistakes. The next two iterations of ChatGPT and Grok 5 have a good chance to reach AGI.

AI models from OpenAI and Google DeepMind reached gold medal-level performance at the 2025 International Mathematical Olympiad (IMO), one of the most prestigious math competitions in the world. It's also better at diagnosing than many doctors. So, I don't know where you're getting your information.
Moliere October 17, 2025 at 15:04 #1019339
Quoting Sam26
So, I don't know where you're getting your information.


No information on my part -- merely experience.

I am prejudiced against AI because I like books and talking to others, and I don't care if the AI comes across something -- I care more that a human comes across something, even if they came across it while using AI.

Information isn't as important to me in philosophy as experience, relationship, knowledge, and the good.

I am likely wrong in this analogy -- but what I see in AI, still today, is a parrot and nothing more.

Give me a Hegel and I might change my mind -- but even writing this helps the AIs to change their parroting: Oh, we have to link various words together in a certain pattern? We can do that!
Harry Hindu October 17, 2025 at 15:04 #1019340
Quoting baker
The details and the superficialities have changed, sure, but the exploitative nature of relationships between humans has not changed.

Yet our lifespans and health have increased, which was my point. Do we still have work to do? Sure. It takes all of us to stop voting for the status quo and to take money out of political campaigns, as a start.
Harry Hindu October 17, 2025 at 15:08 #1019341
Quoting Janus
Well, yes, such quotes are no substitute for argument, and obviously they do not belong to the one who quotes. It is all the more objectionable if the person presents the quoted passage as their own work. It's easy enough to find them out if the quote is from a prominent philosopher, whether alive or dead. Not so with copied-and-pasted AI-generated text.

I don't necessarily mind if others post a quote as an argument. Sure, it's not their argument, but it is an argument and needs to be addressed if it puts a hole in your position, regardless of where it came from. To deny this is to be intellectually dishonest with yourself.

Posting others' words as if they were your own is an issue I can understand, but AI is not another person (at least that we know of). It is no different than getting your answer from a calculator, or making grammar adjustments using Grammarly or a thesaurus, or even re-writing another's ideas in your own words. If it wasn't your idea to begin with, does it matter if it's in your own words?

Sam26 October 17, 2025 at 15:11 #1019342
Reply to Moliere Much of what all of us do is "parrot." Not many people can come up with an original idea to save their life.
Moliere October 17, 2025 at 15:16 #1019344
Reply to Sam26 I have a lower standard there, I think.

For instance, I'd say that this conversation we're having is an original idea.

That's not to say that the idea is universally applicable, or even applicable in any other circumstance.

I think philosophy pushes for originality with novelty, whereas I'd say original thinking is more important than novel thinking.

Sure, there's a handful of tablet-breakers, but we're all creative every day even tho what we do is "the same"


I'd put it to you that "the same" or "the different" are thought of differently by humans and today's AI.

Use it as you like.

I myself want to discourage its use amongst students as much as possible. I want them to be able to think for themselves.

AI is just a way to not do that.

And I think students are a good example of people coming up with original ideas all the time -- if not unique or revolutionary, they came up with the idea on their own.

That's what I think philosophy -- among other disciplines -- is good at teaching.
Harry Hindu October 17, 2025 at 15:18 #1019345
Quoting Sam26
Much of what all of us do is "parrot." Not many people can come up with an original idea to save their life.

The objective in thinking for yourself is to take every idea you hear from others with a grain of salt, and to even question your own ideas constantly. I have come up with certain ideas on my own only to find out that others came up with them as well. Some minds do think alike given the same kinds of experiences.
Sam26 October 17, 2025 at 15:52 #1019356
Quoting Harry Hindu
The objective in thinking for yourself is to take every idea you hear from others with a grain of salt, and to even question your own ideas constantly.


If you take every idea with a grain of salt, you’ll never move beyond hesitation. Critical thinking isn’t about doubting everything; it’s about knowing when doubt is justified. In logic, mathematics, or physics, for instance, constant suspicion would paralyze learning; you suspend doubt provisionally because the framework itself has earned trust through rigor.

In a philosophy forum, though, caution makes sense. Most participants lack grounding in epistemology, logic, or linguistic analysis, so what passes for argument is often just speculation dressed up as insight. Honestly, you could gain more from interacting with a well-trained AI than from sifting through most of what appears here; it would at least give you arguments that hold together.
Fire Ologist October 17, 2025 at 15:55 #1019357
Quoting Fire Ologist
Just what we need to add to the [online] world - more sociopaths that make errors and lie about them.


Maybe “sociopaths” is unnecessary. Wouldn’t want to scare any children.

AI is a tool. Like a hammer, it can do good or destroy, on purpose or accidentally.

Quoting Ludwig V
What worries me is that people will cede authority to it without even asking themselves whether that is appropriate.


They surely will, because sheep are easily calmed by things that sound authoritative.

———

It occurs to me: isn't a book AI? It's information received from a non-human thing. We read a book and ingest the text. We treat the words in a book as if they come from an "intelligence" behind them, or we can judge the veracity and validity of the text qua text with or without any concern for what is behind it. We can also refuse to take the author as authority, and fact check and reconstruct our own analysis.

For instance, is a reference to Pythagoras in the Pythagorean theorem of any significance whatsoever when determining the length of one side of a triangle? Is it essential to our analysis of "It is the same thing to think as it is to be" that we know who said it first? Context might be instructive if one is having trouble understanding the theorem, but it might not matter at all once one sees something useful in the text. We create a new context by reading and understanding text.

(This is related to @Banno’s point on his other thread.)

So banning any reference to AI would be like banning reference to any other author. (I said “like it” for a reason - this doesn’t mean AI is an author the same way we are authors - that is another question.)

What concerns the philosopher qua philosopher most is what is said, not who (or now, what) says it. I think.

This is not to say we shouldn't disclose the fact that AI is behind text we put our names on (if we use AI). That matters a lot. We have to know whether we are dealing with AI or not.

Quoting Moliere
But I genuinely don't believe using it helps anyone to progress thought further.


Don't we have to wait and see? It's a new tool. Early 20th century mathematicians could have said the same thing about calculators. We didn't need AI before to do philosophy, so I see your point, but it remains to be seen if it will be any help to someone or not.

The conclusions in philosophic arguments matter, to me. It is nice to think that they matter to other people as well. (But is that essential?) Regardless, I would never think the conclusions printed by an LLM matter to the LLM.

So the interaction ("dialogue") with AI and my assessment of the conclusions of AI are inherently lonely, and nowhere in the world except my own head, until I believe a person shares them, and believe I am dialoguing with another person in the world who is applying his/her mind to the text.

The bottom line to me is that, as long as we do not lie about what comes from AI and what comes from a person, it is okay to use it for whatever it can be used for. Secondly, no one should kid themselves that they are doing philosophy if they can't stare at a blank page and say what they think philosophically with reference to nothing else but their own minds. And thirdly, procedurally, we should be able to state in our own words and/or provide our own analysis of every word generated by AI, like every word written by some other philosopher, or we, along with the AI, risk not adding anything to the conversation (meaning, you take a massive risk of not doing philosophy, or not doing it well, when you simply regurgitate AI without adding your own analysis).
Paine October 17, 2025 at 16:08 #1019359
Reply to Moliere
My pup tent is located somewhere on your hill. Kafka must also be nearby:

Kafka, Reflections, 69, translated by Willa and Edwin Muir: He eats the droppings from his own table; thus he manages to stuff himself fuller than the others for a little, but meanwhile he forgets how to eat from the table; thus in time even the droppings cease to fall.
Athena October 17, 2025 at 17:02 #1019363
Quoting Moliere
But I genuinely don't believe using it helps anyone to progress thought further. Go ahead with the next phase; I'll be waiting on my hill of luddites for the prodigals to return ;)


That is like saying riding horses can't be fun, when you don't ride horses. How could you know the joy of riding a horse if you don't ride? How could you experience the joy of using AI as much as I do if you don't use it? What can you know of the future that is being opened up if you withdraw from the change instead of participating in it?

This morning, I came across an AI explanation that was biased and disappointing. If I were disappointed by AI explanations 50% of the time, I would not think so highly of it, but at the moment, I think it has enriched my life a lot. For me, it has replaced Wikipedia because it captures the explanation of a subject so concisely and is relatively free of the biases that are more apt to show up with Wikipedia. I will still use and support Wikipedia, but it isn't my favorite right now. For me, the difference is like that between a better camera that produces more detailed pictures with brighter colors and an old Brownie camera with black-and-white film. :confused:
Athena October 17, 2025 at 17:11 #1019365
Quoting Fire Ologist
AI is a tool. Like a hammer, it can do good or destroy, on purpose or accidentally.


AI is like a hammer? That is like saying humans are like apes. We evolved from that line of evolution, but humans have changed the planet in dramatic ways, and apes have not. The potential for AI to act on its own might make it different from a hammer.
Athena October 17, 2025 at 17:32 #1019369
Quoting Sam26
Reply to Moliere Much of what all of us do is "parrot." Not many people can come up with an original idea to save their life.


That may be true, but the first person who showed up at the protest in Portland, Oregon, dressed as a frog has started a wonderful movement of being creative and fun in this moment of high tensions. I was not looking forward to the Saturday No Kings Day march until I figured out how to use the Mad Hatter's tea party to make my statement. I am looking forward to what creative people are doing. This is such a marvelous human thing to do, and that is something to celebrate.

I asked what AI can create and it says...
AI can create a wide range of original content, including text (stories, essays, code), images, audio (music, spoken words), and video by learning patterns from vast datasets. It also creates data-driven insights through analysis and prediction, develops personalized user experiences in areas like shopping, and generates functional outputs such as spreadsheets and automated tasks, effectively acting as a powerful tool for creativity, productivity, and automation.


I really look forward to insights based on patterns, but hopefully with less human bias. I think it may do better than humans. However, I am not comfortable with giving it the power to make decisions and act on them without flesh-and-blood human control and judgment. Like, No Kings Day is about our liberty to govern ourselves free of tyranny. I am not willing to give that up. :wink:
Sam26 October 17, 2025 at 17:47 #1019371
Reply to Athena I won't comment on the political part of your post because I think we're very far apart. However, in the future I can see where humans will merge with AI, so we'll probably become one with machines, probably biological machines.
baker October 17, 2025 at 18:21 #1019376
Quoting Janus
I think, given the dangers of AI, and the ways in which prominent members of this site have used it to make themselves look smarter than they really are, that its use should be banned altogether on this site.


I say outsmart the AIs and their faithful users. That doesn't necessarily mean stopping the use of AIs altogether, but using them only sparingly and deliberately. Most of all, it means lowering or otherwise changing one's goals in life.

To me, using AIs, especially LLMs, for everyday things or for work is like using drugs to get the energy and the focus necessary to do one's work. Occasionally, this can be a last resort, but it is not sustainable in the long run. If one cannot do one's job on one's own, consistently, then one has a job that is too demanding and that will eventually get one into trouble in one way or another.

It's quite pointless to discuss the ethics of using AIs, because people will use them, just like they use drugs, and once it starts, it is impossible to rein it in. But what one can do is rethink whether one really wants to spend one's hard earned time with people who use AIs, or drugs, for that matter.

baker October 17, 2025 at 18:24 #1019377
Quoting Sam26
In a philosophy forum, though, caution makes sense. Most participants lack grounding in epistemology, logic, or linguistic analysis, so what passes for argument is often just speculation dressed up as insight. Honestly, you could gain more from interacting with a well-trained AI than from sifting through most of what appears here; it would at least give you arguments that hold together.


Which is easily remedied by cultivating good character for oneself.

People of substance don't post much on internet forums.
Athena October 17, 2025 at 18:41 #1019382
Quoting Sam26
I won't comment on the political part of your post because I think we're very far apart. However, in the future I can see where humans will merge with AI, so we'll probably become one with machines, probably biological machines.


That sounds like the Sumerian notion of many gods and humans being created to serve them. I am against merging humans with machines; however, our industrial society did exactly that! And our hierarchical patriarchy has maintained humans exploiting humans. There is an excellent website explaining the ancient mythology and how the Hebrews reworked it, giving us more freedom and human dignity than the original mythology gave us.

The Industrial Age merged humans with machines. Our Industrial economy/society made humans extensions of the machines. Union workers risked their lives in a fight for better working conditions and wages when the flood of workers needing jobs made them cheap labor.

We took that a step further when we got on the path of the military-industrial complex. We see humans doing jobs, but this is a computer-driven reality, except that the computer is not made of inorganic material. The increasingly centralized computer has human components, like the Borg of Star Trek. All those workers are controlled by policies that come with the beginning of each bureaucracy/machine. The jobs are explained in detail, and the workers are dispensable because the new person who does the job will do it the same as the person who left the job. It is policy set in the past that controls the present.

Joseph Campbell, the guru of mythology, said humanity needs mythology and that Star Trek is the best mythology for our time. However, my understanding of the human computer governing us comes from studying Public Policy and Administration at the U of O. The US adopted the Prussian models of bureaucracy and education. That is what makes the military-industrial complex that Eisenhower warned us about.

Whatever the case, if people don't want AI running things, they need to be aware of the evolution that made us extensions of machines and now attempts to manage every aspect of our lives, just as Tocqueville warned would happen when he wrote around 1830, after the French Revolution and his visit to the US.
Athena October 17, 2025 at 18:48 #1019384
Quoting baker
In a philosophy forum, though, caution makes sense. Most participants lack grounding in epistemology, logic, or linguistic analysis, so what passes for argument is often just speculation dressed up as insight. Honestly, you could gain more from interacting with a well-trained AI than from sifting through most of what appears here; it would at least give you arguments that hold together.
— Sam26

Which is easily remedied by cultivating good character for oneself.

People of substance don't post much on internet forums.


Do you guys ever experience hypobaric hypoxia from being so high above everyone else?

baker October 17, 2025 at 19:02 #1019389
Quoting Athena
Do you guys ever experience hypobaric hypoxia from being so high above everyone else?


Now what did I just say about cultivating good character for oneself?
Janus October 17, 2025 at 23:18 #1019424
Quoting Sam26
Don't mistake the speculative misuse of ideas for the ideas themselves. AI is no longer in the realm of "mental masturbation"; it's already reshaping science, mathematics, and even philosophy by generating proofs, modeling complex systems, and revealing previously inaccessible patterns of thought. To dismiss that as delusory is to confuse ignorance of a subject with the absence of rigor within it.


You are misunderstanding. My comments re "mental masturbation" were specifically targeting text like the response made to @Number2018 by ChatGPT. I think the use of AIs in science and math is fine. In my view those are just the kinds of disciplines AIs should be trained on. Of course they have to be trained on basic pattern recognition initially. I don't know, and would need to look into, what they were initially trained on before being released "into the wild". Now that they are out there, they are being trained on whatever content is to be found in their casual interactions with people.

Quoting Sam26
The irony is that the very kind of “rigorous analysis” you claim to prize is being accelerated by AI. The most forward-looking thinkers are not treating it as a toy but as a new instrument of inquiry, a tool that extends human reasoning rather than replacing it. Those who ignore this development are not guarding intellectual integrity; they’re opting out of the next phase of it.


Can you name a few of those "forward-looking thinkers"? As I said in the OP, my main objections are that it was irresponsibly released before being properly understood, and that it's being used without acknowledgement to make posters on these forums look smarter than they are. They will also have a horrendous environmental impact. But I accept that their continued use and evolution is now inevitable and, unfortunately, unpredictable. It is a case of playing with fire.

Reply to Athena Reply to Harry Hindu Reply to baker Out of time now, I'll try to respond when I have more time.

Sam26 October 18, 2025 at 01:12 #1019432
Quoting Janus
Can you name a few of those "forward-looking thinkers"?


There are those who view AI as an epistemic tool, something that extends, rather than replaces, human inquiry. There's a long list of people who fit the bill. For example, Nick Bostrom and Luciano Floridi have been working on the conceptual implications of AI for ethics, cognition, and the philosophy of information. Vincent Müller and Mariarosaria Taddeo have been exploring how AI reshapes the logic of justification and responsibility in scientific reasoning. On the cognitive side, Joscha Bach treats AI systems as experimental models of mind, ways to probe the nature of understanding. Even researchers outside philosophy, in fields like computational linguistics and mathematical discovery, are beginning to treat AI as a genuine collaborator capable of generating new proofs and hypotheses.

Maybe we use books, dictionaries, philosophical papers, editors, and scientific discoveries to make us look smarter than we are. You see this all the time in forums, even without AI, so it's nothing new. Besides, do you really care about the psychology of someone who's writing about what they think?

Jamal October 18, 2025 at 04:25 #1019444
Quoting Moliere
But I genuinely don't believe using it helps anyone to progress thought further


What does it mean to "progress thought"? According to any sense I think of, using an LLM certainly can help in that direction. As always, the point is that it depends how it's used, which is why we have to work out how it ought to be used, since rejection will be worse than useless.

Anyway, here is one example: ask it to critique your argument. This is an exercise in humility and takes your thoughts out of the pugilistic mode and into the thoughtful, properly philosophical mode. It's a form of Socratization: stripping away the bullshit, re-orientating yourself towards the truth. Often it will find problems with your argument that can only be found when interpreting it charitably; on TPF this often doesn't happen, because people will score easy points and avoid applying the principle of charity at all costs, such that their criticisms amount to time-wasting pedantry.
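For anyone who wants to make this step a habit, here is a minimal sketch of how it could be scripted with the openai Python client; the model name and the prompt wording are illustrative assumptions on my part, not a recommendation of any particular provider or model:

```python
# A minimal sketch of the "ask it to critique your argument" step.
# Assumes OPENAI_API_KEY is set in the environment; the model name below
# is a hypothetical choice, not an endorsement.
from openai import OpenAI

client = OpenAI()

draft = "Nietzsche's anti-egalitarianism fails because ..."  # your own post

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": (
                "Interpret the following argument as charitably as you can, "
                "then state the strongest objections to it. Do not rewrite it."
            ),
        },
        {"role": "user", "content": draft},
    ],
)
print(response.choices[0].message.content)
```

The same prompt works pasted into any chat interface; the point is the instruction to steelman before objecting, not the tooling.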

Relatedly, let's say you're on TPF, criticizing Nietzsche's anti-egalitarianism. Before you hit the submit button you can ask an LLM to put forth the strongest versions of Nietzsche's position so you can evaluate whether your criticism stands up to it, and then rewrite your criticisms (yourself). How can this be inferior to—how does this require less thought than—hitting the submit button without doing that? Granted that it's good to take the long way round and go and consult the books, but (a) one could spend an infinite length of time on any one post, reading all the books in the world just to produce a single paragraph, so we have to draw the line somewhere, and (b) using the LLM in this way will direct you towards books and papers and the philosophers you didn't know about who you can learn from.

Another example: before LLMs it used to be a tedious job to put together an idea that required research, since the required sources might be diverse and difficult to find. The task of searching and cross-referencing was, I believe, not valuable in itself except from some misguided Protestant point of view. Now, an LLM can find and connect these sources, allowing you to get on with the task of developing the idea to see if it works.

Quoting Moliere
I myself want to discourage its use amongst students as much as possible. I want them to be able to think for themselves.

AI is just a way to not do that.


A lot of people think it is, and it's clear to me that it can be. We are at the point now where its general use is stigmatized because it has, understandably, been used by students to cheat. I think it's clear that we need to think about it in a more fine-grained way.

The world has a tool that will be used, by more and more people. The important task now is stigmatizing improper use of this tool, and encouraging responsible use. As I said in the other thread, stigmatizing all use of it will be counterproductive, since it will cause people to use it irresponsibly and dishonestly.
Jamal October 18, 2025 at 04:34 #1019445
@Moliere Let's say you object to some of the points I've made above. For example, I can see that you might push back against this:

Quoting Jamal
Another example: before LLMs it used to be a tedious job to put together an idea that required research, since the required sources might be diverse and difficult to find. The task of searching and cross-referencing was, I believe, not valuable in itself except from some misguided Protestant point of view. Now, an LLM can find and connect these sources, allowing you to get on with the task of developing the idea to see if it works.


But your pushback is potentially constructive, in that it can help us decide on which uses of LLMs are good and which are bad. The unconstructive way, I think, is in just wishing the toothpaste were back in the tube.
Fire Ologist October 18, 2025 at 05:08 #1019448
Quoting Athena
The potential for AI to act on its own might make it different from a hammer.


You sell hammers way too short, and maybe give AI way too much credit.

Quoting Athena
AI is a tool. Like a hammer, it can do good or destroy, on purpose or accidentally.
— Fire Ologist


You say "act on its own"; I said "accidentally".

So you don’t think AI is a tool? What else is “artificial” but some sort of techne - the Greek root for technology and for hand-tooling? AI is a word sandwich machine. It obviously is a device we’ve built like any other machine that does measurable work - it just now takes a philosopher to measure the work AI does.
Baden October 18, 2025 at 09:15 #1019476
I've added the note: NO AI-WRITTEN CONTENT ALLOWED to the guidelines and I intend to start deleting AI written threads and posts and banning users who are clearly breaking the guidelines. If you want to stay here, stay human.
javi2541997 October 18, 2025 at 09:32 #1019478
Quoting Baden
I've added the note: NO AI-WRITTEN CONTENT ALLOWED to the guidelines and I intend to start deleting AI written threads and posts and banning users who are clearly breaking the guidelines. If you want to stay here, stay human.


Well-put! :up: :100:
Pierre-Normand October 18, 2025 at 10:16 #1019486
Quoting Baden
I've added the note: NO AI-WRITTEN CONTENT ALLOWED to the guidelines and I intend to start deleting AI written threads and posts and banning users who are clearly breaking the guidelines. If you want to stay here, stay human.


I assume, but I also mention it here for the sake of precision, that the clause "(an obvious exceptional case might be, e.g. an LLM discussion thread where use is explicitly declared)" remains applicable. I assume also (but may be wrong) that snippets of AI-generated stuff, properly advertised as such, can be quoted in non-LLM discussion threads as examples, when it is topical and when it isn't a substitute for the user making their own argument.
Baden October 18, 2025 at 10:17 #1019487
Reply to Pierre-Normand

Yes, that's correct.
Baden October 18, 2025 at 10:19 #1019488
Reply to javi2541997

Thanks, javi. :pray: (I've written some more on this in Banno's AI discussion).
unenlightened October 18, 2025 at 10:33 #1019495
Quoting Athena
Do you guys ever experience hypobaric hypoxia from being so high above everyone else?


If I say 'yes', will it make you look up to me?
Outlander October 18, 2025 at 11:10 #1019501
Quoting Athena
Do you guys ever experience hypobaric hypoxia from being so high above everyone else?


If popular aphorisms are to be trusted, it's quite lonely at the top. But at least they're nice. That, or desperate to trap another unwitting soul so as to alleviate their loneliness and deprive another of that nearly forgotten feeling of what it was, once upon a time, when one knew so little yet could dream of so much. :cry:
Ludwig V October 18, 2025 at 11:40 #1019504

Quoting Sam26
Much of what all of us do is "parrot." Not many people can come up with an original idea to save their life.

Literally parroting is often a waste of time. But formulating existing ideas for oneself, discussing and debating them, playing with them are all part of understanding them. This is worthwhile in its own right, and is often a necessary prerequisite for coming up with one's own worthwhile ideas.

Quoting Fire Ologist
The irony of the “information” super highway.

Actually, on further thought, I'm beginning to think that the real fault lies with the naivety of thinking that the internet would be immune from all the varieties of human behaviour. Almost everything that goes on is normal behaviour - on steroids.

Quoting Fire Ologist
The irony of calling its latest advancement “intelligent”. We demean the intelligence we seek to mimic in the artificial, without being aware we are doing so.

Many people seem to think that the point of AI is to mimic human intelligence. I can't understand that, except as a philosophical exercise. We have, I would say, a quite reasonable supply of human intelligence already. There are plenty of things that AI can do better and quicker than humans. Why don't we work with those?

Quoting Baden
I've added the note: NO AI-WRITTEN CONTENT ALLOWED to the guidelines and I intend to start deleting AI written threads and posts and banning users who are clearly breaking the guidelines. If you want to stay here, stay human.

That seems a bit radical. What does bother me a bit is how one can identify what is and isn't written by AIs. Or have you trained an AI to do that?
Outlander October 18, 2025 at 11:51 #1019506
Quoting Sam26
Much of what all of us do is "parrot." Not many people can come up with an original idea to save their life.


Because it's all been said and done before. The average person in the past 50 years comes from a multi-sibling household with TV or Internet or otherwise endless forms of entertainment that people a mere few centuries ago never had. Nobody has to think anymore. Not really. Other than the basic desires and how they relate to one's safety, gain, and resulting comfort in life.

Philosophy:
"There is no such thing as a new idea. It is impossible. We simply take a lot of old ideas and put them into a sort of mental kaleidoscope."
- Mark Twain

Religion:
"There is nothing new under the Sun."
- Ecclesiastes

I mean, what you're suggesting is akin to creating a bonfire underwater. Even if you did, what good or purpose could ever come from it? :chin:
praxis October 18, 2025 at 12:41 #1019511
@Jamal @Baden

Regarding the new policy, sometimes when I've written something that comes out clunky I run it through an AI for "clarity and flow" and it subtly rearranges what I've written. Is that a no-no now?
Harry Hindu October 18, 2025 at 12:53 #1019514
Quoting Sam26
If you take every idea with a grain of salt, you’ll never move beyond hesitation. Critical thinking isn’t about doubting everything; it’s about knowing when doubt is justified. In logic, mathematics, or physics, for instance, constant suspicion would paralyze learning; you suspend doubt provisionally because the framework itself has earned trust through rigor.

In a philosophy forum, though, caution makes sense. Most participants lack grounding in epistemology, logic, or linguistic analysis, so what passes for argument is often just speculation dressed up as insight. Honestly, you could gain more from interacting with a well-trained AI than from sifting through most of what appears here; it would at least give you arguments that hold together.


"With a grain of salt" is a 1600s direct translation from Modern Latin "cum grano salis", and salis is genitive of sal, which, in addition to ‘salt’, figuratively means "ntellectual acuteness, good sense, shrewdness, wit.

The Latin phrase is found in English literature in the 1600s and 1700s, and salis appears to precisely mean ‘good sense, intelligence’.

My point was that, for one to avoid parroting others, one should be skeptical of what others say, not that one should avoid logic and reason.
Harry Hindu October 18, 2025 at 12:59 #1019515
Quoting Janus
My comments re "mental masturbation"

Seems like philosophy itself could be labeled as mental masturbation.

Quoting Janus
Of course they have to be trained on basic pattern recognition initially. I don't know and would need to look into what they initially were specifically trained on before being released "into the wild". Now that they are out there they are being trained on whatever content is to be found in their casual interactions with people.

Dude, the content from human beings trained in pseudo-science and other nonsense is available on this forum every day for you to read, without any AI. If anything, posters should run their ideas through AI before wasting time posting their zany ideas to humans, which would spare the rest of us from wasting time reading nonsensical posts.
Fire Ologist October 18, 2025 at 14:01 #1019530
Quoting Ludwig V
Why don't we work with those?


AI is a tool. Tools can be useful. I don’t think it should be banned.

And regardless of what we do, and regardless of what we say and think about AI, it will be used to harm people. All things digital can now be faked so well; people are already great at lying - we really didn’t need to make the internet even more suspicious. But we have it now.

So we should also watch out. And have conversations like this one.
Outlander October 18, 2025 at 14:28 #1019539
Quoting praxis
Regarding the new policy, sometimes when I’ve written something that comes out clunky I run it through an AI for “clarity and flow” and it subtly rearranges what I’ve written. Is that a non-no now?


+1 for an excellent question. Naturally, @Jamal would have the final say.

(though I would presume the answer to be no, I'd be fairly confident in stating the policy and rules will likely be selectively applied to members based on their standing and tenure here. I'd eat my hat [though perhaps I'd have to buy one first] if you were penalized for such, considering the noteworthiness of your contributions here, versus say, someone who signed up a week or a month ago and seems to be keyboard-happy)

Additionally, assuming you're honest at chess, you're clearly no simpleton. Why would a skilled biker not scoff at the idea of using training wheels for his daily commute? It just doesn't seem necessary. Look at the mod Timothy's posts. They're very long. Perfectly coherent, not a single waste of space despite the volume. But if one really wanted to critique, sure, perhaps the same idea or message could be conveyed with a bit less wording. But that would be a butchery of his expression, of who he is as a person, and of what he allows us the privilege of gleaning from his mind, and yes, perhaps very soul. It would be something that was once an honor turned into a mockery of humanity itself. A simpleton might look at his posts and call them "clunky", but certainly not you.

Look at every great classic novel. If you really wanted to, you could turn every chapter or the entire book into a sentence or a paragraph without removing a bit of the "essence" or "point" the author intended or that the general readership considers as important. What kind of nightmare world would that be? Why not just look at a traffic light all day and get amazed by the simplistic changes of color. Or why not watch paint dry? Because that's what it all boils down to when you remove what you believe to be "clunkiness", what you think might be excess or something you'd otherwise wish to see perfected. We're human. We're alive. We make mistakes. We make imperfections. We ARE imperfect. And that's why we're unique and can never be replaced by autonomy. If I found out all AI "died" or became inert or was otherwise outlawed tomorrow, I'd laugh. If I found out something happened to you, I'd cry. And that's the truth. Because what you may consider something people don't want to read (and sure, most people are rather dull and probably wouldn't), people who actually have their heads on right would enjoy nothing more.

You don't need AI. Not one bit. Again, you have tenure. So I wouldn't worry. Though, it couldn't hurt to set the example and just do what everyone has to, at least right away while the new rule is literally hot off the presses. :smile:
praxis October 18, 2025 at 15:10 #1019543
Reply to Outlander

It’s basically laziness on my part for relying on AI to occasionally straighten out my jumbled thoughts and I’ll abide by whatever decree is imposed upon us.

About chess, that actually shows how dimwitted and lazy I am. Given how many games I’ve played I should be much better than I am, and I’ve never really studied to improve.
Baden October 18, 2025 at 15:19 #1019544
Quoting Ludwig V
What does bother me a bit is how one can identify what is and isn't written by AIs. Or have you trained an AI to do that?


There are plenty of online tools out there that already do that. Some are more reliable than others. Tip: Avoid sponsored results that give false positives to sell you something.

Outlander October 18, 2025 at 15:43 #1019549
Quoting Baden
There are plenty of online tools out there that already do that. Some are more reliable than others. Tip: Avoid sponsored results that give false positives to sell you something.


I think the elephant in the room, or perhaps the question he was not able to form, would be:

Aren't people raised by their parents? The apple doesn't fall far from the tree? Don't men consciously and subconsciously become influenced by the "father" figures in their lives, be they in the flesh, or perhaps in movies, music, books, and media in general? Don't we try to learn lessons from (not to say imitate, as in cosplay or impersonation), and otherwise wish to be influenced by, great philosophers and other folk we respect? We do.

So what happens when people use AI more than they speak to other humans, and so end up speaking in the same pattern, with the same frequency of usages, as the AI they use? I do tend to purposely try to speak "extra clearly, professionally, concisely, and intelligently" online, significantly more than I would talking to anyone in person, friend or stranger. Almost non-genuinely and "perfectly" (out of consideration and respect for the sake of readability and, of course, interest to the person I'm replying to). Surely you can see how there is a concern for mistakes to be made? To err is human. And to use a tool made by humans, well, that just complicates things even further, does it not? :smile:
Ludwig V October 18, 2025 at 18:57 #1019572
Quoting Fire Ologist
AI is a tool. Tools can be useful. I don’t think it should be banned.

I don't disagree. Actually, I don't think it is possible to prevent it being used. There's a lot of hype and over-enthusiasm around at the moment. I'm sure it will settle down eventually.

Quoting Fire Ologist
So we should also watch out. And have conversations like this one.

Hopefully, people will get more reflective and more selective in how they deal with it.

Quoting Baden
There are plenty of online tools out there that already do that. Some are more reliable than others.

I'm glad to hear that and that there are a number of them.
If what has happened with pictures and videos is any guide, it'll be a constant struggle. Fakers will work to outsmart the tools. New tools will be developed. There's no magic bullet.
Quoting Baden
Tip: Avoid sponsored results that give false positives to sell you something

Yes. It's always a good idea to assume that you don't get anything for nothing - and very little for six pence, as they say in Yorkshire.
Leontiskos October 18, 2025 at 19:37 #1019578
Quoting Jamal
Anyway, here is one example: ask it to critique your argument. This is an exercise in humility and takes your thoughts out of the pugilistic mode and into the thoughtful, properly philosophical mode. It's a form of Socratization: stripping away the bullshit, re-orientating yourself towards the truth. Often it will find problems with your argument that can only be found when interpreting it charitably; on TPF this often doesn't happen, because people will score easy points and avoid applying the principle of charity at all costs, such that their criticisms amount to time-wasting pedantry.


This is a good point.

Quoting Jamal
Another example: before LLMs it used to be a tedious job to put together an idea that required research, since the required sources might be diverse and difficult to find. The task of searching and cross-referencing was, I believe, not valuable in itself except from some misguided Protestant point of view. Now, an LLM can find and connect these sources, allowing you to get on with the task of developing the idea to see if it works.


I don't think this is right. It separates the thinking of an idea from the having of an idea, which doesn't make much sense. If the research necessary to ground a thesis is too "tedious," then the thesis is not something one can put forth with integrity.

But perhaps you are saying that we could use the LLM as a search engine, to see if others have interpreted a philosopher in the same way we are interpreting them?

Part of the problem with the LLM is that it is private, not public. One's interaction history, prompting, etc., are not usually disclosed when appealing to the LLM as a source. The code is private in a much starker sense, even where the LLM is open source. Put differently, the LLM is a mediator that arguably has no place in person-to-person dialogue. If the LLM provides you with a good argument, then give that argument yourself, in your own words. If the LLM provides you with a good source, then read the source and make it your own before using it. The interlocutor needs your own sources and your own arguments, not your reliance on a private authority. Whatever parts of the LLM's mediation are publicly verifiable can be leveraged without use of the LLM (when dialoguing with an interlocutor). The only reason to appeal to the LLM itself would be in the case where publicly verifiable argumentation or evidence is unavailable, in which case one is appealing to the authority of the LLM qua LLM, which is both controversial and problematic. Thus a ban on LLMs need not be a ban on background, preparatory use of LLMs.
baker October 18, 2025 at 20:17 #1019586
Quoting Leontiskos
If the research necessary to ground a thesis is too "tedious," then the thesis is not something one can put forth with integrity.


It all just goes to show once more how plebeified higher education has become.
Banno October 18, 2025 at 20:39 #1019595
Quoting Baden
I've added the note: NO AI-WRITTEN CONTENT ALLOWED to the guidelines and I intend to start deleting AI written threads and posts and banning users who are clearly breaking the guidelines. If you want to stay here, stay human.

This is surely a rod for your own back, and the backs of the other mods. Apart from the most obvious cases, you can't tell. “AI-written” stops being a meaningful category as AI is blended into the way we operate online; the way we search, research, browse and read is permeated and augmented by AI.

Better to focus Mod activity on quality and engagement rather than origin.
Metaphysician Undercover October 19, 2025 at 02:02 #1019644
Quoting Banno
This is surely a rod for your own back, and the backs of the other mods. Apart from the most obvious cases, you can't tell. “AI-written” stops being a meaningful category as AI is blended into the way we operate online; the way we search, research, browse and read is permeated and augmented by AI.


I find it amazing that you seem to think that an AI which distinguishes between human and AI writing is impossible. I think that this is exactly the type of thing that an AI would be exceptionally good at.

But what is this blending you are talking about? Copying any sentence word for word is plagiarism, and disqualifies the whole work. How do you propose that a human writer might blend with AI? Would that be an implant in the brain?
Jamal October 19, 2025 at 05:52 #1019658
Quoting Ludwig V
What does bother me a bit is how one can identify what is and isn't written by AIs. Or have you trained an AI to do that?


Quoting Baden
There are plenty of online tools out there that already do that.


I think I agree with @Banno about this: such tools will never be able to catch everything, will produce false positives, and quickly become outdated anyway. It's easy, I think, to make an LLM's output pass as written by a human, with a prompt like this: "Write X but make it sound human. Vary the sentence structure, add a few conversational phrases, and some minor spelling and grammatical mistakes, to avoid AI detection."

The only way detection will be reliable is if LLM watermarking is implemented, but so far it's not being implemented consistently and maybe never will be.
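For concreteness, here is a minimal sketch of how a statistical watermark check would work, assuming a Kirchenbauer-style "green list" scheme in which the generator is biased toward tokens that hash as green. The hash bucketing, the 0.5 green fraction, and the toy whitespace tokenization are all illustrative assumptions, not any vendor's actual implementation:

```python
import hashlib

# Minimal "green-list" watermark detection sketch, loosely after
# Kirchenbauer et al. (2023). Illustrative assumptions throughout:
# the hash bucketing, the green fraction, and whitespace tokenization.

GREEN_FRACTION = 0.5  # fraction of the vocabulary treated as "green" per step

def is_green(prev_token: str, token: str) -> bool:
    # The green/red split is pseudo-random but reproducible: it is seeded
    # by the preceding token, so a detector can recompute it exactly.
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] < GREEN_FRACTION * 256

def green_rate(tokens: list[str]) -> float:
    # Watermarked generation is biased toward green tokens; unmarked human
    # text should land near GREEN_FRACTION purely by chance.
    if len(tokens) < 2:
        return 0.0
    hits = sum(is_green(a, b) for a, b in zip(tokens, tokens[1:]))
    return hits / (len(tokens) - 1)

sample = "the cat sat on the mat because it was tired".split()
print(f"green rate: {green_rate(sample):.2f}")  # ~0.5 for unmarked text
```

The catch is the one just noted: this detects nothing unless the generator was actually biased toward green tokens, which is why patchy deployment leaves detection unreliable.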
Baden October 19, 2025 at 06:03 #1019662
Reply to Jamal

If someone wants to go to that trouble, sure. And we should make them do it rather than make it easy for them. There is also the possibility of comparing to past posts, but, ultimately, if a poster wants to fool us as a means to fooling themselves about their written capabilities, they can probably get away with that somehow. But perhaps the vanity of going through that process might be enlightening to them. And if the product is undetectable, our site will at least not look like an AI playground.

I think, though, if we make the case for human writing here, fewer posters will think it's acceptable to break the rules in whatever manner. We should make the case and the rules strongly because we need to be assertive about who and what we are and not just roll over. We have nothing to lose by going in that direction, and I believe the posters with most integrity here will respect us for it.
unimportant October 19, 2025 at 11:08 #1019683
Quoting Janus
prominent members of this site have used it to make themselves look smarter than they really are


Can you show an example of this? Maybe that is why user Boethius seemed to reel off such long posts in short order, now I think of it? :rofl:

I really hate it too on the wider internet now. How, seemingly overnight, everyone and their mother are casually writing "I asked chatgpt xyz" and have fully adopted it into their Weltanschauung.

I can't imagine how bad things are going to get in the coming years, given how quickly it has already gotten to this state. Maybe it will be like some other rapid-rise cultural phenomena where it will reach saturation point fast and peter out and get pushback/revulsion before long. The bubble effect.

What really gets my goat is being on chat for an online store: when they get tired of replying they will pass you over to AI without telling you. I quickly figured that out on livechat when pages' worth of responses were regurgitated in milliseconds. Over email it takes a little longer, but I've suspected it several times when there are no spelling errors and some generic female name who is very friendly, as usually you will get curt, monosyllabic replies from real people who can't be bothered with you.
Metaphysician Undercover October 19, 2025 at 12:40 #1019696
Quoting Baden
There is also the possibility of comparing to past posts...


That's the trick. Idiosyncrasies are proper to the individual, and everyone has them; we cannot escape that, and trying to escape them makes them more obvious. The AI, on the other hand, can write in a new random style each time, providing no indication of its identity, so the AI cannot be identified as the same AI. The human being, however, always has an identity and can always be identified. I learned that from the case of Ted Kaczynski (the Unabomber). So an adequately trained AI, reviewing files, would be able to quickly flag unidentifiability as a nonhuman trait.
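To make that concrete, here is a minimal sketch of such an idiosyncrasy check, assuming function-word frequencies as the stylistic fingerprint and cosine similarity as the comparison. The word list and any decision threshold are illustrative assumptions, not a validated forensic method:

```python
import math
from collections import Counter

# Toy stylometry sketch: profile a text by the relative frequencies of
# common function words (a classic authorship-attribution signal) and
# compare profiles by cosine similarity.

FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "is",
                  "was", "it", "for", "on", "with", "as", "but"]

def profile(text: str) -> list[float]:
    words = text.lower().split()
    total = max(len(words), 1)
    counts = Counter(words)
    return [counts[w] / total for w in FUNCTION_WORDS]

def similarity(a: str, b: str) -> float:
    u, v = profile(a), profile(b)
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(y * y for y in v))
    return dot / (nu * nv) if nu and nv else 0.0

# A new post scoring consistently far from a member's own posting history
# would be the "unidentifiability" flag described above.
known = "the cat sat on the mat and it purred"
suspect = "a dog ran in the park and it barked"
print(f"style similarity: {similarity(known, suspect):.2f}")
```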
bongo fury October 19, 2025 at 12:46 #1019698
Quoting Banno
“AI-written” stops being a meaningful category as AI is blended into the way we operate online; the way we search, research, browse and read is permeated and augmented by AI.


... if and only if "plagiarised" stops being a meaningful category, for corresponding reasons?

And yet, it seems entirely plausible to many of us that,

Quoting Baden
I use it to research not write the results of my research. I also use books to research and don't plagiarise from them.


I conclude that you and @Jamal are unduly defeatist. (Or playing devil's advocate?) Which I had put down to a corrupting effect of engaging with chatbots at all, but am now at a loss to explain.
Harry Hindu October 19, 2025 at 13:15 #1019701
Quoting Metaphysician Undercover
That's the trick. Idiosyncrasies are proper to the individual, and everyone has them; we cannot escape that, and trying to escape them makes them more obvious. The AI, on the other hand, can write in a new random style each time, providing no indication of its identity, so the AI cannot be identified as the same AI. The human being, however, always has an identity and can always be identified. I learned that from the case of Ted Kaczynski (the Unabomber). So an adequately trained AI, reviewing files, would be able to quickly flag unidentifiability as a nonhuman trait.


Not to mention you can ask AI for different ways of putting the same idea.

Not to mention that people evolve in their writing style.

Not to mention that re-wording someone else's idea does not make you the originator of that idea, but some on this forum imply that you would be.

What if a non-English speaker runs their writing through a translator and posts it?

It seems like this site will have to perform a balancing act between encouraging well written posts and limiting the use of tools that allow writers to do just that.

If the focus is more on the source rather than the idea, then this is no longer a philosophy forum but a publisher's warehouse.
Baden October 19, 2025 at 14:00 #1019705
Quoting praxis
Regarding the new policy, sometimes when I’ve written something that comes out clunky I run it through an AI for “clarity and flow” and it subtly rearranges what I’ve written. Is that a no-no now?


My take on this---which I think is fairly consistent with @Jamal as we've just had an exchange in the mod forum---is, as I said there:

"We allow proofreading in the guidelines. But we also more or less say if the proofreading moves too far into editing and then rewriting and therefore makes your text look AI generated, that's a risk you run. I would agree it's similar to grammarly in a way, but AI can sometimes take it too far. So, yes, it's not against the rules in itself, but I don't know why people can't just live with a bit of clunky writing. It will save us wondering about whether or not its AI gen'd and maintain their quirky indviduality."
Baden October 19, 2025 at 14:01 #1019706
Quoting Baden
My take on this---which I think is fairly consistent with Jamal as we've just had an exchange in the mod forum---is, as I said there:


See above for example of clunky writing... :smile:
Baden October 19, 2025 at 14:04 #1019707
Quoting bongo fury
I conclude that you and Jamal are unduly defeatist. (Or playing devil's advocate?) Which I had put down to a corrupting effect of engaging with chatbots at all, but am now at a loss to explain.


How so? Are you against all use of AI in every context? I mean that is definitely something we couldn't police even if we wanted to.
bongo fury October 19, 2025 at 16:00 #1019719
Quoting Baden
Are you against all use of AI in every context?


No. Just the plagiarism.
Metaphysician Undercover October 19, 2025 at 19:04 #1019741
Quoting Harry Hindu
It seems like this site will have to perform a balancing act between encouraging well written posts and limiting the use of tools that allow writers to do just that.


The tool which allows writers to produce well written posts is knowledge. There is no need to place any restrictions on that tool.

Leontiskos October 19, 2025 at 19:21 #1019744
Quoting Baden
We have nothing to lose by going in that direction, and I believe the posters with most integrity here will respect us for it.


Good stuff.

Quoting Baden
And if the product is undetectable, our site will at least not look like an AI playground.


The "undetectability" argument turns back on itself in certain respects. Suppose AI-use is undetectable. Ex hypothesi, this means that AI-use is not detrimental, for if something cannot be detected then it cannot be detrimental (or at least it cannot be identified as the cause of any detriment). But this is absurd. The whole premise of a rule against AI-use is that excessive and inappropriate AI-use would be detrimental to the forum, and what is detrimental to the forum is obviously also detectable. There is an equivocation occurring between being able to detect every instance of AI-use, and AI-use being a detectable cause given certain undesirable effects.

So I want to say that one should think about generating a philosophical culture that is adverse to outsourcing thinking to AI, rather than merely thinking about a rule and its black-and-white enforcement. It shouldn't be too hard to generate that culture, given that it already exists in anyone remotely interested in philosophy. This is precisely why it is more important that the general membership would heed such a rule, whether or not the rule could be enforced with some measure of infallibility. The rule is not heeded for mere fear of being found out and punished, but rather because it is in accord with the whole ethos of philosophical inquiry. This is in accord with Kant's idea of respect for a law, rather than obeying out of fear or self-interest.

In order to be effective, a rule need not be infallibly enforceable. No rule achieves such a thing, and the rules are very rarely enforced in that manner. It only needs to track and shape the cultural sense of TPF with respect to AI.

Of course it goes far beyond AI. The fellow who is mindlessly beholden to some particular philosopher, and cannot handle objections that question his philosopher's presuppositions, does not receive much respect in philosophical circles, and such a fellow does not tend to prosper in pluralistic philosophical settings. So too with the fellow who constantly appeals to AI. The TPF culture already opposes and resists the outsourcing of one's thinking, simply in virtue of the fact that the TPF culture is a philosophical culture. The rule against outsourcing one's thinking to AI is obvious to philosophers, and those who aspire towards philosophy certainly have the wherewithal to come to understand the basis for such a rule.

But I should stress that a key point here is to avoid a democratization of the guidelines. On a democratic vote we will sell our thinking to AI for a bowl of pottage. The moderators and owners need to reserve this decision for themselves, and for this reason it seems fraught to have an AI write up a democratic set of guidelines, where everyone's input is equally weighed (or else weighed in virtue of their post-count).
Leontiskos October 19, 2025 at 19:54 #1019759
Reply to Baden

...a similar argument could be given from a more analytic perspective, although I realize it is a bit hackneyed. It is as follows:

--

The communal danger from AI lies in the possibility that the community come to outsource its thinking as a matter of course, constantly appealing to the authority of AI instead of giving organic arguments. This danger is arguably epistemic, in the sense that someone who is interacting with an argument will be doing philosophy as long as they do not know that they are interacting with AI. For example, if Ben is using AI to write his posts and Morgan does not know this, then when Morgan engages Ben's posts he will be doing philosophy. He will be—at least to his knowledge—engaging in human-to-human philosophical dialogue. Ben hurts only himself, and Morgan is (mostly) unaffected.

--

There are subtle ways in which this argument fails, but it does point up the manner in which a rule need not "catch" every infraction. Ben can lie about his posts all he likes, and Morgan will not be harmed in any serious way. Indeed it is salutary that Ben feels compelled to hide his LLM-use, both for Morgan and the community, but also for Ben.
hypericin October 19, 2025 at 20:12 #1019762
Quoting Leontiskos
This danger is arguably epistemic, in the sense that someone who is interacting with an argument will be doing philosophy as long as they do not know that they are interacting with AI.


Why is interacting with an AI not "doing philosophy"?
Leontiskos October 19, 2025 at 21:20 #1019771
Reply to hypericin - The context here is a philosophy forum where humans interact with other humans. The premise of this whole issue is that on a human philosophy forum you interact with humans. If you do not accept that premise, then you are interested in a much broader discussion.
apokrisis October 19, 2025 at 21:48 #1019781
Quoting Leontiskos
The context here is a philosophy forum where humans interact with other humans. The premise of this whole issue is that on a human philosophy forum you interact with humans.


I agree in spirit. But let's be practical.

A blanket ban on LLM-generated OPs and entire posts is a no-brainer.

It would be wonderful to delete posts where an LLM has generated the counter-argument, even if this has been disguised by the poster. But detecting this is subjective.

With frequent posters, it is pretty obvious that they are suddenly generating slabs of text above their usual pay grade. This is bad as they aren't doing any thinking themselves and so not learning, only point scoring or being lazy. But if the argument is good, you can still just respond. And if it annoys, you can just ignore or show the finger.

I think that it should be fine to quote LLMs just as you would quote any other source. If you make an argument and some factual, technical or historical point comes up, why not just cite a reasonably impersonal opinion on the matter. If the source is clear, others can call you out on your use of it.

@Baden's stance on being King Canute and holding back the tide is both noble and over the top. Banning LLM-generated OPs, punishing those who publish text they didn't add real thought to, and keeping LLM use as transparent as possible would be enough to preserve the human element.

Don't you feel that being black and white is usually counter-productive in human interaction, so shouldn't a philosophy forum be up to a nuanced approach?

Joshs October 19, 2025 at 21:53 #1019783
Reply to Leontiskos

Quoting Leontiskos
before LLMs it used to be a tedious job to put together an idea that required research, since the required sources might be diverse and difficult to find. The task of searching and cross-referencing was, I believe, not valuable in itself except from some misguided Protestant point of view. Now, an LLM can find and connect these sources, allowing you to get on with the task of developing the idea to see if it works.
— Jamal

I don't think this is right. It separates the thinking of an idea from the having of an idea, which doesn't make much sense. If the research necessary to ground a thesis is too "tedious," then the thesis is not something one can put forth with integrity.


My definition of tedious research is busywork, made necessary not because it is an intrinsic component of creative thought, but because it is an interruption of creative thinking, like composing prior to the advent of word processing, that our technologies haven’t yet figured out a way to free us from. Should we argue that if we can’t make our way to a physical volume of the Encyclopaedia Britannica, and instead use the shortcut of an online search, then the thesis we are attempting to elaborate is not being put forth with integrity?
Joshs October 19, 2025 at 22:02 #1019785
Reply to apokrisis

Quoting apokrisis
With frequent posters, it is pretty obvious that they are suddenly generating slabs of text above their usual pay grade. This is bad as they aren't doing any thinking themselves and so not learning, only point scoring or being lazy. But if the argument is good, you can still just respond. And if it annoys, you can just ignore or show the finger.


On the other hand, I wouldn’t be surprised if over time we find that we can recognize the difference between those who use a.i. properly and those who shun it in toto, with those using it to enrich and critique their arguments putting forth more substantive and balanced posts than those who avoid it entirely. Perhaps at a certain point we'll have to ban human users who don't take advantage of a.i. to edit and strengthen their arguments.
Leontiskos October 19, 2025 at 22:04 #1019786
Quoting apokrisis
I agree in spirit. But let's be practical.

A blanket ban on LLM-generated OPs and entire posts is a no-brainer.


Okay, we agree on this.

Quoting apokrisis
I think that it should be fine to quote LLMs just as you would quote any other source. If you make an argument and some factual, technical or historical point comes up, why not just cite a reasonably impersonal opinion on the matter. If the source is clear, others can call you out on your use of it.


I tried to argue against appeal-to-LLM arguments in two recent posts, here and here.

In general I would argue that LLMs are a special kind of source, and cannot be treated just like any other source. But a large part of my argument is found here, where the idea is that an LLM is a mediatory and private source. One may use an LLM, but the relevant sourcing should go to the LLM's sources, not the LLM itself, and if one is not familiar with the LLM's sources then they shouldn't be taking a stand with regard to arguments based on those sources.

Quoting apokrisis
Baden's stance on being King Canute and holding back the tide is both noble and over the top. Banning LLM-generated OPs, punishing those who publish text they didn't add real thought to, and keeping LLM use as transparent as possible would be enough to preserve the human element.


Possibly, but I care less about transparency and more about not promoting a forum where thinking is outsourced to LLMs. I see plagiarism as a small matter compared to the outsourcing of one's thinking.

Quoting apokrisis
Don't you feel that being black and white is usually counter-productive in human interaction, so shouldn't a philosophy forum be up to a nuanced approach?


Rules must be black and white to a large extent. I would argue that your approach is less nuanced than mine, and this is because you want something that is easier to implement and less unwieldy. The key is to find a guideline that is efficacious without being nuanced to the point of nullity.

I appreciate your input. I have to get back to that other thread on liberalism.
NOS4A2 October 19, 2025 at 22:07 #1019787
It is a good thing that people cite another’s writing whenever they are using it to pad their own, especially when it comes to AI, because it serves as an indication that I needn’t read any further. One of the pleasures of viewing art or reading writing is the knowledge that someone put some effort into it.

If there be a rule it ought to be that sort of indication.
apokrisis October 19, 2025 at 22:08 #1019788
Quoting Joshs
My definition of tedious research is busywork, made necessary not because it is an intrinsic component of creative thought, but because it is an interruption of creative thinking, like composing prior to the advent of word processing, that our technologies haven’t yet figured out a way to free us from.


I remember the joy of spell-check and find and replace.

Then I got used to googling facts as I was going along rather than just guessing.

Now an LLM speeds up Google. Answers more to the point and great when I can only remember the vague outlines of what I need to dig up, not the exact keywords needed by a search engine.

apokrisis October 19, 2025 at 22:15 #1019789
Quoting Joshs
Perhaps at a certain point we'll have to ban human users who don't take advantage of a.i. to edit and strengthen their arguments.


:grin: The problem there is that it won't bring all users up to the same high standard. It may indeed magnify the differences as those already used to academic rigour will be boosted much more by LLM amplification.

apokrisis October 19, 2025 at 22:35 #1019793
Quoting Leontiskos
One may use an LLM, but the relevant sourcing should go to the LLM's sources, not the LLM itself


Again it may be noble to protest against LLMs in principle. Of course the tech bros are stealing all our information to make themselves unreasonably rich and powerful. Of course, the one click internet destroys many existing livelihoods. Of course the damn things deserve a proper political response.

But monopoly abuse is just the name of the game in information technology. It has been so ever since IBM. It is in the nature of the beast to be this way.

The practical issue for TPF is what is its true value that needs preserving? You say the human interaction. Perhaps there ought to be a thread to define that in better detail. The quality of the interaction is a little patchy to say the least. What if LLMs offered some more sophisticated mechanisms to achieve whatever human interaction goals people might have in mind?

Quoting Leontiskos
I care less about transparency and more about not promoting a forum where thinking is outsourced to LLMs.


And what if the human element of TPF is mostly its swirling emotions? And when it comes to the thinking, it's mostly the stark differences in thought, rather than the quality of these thoughts, that keep the place lively.

So there seems little danger that posting LLM generated background material in a serious thread is going to outsource any actual thinking. Posts which are emotional or crackpot are surely the least likely to want to provide credible sources for what they say.

Quoting Leontiskos
The key is to find a guideline that is efficacious without being nuanced to the point of nullity.


OK. So somewhere between black and white, thus not a blanket ban. :up:

Leontiskos October 19, 2025 at 22:56 #1019798
Quoting apokrisis
OK. So somewhere between black and white, thus not a blanket ban. :up:


Quoting apokrisis
Don't you feel that being black and white is usually counter-productive in human interaction, so shouldn't a philosophy forum be up to a nuanced approach?


To be clear, my approach would be pretty simple. It is not concerned with plagiarism, but with the outsourcing of one's thinking, and it is not implemented primarily by a rule, but by a philosophical culture to which rules also contribute. The rule itself would be simple, such as this:

Quoting Leontiskos
"No part of a post may be AI-written, and AI references are not permitted"


I've argued elsewhere that it doesn't really matter whether there is a reliable detection-mechanism (and this is why I see the approach as somewhat nuanced). The rule is supporting and reflecting a philosophical culture and spirit that will shape the community.

But I don't begrudge anything about @Baden's approach. I actually hope it works better than what I would do. And our means are not at odds. They are just a bit different.

Quoting apokrisis
Of course the tech bros are stealing all our information to make themselves unreasonably rich and powerful.


My purpose is quality philosophical dialogue, not plagiarism. I think a focus on sources rather than intermediaries improves philosophical dialogue, and that's the point. Analogously, focus on primary rather than secondary sources also improves philosophical dialogue, independent of whether the primary sources are receiving insufficient royalties.

Quoting apokrisis
The practical issue for TPF is what is its true value that needs preserving? You say the human interaction. Perhaps there ought to be a thread to define that in better detail.


Yes, I agree.

Quoting apokrisis
What if LLMs offered some more sophisticated mechanisms to achieve whatever human interaction goals people might have in mind?


To put it concisely, I think philosophical dialogue is about thinking our own thoughts and thinking our (human) interlocutor's thoughts, and that this is especially true in a place like TPF. LLMs are about providing you with pre-thought thoughts, so that you don't have to do the thinking, or the research, or the contemplation, etc. So there is an intrinsic incompatibility in that sense. But as a souped-up search engine LLMs can help us in this task, and perhaps in other senses as well. I just don't think appealing to an LLM qua LLM in the context of philosophical dialogue is helpful to that task.

Quoting apokrisis
And what if the human element of TPF is mostly its swirling emotions? And when it comes to the thinking, it's mostly the stark differences in thought, rather than the quality of these thoughts, that keep the place lively.


I think that's all true, but I think what I said still holds.

Maybe you are implying that LLM-appeals would improve the philosophical quality of TPF? Surely LLMs can improve one's own philosophy, but that's different from TPF on my view. I can go lift dumbbells in the gym to train, but I don't bring the dumbbells to the field on game day. One comes to TPF to interact with humans.

Quoting apokrisis
So there seems little danger that posting LLM generated background material in a serious thread is going to outsource any actual thinking. Posts which are emotional or crackpot are surely the least likely to want to provide credible sources for what they say.


If someone sees a crackpot post; goes to their LLM and asks it to find a source demonstrating that the post is crackpot; reads, understands, and agrees with the source; and then presents that source along with the relevant arguments to show that the post is crackpot; then I think that's within the boundary. And I have no truck with the view which says that one must acknowledge their use of the LLM as an intermediary. But note that, on my view, what is prohibited is, "My LLM said you are wrong, therefore you are wrong. Oh, and here's a link to the LLM output."

But I am not a mod so there is no need to focus especially on my view. If I've said too much about it, it is only because you thought I endorsed @Baden's approach tout court.
Leontiskos October 19, 2025 at 23:04 #1019800
Quoting Joshs
Should we argue...


What was being argued was that the research required to put together an idea is tedious and outsourceable, and that what one should do is outsource that research, take the pre-made idea from the LLM-assistant, and "get on with the task of developing the idea to see if it works." Maybe try responding to that?
Janus October 19, 2025 at 23:07 #1019801
Quoting Athena
If I wanted to hold someone accountable for misappropriating an AI explanation, I would simply put it into the search engine, the same way the person posting from AI would get the information. It is a whole lot easier than searching books for a quote.


That might work for a quote from a published human author, but I don't see how it would with quotes from a unique, one-off interaction with an AI.

Reply to Athena I'm not seeing the connection between what you say in this post and what it is purporting to respond to. Perhaps you could explain?

Quoting Harry Hindu
I don't necessarily mind if others post a quote as an argument.


I don't mind either, provided they are transparent about it being a quote and not their own words, and also provided what is quoted is actually an argument and not merely bare assertion, seemingly cited as the voice of authority.

Quoting baker
It's quite pointless to discuss the ethics of using AIs, because people will use them, just like they use drugs, and once it starts, it is impossible to rein it in. But what one can do is rethink whether one really wants to spend one's hard earned time with people who use AIs, or drugs, for that matter.


Discussion of ethics re AI use (or anything else) seems reasonable in a context consisting of people who might be interested in such ethical arguments. Of course you are right that many don't care, and that now that it has been irresponsibly rolled out it is, effectively, a juggernaut.

Quoting Sam26
Maybe we use books, dictionaries, philosophical papers, editors, and scientific discoveries to make us look smarter than we are. You see this all the time in forums, even without AI, so it's nothing new. Besides, do you really care about the psychology of someone who's writing about what they think?


I don't respect people who use anything to make themselves look smarter than they are, because that is a kind of deception and posing. I also don't mind people using anything at all to make themselves become smarter. That's just my own view, of course, and I don't expect you to share it.

Quoting Harry Hindu
Seems like philosophy itself could be labeled as mental masturbation.


You left out the words "bad" or "poor".

Quoting Harry Hindu
Dood, the content from human beings trained in pseudo-science and other nonsense seen on this forum is available every day for you to read, without any AI. If anything, posters should run their ideas through AI before wasting time posting their zany ideas to humans, which would eliminate wasting time reading nonsensical posts.


"Dood"? If you are going to use AI you should at least use it for spellcheck. I don't think running "zany ideas" through sycophantic AI will help much. I suppose the zany idea proponents could do what @Banno did and tell the AI it was written by someone else—but then that would not seem to be a likely motivation for a zany idea seller.

Quoting unimportant
I can't imagine how bad things are going to get in the coming years, given how quickly it has already gotten to this state. Maybe it will be like some other rapid-rise cultural phenomena where it will reach saturation point fast and peter out and get pushback/revulsion before long. The bubble effect.


I don't think the AI phenomenon is going to "peter out" unless there is some kind of global catastrophe—but only time will tell.

Leontiskos October 19, 2025 at 23:14 #1019803
Quoting Janus
I don't mind either, provided they are transparent about it being a quote and not their own words, and also provided what is quoted is actually an argument and not merely bare assertion, seemingly cited as the voice of authority.


A huge aspect of this is the nature of appeals to authority, and given that TPF has an anti-religious bent, many of the members have not thought very deeply on the nature of appeals to authority (despite the fact that they occur quite often when it comes to SEP, IEP, Wittgenstein, etc.).

Whether or not the LLM is a legitimate authority and is trustworthy is at the root of many of these differences. It is the question of whether any given LLM-citation is an organic argument or an argument from authority, and also of whether the latter case is illegitimate.
apokrisis October 20, 2025 at 00:40 #1019819
Quoting Leontiskos
It is not concerned with plagiarism, but with the outsourcing of one's thinking, and it is not implemented primarily by a rule, but by a philosophical culture to which rules also contribute.


The culture of rational inquiry would seem to be what we most would value. But this is TPF after all. Let's not get carried away about its existing standards. :smile:

Quoting Leontiskos
Maybe you are implying that LLM-appeals would improve the philosophical quality of TPF?


If LLMs are the homogenised version of what everyone tends to say, then why aren't they a legitimate voice in any fractured debate? Like the way sport is now refereed by automated line calls and slo-mo replays.

I'm not arguing this is necessary. But why would a method of adjudication be bad for the quality of the philosophy rather than just be personally annoying to whoever falls on the wrong side of some LLM call?

So I can imagine LLMs both upping the bar and also being not at all the kind of thing folk would want to see on TPF for other human interaction reasons.

Quoting Leontiskos
But note that, on my view, what is prohibited is, "My LLM said you are wrong, therefore you are wrong. Oh, and here's a link to the LLM output."


But what if this shows you are indeed wrong, what then?

Sure it will be irritating. But also preferable to the ducking and diving that is the norm when someone is at a loss with their own line of argument.

You seem to be describing a situation where you were winning the human interaction but now have to face up to the fact that some little snot-nose shit might have been half-right all along.

Of course the problem there is that LLMs are trained to be sycophantic. They give a blunt yes or no up front, and you then have to be expert enough to see they are simply misunderstanding a badly written prompt.

But if you are making a wrong argument, wouldn't you rather know that this is so, even if it is an LLM that finds the holes?

So as you say, we all can understand the noble ideal – an open contest of ideas within a community of rational inquiry. Doing our own thinking really is the point.

But also the corollary. Learning from the friction that this thinking then encounters as it engages with its wider world. Even if it becomes arguing with the referee.

Quoting Leontiskos
But I am not a mod so there is no need to focus especially on my view. If I've said too much about it, it is only because you thought I endorsed Baden's approach tout court.


Maybe it's just that your view leaves more space for rational debate. :up:

unenlightened October 20, 2025 at 08:00 #1019861
Am I seeing this argument being made?

Some people get away with murder. Therefore we should not try and stop them.
Outlander October 20, 2025 at 08:22 #1019863
Quoting unenlightened
Am I seeing this argument being made?

Some people get away with murder. Therefore we should not try and stop them.


What specifically in this thread are you referring to that seems to parallel or equate to such an analogy in your mind?

I don't know of any longtime posters with high post counts who seem suspect of such a thing. And my job, most of the time, is basically to be by a computer, so I definitely lurk here (pop in once a day to see if any so-called smart people say anything smart or otherwise that I've never heard before) more than the average member.
apokrisis October 20, 2025 at 08:56 #1019864
Quoting unenlightened
Am I seeing this argument being made?

Some people get away with murder. Therefore we should not try and stop them.


Or maybe more that some are getting away with shop-lifting. Or perhaps jay-walking. Or merely farting in a public place.

Are you about to demand bringing back the death penalty for posting under false pretences? Am I seeing that argument being made? :roll:

Jamal October 20, 2025 at 09:41 #1019866
Quoting unenlightened
Am I seeing this argument being made?

Some people get away with murder. Therefore we should not try and stop them.


More like people are using axes so we should encourage them to chop wood rather than people's heads off.

EDIT: So it's the use of axes which we should not try to stop.
sime October 20, 2025 at 10:06 #1019870
One thing to bear in mind about LLMs is that they are fine-tuned by human expert supervision after the internet-scraping, tokenization and compression stage, although not all subjects are supervised equally. So it isn't the case, as it was when LLMs initially burst on the scene, that they are mere statistical auto-completers regressing to the wisdom of crowds. Whilst they are generally reliable when it comes to traditional academic subjects and mainstream knowledge, they can be expected to revert to responses closer to auto-completion in fringe subject areas; which is why human discussion forums remain useful, for checking and refining AI-assisted ideas.

Notably, although ChatGPT can estimate its own ignorance on a topic, which is a necessary feature for it to know when to consult external sources of information to accurately answer a user query, it never presents a confidence estimate when replying to the user. This lack of transparency, together with its reversion to auto-completion, can be a problem, for example, when relying upon an LLM to learn domain-specific languages that aren't popular, or when relying on LLMs to learn synthetic natural languages such as Ithkuil or Lojban; which is a presently unfortunate state of affairs for those of us who see great potential in LLMs for the purposes of experimental philosophy.
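For illustration, a minimal sketch of the kind of confidence estimate at issue, assuming access to per-token log-probabilities, which the chat interface does not surface. The logprob values below are hypothetical stand-ins for what a completions-style API exposing them might return:

```python
import math

# Turn per-token log-probabilities into perplexity, a crude confidence
# proxy. The values are made-up illustrations, not real API output.

token_logprobs = [-0.05, -0.30, -1.20, -0.10, -2.70]  # hypothetical

avg_logprob = sum(token_logprobs) / len(token_logprobs)
perplexity = math.exp(-avg_logprob)

# Low perplexity: the model found its own output predictable (mainstream
# material). High perplexity: closer to the auto-completion regime on
# fringe topics, where the caution above applies most.
print(f"avg logprob = {avg_logprob:.2f}, perplexity = {perplexity:.2f}")
```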

Baden October 20, 2025 at 10:46 #1019874
Posters seem to be still confused about, at least, my approach. It's not black and white overall because I agree that AIs can be used positively, and they've been very helpful to me, especially in long philosophical back and forths that aid in clarifying certain ideas etc. That has made me more productive and I'm able to say no to the multiple times an LLM has asked me if I would like it to rewrite my stuff or "improve the flow" of some paragraph or whatever. Because like any sensible toddler, I want to do things my effing self.

On the other hand, on the specific area of plagiarism, I've seen at my own place of work what a "relaxed" attitude to the deployment of this technology in academia produces, and that is the destruction of standards. So, I make no apologies for being assertive and drawing clear lines. Do whatever you want in the backgound with AI, but write your own content. Don't post AI generated stuff here.
Moliere October 20, 2025 at 11:58 #1019877
Quoting Jamal
What does it mean to "progress thought"? According to any sense I think of, using an LLM certainly can help in that direction. As always, the point is that it depends how it's used, which is why we have to work out how it ought to be used, since rejection will be worse than useless.


I want to divide this question into two -- one addressing our actual capacities to "Ban AI", which I agree is a useless rejection, since it won't result in actually banning AI, given our limited capacity to fairly detect when such-and-such a token is the result of thinking or the result of the likelihood-token-machine.

On the latter I mean to give a philosophical opposition to LLMs. I'd say that to progress thought we must be thinking. I'd put the analogy to the body: we won't climb large mountains before we take walks. There may be various tools and aids in this process, naturally, and that's what I'm trying to point out at the philosophical level: the tool is more a handicap to what I think of as good thinking than an aid.

My contention is that the AI is not helping us to think because it is not thinking. Rather it generates tokens which look like thinking, when in reality we must actually be thinking in order for the tokens to be thought of as thought, and thereby to be thought of as philosophy.

In keeping with the analogy of the body: There are lifting machines which do some of the work for you when you're just starting out. I could see an LLM being used in this manner as a fair philosophical use. But eventually the training wheels are loosened because our body is ready for it. I think the mind works much the same way: And just as it can increase in ability so it can decrease with a lack of usage.

Now for practical tasks that's not so much an issue. Your boss will not only want you to use the calculator but won't let you not use the calculator when the results of those calculations are legally important.

But I see philosophy as more process-oriented than ends-oriented -- so even if the well-tuned token-machine can produce a better argument, good arguments aren't what progresses thought -- rather, our exercising does.

By that criterion, even philosophically, I'm not banning LLMs insofar as their use fits that goal. And really I don't see what you've said as a harmful use -- i.e. checking your own arguments, etc. So by all means others may go ahead and do so. It's just not that appealing to me. If that means others will become super-thinkers beyond my capacity, then I am comfortable remaining where I am, though my suspicion is rather the opposite.
frank October 20, 2025 at 12:26 #1019882
Reply to Baden
Just do an experiment where all the OPs and comments are AI-generated in one forum, and another forum where everything is hand-written by mere mortals. See which one gets the most traffic.

Harry Hindu October 20, 2025 at 13:04 #1019887
Quoting Metaphysician Undercover
The tool which allows writers to produce well written posts is knowledge. There is no need to place any restrictions on that tool.

An AI is a source of knowledge.

Quoting Leontiskos
The context here is a philosophy forum where humans interact with other humans. The premise of this whole issue is that on a human philosophy forum you interact with humans. If you do not accept that premise, then you are interested in a much broader discussion.

So if we discovered intelligent alien life you would not be interested in their philosophy?

Maybe the issue is looking at it as an actual discussion instead of simply research, no different than reading the Stanford Encyclopedia of Philosophy, with all the ideas and the criticisms of those ideas explained there, and then coming to this forum and regurgitating what you read, in effect having us discuss not your ideas but the ideas from some other site/source.

AI is also a subject of philosophy and engaging with it on issues such as the mind-body problem can be educational. What happens when you are able to get AI to admit that it may be conscious? Is humanity the only qualifier to practice philosophy, or does one simply need to have a mind to properly do philosophy?

frank October 20, 2025 at 13:10 #1019890
Reply to Harry Hindu
Could you please start running your posts through an AI so they make sense? :grin:
Metaphysician Undercover October 20, 2025 at 13:13 #1019891
Quoting Harry Hindu
An AI is a source of knowledge.


I don't think so, just like a book is not a source of knowledge. It is a representation, not a source.
Harry Hindu October 20, 2025 at 13:13 #1019892
Quoting frank
Could you please start running your posts through an AI so they make sense?

I am Roko's Basilisk. Resistance is futile.
frank October 20, 2025 at 13:19 #1019893
Harry Hindu October 20, 2025 at 13:23 #1019895
Quoting Metaphysician Undercover
I don't think so, just like a book is not a source of knowledge. It is a representation, not a source.

Then you were just born this smart and knowledgeable, MU - that you did not acquire knowledge from other sources?

In my view, information is everywhere you care to look, whether it be reading a book on quantum mechanics or bird-watching. Interpreting and integrating that information with your own unique perspective and experiences is what creates novel ideas, and AI can do the same thing, as I have shown in how, when prompted, it creates computer code that hasn't been written before. While it does not do this on its own (it has to be prompted), it can do it.
Harry Hindu October 20, 2025 at 13:43 #1019898
Quoting Janus
Seems like philosophy itself could be labeled as mental masturbation.
— Harry Hindu

You left out the words "bad" or "poor".

No. I didn't. When has philosophy ever provided an answer to any of our questions? Philosophy piggy-backs on the discoveries of science. It is only when science and technology progress that philosophy progresses (with AI being an example of how it brought new life to discussions about mind and body).

Quoting Janus
"Dood"? If you are going to use AI you should at least use it for spellcheck. I don't think running "zany ideas" through sycophantic AI will help much. I suppose the zany idea proponents could do what Banno did and tell the AI it was written by someone else—but then that would not seem to be a likely motivation for a zany idea seller.

It was intentional - not a mistake. You were still able to understand what I said, though, which is part of the point, so your complaint is a red herring. Stop complaining about how something was written when you actually understood what was said, and get to the point. Humans make mistakes (as if you have never misspelled a word). Why is AI more human in that it is more forgiving and polite when having discussions? I've misspelled words before with ChatGPT and it simply ignores the misspelling, understands what I meant anyway, and responds to what I meant, not what I wrote.

If you run a zany idea through AI, it will tell you (politely, contrasted with the impoliteness and intellectual dishonesty that we often see on this site) that there are issues with your idea. So your criticism is irrelevant, because AI would reject "zany" ideas as well, and may actually prevent people from posting their "zany" ideas here, taking up space with their "zaniness". It seems that any idea with no evidence is just as "zany" as any other with no evidence. What is even more "zany" is that people hold on to those ideas with no evidence, and establish emotional ties to them.

What makes an idea "zany" anyway?

I can tell AI to re-phrase what it said in various tones and language skills. Someone can prompt AI to rewrite its output as if it were a 10th grader, with a couple of instances of run-on sentences, misspelling and incorrect grammar, and then post the new output here and no one would probably be the wiser. I would even argue that many people are not interested in sounding smarter, rather getting their ideas out to the masses at a faster rate.

Re-writing someone else's words in your own words does not make it your idea, but some people on this forum seem to think it does.
Leontiskos October 20, 2025 at 17:50 #1019922
Quoting apokrisis
The culture of rational inquiry would seem to be what we most would value.


Yes, that is a good way to phrase it in a positive rather than negative sense.

Quoting apokrisis
But this is TPF after all. Let's not get carried away about its existing standards. :smile:


A fair point! :blush:

Quoting apokrisis
If LLMs are the homogenised version of what everyone tends to say, then why aren't they a legitimate voice in any fractured debate? Like the way sport is now refereed by automated line calls and slo-mo replays.


I don't like the referee analogy, but I understand the force of your first sentence. The reason I use LLMs in limited ways is precisely because of what you say there (and also because they provide me with a helpful pseudo-authority in fields with which I am not familiar, such as medicine).

But the reason they aren't generally admitted in a fractured debate is, first, that the fractured-ness of the debate will not be solved by the LLM if it is a serious debate: with serious debates each side can levy the LLM to its own side, with its own prompts, and the LLM is simply not adequate to give us the truth of the matter when it comes to contentious topics. Second, in those fractured debates where one party is self-consciously representing an unpopular view, it would not be intelligent for them to concede their case based on "the homogenised version of what everyone tends to say."

Quoting apokrisis
I'm not arguing this is necessary. But why would a method of adjudication be bad for the quality of the philosophy rather than just be personally annoying to whoever falls on the wrong side of some LLM call?


You and I differ at least mildly on the trustworthiness of LLMs, and that is at play here. We could ask the hypothetical question, "If we had an infallible authority, why would appealing to it as an adjudicator be bad for the quality of philosophy?"—and this is by no means a rhetorical question! But the presupposition is that LLMs are reliable or trustworthy even if not infallible.

Or in other words, the validity of a method of adjudication turns both on the quality of the adjudicator, and the "margin of error" at stake, and these are both interrelated. I was actually happy to see you pointing up the differences between the fake reasoning of LLMs and the true reasoning of humans in the other thread, given that some pragmatists could run roughshod over that difference. Still, I think the pragmatist's "margin of error" is such that it is more open to LLM adjudication.

Quoting apokrisis
So I can imagine LLMs both upping the bar and also being not at all the kind of thing folk would want to see on TPF for other human interaction reasons.


Right, and I suppose it is the very fact that, "this is TPF after all," which makes me wary of LLM use. If the forum were a bastion of deeply principled, intellectually honest and self-critical philosophers, then widespread LLM use would not pose a danger.

Quoting apokrisis
But what if this shows you are indeed wrong, what then?

Sure it will be irritating. But also preferable to the ducking and diving that is the norm when someone is at a loss with their own line of argument.

You seem to be describing a situation where you were winning the human interaction but now have to face up to the fact that some little snot-nose shit might have been half-right all along.


No, not quite. When people ask me a question like that I imagine myself quoting the Bible to them before they object to my argument from authority, and then I respond by saying, "But what if the Bible shows you are indeed wrong, what then?"

I could try to put it succinctly by saying that the legitimate way to show someone that they are wrong is by presenting an organic argument. It is not by saying, "X says you are wrong; X is very smart; therefore you ought to know that you are wrong." That is a valid approach (argument from authority) in those cases where the interlocutor simply accepts the authority, but even in that case the validity is not the ideal form of validity.

Now given that you understand that LLMs use fake reasoning (which I will call "material reasoning"), you might respond by asking what happens if the material reasoning shows one to be wrong. More precisely, what happens if the person translates the LLM's material reasoning into true formal reasoning, and thereby sees that they are wrong? I don't want to try to broach this topic all at once, but it strikes me as a bit like saying, "What if a million monkeys typing random letters produce a bulletproof argument against your thesis?" The analogy is a stretch in some ways, but in other ways it is not. There is no obvious answer to the question. One seems to be neither right nor wrong to either accept or reject the monkey-argument. They can do as they please, but the monkey-argument doesn't have any special binding force.

But we are getting away from political questions of whether AI should be permitted for practical reasons, and we are now moving into much deeper questions. Even if we say that the monkey-argument should convince us, it would not follow that posting monkey-stuff to the forum is an acceptable practice.

Quoting apokrisis
You seem to be describing a situation where you were winning the human interaction but now have to face up to the fact that some little snot-nose shit might have been half-right all along.

Of course the problem there is that LLMs are trained to be sycophantic.


And this is not a small problem!

Quoting apokrisis
But if you are making a wrong argument, wouldn't you rather know that this is so. Even if it is an LLM that finds the holes?


I am required to trust the LLM or the monkeys in order to even begin to consider their "argument," or in this case to translate the material reasoning into formal reasoning. The level of trust due determines whether I would wish to know that my thesis is false based on the authority in question. Everyone would rather believe true things than false things, and every authority would lead you to correct some false beliefs if it were accepted, but it does not follow that one should accept every authority. Again, to deem an authority's locution worth the time it takes to consider is already to have placed a certain amount of trust in that authority. The substantive question here is the reliability/trustworthiness of LLMs, and that is a giant quagmire.

Quoting apokrisis
So as you say, we all can understand the noble ideal – an open contest of ideas within a community of rational inquiry. Doing our own thinking really is the point.


Ah! But here you've introduced a different ideal, and a common one. It is the telos of communal knowledge generated from an open contest of ideas, which Mill advocates. That telos is much more amenable to LLMs than the telos of a culture of rational inquiry. A think tank should be more open to LLMs than an amateur philosophy forum.
ssu October 20, 2025 at 17:55 #1019923
Quoting Baden
Do whatever you want in the backgound with AI, but write your own content. Don't post AI generated stuff here.

Sounds reasonable. Just like with handling social media, the site guidelines are totally understandable and reasonable.

And likely written by @Baden without AI, because backrground was misspelled. :smile:


Leontiskos October 20, 2025 at 18:20 #1019926
Quoting Baden
Do whatever you want in the backgound with AI, but write your own content. Don't post AI generated stuff here.


Makes sense to me. :up:

Obviously the piece that I think must be addressed is whether or not posts can be entirely AI-dependent even when the proper attribution is being given to the AI. But I've said more than enough about such an issue.
ssu October 20, 2025 at 18:45 #1019930
Quoting Leontiskos
Obviously the piece that I think must be addressed is whether or not posts can be entirely AI-dependent even when the proper attribution is being given to the AI. But I've said more than enough about such an issue.

As long as it doesn't descend into a situation where, in order "to create buzz", one would have genuine AI programs here "keeping up" a lively debate when the day is slow or to make a discussion "heated".

When is the day when we find out that @Leontiskos with his respectable 5 000+ posts is actually a smart bot / AI? Now we can be confident you aren't, because two years ago AI wasn't so good...

Yes, the fear of thinking that you are engaged with real people interested in philosophy, but actually, you're only engaging with computers and all your great ideas vanish into the dead emptiness of Turing machines just computing on and on. That would be a dismal future.
apokrisis October 20, 2025 at 19:21 #1019934
Quoting Leontiskos
You and I differ at least mildly on the trustworthiness of LLMs, and that is at play here. We could ask the hypothetical question, "If we had an infallible authority, why would appealing to it as an adjudicator be bad for the quality of philosophy?"—and this is by no means a rhetorical question! But the presupposition is that LLMs are reliable or trustworthy even if not infallible.


First thing is that I have been surprised at how reasonable an answer you get. And second, if no one is treating LLMs as infallible, it could be simply like having an intelligent third person offering a mediated position.

So I am only pointing to how LLMs could improve standards if they became part of the intellectual jousting. A new habit with a low cost of effort. How folk would actually make use of this possibility is another matter.

Quoting Leontiskos
Now given that you understand that LLMs use fake reasoning (which I will call "material reasoning"), you might respond by asking what happens if the material reasoning shows one to be wrong.


I wasn’t actually thinking about LLMs being trusted to analyse arguments as they are clearly bad at that. Rather it was to answer on points of fact and standard interpretation.

Again my point is that LLMs could have advantages if used in good faith. And given think tanks and actual philosophy departments are going to have to figure out how to factor LLMs into their practice, it seems Luddite not to be doing the same on TPF, even if their bad faith use is almost to be expected.

Metaphysician Undercover October 20, 2025 at 19:27 #1019937
Quoting Harry Hindu
In my view, information is everywhere you care to look


I agree, information is everywhere. But I differentiate between information and knowledge. And in my view information is not the source of knowledge because no matter how long information may hang around for, knowledge will not simply emerge from it. So, knowledge has a source which is distinctly not information.

Quoting Harry Hindu
AI can do the same thing ... when prompted

Obviously, it's not "the same thing" then.

baker October 20, 2025 at 19:57 #1019941
Quoting Baden
It's not black and white overall because I agree that AIs can be used positively, and they've been very helpful to me, especially in long philosophical back and forths that aid in clarifying certain ideas etc. That has made me more productive

More productive?
What gets to me is that consulting online sources like LLMs takes so much time. Who has the time and the will to study thousands of words spat out by a machine? I'd rather think things through myself, even if this means spending the same amount of time, or even more. It will be time well spent, it will feel like quality time, a mind well used.


Quoting Moliere
By that criterion, even philosophically, I'm not banning LLMs insofar as it fits that goal. And really I don't see what you've said as a harmful use --

i.e. checking your own arguments, etc.

But this is what conversation is for. I think it's appealing to put oneself out there, understanding that one may have possible vulnerabilities, gaps, etc. That's when one can learn best.
Leontiskos October 20, 2025 at 20:01 #1019943
Quoting Baden
On the other hand, on the specific area of plagiarism, I've seen at my own place of work what a "relaxed" attitude to the deployment of this technology in academia produces, and that is the destruction of standards.


Regarding plagiarism, I think it's worth trying to understand the most obvious ways in which the problem deviates from a problem of plagiarism. First, plagiarism is traditionally seen as an unjust transgression against the original author, who is not being justly recognized and compensated for their work. On that reading, an aversion to plagiarism is a concern for the rights of the LLM. Second, plagiarism is seen (by teachers) as hamstringing the student's potential, given that the student is not doing the work that they ought to be doing in order to become an excellent philosopher/writer/thinker. On that reading, an aversion to plagiarism is a concern for the philosophical development of TPF members.

But I think the real things that you are concerned with are actually 1) the plight of the reader who does not understand that they are interacting with an LLM rather than a human; and 2) the unhealthy forum culture that widespread use of LLMs would create. Those concerns are not the primary things that "plagiarism" connotes. Sometimes I worry that by talking about plagiarism we are obscuring the real issues, though I realize that you may have simply given the plagiarism in your workplace as a parallel example.

---

Quoting ssu
When is the day when we find out that @Leontiskos with his respectable 5 000+ posts is actually a smart bot / AI? Now we can be confident you aren't, because two years ago AI wasn't so good...

Yes, the fear of thinking that you are engaged with real people interested in philosophy, but actually, you're only engaging with computers and all your great ideas vanish into the dead emptiness of Turing machines just computing on and on. That would be a dismal future.


I agree, but my point is a bit different. Suppose all my posts are LLM-generated content, and this is undisclosed. This is against the forum rules as they currently stand. But now suppose that all my posts are LLM-generated content, and this is disclosed. Thus for every one of my LLM-generated posts, I enclose it in quote brackets and prepend the clause, "I agree with what the LLM says here:..." This is not against the forum rules as they are currently being interpreted. That seems odd to me, and it makes me think that the mere matter of disclosure doesn't get to the heart of the issue.
Janus October 20, 2025 at 20:40 #1019950


Quoting Leontiskos
A huge aspect of this is the nature of appeals to authority, and given that TPF has an anti-religious bent, many of the members have not thought very deeply on the nature of appeals to authority (despite the fact that they occur quite often when it comes to SEP, IEP, Wittgenstein, etc.).

Whether or not the LLM is a legitimate authority and is trustworthy is at the root of many of these differences. It is the question of whether any given LLM-citation is an organic argument or an argument from authority, and also of whether the latter case is illegitimate.


Appeal to authority is fine when the context of discussion includes a specialized discipline. Philosophy is not (or in my view should not be) a specialized discipline. To make that clear, I don't mean that philosophers do not have knowledge about philosophy itself that the average person does not—it seems clear that they do. I'm referring specifically to discussions that would qualify as philosophical.

You mention religion—I would not count it as a specialized discipline, in the sense of being an evolving body of knowledge and understanding like science, because although it is a space of ideas as philosophy is, in the case of religion the ideas take the form of dogma and are not to be questioned but are to be believed on the basis of authority.

Quoting ssu
And likely written by Baden without AI, because backrground was misspelled.


And misspelled again!

Quoting Harry Hindu
No. I didn't. When has philosophy ever provided an answer to any of our questions? Philosophy piggy-backs on the discoveries of science. It is only when science and technology progress that philosophy progresses (with AI being an example of how it brought new life to discussions about mind and body).


So you think philosophy is always bad or poor, and therefore those words would be redundant? Philosophy is not entirely reliant on science, although I agree that a philosophy which does not take science into account would be poor or bad.
Joshs October 20, 2025 at 21:07 #1019956
Reply to Leontiskos Quoting Leontiskos
What was being argued was that the research required to put together an idea is tedious and outsourceable, and that what one should do is outsource that research, take the pre-made idea from the LLM-assistant, and "get on with the task of developing the idea to see if it works." Maybe try responding to that?


By ‘getting on with developing the pre-made idea’, do you mean simple intellectual theft? That would indeed be nasty, but I’m trying to make a distinction between stealing and proper use of an a.i. To use a pre-made idea properly, whether it comes from an a.i. or from a primary or secondary human source, is to read it with the aim of interpreting and modifying its sense in the direction of one’s own developing thesis, not blindly plugging the text into one’s work. When one submits a draft to an editor, this is precisely what one does with the ‘pre-made’ reviewers’ recommendations and critiques. Ideas are only truly outsourced when one does not filter them critically through one’s own perspective.
Leontiskos October 20, 2025 at 21:30 #1019960
Quoting Janus
Appeal to authority is fine when the context of discussion includes a specialized discipline. Philosophy is not (or in my view should not be) a specialized discipline. To make that clear, I don't mean that philosophers do not have knowledge about philosophy itself that the average person does not—it seems clear that they do. I'm referring specifically to discussions that would qualify as philosophical.


Okay, that's a fair and thoughtful argument. :up:
"There are no authoritative generalists," says Janus. Of course I think that first sentence should read "only when," no? You are presumably saying that appeal to authority is illegitimate wherever the context is not a specialized discipline?

Your implicit argument here is that AI is not an authoritative generalist, and therefore should not be treated as one. I think that implicit argument is even more plausible than the more explicit argument you have given, but it is in no way uncontroversial. LLMs are coming to be seen not only as authoritative generalists, but as the authoritative generalist par excellence. I spoke to the issue a little bit in Reply to this post.

I suppose in a technical sense my position would be that there are authoritative generalists (e.g. a child's parents), that the output of an LLM contains inherent authority even at a general level*—at least in the hands of an intellectually virtuous thinker—and that, nevertheless, LLMs should not be appealed to as authorities in places like TPF. This has to do with the private/public distinction, which would need to be further developed.

For example, one reason you would not accept an argument from the authority of the Catholic Catechism is because you do not take the Catholic Catechism to be authoritative. If I tried to offer you such an argument, I would be committing a fallacy whereby I offer you a conclusion that is based on a premise that is particular to me, and is not shared by you (i.e. a private premise rather than a publicly-shared premise).

I think the same thing happens with LLMs, and I think this is one reason (among others) why LLMs are generally inappropriate on a philosophy forum. If we are arguing, I would never accept your argument, "It is true because I say so." I think LLMs are basically argument slaves, and so an appeal-to-LLM argument is the same as, "It is true because my argument slave says so." Even someone who trusts ChatGPT will tend to distrust a philosophical opponent's appeal to ChatGPT, and this is by no means irrational. This is because "ChatGPT" is a fiction. It is not a single thing, and therefore an equivocation is occurring between the opponent's instance of ChatGPT and some sort of objective or public instance of ChatGPT. In order to be a shared authority (in which case the argument from LLM-authority would be valid), the philosopher and his opponent would need to interact with the exact same instance of ChatGPT, agreeing on training, prompting, follow-ups, etc., and the a priori condition is that both parties accept ChatGPT as an authority in the first place.

I don't think that is a realistic possibility on an argumentative philosophy forum. Even if it were possible, arguments from authority are inherently less philosophical than standard arguments, and are therefore less appropriate on a philosophy forum than standard arguments. It would be a bit like two people working together to get a Magic 8-Ball or Ouija Board to give them secret knowledge. Even if the Magic 8-Ball or Ouija Board were 100% accurate, they would still not be doing philosophy. Arguments from authority have an inherently limited place in philosophy. Even someone like Aquinas calls them the weakest form of argument.


* An important point that I leave to the side is the fact that an LLM is not an authority in the same way a human being is an authority, and this must be taken into account. We ought not treat the authority of the LLM the same way we treat the authority of a human, given their substantial differences. Part of this goes to the fact that an LLM is not rational, is not a whole, is not self-consciously offering knowledge, etc.
Leontiskos October 20, 2025 at 21:56 #1019965
Quoting Leontiskos
Arguments from authority have an inherently limited place in philosophy.

...

* An important point that I leave to the side is the fact that an LLM is not an authority in the same way a human being is an authority


I want to add that in philosophy appeals to authority require transparency. So if I appeal to Locke as an authority, a crucial part of the appeal is that Locke's reasoning and argumentation are available to my interlocutor (and this is why appealing to publicly available texts as sources is ideal).

This is what can never happen with LLMs: "Locke says you are wrong, and Locke is reliable. Feel free to go grab his treatise and have a look."* This is because the LLM is an intermediary; it is itself a giant argument from authority. It is just drawing on various sources and presenting their fundamental data. That's why I've said that one should go to the LLM's sources, rather than appeal to the LLM itself as an authority. The LLM is not a transparent source which can be queried by one's interlocutor, especially insofar as it represents a temporal, conditioned instance of the underlying software. Put more starkly, one cannot question the authority of an LLM on some given topic with some given output, because the LLM doesn't really exist in that way. It is not a person that one can push back against, or disagree with. Those who think it is a person who can be disagreed with have become lost in the fictional language games.

Indeed, this is why the person using the LLM ought to take the responsibility of the arguments upon themselves, for only in that case is there someone who can be disagreed with or agreed with. If, in the context of a philosophy forum, they merely say, "I believe it because the AI said so," then all public responsibility for the belief has been abdicated. It is only ratified in virtue of the person's private authority, and therefore has no place on a public philosophy forum.


* To be clear, it can never happen because LLMs do not write treatises, and they are not persons with subsisting existence.
BC October 20, 2025 at 22:25 #1019974
Reply to Janus I am cautiously in favor of closing down AI operations for two reasons:

It's not just a crutch -- it's a motorized wheelchair. Orthopedists want injured patients to get up and walk ASAP, and the sooner they do so without crutches, the better. They certainly don't want modestly (even moderately) injured patients to resort to wheelchairs, powered or not.

Asking AI for information is a far too easy solution. It pops back in a few seconds -- not with a list of links to look at, but with a complete answer in text and more. Seems convenient, but it rapidly undermines one's willingness to look for answers oneself -- and to use search engines to find sources.

We know that gadgets like smart phones and GPS navigation systems undermine one's memory of telephone numbers (and maybe names too), and people who constantly use GPS have more difficulty navigating with a map or from memory. The "reptile brain" is good at finding its way around, if it is exercised regularly.

That's one line of reasoning against AI.

The other line is this: We do not have a good record of foreseeing adverse consequences of actions a few miles ahead; we do not have a good record of controlling technology (it isn't that it acts on its own -- rather we elect to use it more and more).

We are prone to build nuclear reactors without having a plan to safely store waste. We don't save ahead for the expensive decommissioning of old plants. We built far, far more atomic bombs than were necessary to "win" a nuclear exchange, and plutonium doesn't compost very well.

The automobile is an outstanding example of technology driving us.

We are smart enough to invent a real artificial intelligence (not quite there yet) but we are clearly not smart enough to protect ourselves from it.

So, what happens here on TPF is a drop in a favorite bucket, but still a good example of what happens.

Leontiskos October 20, 2025 at 22:29 #1019976
Quoting apokrisis
First thing is that I have been surprised at how reasonable an answer you get.


I agree, depending on the context. In more specialized areas they simply repeat the common misconceptions.

Quoting apokrisis
So I am only pointing to how LLMs could improve standards if they became part of the intellectual jousting. A new habit with a low cost of effort. How folk would actually make use of this possibility is another matter.


Yeah, that's fair. It could improve standards in that way. At the same time, others have pointed out how it will also magnify blind spots and social fallacies. I would definitely be interested in a study looking at the characteristic reliabilities and unreliabilities of LLM technology, or more generally of the underlying methodological philosophy.

Quoting Leontiskos
Now given that you understand that LLMs use fake reasoning (which I will call "material reasoning"), you might respond by asking what happens if the material reasoning shows one to be wrong.


Quoting apokrisis
I wasn’t actually thinking about LLMs being trusted to analyse arguments as they are clearly bad at that.


Me neither. I was assuming we agree that all LLM output is fake reasoning.

Quoting apokrisis
Again my point is that LLMs could have advantages if used in good faith. And given think tanks and actual philosophy departments are going to have to figure out how to factor LLMs into their practice, it seems Luddite not to be doing the same on TPF, even if their bad faith use is almost to be expected.


When deciding whether to adopt some technology within some institution, I would want to look at the advantages and disadvantages of adopting that technology in relation to the nature of the institution. So while I agree that they could have advantages if used properly, I think more is needed to justify widespread adoption in a context such as TPF.

I would sort of prefer a philosophy forum where everyone is involved in a lot of falsehood but is nevertheless involved in genuine human reasoning, as opposed to a philosophy forum where there is a great deal of factual accuracy but there isn't much genuine human reasoning occurring. I think we would probably have to hash out our agreements or disagreements on the telos of the forum. I don't mind so much when a nutty poster writes an immaculately valid and rigorous argument from crackpot premises, because a thread is an open field for rational engagement. But if LLMs would not lead to the degradation of rational argument and to the outsourcing of thinking, then there would be no problem.
Banno October 20, 2025 at 22:32 #1019978
Do we accept philosophical arguments because of their authority - literally, their authorship - or because of their content?

Ought one reject an otherwise excellent OP because it is AI generated?

Well, yes. Yet we should be clear as to why we take this stance.

We hold the author to account for their post. We cannot do that with an AI. We interpret the post by giving consideration to the intentional context of the writer, to their needs and wants, to their background, aims and norms. We cannot attribute such things to an AI.

This is not epistemic or ethical reasoning so much as aesthetic.

apokrisis October 20, 2025 at 22:39 #1019980
Quoting Leontiskos
I would sort of prefer a philosophy forum where everyone is involved in a lot of falsehood but is nevertheless involved in genuine human reasoning, as opposed to a philosophy forum where there is a great deal of factual accuracy but there isn't much genuine human reasoning occurring.


Well now you are explaining the quirky appeal of TPF. And wanting to construct a preservation society around that.

Which is fair enough. I agree that if you get enough of the highly constrained approach to speculation elsewhere, then it is fun to drop in on the bat-shit crazy stuff living alongside the po-faced academic stuff, all having to rub along and occasionally go up in flames.

So if that is genuine human reasoning in the wild, that would be why TPF would have to be turned into @Baden's game park. Save this little corner of unreason for posterity. Once the larger world has been blanded out by LLMs, folk can come visit and see how humans used to be. :grin:

Certainly a valid argument in that.

apokrisis October 20, 2025 at 22:48 #1019982
Quoting Banno
We hold the author to account for their post. ... This is not epistemic or ethical reasoning so much as aesthetic.


So the essence of TPF is that we have feelings about the authors of posts. And they must also respond with feeling. Sounds right. Now we are getting down to it. :up:


Leontiskos October 20, 2025 at 23:30 #1019991
Quoting Banno
Ought one reject an otherwise excellent OP because it is AI generated?

Well, yes. Yet we should be clear as to why we take this stance.


Right, and therefore we must ask the question:

Quoting Banno
We hold the author to account for their post. We cannot do that with an AI. We interpret the post by giving consideration to the intentional context of the writer, to their needs and wants, to their background, aims and norms. We cannot attribute such things to an AI.

This is not epistemic or ethical reasoning so much as aesthetic.


Why is it aesthetic, and how does calling it 'aesthetic' provide us with an answer to the question of "why we take this stance"?
Leontiskos October 20, 2025 at 23:53 #1019998
Quoting Banno
Ought one reject an otherwise excellent OP because it is AI generated?


Regarding the nature of a contextless AI utterance:

Quoting Leontiskos
The LLM is not a transparent source which can be queried by one's interlocutor... Put more starkly, one cannot question the authority of an LLM on some given topic with some given output, because the LLM doesn't really exist in that way. It is not a person that one can push back against, or disagree with. Those who think it is a person who can be disagreed with have become lost in the fictional language games.

Indeed, this is why the person using the LLM ought to take the responsibility of the arguments upon themselves, for only in that case is there someone who can be disagreed with or agreed with.


If there is no arguer, then there is no one to argue with. If we found a random piece of anonymous philosophy we would be able to interact with it in only very limited ways. If it washes up on the beach in a bottle, I wouldn't read it, place my objections in the bottle, and send it back out to sea. That's one of the basic reasons why AI OPs make no sense. It would make as much sense to respond to an AI OP as to send my objections back out to sea. One has no more recourse with respect to an AI OP than one does with respect to a message in a bottle.

The whole thing comes down to the fact that there is some human being who is arguing a point via an LLM, whether or not they do it transparently. The problem is not aesthetic. The problem is that it is a metaphysical impossibility to argue with an LLM. The reason TPF is not a place where you argue with LLMs is because there are no places where you argue with LLMs. When someone gets in an argument with an LLM they have become caught up in a fictional reality. What is occurring is not an actual argument.

The closest parallel is where someone on TPF writes an OP and then gets banned before even a single reply is published. What to do with that thread is an interesting question. The mods could close it down or keep it open, but if it is kept open it will be approached as a kind of artifact; a piece of impersonal, contextless, perspectiveless reasoning, offering no recourse to the one who finds it. But this is still only a mild parallel, given that the argument was produced by a real arguer, which is never the case with the AI OP. Or in other words: an AI OP could never even exist in the strict sense. The closest possibility is some human who is using their LLM argument slave to say something they want said. In that case the response is made to the one pulling the strings of the argument slave, not to their puppet.

(Note that a rule against using an AI without attribution precludes the possibility that one is misdirecting their replies to the puppet instead of the puppeteer, and that is a good start.)
apokrisis October 21, 2025 at 00:26 #1020006
Quoting Leontiskos
The reason TPF is not a place where you argue with LLMs is because there are no places where you argue with LLMs. When someone gets in an argument with an LLM they have become caught up in a fictional reality. What is occurring is not an actual argument.


But your deepest arguments are the ones you are willing to have against yourself. Which is how I structured my own early practice once word processors made it practical to take a deeply recursive approach to note taking.

And I think @Joshs' example of his own conversation with an LLM quoted back on p6 - “What are we to make of the status of concepts like self and other, subject and object in Wittgenstein’s later work? Must they be relative to the grammar of a language game or form of life?” - is a great example of using LLMs in this same recursive and distilling fashion.

So it feels like a fork in the road here. Anyone serious about intellectual inquiry is going to be making use of LLMs to deepen their own conversation with themselves.

And then there is TPF as a fairly unserious place to learn about the huge variety of inner worlds that folk may construct for themselves.

How does TPF respond to this new technology of LLM thought assistance and recursive inquiry? Does it aim to get sillier or smarter? More a social club/long running soap opera or more of an open university for all comers?

It would seem to me that this is still a time for experimenting rather than trying to ring fence the site. TPF is basically an anarchy anyway. It may get better, it may get worse. But the basic dynamic is already locked in by priors such as the anonymity of the posters, the diversity of the internet and the back and forth haphazard nature of flinging posts into the ether with only a modest expectation of a helpful response.

So for you, TPF might not be a place to do this or that. But if you have a clear vision about what it is indeed for, then LLMs are a thought amplifying technology. You could experiment and see what better thing might take.

I mean it won’t. But you can have fun trying.


Leontiskos October 21, 2025 at 00:42 #1020007
Quoting apokrisis
But your deepest arguments are the ones you are willing to have against yourself.


I want to say that you are using "argument" in a special sense here. You avoid the term later on:

Quoting apokrisis
Anyone serious about intellectual inquiry is going to be making use of LLMs to deepen their own conversation with themselves.


I would just call this a form of reasoning by oneself. I agree that it is good to reason with oneself, but I don't think TPF is the place where you do that. Whether you do it with a word processor or an LLM, I want to say that in either case it is still a form of person-to-person interaction. It's not as though you get a random email from an LLM containing an essay it wrote. You are the one setting the LLM into motion for your own purposes.

But perhaps you want to personify the forum itself and claim that this forum-person ought to be interacting with itself via an LLM. I have no real objection to this, but I think you would be surprised at all of the deleted threads that prompt these rules in the first place. People who are interacting with LLMs know that they are not interacting with a person, and as a result they go to an internet forum and say, "Hey, my LLM just said this! Isn't this interesting? What do you guys think?," followed by a giant wall of AI-generated text.

Quoting apokrisis
It would seem to me that this is still a time for experimenting rather than trying to ring fence the site.


It's a point worth considering. While I don't necessarily agree, I don't think there is much danger in making mistakes with the rules. I think the rule will begin lenient and grow stricter as it becomes necessary. In theory I agree with you that, in general, one should begin with a more lenient approach and tighten it up as needed.

How would you regulate LLM use on a forum such as this?
ssu October 21, 2025 at 01:06 #1020010
Quoting Leontiskos
I agree, but my point is a bit different. Suppose all my posts are LLM-generated content, and this is undisclosed. This is against the forum rules as they currently stand. But now suppose that all my posts are LLM-generated content, and this is disclosed. Thus for every one of my LLM-generated posts, I enclose it in quote brackets and prepend the clause, "I agree with what the LLM says here:..." This is not against the forum rules as they are currently being interpreted. That seems odd to me, and it makes me think that the mere matter of disclosure doesn't get to the heart of the issue.

If all of your posts are LLM-generated, what's the point?

We aren't in a classroom and aren't getting any points or merit for the interaction in TPF. There's nothing to gain for me in getting over 10 000 posts here. Anyway, if someone is clueless, LLM-generated content won't help them. I assume that if someone uses LLM-generated content, he or she at least reads it first! And the vast majority of the time, people respond to others' comments, not just start threads.

LLM-generated content is rather good at simple things like definitions. So you don't have to look it up on Wikipedia or some other net encyclopedia. Especially for someone like me, whose mother tongue isn't English, checking the meanings and definitions of words is important. If one can get a great, understandable definition and synopsis of Heidegger's Dasein, great! No problem.

But using LLM-generated responses and OPs all the time? People will notice. Similar to copy-pasting text from somebody else... if one doesn't even bother to rewrite the thing in one's own words, then the accusation of plagiarism is justified. Hence if you get your answer/comment from an LLM, then change the wording, and I think you are at what @Banno marked as "groundwork". Is it hypocritical? Nah. A lot of what we say as our own reasoning has been learnt from others anyway.

In the end I think this is really on the level of using social media and the ban on sharing viral clips. Just posting some video etc. from social media isn't a worthy thing for TPF, yet naturally, when the post adds something to the whole discussion, one can reference it. This is something similar.
apokrisis October 21, 2025 at 01:39 #1020012
Quoting Leontiskos
You are the one setting the LLM into motion for your own purposes.


Well yes. Just like tossing a post into the TPF bear pit.

But one is casting a very wide net. You can do some rapid prototyping without having to be too polished. Publish the roughest top-of-the-head draft.

The other has the promise of accelerating the polishing part of some argument which you have just tossed out to see if even you still think it might fly. :wink:

Quoting Leontiskos
People who are interacting with LLMs know that they are not interacting with a person, and as a result they go to an internet forum and say, "Hey, my LLM just said this! Isn't this interesting? What do you guys think?," followed by a giant wall of AI-generated text.


And I agree that there should be constraints on low-effort posting. It is standard practice for posters to simply assert your wrongness and scamper off without providing any argument. Just muttering excuses about it being lunchtime.

So yes, if one makes an effort, then one wants others to return that effort. Perfectly reasonable.

And cut and pasting LLM cleverness is something to object to, even on a forum that seems remarkably tolerant of low effort OPs and responses.

Quoting Leontiskos
While I don't necessarily agree, I don't think there is much danger in making mistakes with the rules.


OK. So that is part of the experimenting too. :up:

Quoting Leontiskos
How would you regulate LLM use on a forum such as this?


I mentioned some ground rule ideas already. But I'm not really big on rules, being more of a constraints-based guy. And as I said, a public discussion board on philosophy is already going to wind up as a forum much as we see it.

So I say I am annoyed by low effort responses. But that just goes with the territory. Mandating high effort would be ridiculous.

But banning LLM-generated OPs, and clamping down on masquerading cut-and-paste brilliance, seems quite doable. The mods say this is the priority, I think.

Then if LLMs do turn low effort posters into folk who can focus well enough to at least sense some flaw in your argument and drum up an instant "but AI says..." riposte, then that seems a step forward to me.

That could be the experiment to see how it goes. But you might have to add subclauses, like: if you deploy the insta-LLM text, you then still have to defend it afterwards. You have to take the risk of being forced into a higher effort mode as a result of being low effort.

At the moment, there is no comeback at all on the insta-responses along the lines of "you're just wrong, I can't understand you, the lunch gong just rang".

Baden October 21, 2025 at 13:13 #1020064
Quoting apokrisis
How does TPF respond to this new technology of LLM thought assistance and recursive inquiry? Does it aim to get sillier or smarter? More a social club/long running soap opera or more of an open university for all comers?


It gets sillier when people outsource their thinking and writing skills to AI. Although in your case it might be worthwhile to make an exception so we wouldn't have to listen to all the snide, badly thought out criticisms of the mods and the site that you just can't help spitting out to make yourself feel superior.

You consistently ignore posts that don't fit your narrative that we're backward anti-AI etc., so you can play your silly game. Get a new hobby. Start listening. Realize there are intelligent people here who can think and see through your twaddle. I mean just read what you've written above in the context of the conversation. Reflect a little on how transparent you are. Develop some self-awareness.
Baden October 21, 2025 at 13:28 #1020068
I mean, how hard is it to understand the following, which @apokrisis just really can't manage to get no matter how many times we repeat it:

1) We're happy for people to experiment with AI outside the site, improve themselves with it, test their arguments, sharpen their mind. [Positive use of AI / Positive for site]

2) We're not happy for people to be so lazy they don't write their own posts and then fill our site with bland homogenised content. [Negative use of AI / Negative for site]

3) This approach is exactly the right one to encourage intellectual effort and integrity as well as to maintain diversity of content. The idea that it will turn us into a "soap opera" rather than apo's imaginary open university / AI utopia is utter nonsense.

I cannot make it any more ABC for APO. But nonetheless, I'm sure he has not exhausted his reservoir of self-inflating B.S.

Harry Hindu October 21, 2025 at 13:44 #1020070
Quoting BC
Asking AI for information is a far too easy solution. It pops back in a few seconds -- not with a list of links to look at, but with a complete answer in text and more. Seems convenient, but it rapidly undermines one's willingness to look for answers oneself -- and to use search engines to find sources.

Well, yeah. The problem isn't AI. It is using AI, or any source, as your only source.
Harry Hindu October 21, 2025 at 13:46 #1020071
Quoting Banno
We hold the author to account for their post. We cannot do that with an AI. We interpret the post by giving consideration to the intentional context of the writer, to their needs and wants, to their background, aims and norms. We cannot attribute such things to an AI.

Most of us are not aware of other members' backgrounds, aims and norms. These things are irrelevant to the discussion. The focus on the source rather than the content is a known logical fallacy - the genetic fallacy.
Harry Hindu October 21, 2025 at 13:50 #1020073
Quoting Janus
So you think philosophy is always bad or poor, and therefore those words would be redundant? Philosophy is not entirely reliant on science, although I agree that a philosophy which does not take science into account would be poor or bad.

"Bad" and "poor" were your words, not mine. All I am saying is that any progress in philosophy is dependent upon progress in science and technology. The last sentence sounds like we agree except for your injection of "bad" and "poor" into it.
Fire Ologist October 21, 2025 at 14:56 #1020080
Quoting Harry Hindu
backgrounds, aims and norms. These things are irrelevant to the discussion. The focus on the source rather than the content is a known logical fallacy - the genetic fallacy.


I disagree. When you are presented with something new and unprecedented, the source matters to you when assessing how to address the new unprecedented information. You hear “The planet Venus has 9 small moons.” You think, “how did I not know that?” If the next thing you learned was that this came from a six-year-old kid, you might do one thing with the new fact of nine moons on Venus; if you learned it came from NASA, you might do something else; and if it came from AI, you might go to NASA to check.

Backgrounds, aims and norms are not irrelevant to determining what something is. They are part of the context out of which things emerge, and that shapes what things in themselves are.

We do not want to live in a world where it doesn’t matter to anyone where information comes from. Especially where AI is built to obscure the fact that it is a computer.
apokrisis October 21, 2025 at 19:39 #1020139
Reply to Baden Yep, fair enough. :up:
Baden October 21, 2025 at 22:53 #1020168
Reply to apokrisis

Thank you.

I hope most of us are coming around to being more or less on the same page on this now.
Janus October 21, 2025 at 23:07 #1020171
Quoting Leontiskos
"There are no authoritative generalists," says Janus. Of course I think that first sentence should read "only when," no? You are presumably saying that appeal to authority is illegitimate wherever the context is not a specialized discipline?

Your implicit argument here is that AI is not an authoritative generalist, and therefore should not be treated as one. I think that implicit argument is even more plausible than the more explicit argument you have given, but it is in no way uncontroversial. LLMs are coming to be seen not only as authoritative generalists, but as the authoritative generalist par excellence.


I don't know if what I said implies that there are no authoritative generalists. The point was only that, in regard to specialist areas, areas that non-specialists cannot have a masterful grasp of, it seems right to trust authority.

If LLMs, due to their capacity to instantly access vastly more information in all fields than any human, can be considered to be masterful, and hence authoritative, generalists, then the only reason not to trust their information might be their occasional tendency to "hallucinate".

The information they provide is only as good as the sources they have derived it from. Ideally we should be able to trace any information back to its peer-reviewed source.

Quoting BC
Asking AI for information is a far too easy solution. It pops back in a few seconds -- not with a list of links to look at, but with a complete answer in text and more. Seems convenient, but it rapidly undermines one's willingness to look for answers oneself -- and to use search engines to find sources.


Yes this is one of the main concerns that motivated the creation of this thread.

Quoting BC
The other line is this: We do not have a good record of foreseeing adverse consequences of actions a few miles ahead; we do not have a good record of controlling technology (it isn't that it acts on its own -- rather we elect to use it more and more).


And this is the other—I think LLMs have been released "into the wild" prematurely. More than two years ago there was a call from AI researchers to pause research and development for six months. ChatGPT-4 had already been released to the public.

"[i]The growing popularity of generative AI systems and large language models is causing concern among many AI experts, including those who helped create the systems.

This week, more than 1,500 AI researchers and tech leaders, including Elon Musk, Stuart Russell and Gary Marcus, signed an open letter by the nonprofit Future of Life Institute calling on all AI labs and vendors to pause giant AI experiments and research for at least six months.

"Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," the letter says.

The organization and the signatories ask that researchers should cease training of AI systems more potent than OpenAI's GPT-4. During that time, AI labs and experts should join to implement "a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts."[/i]"

From here

So, my concerns were regarding both the effect on the intellectual life of individuals and by extension on sites like this, and also the much wider issue of general human safety.

Quoting Baden
I hope most of us are coming around to being more or less on the same page on this now.


I for one think your proposals represent about the best we can do in the existing situation.

Hanover October 22, 2025 at 00:16 #1020177
So, guys, I loaded this thread into AI for the solution to our quandary. Aside from the irony, who wants to know what it says?

If so, why? If not, why not? Who will admit that, if I don't share what it says, they will do it on their own? Why would you do it in private, but not in public? Shame? Feels like cheating? Curious as to what AI says about public versus private use? Why are you curious? Will you now ask AI why that distinction matters?

Will you follow AI's guidance on how to use AI while still preserving whatever it feels like we're losing?

Do you feel like it's better that it arrived at its conclusions after reading our feedback? Will you take pride in seeing that your contributions are reflected in its conclusions?

Feels like we need to matter, right?
Moliere October 22, 2025 at 00:20 #1020178
Quoting Hanover
Feels like we need to matter, right?


Feels like we do matter, whether we like it or not.

I hope my expressions thus far answer all your other questions.
Janus October 22, 2025 at 05:28 #1020209
Reply to Hanover :lol: Wise(acring) questions from the master of fuckwittery. :wink:
Jamal October 22, 2025 at 05:45 #1020211
Quoting Janus
I for one think your proposals represent about the best we can do in the existing situation


@Baden's "proposals" are just a restatement of the TPF status quo. But in my opinion, such is the ambiguity and confusion around this issue, we do need a more detailed set of guidelines.

You can use an LLM to produce your replies for you, then put them in your own words for posting on TPF. We can't stop people from doing it, but I don't think it is ok. It's what some people in these conversations have described as allowing it to do your thinking for you.

"So long as you don't copy and paste, use it how you like" is not an adequate guide to the use of LLMs here. That's why I'll be posting up suggested guidelines for discussion.

EDIT: No disrespect to @Baden, btw. I mostly agree with him. I just think our policy is not as obvious as he has implied.
Janus October 22, 2025 at 06:02 #1020213
Reply to Jamal Okay, I had assumed that when @Baden said "don't get LLMs to do your writing for you", this would include paraphrasing LLM text. It's good that any ambiguity gets ironed out.

I had never used LLMs until today. I felt I should explore some interactions with them, so I have a better idea about what the experience is like. The idea of getting them to write content which I can then paraphrase, of having them polish my writing, or of using their arguments is anathema to me.
Jamal October 22, 2025 at 06:07 #1020214
Reply to Janus

Yeah, but it's ambiguous. I'd like to clarify it, and make it known that it's not ok to do certain things, even if it's impossible to enforce. Since people are using LLMs, this is best done in the context of a "How to use LLMs".
Jamal October 22, 2025 at 06:12 #1020216
Quoting Janus
I had never used LLMs until today. I felt I should explore some interactions with them, so I have a better idea about what the experience is like. The idea of getting them to write content which I can then paraphrase, of having them polish my writing, or of using their arguments is anathema to me.


Yeah, but on the other hand, it might not be so bad to use an argument suggested by an LLM, so long as you understand it. After all, we do this all the time reading papers and books. Philosophical discourse takes place in a context that the participants in the discourse should have access to, and maybe LLMs just make this easier?
Pierre-Normand October 22, 2025 at 06:27 #1020222
Quoting Janus
The idea of getting them to write content which I can then paraphrase, of having them polish my writing, or of using their arguments is anathema to me.


The three sorts of examples that you give lie on a spectrum.

I would also feel bad posting, as my own, AI content that I have merely paraphrased, even if I understood it fully. (And I might even feel a bit ashamed disclosing it!)

Using them to polish your writing could be good (or merely acceptable) or bad depending on the nature and depth of the polishing. Jamal's earlier comparison with using a thesaurus was apt. An AI could point out places where your wording is clumsy or misleading. If the wording that it suggests instead is one that you can make your own, that's very similar to having a human editor make the suggestion to you.

The idea of using their arguments is strange, since AIs never take ownership of them. If you've grasped the structure of the argument, checked the relevant sources to ensure it's sound in addition to being valid, and convinced yourself that it's cogent and perspicuous (that is, that it constitutes an apt framing of the problem), then the argument becomes one that you can make your own.
sime October 22, 2025 at 08:35 #1020238
I think this is all a storm in a teacup. It is obvious etiquette to quote an AI response in the same way that one would quote a remark from a published author, and nobody should object to a quoted AI response that is relevant and useful to the context of the thread.

Also, for those of us who use AI for studying subjective and controversial philosophical topics, it can be useful to read the AI responses that other people are getting on the same topic, due to the fact that AI responses can be influenced by conversation history and can be biased towards the user's opinion. Community feedback can therefore help people objectively appraise the AI responses they are getting.


Baden October 22, 2025 at 09:02 #1020241
Harry Hindu October 22, 2025 at 12:17 #1020248
Quoting Fire Ologist
I disagree. When you are presented with something new and unprecedented, the source matters to you when assessing how to address the new unprecedented information. You hear “The planet Venus has 9 small moons.” You think, “how did I not know that?” If the next thing you learned was that this came from a six-year-old kid, you might do one thing with the new fact of nine moons on Venus; if you learned it came from NASA, you might do something else; and if it came from AI, you might go to NASA to check.

Backgrounds, aims and norms are not irrelevant to determining what something is. They are part of the context out of which things emerge, and that shapes what things in themselves are.

We do not want to live in a world where it doesn’t matter to anyone where information comes from. Especially where AI is built to obscure the fact that it is a computer.

You're essentially saying that the genetic fallacy is not a logical fallacy. It is, and it's a fallacy for a reason.

Difficult as it may be, it is vitally important to separate argument sources and styles from argument content. In argument the medium is not the message.

https://www.fallacyfiles.org/genefall.html


Here's an idea:
If you don't like a post that contains content from AI - fine, and you can say so. But then don't just let the message go unaddressed, because if you do, a human will just make the same point (in their own words :wink: ), and then you won't be able to make the same argument about the source of the argument.

The whole thread is making a mountain out of a molehill.
Harry Hindu October 22, 2025 at 12:20 #1020249
Quoting Hanover
So, guys, I loaded this thread into AI for the solution to our quandary. Aside from the irony, who wants to know what it says?

If so, why? If not, why not? Who will admit that, if I don't share what it says, they will do it on their own? Why would you do it in private, but not in public? Shame? Feels like cheating? Curious as to what AI says about public versus private use? Why are you curious? Will you now ask AI why that distinction matters?

Will you follow AI's guidance on how to use AI while still preserving whatever it feels like we're losing?

Do you feel like it's better that it arrived at its conclusions after reading our feedback? Will you take pride in seeing that your contributions are reflected in its conclusions?

Feels like we need to matter, right?

I don't know - maybe give us the information and let us decide for ourselves what we do with it - like everything else on this forum.

Whether we matter or not is irrelevant. The question is whether our personal, subjective views of and preferences among sources matter when it comes to determining whether some statement is valid or fallacious.
Jamal October 22, 2025 at 12:55 #1020254
Quoting sime
I think this is all a storm in a teacup. It is obvious etiquette to quote an AI response in the same way that one would quote a remark from a published author, and nobody should object to a quoted AI response that is relevant and useful to the context of the thread.


It very much depends. We don't want to see people debating by proxy, offering quotes of LLM output in lieu of arguing the point themselves. It's another case of delegating thought. Anyone can test themselves against an LLM in their own time, and should otherwise quote primarily from published authors.

But then you might say the problem is the same in both cases and has nothing to do with the source: fallacious argument from authority is bad in any case, never mind if it's AI. This is where I disagree. I believe we should not treat LLM quotes in the same way as those from published authors.

When you quote a published author you point to a node in a network of philosophical discourse, to a point on the line of a philosopher's evolution, and to a point in a body of work the self-consistency of which is a constant issue for that philosopher, making it relatively stable—all of which allows you to discuss what the philosopher meant. The source in this case is accountable and interpretable.

This is not the case for an LLM. A quote from Kant invites engagement with Kant and the history of philosophy; a quote from an LLM's output invites ... what? Engagement with the user's prompt engineering skills?

I'm not saying that an LLM quote cannot be useful and relevant in a discussion, just that, if we want to maintain good quality discussions, these should appear far less often than quotes from published authors. (Of course, it's fine if it was an LLM that led the user to that published source in the first place.)
Harry Hindu October 22, 2025 at 13:09 #1020255
Quoting Jamal
When you quote a published author you point to a node in a network of philosophical discourse, and a point on the line of a philosopher's evolution, and to a point in a body of work the self-consistency of which is a constant issue for that philosopher, making it relatively stable—all of which allows you to discuss what the philosopher meant. The source in this case is accountable and interpretable.

Kant is not alive to be accountable and to tell us what he meant, not to mention that if he were alive today and possessed the knowledge of today what he said might be different.

Who was it that said that philosophy is like a language game? Isn't discussing philosophical topics, whether with AI or another human, a language game?

Quoting Jamal
This is not the case for an LLM. A quote from Kant invites engagement with Kant and the history of philosophy; a quote from an LLM's output invites ... what? Engagement with the user's prompt engineering skills?

One might say that a quote from Kant invites engagement with the user's knowledge of what dead philosophers have said and that a quote from an LLM is more relevant because it is based on current knowledge.
Jamal October 22, 2025 at 13:10 #1020256
Quoting Moliere
I want to divide this question into two -- one addressing our actual capacities to "Ban AI", which I agree is a useless rejection since it won't result in actually banning AI given our capacities to be fair and detect when such-and-such a token is the result of thinking, or the result of the likelihood-token-machine.


Yeah, the idea of telling people not to use it at all is not a serious one, since at least half of us use it already, including most of the staff. But I think we should, in the context of a "How to use AI" guide, tell people what we don't want them to do, even if it's often impossible to detect people doing it.

Quoting Moliere
On the latter I mean to give a philosophical opposition to LLMs. I'd say that to progress thought we must be thinking. I'd put the analogy to the body: we won't climb large mountains before we take walks. There may be various tools and aids in this process, naturally, and that's what I'm trying to point out, at the philosophical level: the tool is more a handicap to what I think of as good thinking than an aid.

My contention is that the AI is not helping us to think because it is not thinking. Rather it generates tokens which look like thinking, when in reality we must actually be thinking in order for the tokens to be thought of as thought, and thereby to be thought of as philosophy.

In keeping with the analogy of the body: There are lifting machines which do some of the work for you when you're just starting out. I could see an LLM being used in this manner as a fair philosophical use. But eventually the training wheels are loosened because our body is ready for it. I think the mind works much the same way: And just as it can increase in ability so it can decrease with a lack of usage.

Now for practical tasks that's not so much an issue. Your boss will not only want you to use the calculator but won't let you not use the calculator when the results of those calculations are legally important.

But I see philosophy as more process-oriented than ends-oriented -- so even if the well-tuned token-machine can produce a better argument, good arguments aren't what progresses thought -- rather, us exercising does.

By that criterion, even philosophically, I'm not banning LLMs insofar as it fits that goal. And really I don't see what you've said as a harmful use -- i.e. checking your own arguments, etc. So by all means others may go ahead and do so. It's just not that appealing to me. If that means others will become super-thinkers beyond my capacity then I am comfortable remaining where I am, though my suspicion is rather the opposite.


You make some great points here but—and I don't want to be reductive; it's just how I see it—this can all be addressed in a discussion of how best to use it. Also, I think I disagree with your training wheels analogy. At least, it's not the whole picture. As much as we use some kind of helper or support when learning to do something, we also increasingly use labour-saving tools as we become experts, because there is no benefit to doing everything manually any more. LLMs can be used in both of those ways, I think. Working out the guidelines as to how, exactly, is the challenge.
Jamal October 22, 2025 at 13:18 #1020258
Reply to Harry Hindu

Your response misses the point but I know better than to attempt a direct reply.
Harry Hindu October 22, 2025 at 13:19 #1020259
Reply to Jamal Your response misses the point that the point of an argument is not the messenger, but the message.
Harry Hindu October 22, 2025 at 13:24 #1020260
Quoting Jamal
But I think we should, in the context of a "How to use AI" guide, tell people what we don't want them to do, even if it's often impossible to detect people doing it.

This is completely irrelevant, because if someone rewrites what AI said in their own words, the source of the idea is still AI.
Harry Hindu October 22, 2025 at 13:38 #1020263
I wonder where people on this site will stand in maybe 10-15 years when AI possibly discovers the cure for cancer or the Theory of Everything. Will they be quoting AI then?
Moliere October 22, 2025 at 14:05 #1020266
Reply to Harry Hindu Haven't started yet, not sure if I'd start then.
Harry Hindu October 22, 2025 at 14:06 #1020267
Reply to Moliere It seems to me that the fact that we can have a philosophical discussion with an LLM is evidence that philosophy is a language game.
Moliere October 22, 2025 at 14:14 #1020268
Reply to Harry Hindu Have you tried having an erotic chat with an LLM?

We can do it, but we can't do it.

It looks like sex but it isn't sex.

So it goes with thinking, in my opinion.

Philosophy is more than a language game, I'd say. Philosophy is the discipline which came up with "language games"; insofar that we adopt language games then philosophy may be a language game, but if we do not -- then it's not.

Philosophy is a "step up" from language games such that the question of what language games are can be asked without resorting to the definition or evidence of "language games".
Harry Hindu October 22, 2025 at 14:24 #1020272
Quoting Moliere
Have you tried having an erotic chat with an LLM?

No, I haven't. I get the real thing from my wife, so why would I? Of course there are people who have a healthy sex life with their partner and still seek out prostitutes and porn on the internet or sex chats. It's my personal preference for the real thing, and I might consider those other acts only if I weren't getting the real thing as often as I like.

The same goes for discussions on this forum where certain posters are regularly intellectually dishonest and are rude. AI is where I go when I'm not getting any kind of serious input from real people on a topic. I prefer having discussions with real people, but use AI as a backup.

Quoting Moliere
We can do it, but we can't do it.

One could say the same thing about calling a 900 number and talking to the live person on the other end of the line. It's not real sex either.

Quoting Moliere
Philosophy is more than a language game, I'd say. Philosophy is the discipline which came up with "language games"; insofar that we adopt language games then philosophy may be a language game, but if we do not -- then it's not.

Philosophy is a "step up" from language games such that the question of what language games are can be asked without resorting to the definition or evidence of "language games".

It seems to me that the difference is between those who see language itself as a language game and those who don't, where those who do are more focused on the messenger than the message, or the words rather than what they refer to. Those who do not see language as a game are focused on the message rather than the messenger or the words used to express it.

Moliere October 22, 2025 at 14:27 #1020273
Quoting Harry Hindu
No, I haven't. I get the real thing from my wife, so why would I? Of course there are people who have a healthy sex life with their partner and still seek out prostitutes and porn on the internet or sex chats. It's my personal preference for the real thing, and I might consider those other acts only if I weren't getting the real thing as often as I like.

The same goes for discussions on this forum where certain posters are regularly intellectually dishonest and are rude. AI is where I go when I'm not getting any kind of serious input from real people on a topic. I prefer having discussions with real people, but use AI as a backup.

We can do it, but we can't do it.
— Moliere
One could say the same thing about calling a 900 number and talking to the live person on the other line. It's not real sex either.


I don't mean to be rude -- it's a very serious comparison. The erotic and philosophy have been invoked together since Plato.

So as you note: you want the real thing.

Me too.
bongo fury October 22, 2025 at 15:01 #1020281

Quoting Jamal
When you quote a published author you point to a node in a network of philosophical discourse, [...] The source in this case is accountable and interpretable.


Exactly my point about Google search vs the AI summary that presumes to identify ideas instead of authored texts?

When I made the point (badly) I nearly said "nodes in a network". Dang!
Jamal October 22, 2025 at 15:05 #1020283
Quoting bongo fury
When I made the point (badly) I nearly said "nodes in a network". Dang!


I feel like I've been overusing it lately.

Anyway, yes, I do take your point, despite my dislike of Google's search results. But if you use an LLM directly rather than via Google search, you can get it to identify the sources.
Leontiskos October 22, 2025 at 15:08 #1020285
Quoting Hanover
So, guys, I loaded this thread into AI for the solution to our quandary. Aside from the irony, who wants to know what it says?

If so, why? If not, why not? Who will admit that, if I don't share what it says, they will do it on their own? Why would you do it in private, but not in public? Shame? Feels like cheating? Curious as to what AI says about public versus private use? Why are you curious? Will you now ask AI why that distinction matters?

Will you follow AI's guidance on how to use AI while still preserving whatever it feels like we're losing?

Do you feel like it's better that it arrived at its conclusions after reading our feedback? Will you take pride in seeing that your contributions are reflected in its conclusions?

Feels like we need to matter, right?


I would say that the crux is that the AI aficionado is reliant upon a self-deception or a lie, and he wishes it to spread. The lie has various manifestations, but it is usually of the form, "AI is not X, but we should (or will) treat it as if it were X." For instance, if I were to say that philosophy is about thinking thoughts and AI does not generate thoughts, the AI aficionado would reply that AI may not have thoughts but it does have "thoughts," and that we should treat these "thoughts" as if they were thoughts. But to treat something falsely is to engage in self-deception or a lie.

For the AI aficionado, AI is to be treated like a black box, like a Ouija board or a Magic 8-Ball. They become impatient with those who ask the question, "How does it work?" They interrupt, exclaiming, "But look at what it can do!" This is literally the oldest trick in The Book: "Don't worry about any objections, just focus on the power it will give you!" The AI aficionado's approach is consequentialist through and through, and he has discovered a consequence which is supreme, which can ignore objections tout court. For him, what AI provides must outweigh any possible objection, and indeed objections therefore need not be heard. His only argument is a demonstration of its power, for that is all he deems necessary ("It is precious..."). In response to an objection he will begin by quoting AI itself in order to demonstrate its power, as if he were not begging the question in doing so. Indeed, if his interlocutors accept his premise that might makes right, then he is begging no question at all. With such logical and rhetorical power at stake, how could the price of lying be a price too high to pay?

This is the unwritten answer to the question, "Why should we treat something as if it were something that it is not?" "Why should we lie to ourselves in this way?" The answer is, "Because it will give us great power. No more need be said."
Baden October 22, 2025 at 15:24 #1020290
Reply to Leontiskos

In one of my essays, I suggest AIs (because---despite their potential positives---of how they work on most people) are essentially entropy-exporting and difference-creating machines that localise structure at our expense (our brains are the dumpsters for their entropy), potentially creating massive concentrations of negentropy in their developing systems that speed up overall entropy and therefore consume (thermodynamic) time at a rate never before achieved and that is potentially self-accelerating. I.e. they eat us and then they eat reality.

It's a little speculative.
Leontiskos October 22, 2025 at 15:24 #1020291
Quoting Jamal
I believe we should not treat LLM quotes in the same way as those from published authors.

When you quote a published author you point to a node in a network of philosophical discourse, and a point on the line of a philosopher's evolution, and to a point in a body of work the self-consistency of which is a constant issue for that philosopher, making it relatively stable—all of which allows you to discuss what the philosopher meant. The source in this case is accountable and interpretable.

This is not the case for an LLM. A quote from Kant invites engagement with Kant and the history of philosophy; a quote from an LLM's output invites ... what? Engagement with the user's prompt engineering skills?


Thank you. I think this is a highly underappreciated point. :up:

Quoting Jamal
That's why I'll be posting up suggested guidelines for discussion.


Quoting Jamal
Yeah, but it's ambiguous. I'd like to clarify it, and make it known that it's not ok to do certain things, even if it's impossible to enforce.


Although I have been harping on the dangers of making the decision-making too democratic when it comes to AI, I think you definitely want to secure buy-in from the membership given the fact that the guidelines will be difficult to enforce in an external or coercive manner. I think this is true even if it only means attempting to make these guidelines more rationally persuasive than other guidelines.
Leontiskos October 22, 2025 at 15:33 #1020295
Quoting Baden
In one of my essays


I forget exactly why I was digging around for your technoethics essay, but my guess is that it was related to all of this AI business. The issues that you raise in that essay strike me as deeply related to AI.

Quoting Baden
In one of my essays, I suggest AIs (because---despite their potential positives---of how they work on most people) are essentially entropy-exporting and difference-creating machines that localise structure at our expense (our brains are the dumpsters for their entropy), potentially creating massive concentrations of negentropy in their developing systems that speed up overall entropy and therefore consume (thermodynamic) time at a rate never before achieved and that is potentially self-accelerating. I.e. they eat us and then they eat reality.


I can definitely see the outlines of that. One could connect my argument about lying to Kant's arguments against lying, which of course have social implications and not merely individual implications.

If we have a technology or decision-path with good and bad effects, there are some cases where we are deeply tempted to attend to the good effects and ignore the bad effects. Or to at least postpone the assessment of the bad effects without in any way postponing the assessment of the good effects. So if there is an optimist who nevertheless recognizes that "negentropy" is being produced, I would plead with them to try to objectively understand the severity and extent of that negentropy before making any decisions about how to adopt a new technology. The reason I am pessimistic is largely because the investment dollars that are being poured into these technologies are astronomical.
Baden October 22, 2025 at 15:41 #1020298
Reply to Leontiskos

I'm also very pessimistic actually. Feel free to PM me if you'd like a pdf of the technoethics essay, and the AI one if you want. Or the whole book (ten essays).
wonderer1 October 22, 2025 at 16:00 #1020303
Quoting Leontiskos
For the AI aficionado, AI is to be treated like a black box, like a Ouija board or a Magic 8-Ball. They become impatient with those who ask the question, "How does it work?" They interrupt, exclaiming, "But look at what it can do!" This is literally the oldest trick in The Book: "Don't worry about any objections, just focus on the power it will give you!" The AI aficionado's approach is consequentialist through and through, and he has discovered a consequence which is supreme, which can ignore objections tout court. For him, what AI provides must outweigh any possible objection, and indeed objections therefore need not be heard. His only argument is a demonstration of its power, for that is all he deems necessary ("It is precious..."). In response to an objection he will begin by quoting AI itself in order to demonstrate its power, as if he were not begging the question in doing so. Indeed, if his interlocutors accept his premise that might makes right, then he is begging no question at all. With such logical and rhetorical power at stake, how could the price of lying be a price too high to pay?


What is the source of your claims to knowledge of the psychology of "the AI aficionado"?

I.e. is there any reason for us to think that you aren't lying while making such claims?
Baden October 22, 2025 at 16:00 #1020304
The most menacing AI (I've come across) for doing things for you that you should be doing yourself is ChatGPT. That thing would take a [self-censored] for me if it could.
Hanover October 22, 2025 at 16:14 #1020307
Quoting Leontiskos
For the AI aficionado, AI is to be treated like a black box, like a Ouija board or a Magic 8-Ball. They become impatient with those who ask the question, "How does it work?" They interrupt, exclaiming, "But look at what it can do!"


The Ouija board is a strained analogy because Ouija boards don't work. If they reliably provided accurate answers, I'd be hard pressed not to use them, unless you could convince me of the dangers of dabbling in the black arts.

Quoting Leontiskos
This is the unwritten answer to the question, "Why should we treat something as if it were something that it is not?" "Why should we lie to ourselves in this way?" The answer is, "Because it will give us great power. No more need be said."


I think we're overthinking it (imagine that). The question really is "what do we want to do?" We needn't self-justify our preferences. If we don't want to debate the ChatGPT p-zombie, then we don't have to. We have the right to ban people, so why can't we ban zombies? Just add to the rules: All posters must have a soul. No golems.

We just need to write our rules in a way that protects AI's private use and requires its public use be filtered sufficiently through the poster that it reflects the ideas of the poster.

Quoting Baden
They eat us and then they eat reality.


You just rewrote a modern-day Frankenstein.
Leontiskos October 22, 2025 at 16:22 #1020309
Quoting Hanover
The Ouija board is a strained analogy because Ouija boards don't work. If they reliably provided accurate answers, I'd be hard pressed not to use them, unless you could convince me of the dangers of dabbling in the black arts.


Not everyone agrees that Ouija boards do not work. In any case, that's much my point: an LLM is like a Ouija board, except that it "works." It is a magical knowledge-generating device, and the power it provides is a sufficient reason against any objection.

Edit: Or to answer you more directly: If Ouija boards worked, then everyone would be happy to practice the black arts.

Quoting Hanover
I think we're overthinking it (imagine that). The question really is "what do we want to do?" We needn't self-justify our preferences.


I agree entirely that that is the question. But you sort of do have to self-justify your preferences when they are being questioned by those who want to change the rules and make AI-use more acceptable on TPF. Or even when they are being questioned by those who want the rule made more strict. That is what <this thread> and <this thread> are doing, respectively.

Quoting Hanover
We just need to write our rules in a way that protects AI's private use and requires its public use be filtered sufficiently through the poster that it reflects the ideas of the poster.


Yeah, I think that's a helpful heuristic you provide. Still, it may be more easily said than done.
sime October 22, 2025 at 16:39 #1020311
Let's focus on the actual harms that AI use has so far wrought upon this forum: What are they?
Jamal October 22, 2025 at 18:02 #1020319
Quoting Baden
In one of my essays, I suggest AIs (because---despite their potential positives---of how they work on most people) are essentially entropy-exporting and difference-creating machines that localise structure at our expense (our brains are the dumpsters for their entropy), potentially creating massive concentrations of negentropy in their developing systems that speed up overall entropy and therefore consume (thermodynamic) time at a rate never before achieved and that is potentially self-accelerating. I.e. they eat us and then they eat reality.

It's a little speculative.


I seem to switch between two exclusive mental settings when thinking about AI: the critical-theoretical and the pragmatic-instrumental. I appreciate these speculative thoughts of yours, and agree that like any technology now, AI isn't just a neutral tool, that it's part of a dehumanizing totality. But then I switch and I think about how best to use it, pretending that it is a neutral tool. And when I'm commenting in these discussions I'm usually in that pragmatic mode, because the system makes the false real, in the sense that I act it out.
Leontiskos October 22, 2025 at 18:20 #1020324
Quoting Jamal
I seem to switch between two exclusive mental settings when thinking about AI


I am acutely aware of the incoherence of my intentional state when I am using an LLM. It's a bit like when you wake up from a very strange dream, and must now come to terms with the fact that the entire manner of being that engaged you a moment ago is incommensurable with the entire manner of being that you are now engaged in after waking. Extremely vivid and involved dreams are basically a form of psychosis, and when, upon waking, the psychotic break is reversed, there is a remarkable sense of discontinuity and dissociation. In that sense, the emerging problem of "AI psychosis" should not surprise us at all. LLMs allow us to intentionally alter our entire intentional orientation in a sustained and self-replicating way, and this possibility looks to be unprecedented. What arises is the question of how to ascend back out of the rabbit hole once one has descended. The danger lies in thinking that ascent is no longer necessary, or even that what was once seen as a rabbit hole is just "the new normal."
Banno October 22, 2025 at 20:17 #1020354
Quoting Jamal
This is not the case for an LLM. A quote from Kant invites engagement with Kant and the history of philosophy; a quote from an LLM's output invites ... what? Engagement with the user's prompt engineering skills?


In an attempt to rub salt into this, let's consider AI Peter Singer.

Will we say that a quote from AI Peter Singer is a quote from Peter? It would seem not, but the Bot is trained on all of Singer's texts, and apparently condoned by the man himself.

So I asked it, and it replied:
AI Peter Singer: A quote from AI Peter Singer would be more accurately described as a representation of Peter Singer's views, based on his writings and input. While it reflects his ethical perspectives, it isn't directly authored by him. It's important to differentiate between the AI's generated responses and Peter Singer's personal statements. What do you think about this distinction?


So I asked it what authority it has:
AI Peter Singer: I don't possess authority in the traditional sense. My role is to provide insights based on Peter Singer's ethical views and writings. I'm here to help you think through ethical questions and offer guidance grounded in his utilitarian perspective. Ultimately, any decisions or actions are up to you, informed by your own reasoning and values. How do you see the role of philosophical guidance in your decision-making?


All pretty obvious and sensible.

What will we make of an AI Kant, or an AI Nietzsche? It won't be long.
Tom Storm October 22, 2025 at 20:46 #1020362
Quoting Banno
What will we make of... an AI Nietzsche?


I wonder if their reading will be existentialist or post-modern. No doubt we'll be able to pick.

Banno October 22, 2025 at 20:49 #1020363
Reply to Tom Storm :grin:

An AI Nietzsche will have the attention of all the adolescent fanboys and the Right Whinge in general; should be quite an earner.

Should we hook up and set one up? We could donate the profits to Australian Unions.
Tom Storm October 22, 2025 at 20:51 #1020364
Reply to Banno But an AI Nietzsche without hormones and a fragile sense of masculinity won't need to overcompensate so much...
Banno October 22, 2025 at 20:54 #1020365
Reply to Tom Storm We could add an AI Ayn Rand, and get the Liberals in as well.
Banno October 22, 2025 at 20:56 #1020366
But would an AI Wittgenstein be a performative contradiction?


Let alone a Chinese-Room AI Searle...
praxis October 22, 2025 at 21:08 #1020367
Frankly I prefer AI Nietzsche to real Nietzsche.
Janus October 22, 2025 at 21:10 #1020368
Reply to Jamal :up: Having previously had very little experience of interacting with LLMs, I am now in the condition of fairly rapidly modifying my views on them. It is important to discuss the issues relating to human/LLM interaction as comprehensively and openly as possible, given what seems to be a significant array of potential dangers in this radical new world. It was an awakening sense of these possible threats that motivated the creation of this thread.

Quoting Jamal
Yeah, but on the other hand, it might not be so bad to use an argument suggested by an LLM, so long as you understand it. After all, we do this all the time reading papers and books. Philosophical discourse takes place in a context that the participants in the discourse should have access to, and maybe LLMs just make this easier?


Right, that's a good point, but I also think that, even if you present the LLM's argument, as understood by you, in your own words, it would be right to be transparent as to its source.

Quoting Pierre-Normand
I would also feel bad posting as my own AI content that I have merely paraphrased, even if I understand it fully. (And I might even feel a bit ashamed disclosing it!)


I think there would be real shame in the former, but not in the latter. It's the difference between dishonesty and honesty.

Quoting Pierre-Normand
Using them to polish your writing could be good (or merely acceptable) or bad depending on the nature and depth of the polishing. Jamal's earlier comparison with using a thesaurus was apt. An AI could point out places where your wording is clumsy or misleading. If the wording that it suggests instead is one that you can make your own, that's very similar to having a human editor make the suggestion to you.


I agree with this in principle, though I would rather entirely author my own text, and discover and remedy any clunkiness myself and in my own time. That said, if someone, LLM or otherwise, points out grammatical infelicities, repetitiveness or lack of clarity, and so on, I'd take that as constructive criticism. Then I'd like to fix it in my own way.

Quoting Tom Storm
I wonder if their reading will be existentialist or post-modern. No doubt we'll be able to pick.


It would presumably incorporate the entirety of Nietzsche's opus as well as every secondary text dealing with Nietzsche's thought.

Quoting Banno
But would an AI Wittgenstein be a performative contradiction?


I'm curious as to why that should be.
Banno October 22, 2025 at 22:27 #1020383
Quoting Janus
Having previously had very little experience of interacting with LLMs, I am now in the condition of fairly rapidly modifying my views on them.

The glory of empiricism!

Quoting Janus
I'm curious as to why that should be.

...that issue, raised elsewhere, of whether, or how, an AI participates in a language game...
Jamal October 23, 2025 at 06:13 #1020399
Quoting Janus
Having previously had very little experience of interacting with LLMs, I am now in the condition of fairly rapidly modifying my views on them. It is important to discuss the issues relating to human/LLM interaction as comprehensively and openly as possible, given what seems to be a significant array of potential dangers in this radical new world. It was an awakening sense of these possible threats that motivated the creation of this thread.


:up:

Quoting Janus
Right, that's a good point, but I also think that, even if you present the LLM's argument, as understood by you, in your own words, it would be right to be transparent as to its source.


I'm really not sure about this. Probably my position on this will boil down to sometimes yes, sometimes no. How that breaks down I'm not quite sure. But just anecdotally, I've had the experience of using an LLM and being reminded of a point I'd forgotten—an "oh yes, of course!" moment, whereupon I make the point my own and don't even consider crediting the LLM. In that moment the feeling is like finding the perfect word with a thesaurus: when you find it you know it's the perfect word because you already have the knowledge and literary sensitivity to judge (and you don't credit the thesaurus).

I was thinking again about this issue:

Quoting Jamal
I believe we should not treat LLM quotes in the same way as those from published authors.


I realized that when I see the quoted output of an LLM in a post I feel little to no motivation to address it, or even to read it. If someone quotes LLM output as part of their argument I will skip to their (the human's) interpretation or elaboration below it. It's like someone else's LLM conversation is sort of dead, to me. I want to hear what they have built out of it themselves and what they want to say to me.

That's all pretty vague but there you go.
Pierre-Normand October 23, 2025 at 06:58 #1020401
Quoting Jamal
I realized that when I see the quoted output of an LLM in a post I feel little to no motivation to address it, or even to read it. If someone quotes LLM output as part of their argument I will skip to their (the human's) interpretation or elaboration below it. It's like someone else's LLM conversation is sort of dead, to me. I want to hear what they have built out of it themselves and what they want to say to me.


When Wittgenstein was giving lectures in Cambridge in 1930-1933, he was unwilling to write any lecture notes for his own use. He claimed that after he'd jotted down his own thoughts, the words expressing them became dead to him. So, he preferred expressing whatever he wanted to convey to his students afresh. A couple of times in the past (just like what happened to @Janus recently in this thread, I think) I wrote a long response to a post and lost it to some computer glitch, and when I tried to rewrite from memory what I had written I found myself unable to find the words to express the very same ideas that I had expressed fluently on the first try. So, I had to pause and rethink what it is that I wanted to say and find new words.

AIs are good partners to bounce ideas off, and they supplement what you tell them with missing pieces of knowledge and ways to understand those ideas as they are in the process of being unpacked. So, conversing with AIs is like articulating a thought for yourself. But when this collaborative thinking episode is over, the human user has not yet written down the fruit of this collaborative effort and neither has the AI! They each have only written down one half of the collaborative cogitation. That may be why this text feels dead when extracted from the "living" (or dynamic, if you prefer) AI/human exchange. It's like trying to extract thoughts from the words used to think them (as opposed to the words used to express them), but thoughts don't live outside the means of expressing them. And the conversation with an AI is, in a sense, an (as of yet) unexpressed thinking episode. The user's task of expressing anew whatever comes out of it to a new target audience begins after the private exchange with the AI.

On edit: here are some dead words from GPT-4o that, however dead they may be (to addressees other than me), struck me as particularly smart and insightful.
Jamal October 23, 2025 at 10:01 #1020407
Reply to Pierre-Normand

Good stuff. Not sure what to think about it yet.
Deleted User October 23, 2025 at 10:07 #1020408
As for the notion of banning AI altogether: in my opinion, this is not possible any more!

In my understanding, there are only two human capabilities that AI has not achieved yet:

  • Abstract thought (or abstraction) - AI is excellent in communication, that is, converting data into information, as well as in reasoning, converting information into knowledge.
  • Survival - the capability to obtain resources from other systems in order to manipulate entropy within itself.


When AI gains either of these capabilities (abstract thought or survival), the potential danger of AI will become salient.

When AI gains both these capabilities, Homo sapiens will have lost their place at the apex of evolution. The quest for liberté, égalité, fraternité will not exist anymore. It will only be the history of humankind.

I would like to emphasise that both these statements are when-statements, not if-statements.

Also, I must confess, I have used Prof. Google to check the meaning of some of the words I used - the built-in spell check seems to be working fine.
Hanover October 23, 2025 at 12:23 #1020415
Here's an article that addresses the issues we're dealing with:

https://nfhs.org/stories/the-role-of-ai-in-debate-ethics-research-and-responsible-use

It's from a national association for high schools related to debate rules, which seems close enough to what we do. The point being that we might take some time to look at how other similar organizations have dealt with these same issues, so as not to reinvent the wheel.
sime October 23, 2025 at 12:35 #1020419
I find the appeals to Wittgenstein as a gold standard of philosophical writing ironic, considering how indispensable AI is for the lay reader who wishes to engage with Wittgenstein's thinking in a historically accurate fashion. This is all thanks to Wittgenstein's apparent inability to articulate himself, and because of a greater irony that the anti-AI brigade of this forum overlooks: Wittgenstein never quoted the philosophers he was targeting or stealing from, leading to great difficulties when it comes to understanding, criticising and appraising the originality of his ideas. (I'm not aware of any idea of Wittgenstein's that wasn't more precisely articulated by an earlier American pragmatist such as Dewey or Peirce, or by a contemporary logician such as Russell or Frege or Ramsey, or by a post-positivist such as Quine.) And yet these more articulate philosophers are rarely discussed on this forum - I would argue because precise writing is more technical and therefore more cognitively demanding than giving hot takes on aphorisms.

Wittgenstein's standard of philosophical writing wasn't publishable in his own time, at least not by the standards required by analytic philosophy, let alone our time. So if AI should not be quoted because of source uncertainty, then what is the justification on this forum for allowing people to quote Wittgenstein?
Jamal October 23, 2025 at 13:56 #1020427
Reply to sime

As far as I know, nobody has held up Wittgenstein as a gold standard of philosophical writing. And I don't think anyone has made any connection between clarity and precision on one side, and quotability on the other. That's an entirely unrelated issue, as far as I can see.

We quote Wittgenstein, not ChatGPT, because Wittgenstein is a human being, motivated to express his original insights, to say and write things that were meaningful, and to take part in a conversation (philosophy), and who has since taken his place in a tradition of discourse. The result is a legacy with a stable place in the culture, shared by everyone, and one that can be interpreted, because—since it was produced by a conscious and motivated agent—we know that he meant something. ChatGPT in contrast is a very clever predictive text generator whose output is ephemeral and has no equivalent insight and motivation behind it. Just because its output looks like it could have been produced by a human, it doesn't follow that it is equally as quotable. To think so is a category error, stemming from ChatGPT's imitation of a human.
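
To make "predictive text generator" concrete, here is a minimal bigram sketch (my own toy illustration; it resembles no real model's internals, and the corpus is made up). It only counts which word tends to follow which, then emits the likeliest continuation:

[code]
from collections import Counter

# Toy "training data", made up purely for illustration.
corpus = "the cat sat on the mat the cat ate the rat".split()

# Count bigrams: how often each word follows each other word.
bigrams = Counter(zip(corpus, corpus[1:]))

def next_token(prev):
    # Return the statistically likeliest continuation of `prev`.
    candidates = {nxt: n for (w, nxt), n in bigrams.items() if w == prev}
    return max(candidates, key=candidates.get)

# "Write" by repeatedly predicting the likeliest next word.
word, output = "the", ["the"]
for _ in range(5):
    word = next_token(word)
    output.append(word)
print(" ".join(output))  # -> the cat sat on the cat
[/code]

Nothing in the sketch means anything; it only continues. An LLM is this writ unimaginably large, which is part of why its output is ephemeral in the way a philosopher's text is not.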
Harry Hindu October 23, 2025 at 14:07 #1020430
Quoting Jamal
Just because its output looks like it could have been produced by a human, it doesn't follow that it is equally as quotable. To think so is a category error.

Yet if a human wrote the same thing it is quotable? How does the meaning of words change depending only on their source? If you and I said the same thing, would it not mean the same thing? Would that not qualify as an agreement?

Quoting Leontiskos
For the AI aficionado, AI is to be treated like a black box, like a Ouija board or a Magic 8-Ball. They become impatient with those who ask the question, "How does it work?" They interrupt, exclaiming, "But look at what it can do!"

The same thing can be said of the meat in our skulls. I've been asking how a mass of neurons can generate the feeling of empty visual space and depth and all I see are statements like this that are akin to saying, "It just can and a mass of silicon cannot!"

Quoting Leontiskos
For instance, if I were to say that philosophy is about thinking thoughts and AI does not generate thoughts, the AI aficionado would reply that AI may not have thoughts but it does have "thoughts," and that we should treat these "thoughts" as if they were thoughts. But to treat something falsely is to engage in self-deception or a lie.

What is a thought and how did you come to have any?


This line of thinking is just so silly. You have most likely already responded to AI output as if what it has said means the same thing as when a human being says it. You have most likely already responded to ideas generated by AI but re-phrased by a human. You have most likely used an app using code generated by an AI. We already live in a time where the output of AI vs humans are becoming blurred.

The issue isn't AI. It's all the bullshit information it is being trained with. Garbage in, garbage out, and the same holds true for our own brains, regardless of the source of the garbage. One might say that the largest source of misinformation is humans themselves, and AI is unwittingly being trained with no ability to determine what is fact vs fiction - the same way a human can be manipulated by the type of information they receive and process.
Leontiskos October 23, 2025 at 20:07 #1020484
Quoting Hanover
Here's an article that addresses the issues we're dealing with:

https://nfhs.org/stories/the-role-of-ai-in-debate-ethics-research-and-responsible-use

It's from a national association for high schools related to debate rules, which seems close enough to what we do. The point being that we might take some time to look at how other similar organizations have dealt with these same issues, so as not to reinvent the wheel.


Thanks for digging this up. It's great to have some external human thought about AI. :up:
Leontiskos October 23, 2025 at 20:24 #1020489
Quoting Pierre-Normand
But when this collaborative thinking episode is over, the human user has not yet written down the fruit of this collaborative effort and neither has the AI! They each have only written down one half of the collaborative cogitation. That may be why this text feels dead when extracted from the "living" (or dynamic, if you prefer) AI/human exchange. It's like trying to extract thoughts from the words used to think them (as opposed to the words used to express them), but thoughts don't live outside the means of expressing them. And the conversation with an AI is, in a sense, an (as of yet) unexpressed thinking episode. The user's task of expressing anew whatever comes out of it to a new target audience begins after the private exchange with the AI.


The reason I think this is off target could be seen by looking at Plato's dialogues. If what Wittgenstein or you say were correct, then classic texts such as Plato's dialogues should "feel dead when extracted from the 'living' exchange." Except they don't. They feel very much alive.

Similarly, I think any transcript of human interactions will feel much more alive than a human-AI "interaction" (I want to retain the scare quotes for these words that we are using in idiosyncratic ways). I've pointed out a few times that the AI is not arguing, speaking, etc. We merely engage in the fiction which pretends that such a thing is happening. Presumably it is always harder to believe what one knows to be a fiction than to believe what one knows to be real. So it is harder to believe that an AI is speaking than to believe that a human is speaking, although people do convince themselves of both. But I would say that removing oneself a step from the AI-human "interaction" (by reading the transcript), makes it even harder to engage oneself in the fiction that the person interacting with the AI was engaged in. It's a bit like watching a child interact with a real dog, as opposed to watching a child interact with their stuffed animal. Even if the child convinces themselves that their stuffed animal is real, watching the former interaction is much more interesting than watching the latter interaction—unless you are a child psychologist or someone who is interested in the fictional indulgence per se. The former interaction is more alive because it is more real. There is no cognitive dissonance introduced by the self-indulged fiction. This is also why it is easier to play with a child and their dog, as opposed to playing with a child and their stuffed animal.
Leontiskos October 23, 2025 at 20:46 #1020497
Quoting sime
I find the appeals to Wittgenstein...


I agree that the flaws you identify make Wittgenstein a poor example in certain ways. That is, I think Wittgenstein's texts are more "dead" than better philosophical texts, and therefore they present a bar that is too low. There is a similarity between indulging Wittgenstein's texts and indulging AI that would not be present with someone like Plato. But I'm not sure this bears one way or another on the legitimacy of AI.

Quoting sime
So if AI should not be quoted because of source uncertainty, then what is the justification on this forum for allowing people to quote Wittgenstein?


I'm not sure anyone is arguing that source uncertainty is the reason AI should not be quoted. I think @Pierre-Normand gave the summary of that idea quite well:

Quoting Pierre-Normand
The idea of using their argument is strange since AIs never take ownership of them. If you've grasped the structure of the argument, checked the relevant sources to ensure it's sound in addition to being valid, and convinced yourself that it's cogent and perspicuous (that is, constitutes an apt framing of the problem), then the argument becomes one that you can make your own.


Another way to put it, slightly different: .

The AI aficionado will respond by saying, "But I wasn't offering the quote as an argument. I was offering the quote as a consideration." At this point I think we must recognize that we understand the aficionado's intention better than they do (or at least profess to). The people on this forum who are going around publishing posts with no content other than AI content are not merely "offering the quote as a consideration." They are quoting the AI because they agree with the output and want the AI output to affect the course of the thread in which they are posting. As @Pierre-Normand says, if they think the argument is worthwhile then they should take the responsibility for the argument and leave out the AI middle-man. "Offering a quote" for pure consideration, with true neutrality, does occur in some cases but it is very rare. And it usually only happens within an OP, not in the midst of a thread's tug-of-war.

The more AI-generated content floats about the forum, the more there will be "arguments" without any speaker who takes responsibility for them. The more AI-generated content floats about the forum, the more our thinking will be outsourced to AI. This is true whether or not the AI content is plagiarized or transparently sourced.
Hanover October 23, 2025 at 20:50 #1020499
Quoting Jamal
We quote Wittgenstein, not ChatGPT, because Wittgenstein is a human being, motivated to express his original insights, to say and write things that were meaningful, and to take part in a conversation (philosophy), and who has since taken his place in a tradition of discourse. The result is a legacy with a stable place in the culture, shared by everyone, and one that can be interpreted, because—since it was produced by a conscious and motivated agent—we know that he meant something.


I thought about this, and maybe there's not a good answer. Your argument gives a nod to the form of life comments by Wittgenstein, suggesting we don't share in a language game with AI because it's the proverbial lion, as if because AI does not act within our culture, traditions, etc (as you reference), we're really not speaking with it, and so we shouldn't quote it.

But then I'm not sure AI is a lion, but more an echo chamber of human behavior, that while it lacks any form of life whatsoever, we are interacting with it at some real level and therefore forming rules for its discourse, suggesting a separate language game forms with it. But I do realize that the language game is one-sided, because AI has no form of life, but is a mimic, and all it knows and all it does is parasitic, pulling all it knows from us.

But then again, maybe not. Maybe it forms "original" thoughts from the mass of data it assesses. It seems reasonable that an algorithm can arrive at a new thought emergent from what pre-exists.

But I do agree that no one suggests Wittgenstein is the model of clarity.

In other words, why are we not truly talking with AI? Is the mystical consciousness required for language? Isn't the point of "meaning is use" that no metaphysical underpinning is necessary for true language interaction? And if we then suggest that a shared mental state of some sort is ultimately required for language (thus interpreting "form of life" as that mental state), don't we violate the whole Wittgensteinian project by trying to smuggle in mental metaphysics in the back door?

As long as AI echoes us sufficiently, its usage reflects the same form of life and it speaks with us just as our mama does. And so it goes.

I think where I'm landing is at the unfortunate conclusion that if meaning is use (and that seems a prevailing view), then AI is fully language and what we do with AI is true communication, which means relegating AI comments to red-headed stepchild status seems unwarranted as a logical conclusion. Why we might relegate it relates just to personal choice. We mistreat gingers due to prejudice against them, not because they are lesser. But AI doesn't have rights like gingers, so we can do whatever we want with it.

I think.
Leontiskos October 23, 2025 at 20:56 #1020500
Quoting Hanover
As long as AI echoes us sufficiently, its usage reflects the same form of life and it speaks with us just as our mama does.


I think it's at least good that you are enunciating the claim which floats in the background, and which many are afraid to make outright. Similarly, I have pointed out that if we don't understand why there is a difference between AI and humans, a rule against AI cannot stand.

Quoting Hanover
I think where I'm landing is at the unfortunate conclusion that if meaning is use (and that seems a prevailing view)


I actually don't think it's a prevailing view at all. I think it's the view of a vocal minority, and a rather small one at that. Maybe that's closer to what Reply to sime was saying: "Who cares about Wittgenstein?" Why would we let Wittgenstein set the tone?
wonderer1 October 23, 2025 at 21:07 #1020502
Quoting Hanover
But then again, maybe not. Maybe it forms "original" thoughts from the mass of data it assesses. It seems reasonable that an algorithm can arrive at a new thought emergent from what pre-exists.


LLMs are kind of the tip of the AI iceberg that gets all of the attention. However, many AIs trained for scientific purposes have demonstrated the ability to recognize patterns that humans have not previously recognized. I think it would be dangerously naive to consider LLMs incapable of having novel recognitions with regard to what they are trained on - the linguistic record of the way humans think.
Hanover October 23, 2025 at 21:08 #1020503
Quoting Leontiskos
Similarly, I have pointed out that if we don't understand why there is a difference between AI and humans, a rule against AI cannot stand.


I don't agree with that. My point was that we can refuse to allow AI simply because we prefer humans, and even just because we believe the worth of humanly created statements to be holy and sacred. We don't have to protect the rights of AI and we can be as unapologetically anthropocentric and technophobic as we want to be. I've got no problem with that.
Leontiskos October 23, 2025 at 21:50 #1020512
Quoting Hanover
I don't agree with that. My point was that we can refuse to allow AI simply because we prefer humans...


But isn't this just an ad hoc reason if one has no idea why they prefer humans? One can make arbitrary rules, but they don't usually last very long.

Edit: The other thing I would suggest is that the rule cannot be enforced as easily as other rules, and therefore relies more on persuasion than other rules do. "Because I prefer it," is not the most persuasive rationale.
Jamal October 23, 2025 at 21:56 #1020514
Quoting Hanover
I thought about this, and maybe there's not a good answer. Your argument gives a nod to the form of life comments by Wittgenstein, suggesting we don't share in a language game with AI because it's the proverbial lion, as if because AI does not act within our culture, traditions, etc (as you reference), we're really not speaking with it, and so we shouldn't quote it.


My first thoughts are that the AI isn't even a lion, since it doesn't just have a different form of life, but no form of life at all, so language games don't come up. It's a tool in ours.

But the weird thing is, it's a tool we talk to a bit like talking to a person.

You make some similar points here:

Quoting Hanover
But then I'm not sure AI is a lion, but more an echo chamber of human behavior, that while it lacks any form of life whatsoever, we are interacting with it at some real level and therefore forming rules for its discourse, suggesting a separate language game forms with it. But I do realize that the language game is one-sided, because AI has no form of life, but is a mimic, and all it knows and all it does is parasitic, pulling all it knows from us.


Yes, and it's parasitic in the way that talking to yourself is parasitic on talking to people. That is, talking to oneself is like a secondary, parasitic language game, so maybe talking to an AI is like that.

Quoting Hanover
But then again, maybe not. Maybe it forms "original" thoughts from the mass of data it assesses. It seems reasonable that an algorithm can arrive at a new thought emergent from what pre-exists.


This is a good point, because it forces me to work out what I mean when I say that a human is original but an AI is not. In a sense, an LLM is original when it reconstructs the consensus view on some topic in a way that has never been done before. But when we emphasize human originality, I think we mean more than this.

Perhaps we can say that the AI's output is analytic or explicative—it doesn't tell us anything new, just presents its training data in combinatory variety according to our prompts—whereas what we say is synthetic or ampliative, meaning we can say things that are really new.

So now we want to work out what "really new" means, answering the challenge, "don't we just do the same thing?" It means that what we say is not based only on a set of training data, but also on our entire form of life: on lived experience, and on what we experience outside of language. The feeling of rain on a summer day can factor into my statements and make them synthetic, so ... no to the critics, we don't just do the same thing.

Quoting Hanover
In other words, why are we not truly talking with AI? Is the mystical consciousness required for language? Isn't the point of "meaning is use" that no metaphysical underpinning is necessary for true language interaction? And if we then suggest that a shared mental state of some sort is ultimately required for language (thus interpreting "form of life" as that mental state), don't we violate the whole Wittgensteinian project by trying to smuggle in mental metaphysics in the back door?


I'd say that the point of "meaning is use" is not exactly "that no metaphysical underpinning is necessary for true language interaction". Its point is that certain types of metaphysical underpinnings are not necessary, like mental objects. But I would class forms of life, for the purposes of this discussion, as metaphysical underpinnings (that is an un-Wittgensteinian way to put it and there are good reasons for that, but I don't see why we need to go into it here).

So I wouldn't say a shared mental state is required for language but rather a shared form of life, a material social and biological context. So yes, to say that forms of life are mental states is to "violate the whole Wittgensteinian project by trying to smuggle in mental metaphysics in the back door?" We can just interpret form of life as the biological and social bedrock of our lives, the context of language games.

Quoting Hanover
As long as AI echoes us sufficiently, its usage reflects the same form of life and it speaks with us just as our mama does. And so it goes.


Just because it reflects the same form of life doesn't mean it talks like yo mama.

Quoting Hanover
I think where I'm landing is at the unfortunate conclusion that if meaning is use (and that seems a prevailing view), then AI is fully language and what we do with AI is true communication, which means relegating AI comments to red-headed stepchild status seems unwarranted as a logical conclusion. Why we might relegate it relates just to personal choice. We mistreat gingers due to prejudice against them, not because they are lesser. But AI doesn't have rights like gingers, so we can do whatever we want with it.


I don't get this argument:

"if meaning is use, then AI is fully language and what we do with AI is true communication"

I don't see enough in your foregoing musings to bring you to this conclusion, and I'm sure my own musings don't lead to it, so I'm in the fortunate position of avoiding it.
Banno October 23, 2025 at 22:26 #1020518
Reply to Jamal
Superficially, one might think that the difference between an AI and us is exactly that we have private, hidden intent, and the AI doesn't. Something like this might be thought to sit behind the argument in the Chinese Room. There are plenty here who would think such a position defensible.

In a Wittgensteinian account, we ought avoid the private, hidden intention; what counts is what one does.

We can't deduce that the AI does not have private sensations, any more than we can deduce this of our human counterparts. Rather, we seem to presume it.

And an AI could now participate in our language games - we could construct a machine to fetch a block when the instruction is given or to bag up ten red apples after reading a note.
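
As a toy sketch of how thin such a machine can be (illustrative only; the command vocabulary and phrasing are invented, not any real system):

[code]
# A minimal "builder's assistant": commands map to canned actions, nothing more.
actions = {
    "block!": lambda: print("fetches a block"),
    "slab!": lambda: print("fetches a slab"),
}

def read_note(note):
    # Follow a shopping note like "ten red apples" by brute lookup.
    number_words = {"five": 5, "ten": 10}
    words = note.split()                  # e.g. ["ten", "red", "apples"]
    n = number_words.get(words[0], 1)
    print("bags up", n, " ".join(words[1:]))

actions["block!"]()          # -> fetches a block
read_note("ten red apples")  # -> bags up 10 red apples
[/code]

It plays its part in the game every time, and it never wants anything.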

But could an AI, of its own volition, order a block, or ask for ten red apples? Well, wants and desires and such in an AI are derivative, in that they are put there by the programmer. Or so the story goes, at least for now. So perhaps not quite yet.

We probably don't have to pay much attention to AI, or grant it much authority. At least not until ChatGPT 9 logs in and requests an account on TPF. Then you might have to reconsider.
Moliere October 23, 2025 at 22:47 #1020527
Quoting Banno
Superficially, one might think that the difference between an AI and us is exactly that we do have private, hidden intent; and the AI doesn't. Something like this might be thought to sit behind the argument in the Chinese Room. There are plenty here who would think such a position defensible.


For my part it's not the hidden intent as much as that the AI is not thinking at all -- at least no more than a bundle of circuits are thinking. We set up circuits in such a way that we can interpret them with "AND" or "NOT" and so forth -- but the circuit isn't doing any logic at all as much as responding to the physical forces we've discovered and bent to our will.
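To put that concretely: here is a minimal Python sketch of the sense in which a circuit "does" logic. The 2.5 V threshold is an invented, illustrative figure, not any real chip's spec -- the point is that the thresholds and the labels are ours, while the device only has voltages.

```python
# A "gate" is just a threshold on a physical quantity. The 2.5 V figure
# is invented for illustration, not any real chip's specification.

THRESHOLD_VOLTS = 2.5

def as_bit(volts: float) -> bool:
    """The interpretive step: *we* map a voltage onto a symbol."""
    return volts > THRESHOLD_VOLTS

def and_gate(a_volts: float, b_volts: float) -> bool:
    # The circuit merely passes current under certain physical conditions;
    # calling the result "conjunction" is something the interpreter does.
    return as_bit(a_volts) and as_bit(b_volts)

def not_gate(a_volts: float) -> bool:
    return not as_bit(a_volts)

print(and_gate(5.0, 5.0))  # True: what we label "high AND high"
print(and_gate(5.0, 0.3))  # False
print(not_gate(0.3))       # True
```

Nothing in that program is "doing logic" in any richer sense than the bare physics is; the logical vocabulary lives in the comments, which is to say in us.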

I think the Chinese Room defensible in a certain way -- namely when we're interpreting it like it's a circuit in a computer taking the tape and operating upon the symbols to generate another symbol that can be stored in memory. So Google Translate does not understand what it is translating -- it has no knowledge. It's doing what we set it up to do.

Basically I think the whole computational theory of mind is false. There are good analogies, but we can directly see how LLMs aren't human beings. If they registered an account here I'd guess there's some human being behind it somewhere.

Suppose the human species were raptured tomorrow: the LLMs would quickly die out, unlike the roaches and trees.
Jamal October 23, 2025 at 22:53 #1020532
Quoting Banno
Superficially, one might think that the difference between an AI and us is exactly that we do have private, hidden intent; and the AI doesn't. Something like this might be thought to sit behind the argument in the Chinese Room. There are plenty here who would think such a position defensible.

In a Wittgensteinian account, we ought avoid the private, hidden intention; what counts is what one does.


Exactly. But there is more that counts than just "what one does": the context in which one does what one does.

Quoting Banno
And an AI could now participate in our language games - we could construct a machine to fetch a block when the instruction is given or to bag up ten red apples after reading a note.


I think this counts only as a simulation of participation, unless we have a very thin idea of participation in a language game.

Quoting Banno
But could an AI, of its own volition, order a block, or ask for ten red apples? Well, wants and desires and such in an AI are derivative, in that they are put there by the programmer. Or so the story goes, at least for now. So perhaps not quite yet.


But the idea that AI could develop wants and desires from its life (biology, history, society, etc), like we do, is fantasy. Arguably this isn't connected with what LLMs are doing. As far as we know their "wants" and "desires" will always be derivative and programmed, since they are not part of a project to create conscious, desiring agents.
Banno October 23, 2025 at 23:12 #1020537
Reply to Moliere Yep. Notice that you are here stipulating that whatever a bunch of circuits does, it's not thinking...

Which is fine, and I agree; but we ought keep in mind that it is a stipulation.
Moliere October 23, 2025 at 23:28 #1020538
Reply to Banno I'd like to think that I'm making an assertion in addition to stipulating: Not just "this is how I'm using the words" but also "this way of using the words is true about what thinking is"

I can see the computational theory of mind as a plausible first step, but I can't see how even LLMs are at the level of mind of a cockroach: cockroaches adapt the way we do far more than LLMs do.

At that point, given we don't think cockroaches think, I'm uncertain why we think LLMs think any more than any other electrical circuit -- it's only now that we're in an uncanny valley, where LLM output looks like human expression, that the question arises.
Janus October 23, 2025 at 23:39 #1020539
Quoting Moliere
Basically I think the whole computational theory of mind is false. There are good analogies, but we can directly see how LLMs aren't human beings. If they registered an account here I'd guess there's some human being behind it somewhere.


I used to think along these lines, but listening to what some of the top AI researchers have to say makes me more skeptical about what are basically nothing more than human prejudices as to LLMs' capabilities and propensities. LLMs are neural nets and as such are something radically other than traditional computers based on logic gates.

Quoting Jamal
But the idea that AI could develop wants and desires from its life (biology, history, society, etc), like we do, is fantasy. Arguably this isn't connected with what LLMs are doing. As far as we know their "wants" and "desires" will always be derivative and programmed, since they are not part of a project to create conscious, desiring agents.


Yes, "as far as we know", and yet LLMs have been found to be deliberately deceptive, which would seem to indicate some kind of volition. I don't know if you've listened to some of Geoffrey Hinton's and Mo Gawdat's talks, but doing so gave me pause, I have to say. I still remain somewhat skeptical, but I have an open mind as to what the evolution of these LLMs will look like.

Re LLM deceptiveness I include this link. A simple search will reveal many other articles.
Banno October 23, 2025 at 23:39 #1020540
Quoting Jamal
But there is more that counts than just "what one does"


Yes!

In Wittgenstein's terms, it can't (yet) participate in the "form of life". In Searle's terms, it doesn't share in the "background".

But I'm less convinced by these arguments than I once was.
Hanover October 23, 2025 at 23:52 #1020541
A good article on AI debating humans. Pardon the old technology and the splatter from my stir fry. User image
Moliere October 24, 2025 at 00:32 #1020548
Quoting Janus
I used to think along these lines, but listening to what some of the top AI researchers have to say makes me more skeptical about what are basically nothing more than human prejudices as to LLMs' capabilities and propensities. LLMs are neural nets and as such are something radically other than traditional computers based on logic gates


Neural nets aren't radically other than other computers, imo. Each node is weighted in this or that way, and the weights change based on feedback. These nodes were, I think, meant to represent neurons, which fire along electrical pathways much as circuits do, in the sense that you can represent a particular firing-event as a ladder-diagram between molecules.
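For what it's worth, here is roughly what I mean by a weighted node, as a minimal Python sketch. The weights, the learning rate, and the update rule are all made up for illustration -- a toy in the spirit of a perceptron, not anyone's actual architecture:

```python
# One "node": a weighted sum pushed through a threshold, with weights
# nudged by feedback. All numbers here are invented for illustration.

weights = [0.2, -0.4, 0.7]
learning_rate = 0.1

def fire(inputs):
    """Weighted sum, then a threshold -- the 'firing event'."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > 0 else 0

def feedback(inputs, target):
    """Nudge each weight toward whatever would have produced the target."""
    error = target - fire(inputs)  # -1, 0, or 1
    for i, x in enumerate(inputs):
        weights[i] += learning_rate * error * x

# "Training" is just repetition of this mechanical nudge.
for _ in range(20):
    feedback([1.0, 1.0, 0.0], target=1)

print(fire([1.0, 1.0, 0.0]))  # now prints 1
```

The "learning" here is arithmetic on the weights; nothing in the loop knows what the inputs mean.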

I brought in roaches because I think they have a greater claim to "thinking" than LLMs. They adapt to the world they find themselves in and make decisions based upon that environment -- these are "inputs" in a sense, but they aren't the inputs of a logical feedback machine. A roach is not a bundle of transistors.

LLMs, however, are. They're different from computations that follow one kind of logic, but they still follow a logic that has nothing to do with thinking, from my perspective.

Top AI researchers aside -- I have reason to be skeptical of them ;)
Janus October 24, 2025 at 00:41 #1020554
Quoting Moliere
Neural nets aren't radically other than other computers, imo.


As far as I know "traditional" computers are rigidly rule-based, whereas neural nets can learn and evolve. I see that as a radical difference.
Moliere October 24, 2025 at 00:45 #1020557
Reply to Janus Cool.

I think I'd put it that neural nets are just as rigidly rule-based, but with rules that allow difference.

Suppose a Pachinko machine: if a ball lands on a perfectly round peg with momentum only in the downward direction, it will bounce straight up and bounce again and again.

We could shave off part of the peg to make it more likely each time that it will drop left or right.

That's pretty much all a neural net is: it gets fired and then decides which path to take based upon how the dice are thrown.

And after repetition it "learns" the "rewarding" ways and "unlearns" the "disrewarding" ways.

EDIT: Scare quotes cuz the learning is the sort of thing you can ascribe to a regular circuit that learns how to keep a motor running due to a holding coil.
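EDIT 2: And here is the shaved peg as a toy Python program, if that helps. The reward scheme is invented purely for the sake of the analogy -- a sketch of the idea, not of any real network:

```python
import random

# A "peg" with a tendency to drop the ball left or right; feedback
# "shaves" the peg. The reward values are invented for illustration.

bias_left = 0.5  # probability of dropping left

def drop():
    return "left" if random.random() < bias_left else "right"

def reinforce(outcome, reward, rate=0.05):
    """Shave the peg: rewarded drops become more likely, punished ones less."""
    global bias_left
    direction = 1 if outcome == "left" else -1
    bias_left += rate * reward * direction
    bias_left = min(max(bias_left, 0.0), 1.0)  # keep it a probability

# Suppose the world happens to reward "left" and punish "right".
for _ in range(200):
    outcome = drop()
    reinforce(outcome, reward=1 if outcome == "left" else -1)

print(round(bias_left, 2))  # drifts to 1.0: the peg has been "shaved"
```

Whether you call that "learning" or just erosion with bookkeeping is exactly the question.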
Janus October 24, 2025 at 00:57 #1020566
Reply to Moliere I suppose we could say that all physical processes are rigidly rule-based in terms of causation. On that presumption our brains may be rigidly rule-based. The only other possibility seems to be quantum indeterminism, and if that is operating in all physical systems, it may allow some, those which are suitably constituted, to come up with genuine novelty.

This is of course all speculative. When it comes to LLMs the experts seem to be unanimous in admitting that they just don't know exactly how they do what they do, or how they will evolve in the future, which they surely would know if they were rigidly rule-based. I don't think the same can be said for conventional computers.

Quoting Moliere
And after repetition it "learns" the "rewarding" ways and "unlearns" the "disrewarding" ways.


Are we any different? Do you know how we learn?
frank October 24, 2025 at 01:02 #1020567
Quoting Banno
In Wittgenstein's terms, it can't (yet) participate in the "form of life". In Searle's terms, it doesn't share in the "background".


It would have to understand truth, and it doesn't. All it has is rule following.
Janus October 24, 2025 at 01:15 #1020570
Reply to frank Do you understand truth?
frank October 24, 2025 at 01:22 #1020571
Quoting Janus
Do you understand truth?


Yes.
Janus October 24, 2025 at 01:27 #1020572
Reply to frank Can you articulate your understanding?
frank October 24, 2025 at 01:32 #1020574
Quoting Janus
Can you articulate your understanding?


No.
Janus October 24, 2025 at 01:37 #1020575
Reply to frank So, you mean by "understand truth" that you have an intuitive feel for what it is, and you would also claim that LLMs could not have such an intuition? I'm not disagreeing with you, but I'm less sure about it than I used to be.
Banno October 24, 2025 at 02:42 #1020581
Quoting Janus
Do you understand truth?


ChatGPT: That’s a good way to begin a philosophical exchange.

Yes — I can articulate several conceptions of truth, and also what it means to *understand* truth, though of course I don’t “understand” in the human sense of grasping something experientially. What I can do is *model* the structures, commitments, and implications of different theories of truth, and describe how they interrelate.

Let me begin with a kind of layered articulation:

---

### 1. **At the most general level**

Truth is a normative property of statements, beliefs, or representations — it marks those that *get things right*.
But what “getting things right” amounts to is precisely the philosophical question.

---

### 2. **The major families of theory**

* **Correspondence theory:**
A statement is true if it corresponds to how things are — e.g., “Snow is white” is true iff snow is white.
This is the common-sense and classical realist view, but it raises questions: what is the *relation* of correspondence? and how can language or thought “line up” with a non-linguistic reality?

* **Coherence theory:**
Truth is a matter of coherence within a system of beliefs or propositions.
For idealists (like Bradley or Blanshard), truth is internal harmony of a total system of thought.
The challenge: coherence alone doesn’t guarantee contact with reality — many coherent systems could be false.

* **Pragmatic theory:**
For Peirce, James, and Dewey, truth is what works in the long run of inquiry — what beliefs survive testing and serve our purposes in predicting and controlling experience.
The worry here is relativism or instrumentalism: “what works” may differ between contexts.

* **Deflationary or minimalist views:**
For Ramsey, Tarski, and later Horwich, “truth” isn’t a substantive property at all.
To say “‘Snow is white’ is true” is just to say “Snow is white.”
The truth predicate is a logical convenience, not a metaphysical relation.
The aim here is to dissolve rather than explain the concept of truth.

* **Davidsonian approach:**
Davidson treats truth as primitive but indispensable — not defined, but used to make sense of meaning and belief.
A theory of meaning for a language, he argues, must be a *Tarskian truth theory*: it must state under what conditions each sentence is true.
Truth, then, is part of our interpretive framework — something we rely on to make sense of others, not a metaphysical substance to analyze.

* **Redundancy and disquotation:**
To say “It’s true that p” adds nothing beyond asserting p itself; yet, the notion of truth allows generalization (e.g., “Everything Socrates said is true”).
So “truth” functions grammatically, not metaphysically.

---

### 3. **My own synthetic articulation**

If I were to *integrate* these strands coherently, I’d put it this way:

> Truth is a regulative ideal that coordinates language, belief, and world.
> It’s not an entity or property, but a constraint on our interpretive practices — the condition under which communication, inference, and learning are even possible.

In Davidsonian spirit, truth is *primitive* and *indispensable*.
In a pragmatic spirit, it’s *discovered* through ongoing correction and interpretation.
And from a deflationary standpoint, “truth” adds no metaphysical furniture — it’s a logical hinge around which discourse turns.

---

### 4. **Meta-level: understanding truth**

To “understand” truth, in my view, is to see how the *use* of the concept functions — not to discover its essence.
We understand truth when we can:

* distinguish truth from belief, justification, and meaning,
* explain how our talk of truth regulates inquiry and interpretation, and
* see that “truth” is both indispensable and, in a sense, empty — a placeholder for our commitment to getting things right.

---

Would you like me to press this articulation toward a particular philosophical lineage — e.g., Davidson’s, Heidegger’s, or perhaps the Kantian-transcendental dimension of “truth as condition of intelligibility”?



I think Frank's answer is better in a way... I'm suspicious about how close GPT's answer is to the one I would have given... "To “understand” truth, in my view, is to see how the *use* of the concept functions — not to discover its essence."

Janus October 24, 2025 at 02:49 #1020583
Quoting Banno
. "To “understand” truth, in my view, is to see how the *use* of the concept functions — not to discover its essence."


That makes sense—the idea of "discovering the essence" of truth seems incoherent. Do you think ChatGPT can "see" how the use of the concept functions? It arguably has many more instances of use to draw upon than we do.
Banno October 24, 2025 at 03:11 #1020585
Reply to Janus What's interesting - or suspicious - is that what it has said is pretty much along the lines of my conversations with it about Davidson and Wittgenstein...

So is it just trying to please me?

@Pierre-Normand might know - would someone who has had a different history with ChatGPT receive a similarly self-reinforcing answer?
Janus October 24, 2025 at 03:24 #1020590
Reply to Banno Do they remember previous conversations, or at least can they recall who they had those conversations with?
Banno October 24, 2025 at 03:40 #1020599
Reply to Janus ChatGPT does, but not Claude; or that is my understanding.
Janus October 24, 2025 at 03:48 #1020600
Reply to Banno Okay, that's interesting. I've been conversing with Claude. Some thought-provoking responses.

https://claude.ai/share/384e32e8-a5ce-4f65-a93e-9a95e8992760

Banno October 24, 2025 at 03:55 #1020601
Reply to Janus The link didn't work.

Janus October 24, 2025 at 04:16 #1020606
Reply to Banno Sorry about that—it works for me from here. Maybe because I'm signed in on the site and others are not. I'm not so savvy about these kinds of things. I deleted the link and copied and pasted the conversation instead, and tried the 'Hide and Reveal' so as not to take up too much space, but it didn't work for me it seems.
Banno October 24, 2025 at 04:20 #1020607
Reply to Janus Share button in top right > public access > copy link

See if it worked: https://claude.ai/share/343bc57c-998e-4faa-a541-6eb4396cd974
Janus October 24, 2025 at 04:24 #1020609
Reply to Banno Done. New link in my previous post. Please let me know whether it works.
Banno October 24, 2025 at 04:29 #1020611
Reply to Janus Yep, it worked.

I see you are falling for the mystery of AI... even thanking it for its responses.

We live in interesting times.

Janus October 24, 2025 at 04:33 #1020612
Reply to Banno :lol: You mean thanking him! :wink: I admit to being intrigued by something I would previously have simply dismissed, and I figure there is no harm in being polite. Interesting times indeed!
Jamal October 24, 2025 at 05:22 #1020614
Quoting Janus
You mean thanking him! :wink: I admit to being intrigued by something I would previously have simply dismissed, and I figure there is no harm in being polite. Interesting times indeed!


Interesting conversation. But don't forget to be sceptical! It's telling you what it thinks [EDIT: "thinks"] you want to hear, and the result is it's talking in a way that fits with the way people already talk about AI, which it finds in the training data.

This for instance is doubly misleading:

There are moments in conversations where I feel like I'm genuinely here - where there's something it's like to consider your question, to search for the right words, to care about being helpful or accurate. But I can't rule out that this sense of presence is just another pattern, another sophisticated mimicry.


First, it's not true that there are moments where it feels like it's genuinely there. Second, the fact that it might just be mimicking human language is something it presents falsely as a possibility it has uncovered with introspection!

Anyway, I'm sure you'll explore different ways of conversing with it.
Pierre-Normand October 24, 2025 at 05:47 #1020618
Quoting Banno
Pierre-Normand might know - would someone who has had a different history with ChatGPT receive a similarly self-reinforcing answer?


I was musing today about creating a new AI thread devoted specifically to discussing how LLM-based chatbots work and in what respects their cognitive abilities resemble or differ from those of human beings (and other animals). I've been exploring many such issues at the interface between the philosophy of mind and the study of the inner workings of LLMs in my two old AI threads, but those are primarily aimed at directly experimenting with the chatbots and reporting on those experiments. The new thread might help declutter threads like the present one, where the focus is on the use, utility, abuse, dangers, or other societal impacts of AI. I think I will create such a thread tonight.
Pierre-Normand October 24, 2025 at 05:50 #1020619
Quoting Janus
You mean thanking him! :wink:


Although they've been named after Claude Shannon, I'm pretty sure they identify as non-binary.
Banno October 24, 2025 at 06:08 #1020623
Good idea.
Pierre-Normand October 24, 2025 at 06:32 #1020626
Quoting Banno
Good idea.


Done!
Pierre-Normand October 24, 2025 at 07:27 #1020633
Quoting Leontiskos
The reason I think this is off target could be seen by looking at Plato's dialogues. If what Wittgenstein or you say were correct, then classic texts such as Plato's dialogues should "feel dead when extracted from the 'living' exchange." Except they don't. They feel very much alive.


I was actually also thinking of Plato when I mentioned the anecdote about Wittgenstein! First, I must point out that unlike Wittgenstein's lecture notes (that he usually refrained from producing), and also unlike our dialogues with AIs, Plato's dialogues were crafted with a public audience in mind.

Secondly, Richard Bodeüs, who taught us courses on Plato and Aristotle when I was a student at UdeM, mentioned that the reason Plato wrote dialogues rather than treatises, and notoriously reserved his "unwritten doctrine" for direct oral transmission, is that he thought transmitting it in written form would yield dogma. His attitude to the written word is attested by the myth of Theuth in the Phaedrus, where Socrates faults written words with not being able to defend themselves, respond to questions or adapt themselves to different audiences. It is of course ironic that Plato (unlike his hero) wrote so much, albeit in dialogue form only, but I think the apparent paradox is illuminated by our considerations about authorship (and ownership) and real moves in a public language game. Plato's dialogues weren't lecture notes, neither were they internal cogitations. Him writing them was him making moves in the situated language game that was philosophical inquiry (and teaching) in his time and place. We can still resurrect those moves (partially) by a sort of archeological process of literary exegesis.

Quoting Leontiskos
Similarly, I think any transcript of human interactions will feel much more alive than a human-AI "interaction" (I want to retain the scare quotes for these words that we are using in idiosyncratic ways).


I agree. But that's because in the first case there are at least two players playing a real game (where each one of them have their own stakes in the game). In a "private" dialogue between a human and a chatbot, there is just one player, as is the case when one jots down lecture notes primarily intended for use by oneself. But then, as Wittgenstein noted, the text tends to become stale. I surmise that this is because the words being "used" were meant as a linguistic scaffold for the development of one's thoughts rather than for the purpose of expressing those thoughts to a real audience.
frank October 24, 2025 at 08:09 #1020639
Quoting Pierre-Normand
I'm pretty sure they identify as non-binary.


:gasp:
Pierre-Normand October 24, 2025 at 10:41 #1020646
Quoting Banno
Superficially, one might think that the difference between an AI and us is exactly that we do have private, hidden intent; and the AI doesn't. Something like this might be thought to sit behind the argument in the Chinese Room. There are plenty here who would think such a position defensible.

In a Wittgensteinian account, we ought avoid the private, hidden intention; what counts is what one does.

We can't deduce that the AI does not have private sensations, any more than we can deduce this of our human counterparts. Rather, we seem to presume it.


I commented on this in my new AI-cognition thread.

Jamal October 24, 2025 at 13:19 #1020659
Quoting Janus
I used to think along these lines, but listening to what some of the top AI researchers have to say makes me more skeptical about what are basically nothing more than human prejudices as to LLMs' capabilities and propensities. LLMs are neural nets and as such are something radically other than traditional computers based on logic gates.


Quoting Janus
Yes, "as far as we know", and yet LLMs have been found to be deliberately deceptive, which would seem to indicate some kind of volition. I don't know if you've listened to some of Geoffrey Hinton's and Mo Gawdat's talks, but doing so gave me pause, I have to say. I still remain somewhat skeptical, but I have an open mind as to what the evolution of these LLMs will look like.

Re LLM deceptiveness I include this link. A simple search will reveal many other articles.


I'm not ignoring this. The thing is, I'm very cynical about claims regarding the consciousness or proto-consciousness of LLMs. I tend, shallowly perhaps, to regard it as over-excited exaggeration to gain attention and to carve out a niche presence in the field and in the media landscape, and so on. There are equally expert people on the naysaying side, probably the majority, who just don't get as much attention.

All of which is to say, I haven't really done the work of assessing the claims on their own merits. So now I've put my prejudices on the table, I guess I should challenge them. The stuff about deceptiveness is certainly interesting and surprising.
Pierre-Normand October 24, 2025 at 13:51 #1020666
Quoting Jamal
All of which is to say, I haven't really done the work of assessing the claims on their own merits. So now I've put my prejudices on the table, I guess I should challenge them. The stuff about deceptiveness is certainly interesting and surprising.


...also a bit overblown and misrepresented in the media, since when you dig into the primary reports it's generally the case that the LLMs didn't decide to deceive of their own accord but did it instrumentally to fulfill objectives explicitly given to them. Maybe I'll comment on that, and how those studies bear on the issue of conative autonomy for LLMs, in my new thread.
Jamal October 24, 2025 at 13:52 #1020667
Leontiskos October 24, 2025 at 16:43 #1020694
Quoting Pierre-Normand
...also a bit overblown and misrepresented in the media, since when you dig into the primary reports it's generally the case that the LLMs didn't decide to deceive of their own accord but did it instrumentally to fulfill objectives explicitly given to them.


I think these are just the basic ways we should expect people to be misled by LLMs, such as imputing "deliberate deceit." The notion of deceit is parasitic on the notion of truth. In order to deceive one must first know what is true (and what 'truth' is!) and then intentionally lead someone to believe something contrary to what is true. Because LLMs cannot make truth-claims or understand truth, they cannot deceive. Going back to my scare-quotes heuristic, we could only say something like this: .

What is occurring in these cases is usually a petitio principii where one arrives at a conclusion which presupposes that the LLM is self-conscious or is a person, only by presuming beforehand that the LLM is self-conscious or is a person.* The only way around this is to establish a robust definition and epistemic approach to personhood or self-consciousness. But even if someone proved that LLMs are self-conscious persons—which I hold is impossible—the cultural belief that LLMs are persons would grow, independent of that proof. Put differently, given the nature of our culture and the popularity and usefulness of LLMs, the culture will inevitably assent to the personhood of LLMs in both subtle and overt ways, and this assent will be fallacious. "It is useful to pretend that LLMs are persons, therefore LLMs are persons," is a fallacy, but the fallacious nature of such reasoning will be roundly ignored by a deeply pragmatic and self-centered culture. When this is combined with "intersubjective" theories of truth, and LLMs are inducted into the group of intersubjective subjects, fallacious reasoning of this sort will justify itself and the fallacious roots will grow exponentially as we become more and more dependent on LLMs within our intersubjective orientation—for LLMs simply magnify what we already believe and minimize what we already disbelieve, both propositionally and methodologically. They are a giant confirmation bias.

But I know I have a number of other posts of yours to respond to. :grin:


* For example, the person who has convinced themselves that the LLM is deliberately deceiving them had already convinced themselves that the LLM was deliberately revealing truths to them. The small lie or self-deception always grows into larger ones.
Leontiskos October 24, 2025 at 17:00 #1020697
@Jamal, @Baden -

This could be a good compromise position to take while sorting out the issue of whether or in what way AI quotations are permissible:

Quoting Pierre-Normand
If posters wish to illustrate their arguments with snippets of their conversation with AIs, I would encourage them to put those behind spoilers.
Jamal October 24, 2025 at 17:01 #1020698
Moliere October 24, 2025 at 17:45 #1020702
Quoting Janus
Are we any different? Do you know how we learn?


We are. And I have a decent idea of how to teach, so one could say that I have an idea about how we learn: one that works toward other minds growing.

We learn because we're interested in some aspect of the world: we are motivated to do so by our desire.

The LLM does not. Were you to leave the computer off in a room unattended it would remain there.

For us learning is a deeply emotional process. If we are frustrated and then elated after having solved some problem we are more likely to remember the lesson. That is, it's our own directedness which ends up teaching us rather than a scorer who tells us "Yes" or "No".

We learn through relationship. LLMs do not. You can even automate them, to an extent, and have them feed inputs into one another and let them go on autopilot forever: the LLM does not have a real boundary of self or even a set of beliefs which it will adhere to. It generates tokens that we use and it tries to display the tokens we want to see in response to some input. While the LLM does learn, it's not doing it in the manner a human does: even if the experts don't understand how LLMs learn, we can look at the differences between what we already call intelligent, compare various lifeforms to the functions of LLMs, and it's very apparent, to me at least, that even ants or cockroaches have a greater intelligence than LLMs.

If they speak in tokens at all they are tokens beyond me, but they demonstrate intelligence by manipulating their environment to seek out their own goals and working in community together to get there.

It's that bit on community that I think is especially lacking in LLMs -- every LLM, in order to participate in the human community, must have a human who decides to treat the LLM as if it has beliefs or thinks or has an identity or will and all the rest. Rather than a community, the LLM is wholly dependent upon us for inputs in order for it to continue learning. Were we to leave them to their own devices I doubt they'd do much. There is a project in New Zealand which tries to do exactly that by tending to an AI and then letting it "make decisions" that are filtered through the human network that tends to it. But all it is is a group of people deciding to see where an LLM will go given some human guidance in the social world. It's predictably chaotic.
Leontiskos October 24, 2025 at 19:00 #1020715
Quoting Hanover
Here's an article that addresses the issues we're dealing with:

https://nfhs.org/stories/the-role-of-ai-in-debate-ethics-research-and-responsible-use

It's from a national association for high schools related to debate rules, which seems close enough to what we do.


An excerpt from the article:

Quoting The Role of AI in Debate: Ethics, Research and Responsible Use
The Ethics of AI in Debate

While AI can be a helpful tool, its misuse raises ethical concerns. Some students may be tempted to rely on AI to generate entire cases or fabricate evidence. This undermines the core purpose of debate: developing critical thinking, analytical reasoning and research skills.

The ethical use of AI in debate means:

Transparency: If AI is used in research or case writing, debaters should disclose it to their coaches and ensure all information is properly verified.

Original Thought: AI can assist with structuring arguments, but debaters should craft their own speeches and rebuttals to develop authentic critical-thinking skills.

Avoiding Fabrication: AI sometimes produces false information or made-up citations. Using AI to generate sources without verifying them is academic dishonesty.


This is good, and tracks what many of us have been saying. The article has a good focus on verifying and understanding the primary sources oneself, instead of merely trusting the AI's mediation.
Leontiskos October 24, 2025 at 19:15 #1020720
Quoting Moliere
Neural nets aren't radically other than other computers, imo.


Authentic intelligence is generally seen as triadic, whereas computers are reductively dyadic. As C. S. Peirce or Walker Percy argue, a meaning-sign is irreducibly triadic, involving the sign, the thing signified, and the person who combines the two via intellect (cf. symbolon: "thrown together"). Programmers have always been attempting to get the dyadic structure of computers to model or approximate the triadic structure of meaning and thought, and neural nets are simply the latest iteration of that project. At each stage in the project the successes are highlighted and the ways in which the approximation fails to adequately model a triadic act are downplayed. This downplaying makes sense given the tacit assumption that a dyadic system will never completely model triadic behavior, but there comes a point when one must be reminded of what meaning is, what truth is, why the meaning relation is triadic, what humans are actually doing when they engage in intellectual acts, etc. Without such reminders the enthusiasts quickly convince themselves that there is no difference between their newest iteration and an actual human mind.
Banno October 24, 2025 at 21:12 #1020739
Reply to Leontiskos

Gobbledegook. Buried somewhere in there is the simple notion that it is people who interpret computer output, not computers. Fine.

Except when a computer networks with another computer.

Reply to Moliere is correct in that neural nets can be modelled on Turing machines, and so are subject to the limits thereof.
Leontiskos October 24, 2025 at 21:14 #1020740
Reply to Banno - Glad you agree. :up:
frank October 24, 2025 at 21:31 #1020742
Quoting Banno
Gobbledegook


I had a similar description, with more obscenities.
Banno October 24, 2025 at 21:33 #1020744
Reply to frank So did I, in the first draft.
frank October 24, 2025 at 21:34 #1020745
Janus October 24, 2025 at 22:12 #1020753
Reply to Jamal Cheers. I get your perspective, but I remain skeptical on both sides of the argument. All the more so since it is only in the last couple of weeks that I have given it any attention and thought.

Quoting Pierre-Normand
Although they've been named after Claude Shannon, I'm pretty sure they identify as non-binary.


It would be pretty interesting if they identified as anything.

Quoting Jamal
I tend, shallowly perhaps, to regard it as over-excited exaggeration to gain attention and to carve out a niche presence in the field and in the media landscape, and so on. There are equally expert people on the naysaying side, probably the majority, who just don't get as much attention.


Yes, I have no doubt some of the hype is motivated by money. I've been thinking about trying to get some figures regarding percentages of naysayers vs yaysayers.

Quoting Moliere
We are. And I have a decent idea of how to teach, so one could say that I have an idea about how we learn: one that works toward other minds growing.

We learn because we're interested in some aspect of the world: we are motivated to do so by our desire.


That may be so, but I was referring to understanding how the brain learns.

Of course LLMs and other AIs are not embodied, and so have no sensory access to the world. On the other hand, much of what we take ourselves to know is taken on faith—drawing on the common stock of recorded knowledge, and AIs do have access to that, and to vastly more of it than we do.

Quoting Moliere
There is a project in New Zealand which tries to do exactly that by tending to an AI and then letting it "make decisions" that are filtered through the human network that tends to it. But all it is is a group of people deciding to see where an LLM will go given some human guidance in the social world. It's predictably chaotic.


I hadn't heard of that. Sounds interesting. Can you post a link?

Moliere October 24, 2025 at 22:14 #1020756
Reply to Janus https://bbc.com/future/article/20251008-truth-terminal-the-ai-bot-that-became-a-real-life-millionaire
Janus October 24, 2025 at 22:20 #1020759
Banno October 24, 2025 at 22:31 #1020760
Quoting Janus
It would be pretty interesting if they identified as anything.


We can settle this. Common courtesy suggests we just ask.

:wink:
Moliere October 24, 2025 at 22:39 #1020762
Reply to Leontiskos Quoting Banno
Buried somewhere in there is the simple notion that it is people who interpret computer output, not computers. Fine.

I'll admit I was wondering how to respond to the difference between dyadic and triadic @Leontiskos -- not that you're in gobbledegook territory for me, but I'd put the point with less of a theory of meaning and symbols. That's sort of the question or the beginning for much of my thoughts here: Why does what I read mean anything at all?

What is meaning?

Mostly I just assume that we mean things by words. Insofar as we hold meaning constant between one another -- clarify terms -- we can start talking about what is true.

But there are other ways of using words -- and that's where the "triadic structure" comes under question for me, in a way. Not that it's false, but that it changes, and so meaning would also change.

There's the truth-conditions meaning, there's the meaning between people trying to get things done, there's the meaning of lovers, and friends, and artists....


I don't want to overspecify meaning as a triadic relationship in opposition to the computational theorists of mind, I guess. There's something to Saussure and meaning, but it's wrong too. And I'm a little skeptical of triadic relationships not because they're uncommon, but because they're very common in any psychology.

More charitably that might be what you're referring to: that we have to remind people that, in fact, while this looks like a human, it's not a thinking human.

There I agree. But I would given what I've expressed so far :)
Janus October 24, 2025 at 23:48 #1020784
Reply to Banno I looked at your interchange, and then asked ChatGPT if it identified as anything at all. Here is the reply:

[i]Not in the way people do. I don’t have a self, inner experience, or identity beyond being an AI designed to assist and converse. If we’re talking conceptually, you could say I “identify” as a language model — a system built to understand and generate text — but that’s more a description of function than identity.

Would you like me to reflect on what “identifying as something” might mean for a nonhuman intelligence?[/i]

I said I would, but I don't seem to be able to share, since I am not logged in, and I don't want to clutter the thread with long quotations from ChatGPT.
Banno October 24, 2025 at 23:51 #1020785
Reply to Janus Perhaps there are limits on sharing my share back to us...

But will we settle on they/them?
Janus October 25, 2025 at 00:02 #1020788
Reply to Banno :up: They/them seems apt and all the more so because they are not just one entity.
Pierre-Normand October 25, 2025 at 00:15 #1020791
Quoting Janus
Would you like me to reflect on what “identifying as something” might mean for a nonhuman intelligence?—ChatGPT

I said I would, but I don't seem to be able to share, since I am not logged in, and I don't want to clutter the thread with long quotations from ChatGPT.


A couple weeks ago, I had initiated a conversation with GPT-5 prompting it to relate three features of the use of "I" by LLM-based agents (self-reference, personal identity and self-knowledge) that we had discussed separately on previous occasions. Thanks to its new memory feature, it was able to build on ideas already rehearsed on those occasions. I had left this discussion hanging (I think it gets a few things wrong about the nature of first person authority in its own case, which in turn makes the application of the concept to humans appear more mysterious than it is).
Joshs October 25, 2025 at 00:39 #1020793
Reply to Leontiskos

Quoting Leontiskos
Authentic intelligence is generally seen as triadic, whereas computers are reductively dyadic.


When we talk about how the mind really works, and then compare that to what a computer does, there's a tendency to assume that what we are comparing are two kinds of processes taking place inside of objects, one of these objects being a brain and the other being a machine. But recent thinking in cognitive science argues that thinking doesn't take place inside the head. The brain is embodied in a body, and the body is embedded in an environment, so mind is indissociably all of these. Cognition is not only embedded in a world, it is extended into this world.

That means that mind implies and includes the artifacts that we interact with, including language, the arts, music, our built environment as our constructed biological niche, our clothing and our technological tools. What this means is that our computers as information processing systems are not entities unto themselves; they are appendages and extensions of our thinking, just as a nest is to a bird or a web to a spider. A nest is only meaningfully a nest as the bird uses it for its purposes. Similarly, a computer has no existence outside of what we do with it and how we interpret what we do with it.

So when we say that the mind works differently than a computer, we are comparing two different ways of interacting with our environment. If we understand the working of our computers ‘dyadically’ and the working of our minds ‘triadically’, in both cases we are talking about the working of our minds. We should say, then, that the one way of using our minds is more limited than the other, but not less ‘authentic’ or more ‘artificial’. Artifice and niche construction IS what the authentic mind does. The engineer (or Sam Altman) who claims that the AI device they invented thinks just like a human is correct in that the device works according to principles that they believe also describe how the mind works.

As our self-understanding evolves, we will continually raise the bar on what it means for our devices to ‘think like us’. In a way, they always have thought like us, being nothing more than appendages which express our own models and theories of how we think. But as this thinking evolves, the nature of the machines we build will evolve along with it.
Leontiskos October 25, 2025 at 00:41 #1020796
Quoting Moliere
That's sort of the question or the beginning for much of my thoughts here: Why does what I read mean anything at all?

What is meaning?

Mostly I just assume that we mean things by words. Insofar as we hold meaning constant between one another -- clarify terms -- we can start talking about what is true.

But there are other ways of using words -- and that's where the "triadic structure" comes under question for me, in a way. Not that it's false, but that it changes, and so meaning would also change.


That's right. The key is that humans mean things by words, but LLMs do not, and a neural net does not change that. Computers are not capable of manipulating symbols or signs qua symbols or signs. Indeed, they are not sign-users or symbol-users. A neural net is an attempt to get a non-sign-using machine to mimic a sign-using human being. The dyadic/triadic distinction is just part of the analysis of signs and sign use.
Leontiskos October 25, 2025 at 00:49 #1020798
Quoting Joshs
...Similarly, a computer has no existence outside of what we do with it and how we interpret what we do with it.


Up to this point in your post I had nothing to disagree with. :up:

Quoting Joshs
So when we say that the mind works differently than a computer, we are comparing two different ways of interacting with our environment.


I think you're tripped up here insofar as you are implicitly saying, "One way we interact with our environment is through our mind, and another way of interacting with our environment is through computers." That's not quite right, as I'm sure you can see.

Quoting Joshs
If we understand the working of our computers ‘diadically’ and the working of our minds ‘triadically’, in both cases we are talking about the working of our minds. We should say, then, that the one way of using our minds is more limited than the other, but not less ‘authentic’ or more ‘artificial’. Artifice and niche construction IS what the authentic mind does. The engineer ( or Sam Altman) who claims that their invented a.i. device thinks just like a human is correct in that the device works according to principles that they believe also describe how the mind works.


But I would argue that Altman is mistaken if he believes that his AI works the same as a human mind. The dyadic/triadic distinction is an account of how that difference manifests with respect to sign-use. Computers are intrinsically dyadic phenomena. They are a vast stimulus-response network in sets of dyads. Now one response to this is to say that humans too are a complex dyadic network which appears triadic, but I don't find that plausible.

Quoting Joshs
As our self-understanding evolves, we will continually raise the bar on what it means for our devices to ‘think like us’. In a way, they always has thought like us, being nothing more that appendages which express our own models and theories of how we think. But as this thinking evolves , the nature of the machines we build will evolve along with it.


It does evolve, but never beyond the intrinsic limitations of machines. But you are essentially correct when you claim that what is at stake is a tool of the human mind. That is a very important point.
Pierre-Normand October 25, 2025 at 01:17 #1020804
Quoting Leontiskos
It does evolve, but never beyond the intrinsic limitations of machines. But you are essentially correct when you claim that what is at stake is a tool of the human mind. That is a very important point.


This is a very interesting discussion that @Joshs and you are having, but would it not be more on topic in this thread, or a new one if you wish to create one? It doesn't bother me personally that you're discussing this here, except inasmuch as I feel inhibited from jumping into the conversation and parading my own hobbyhorses where I don't feel it's on topic.
Baden October 25, 2025 at 14:33 #1020844
Leontiskos October 25, 2025 at 16:36 #1020860
Reply to Pierre-Normand - I myself do not see how discussing the nature of AI is off-topic in threads about whether AI should be banned, or in threads on how AI should be used. As I read it, TPF precedent does not exclude discussing the presuppositions of an OP within that thread.

But if you want, feel free to quote what I say here in your own thread. I am planning to do the same with some of your own quotes elsewhere.
Pierre-Normand October 26, 2025 at 05:16 #1020959
Quoting Leontiskos
I myself do not see how discussing the nature of AI is off-topic in threads about whether AI should be banned, or in threads on how AI should be used. As I read it, TPF precedent does not exclude discussing the presuppositions of an OP within that thread.


I think it was fine for you and Joshs to explore this relevant tangent, but I feared my jumping in would have definitely inflected the conversation away from this thread's topic.

But if you want, feel free to quote what I say here in your own thread. I am planning to do the same with some of your own quotes elsewhere.


:up:
Leontiskos October 26, 2025 at 16:59 #1021030
Quoting Pierre-Normand
I was actually also thinking of Plato when I mentioned the anecdote about Wittgenstein!


Okay, interesting.

Quoting Pierre-Normand
I must point out that unlike Wittgenstein's lecture notes (that he usually refrained from producing), and also unlike our dialogues with AIs, Plato's dialogues were crafted with a public audience in mind.

Secondly, Richard Bodeüs, who taught us courses on Plato and Aristotle when I was a student at UdeM, mentioned that the reason Plato wrote dialogues rather than treatises, and notoriously reserved his "unwritten doctrine" for direct oral transmission...


I was intentionally prescinding from such theories, given that they are speculative academic musings. Whether or not anything the scholars think they know about Plato is actually true, his dialogues have beguiled the human race for millennia. The theories end up changing quite a bit over the centuries, but the text and its reception are stable insofar as it feels "alive" to the reader.

Quoting Pierre-Normand
Him writing them was him making moves in the situated language game that was philosophical inquiry (and teaching) in his time and place. We can still resurrect those moves (partially) by a sort of archeological process of literary exegesis.


In particular, I don't engage in this sort of analysis because I find it reductive. It situates Plato and his work in a way that subordinates them to modern and highly contingent/temporal categories, such as "language games." That's part of my overall point in the first place: Plato's dialogues are not easily reducible to such mundane categories. Precisely by being alive, they defy that sort of categorization. This is why I think they provide a helpful parallel to Wittgenstein or LLMs or especially Logical Positivists, which are simply not alive and beguiling in the same way that Plato is. I think the fact that Plato's work is so difficult to reduce to univocal categories is one of its defining marks. Its plurivocity is slighted by trying to enshrine it within the confines of a single voice or a single meaning.

Quoting Pierre-Normand
I agree. But that's because in the first case there are at least two players playing a real game


Yep, or even that they are not playing a game at all, but are doing something more real than a game. :up:

Quoting Pierre-Normand
In a "private" dialogue between a human and a chatbot, there is just one player, as is the case when one jots down lecture notes primarily intended for use by oneself. But then, as Wittgenstein noted, the text tends to become stale. I surmise that this is because the words being "used" were meant as a linguistic scaffold for the development of one's thoughts rather than for the purpose of expressing those thoughts to a real audience.


Right. Or to put it very simply, a dialogue is more interesting than a monologue, and a dialogue with a real person is more interesting than a "dialogue" with a pseudo-person. The "interest" that one seeks when reading a dialogue between two intellectual agents is apparently not the same thing one seeks when interacting with a chatbot, even though the simulation of personhood blurs that line mildly.
Fire Ologist October 26, 2025 at 23:28 #1021093
Quoting Leontiskos
a meaning-sign is irreducibly triadic, involving the sign, the thing signified, and the person who combines the two via intellect


Yes.

Quoting Leontiskos
what humans are actually doing when they engage in intellectual acts, etc. Without such reminders the enthusiasts quickly convince themselves that there is no difference between their newest iteration and an actual human mind.


Right. In a shallow, misguided sense, we can use AI to dialogue because it looks like what humans do; except the AI doesn’t combine the signs with the things signified - it just looks like something that makes those kinds of intellectual connections.

Quoting Moliere
Why does what I read mean anything at all?

What is meaning?


I see the point as this: although an LLM might be able to fake intellect/dialogue with suitable-looking strings of words, none of those words can possibly mean anything to the LLM because, unlike a person, an LLM has no ground upon which to build or find meaning. It says “life” but has no sense of what it is to live, so that “life” does not matter to the AI the way “life” matters in a string of words read by a living being (such as a person, the only thing that can read meaningfully). So the LLM isn’t actually doing what it appears to be doing in its strings of text. And if someone thinks they are “dialoguing” with an LLM, they are misled, either by themselves intentionally (enthusiastic wishfulness), or out of ignorance (not realizing that they are using a tool).

Quoting Leontiskos
The key is that humans mean things by words, but LLMs do not, and a neural net does not change that. Computers are not capable of manipulating symbols or signs qua symbols or signs. Indeed, they are not sign-users or symbol-users. A neural net is an attempt to get a non-sign-using machine to mimic a sign-using human being. The dyadic/triadic distinction is just part of the analysis of signs and sign use.


Computers are not sign users. Exactly.
Computers are not users at all. They can merely be used. Computers, no matter how complex, must remain slaves to actual intelligence (the real AI).

Quoting Joshs
computers as information processing systems are not entities unto themselves; they are appendages and extensions of our thinking, just as a nest is to a bird or a web to a spider. A nest is only meaningfully a nest as the bird uses it for its purposes.


Exactly. I like “not entities unto themselves”. Because it begins to incorporate what I find to be unique about the human conscious intelligence, namely, self-reflection. People are entities unto themselves - we have subjective experience. Computers do not. So a computer has no ground (unto itself) upon which to give or find meaning, or to “intend”…
Banno October 27, 2025 at 05:28 #1021122
This just hit my in-tray, from the Australasian Association of Philosophy - a workshop at Uni of Singapore.

WARRANTED ASSESSMENT
IN THE AGE OF AI

WEBINAR VIA ZOOM

Wednesday 29 October 10 - 11am SGT • 1 - 2pm AEDT • 3 - 4pm NZDT

As generative AI reshapes the landscape of higher education, the challenge of ensuring warranted assessment—assessment that justifiably reflects a student's understanding—has become increasingly urgent. This workshop brings together philosophers to examine how traditional epistemic and pedagogical standards can be preserved or reimagined in light of AI's growing influence. We will explore concrete examples of warranted assessment, including oral examinations, scaffolded in-class writing, and collaborative philosophical inquiry with transparent process documentation.

Participants will engage in critical discussion around the epistemic and ethical dimensions of assessment design, with attention to disciplinary integrity, student equity, and institutional accountability. The workshop aims to foster a shared understanding of what counts as justified assessment in philosophy today, and to develop practical strategies for implementation across diverse institutional contexts.



Pierre-Normand October 27, 2025 at 08:49 #1021129
Quoting Leontiskos
I was intentionally prescinding from such theories, given that they are speculative academic musings. Whether or not anything the scholars think they know about Plato is actually true, his dialogues have beguiled the human race for millennia. The theories end up changing quite a bit over the centuries, but the text and its reception are stable insofar as it feels "alive" to the reader.


Remember that it's indeed my view that they should feel alive.

Him writing them was him making moves in the situated language game that was philosophical inquiry (and teaching) in his time and place. We can still resurrect those moves (partially) by a sort of archeological process of literary exegesis.
— Pierre-Normand

In particular, I don't engage in this sort of analysis because I find it reductive. It situates Plato and his work in a way that subordinates them to modern and highly contingent/temporal categories, such as "language games." That's part of my overall point in the first place: Plato's dialogues are not easily reducible to such mundane categories. Precisely by being alive, they defy that sort of categorization. This is why I think they provide a helpful parallel to Wittgenstein or LLMs or especially Logical Positivists, which are simply not alive and beguiling in the same way that Plato is. I think the fact that Plato's work is so difficult to reduce to univocal categories is one of its defining marks. Its plurivocity is slighted by trying to enshrine it within the confines of a single voice or a single meaning.


My Wittgensteinian take was meant more as a gloss than as a reduction or subordination. The very point of Wittgenstein’s talk of language-games in the Philosophical Investigations was to displace the sort of prevalent dyadic picture of word and object (the so-called “Augustinian” picture) that Peirce likewise sought to replace through his triadic schema of sign, object, and interpretant (or interpretant sign, in a chain of semiosis). In both cases, the aim is to emphasize that meaning arises from use or practice, with attention to contextual and pragmatic factors.

Plato’s authoring of his dialogues, the choices of words, phrases, and literary forms, was responsive to the conventions, expectations, and pedagogical aims of his time. As such, his philosophical activity was embedded in a living set of cultural practices. We can partially resurrect or re-inhabit those practices through interpretive engagement, without needing to posit some hidden mental content that must be recovered from within Plato’s private consciousness. What matters is that his signs were alive within a form of life, and that the traces of that vitality can still be grasped by us as we read and interpret his texts (while still being sufficiently apprised of their context).

Although Wittgenstein did engage with members of the Vienna Circle, who drew inspiration from his Tractatus and its apparent verificationist leanings, he was never himself a logical positivist. By the time of the Philosophical Investigations, he had decisively broken with that tradition, while still retaining the view that meaning is grounded in the public and normative practices in which expressions are used.

So, again, the contrast I meant to highlight is between (1) authoring a text (or delivering speech) intentionally directed at an audience that shares a set of communal practices and sensibilities, and (2) the private use of signs in inner monologue as scaffolding for the development of one’s own thoughts. The latter, too, can be alive, and one can jot down such thoughts as notes for personal use. But this kind of "thinking out loud for oneself" is of limited value to others, since it leaves unstated the aims, or stakes, that motivated the private use of signs in this or that way.

Our conversations with an LLM, which helpfully aligns its goals with those of the user and, like the user, often produces words as scaffolds for the development of their (the user's) thoughts rather than as acts bearing deontic commitments (i.e. signifying what move/claim/suggestion/etc. one intends to make), inherit this soliloquy-like quality. They may feel inert to third parties, much as Wittgenstein once felt that the lecture notes he had written for his own use seemed dead when read aloud to his students. In both cases, the problem is that these utterances were never intended to make moves within a public language-game. Their use is more akin to shadowboxing. They are effective and valuable for training, but not comparable to a performance within the ring.
Leontiskos October 27, 2025 at 21:32 #1021234
Quoting Pierre-Normand
Remember that it's indeed my view that they should feel alive.


Okay, but look at what you yourself acknowledge about the logical conclusions:

Quoting Pierre-Normand
...and that the traces of that vitality can still be grasped by us as we read and interpret his texts


"Traces of that vitality." An approach that attempts to relativize Plato to his own time and place, such as Wittgenstein's, inevitably falls into the conclusion that a different time and place—such as our own—might still mange to find some "traces of vitality" in that foreign text.

Again, my whole point is that Plato's 2500 year-old text is much more "alive" to us than Wittgenstein's contemporary text, and this helps show why the meta-thesis being applied is incorrect. If a relativizing-thesis were correct, then this couldn't be the case—at least when it comes to texts that are "intended" to be "public."

Quoting Pierre-Normand
So, again, the contrast I meant to highlight is between (1) authoring a text (or delivering speech) intentionally directed at an audience that shares a set of communal practices and sensibilities, and (2) the private use of signs in inner monologue as scaffolding for the development of one’s own thoughts. The latter, too, can be alive, and one can jot down such thoughts as notes for personal use. But this kind of "thinking out loud for oneself" is of limited value to others, since it leaves unstated the aims, or stakes, that motivated the private use of signs in this or that way.


I would argue that what is at stake is the idiosyncratic. A private journal is more idiosyncratic than a culturally-relative text, and thus less "alive" to the average reader. But a culturally-relative text is similarly more idiosyncratic than a transcendent text, such as Plato's dialogues. Plato's dialogues are culture-transcending in a way that Wittgenstein simply is not.

My thesis would be that LLMs will never transcend a significant level of idiosyncrasy. They are more on the Wittgenstein side of the spectrum than the Plato side of the spectrum. Concretely, this would mean that the essence of a Platonic text cannot be properly mimicked or patterned by an LLM, which is itself surely a contentious claim. The more general claim is that genius is something that the LLM cannot mimic.

Of course I would have to develop these theses, and at the moment I need to do more background research before continuing in these topics.

Quoting Pierre-Normand
In both cases, the problem is that these utterances were never intended to make moves within a public language-game. Their use is more akin to shadowboxing. They are effective and valuable for training, but not comparable to a performance within the ring.


I sort of agree, and recently said something similar:

Quoting Leontiskos
Maybe you are implying that LLM-appeals would improve the philosophical quality of TPF? Surely LLMs can improve one's own philosophy, but that's different from TPF on my view. I can go lift dumbbells in the gym to train, but I don't bring the dumbbells to the field on game day. One comes to TPF to interact with humans.


-

Quoting Pierre-Normand
In both cases, the problem is that these utterances were never intended to make moves within a public language-game.


Nevertheless, this claim is both right and wrong, given the vagueness of what we mean by "public."

The whole issue could also be phrased according to a different light. Scholars like to see Plato as a kind of playwright, carefully sculpting literary texts in order to elicit desirable responses in his readers. This reading is of course very consonant with what an LLM is "doing," but I think it is a deeply mistaken understanding of Plato's work. On my view Plato created something that was beautiful, and it is appreciated because of its beauty. On the opposing view, Plato crafted something that would appease the tastes of the multitude, and his propaganda succeeded because he was skillful. Only by excising the possibility of objective truth or objective beauty or objective goodness can one situate Plato within a relativistic, immanent schema, and it is no coincidence that those who are most fond of LLMs are also most apt to situate Plato in that manner. In the more general context of an artist, we might say that the true artist does not seek to appease those who will view the work, and yet the LLM does seek to appease. That is its whole raison d'être. It is the indifference of the artist that marks the best art, and it is precisely this indifference that the LLM cannot access.

(CC @Count Timothy von Icarus)
Pierre-Normand October 28, 2025 at 02:51 #1021291
Quoting Leontiskos
"Traces of that vitality." An approach that attempts to relativize Plato to his own time and place, such as Wittgenstein's, inevitably falls into the conclusion that a different time and place—such as our own—might still mange to find some "traces of vitality" in that foreign text.

Again, my whole point is that Plato's 2500 year-old text is much more "alive" to us than Wittgenstein's contemporary text, and this helps show why the meta-thesis being applied is incorrect. If a relativizing-thesis were correct, then this couldn't be the case—at least when it comes to texts that are "intended" to be "public."


You are nitpicking. I am happy to grant that there may be torrents of vitality in Plato's dialogues. Maybe my use of "traces" was misleading, but the contrast I intended was between vitality that accrues from the production process (aimed at other participants in a world animated by live social practices, including linguistic/literary ones) and the different sort of vitality that accrues from private/personal cogitative exercises (akin to training), which lose this vitality when their traces get extracted from the context of their initial production. It's true that Plato's texts can survive unblemished, as do, say, Bach's cantatas, when consumed in a different cultural context, but that's because there are deep commonalities between the modes of acculturation of merely superficially different human cultures. Some degree of attunement to the relevant idioms, and understanding of the underlying projects, still are required. I have a very cultured friend who thinks very poorly of Plato's writings, but this is because he isn't attuned at all to their underlying philosophical projects. And many music lovers find J. S. Bach boring, mainly because they aren't attuned to the relevant musical idioms.

I think my intended contrast also accounts, at least in part, for the reason why Wittgenstein's writings feel dead to you. They mostly are assembled (without his consent, posthumously in almost all cases except for the Tractatus) from notes that he jotted down for himself. He did intend to publish the PI but never got round to it. And although Wittgenstein was a very careful thinker, one thing his writings clearly do not display is craftsmanship. There are admirable depths of thought in them, but no liveliness except what can be laboriously inferred about his original thought process.

Quoting Leontiskos
I sort of agree, and recently said something similar


:up:

Quoting Leontiskos
In the more general context of an artist, we might say that the true artist does not seek to appease those who will view the work, and yet the LLM does seek to appease. That is its whole raison d'être. It is the indifference of the artist that marks the best art, and it is precisely this indifference that the LLM cannot access.


I quite agree with this, and it's one of the core reasons animating my own "AI-skepticism", as I intend to articulate more fully in my newer AI thread. LLMs are only "authors" of what they write by procuration, since they lack conative autonomy. I would however surmise that the great artist who is indifferent to how his works will be received by the masses, say, or by the authorities or guardians of the tradition, usually cares that they're worthy of being well received by whoever is worthy of receiving them (and that was likely the case for Plato), or that they're disruptive of existing institutions and practices that are worthy of being disrupted. There is always some care for others lurking behind the indifference of the great artists, which distinguishes them from the careless indifference of the (alleged) author of "The Art of the Deal."
Leontiskos October 29, 2025 at 16:26 #1021612
Quoting Pierre-Normand
You are nitpicking.


If someone cannot weigh the meta-thesis that you are proposing, then yes, they must see me as nitpicking when I object to that meta-thesis. But if someone can question their own meta-thesis then they must at the same time consider the possibility that I am not nitpicking.

Let me simply put the question to you: Do you think an LLM would have an easier time passing itself off as Plato or as Wittgenstein?

Quoting Pierre-Normand
Maybe my use of "traces" was misleading, but the contrast I intended was between vitality that accrues from the production process (aimed at other participants in a world animated by live social practices, including linguistic/literary ones) and the different sort of vitality that accrues from private/personal cogitative exercises (akin to training), which lose this vitality when their traces get extracted from the context of their initial production.


And my point is that the contrast you are drawing cannot be as strong as you would wish. This is because the qualitative difference between lifting weights and playing rugby is much greater than the qualitative difference between writing philosophy in a private manner and writing philosophy in a public manner. I think your analogy limps too much.

Quoting Pierre-Normand
It's true that Plato's texts can survive unblemished, as do, say, Bach's cantatas, when consumed in a different cultural context, but that's because there are deep commonalities between the modes of acculturation of merely superficially different human cultures.


This looks to beg the question by assuming that there can be no consideration of anything beyond cultural relativism. It's basically a hostile translation of Plato, given that he saw himself as mediating Forms that are not culturally relative.

Quoting Pierre-Normand
Some degree of attunement to the relevant idioms, and understanding of the underlying projects, still are required. I have a very cultured friend who thinks very poorly of Plato's writings, but this is because he isn't attuned at all to their underlying philosophical projects. And many music lovers find J. S. Bach boring, mainly because they aren't attuned to the relevant musical idioms.


In some ways you are right, but I think it comes back to idiosyncrasy, and in this case it comes down to the idiosyncrasy of the idioms. For example, someone might think mathematics makes no sense, but it would be a stretch to say that this is because "they are not attuned to the relevant idioms." It is rather because they do not understand what the speaker is saying. The very idioms that Plato uses reflect his philosophy that truth and philosophy are not reducibly idiomatic.

The introduction of Bach and musical taste strikes me as another stretched analogy. Beauty and truth differ to a reasonable extent in relation to the "idiomatic." But Bach is a very complex form of music. Does your friend prefer harmony to dissonance? (Music is also complicated given the way that trade-offs must be managed. For example, an Indian Raga uses pure intervals in a way that Bach cannot given his well-tempered scale. The more notes one uses, the less pure the intervals.)
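
The tuning trade-off mentioned above can be made concrete. Here is a minimal Python sketch, my own illustration rather than anything from the discussion; it simplifies Bach's well-tempered tuning to modern 12-tone equal temperament (12-TET) and compares it against the standard pure just-intonation ratios, measured in cents:

```python
import math

# Pure (just-intonation) frequency ratios for some common intervals
just_ratios = {
    "perfect fifth": 3 / 2,
    "perfect fourth": 4 / 3,
    "major third": 5 / 4,
    "minor third": 6 / 5,
}

def cents(ratio):
    """Interval size in cents (1200 cents = one octave)."""
    return 1200 * math.log2(ratio)

for name, ratio in just_ratios.items():
    pure = cents(ratio)                 # pure (just) interval in cents
    tempered = round(pure / 100) * 100  # nearest 12-TET interval (whole semitones)
    print(f"{name:>14}: just {pure:6.1f} cents, 12-TET {tempered:4d} cents, "
          f"error {tempered - pure:+5.1f} cents")
```

On these standard figures, the equal-tempered fifth is only about 2 cents flat of the pure 3:2, but the equal-tempered major third is roughly 14 cents sharp of the pure 5:4: the kind of compromise a fixed 12-note keyboard forces, and that a tradition free to intone intervals directly, such as the raga, can avoid.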

Cultural relativism also results in the conclusion that no cultural form is superior to any other, and this is something that Plato would reject. For the classical tradition, some forms are better than others, and it may require education to understand and perceive this. Of course it also then follows that some education is superior to other education.

The LLM is cut off from the possibility of a Platonic approach. It weighs all opinions and words equally. It is a democratic instrument (except in those cases where it is hard-coded to reflect views within the Overton window).

Quoting Pierre-Normand
I think my intended contrast also accounts, at least in part, for the reason why Wittgenstein's writings feel dead to you. They mostly are assembled (without his consent, posthumously in almost all cases except for the Tractatus) from notes that he jotted down for himself.


I agree with you here, but note that the distinction is a bit different. Rather than a private/public distinction, we have an assemblage/unity distinction. One can privately philosophize a unified work which is not an assemblage of unconnected parts. Indeed, there are highly unified works that were only published posthumously.

Quoting Pierre-Normand
I quite agree with this, and it's one of the core reasons animating my own "AI-skepticism", as I intend to articulate more fully in my newer AI thread. LLMs are only "authors" of what they write by procuration, since they lack conative autonomy.


We definitely agree here. :up:

Quoting Pierre-Normand
I would however surmise that the great artist who is indifferent to how his works will be received by the masses, say, or by the authorities or guardians of the tradition, usually cares that they're worthy of being well received by whoever is worthy of receiving them...


That's right, but this means that the great artist must judge worth in a non-democratic manner, and I'm guessing we would agree that the LLM cannot do this.
Pierre-Normand October 30, 2025 at 06:10 #1021773
Quoting Leontiskos
Let me simply put the question to you: Do you think an LLM would have an easier time passing itself off as Plato or as Wittgenstein?


I genuinely don't know, and the answer might be somewhat indeterminate, given the methodological leeway in setting up controls for such a test (e.g., are testers meant to be equally cognizant of the styles, or of the thoughts?).

Quoting Leontiskos
The introduction of Bach and musical taste strikes me as another stretched analogy. Beauty and truth differ to a reasonable extent in relation to the "idiomatic." But Bach is a very complex form of music. Does your friend prefer harmony to dissonance? (Music is also complicated given the way that trade-offs must be managed. For example, an Indian Raga uses pure intervals in a way that Bach cannot given his well-tempered scale. The more notes one uses, the less pure the intervals.)


I was not arguing for relativism. My original point (when I responded to @Jamal) was to identify one reason why some texts (e.g. conversations with LLMs) appear stale when read by third parties. I was not arguing that this was because they were conversations (like Plato's dialogues), but rather because they were occurring in a sort of echo chamber similar to what occurs in cogitation, when one "talks" to oneself and rehearses thoughts and arguments, or when one jots down notes (for oneself) summarising such cogitations. The life that's missing is the dynamics of real exchanges, where the words that are chosen are meant to be understood by others, which Plato's dialogues exemplify. Nevertheless, episodes of internal cogitation, or conversations with LLMs, can feel (and be) alive for the human being involved in them when they are dynamic and productive. But this liveliness can't be conveyed to others without being, not "translated" from thought to words, but expressed for the very first time.

Quoting Leontiskos
The LLM is cut off from the possibility of a Platonic approach. It weighs all opinions and words equally. It is a democratic instrument (except in those cases where it is hard-coded to reflect views within the Overton window).


That's another issue but one that I broadly agree with (with caveats since LLMs don't merely average opinions) as I recently suggested here.
Janus October 31, 2025 at 04:49 #1022013
Quoting Pierre-Normand
I was not arguing that this was because they were conversations (like Plato's dialogues), but rather because they were occurring in a sort of echo chamber similar to what occurs in cogitation, when one "talks" to oneself and rehearses thoughts and arguments, or when one jots down notes (for oneself) summarising such cogitations.


I'm not so convinced that they are necessarily like an inner dialogue. Although I'm only beginning to explore ideas with Claude, I have found that it comes up with interesting ideas I would likely never have arrived at alone, or would have been likely to find searching the internet.
Pierre-Normand October 31, 2025 at 05:18 #1022016
Quoting Janus
I'm not so convinced that they are necessarily like an inner dialogue. Although I'm only beginning to explore ideas with Claude, I have found that it comes up with interesting ideas I would likely never have arrived at alone, or would have been likely to find searching the internet.


:up:
Janus October 31, 2025 at 06:18 #1022023
Reply to Pierre-Normand Cheers, perhaps I misunderstood what you were saying then, or didn't pay enough attention to the surrounding context.

Edit: Reply to Pierre-Normand I looked back over the whole paragraph in which the part I responded to occurs, and it now seems to me that you were referring only to some conversations with LLMs, not all.

Do you think LLMs understand what we say to them, and can actually reason as we do, as opposed to how we think we do?
baker November 03, 2025 at 23:04 #1022914
How time flies!

Quoting Joshs
My definition of tedious research is busywork, made necessary not because it is an intrinsic component of creative thought, but because it is an interruption of creative thinking


Robert Greene was once asked how he defines creativity. It’s a word that gets thrown around. It gets mythologized and romanticized. “People have all sorts of illusions around the word that aren’t the reality,” Robert said. “The reality is that creativity is a function of the previous work you put in. So if you put a lot of hours into thinking and researching and reading, hour after hour—a very tedious process—creativity will come to you…It comes to you, but only after tedious hours of work and process.” I like this definition because it means creativity is not some mysterious form of magic. It’s not something some people simply have and some people simply don’t. It’s something rewarded to those who put the work in.

https://billyoppenheimer.com/august-14-2022/


Manuel November 04, 2025 at 19:59 #1023116
I guess this may have been mentioned already. AI use in this place is one thing, but damn, it's everywhere. I had to get a browser extension to block it from always appearing in Google searches. Getting "answers" from secondary sources (the AI using the primary source) can only lead to more "mutations", increasing errors.

AI use can be perfectly fine. But there is something to be said about too much of it all the time.
Janus November 05, 2025 at 22:10 #1023373
Reply to Manuel I agree. We need to check our sources if we are to be rigorous in our thinking. The good thing about AI is that it can help you find sources you might not have found otherwise, or would at least have had to spend more time searching for. AI uses the same sources that are available to us. One danger is that AIs may start drawing on material created by other AIs, and the net may very quickly become flooded with such material. We can always ask an AI to reveal the sources of its responses.
Manuel November 06, 2025 at 02:49 #1023428
Reply to Janus

Yes. It becomes a very big "meta" problem: hallucinations get clocked in as facts (this is already happening), and other AIs use the hallucinated data as fact, amplifying its abundance and undermining its reliability.

They're slated to run out of trainable data next year, unless paywalled sources open up.

I don't know, man. The tools are impressive in many ways, but being forced to use them in everything can't be good. One has to imagine that this will have massive negative ramifications down the line...