How to use AI effectively to do philosophy.
Here's an addition to the growing number of threads on AI.
We've got threads hereabouts on what AI has to say about philosophical issues, we've a directive against merely presenting a post that is AI generated, or mostly AI generated, and we've threads begging for a total ban on the use of AI.
Some folk have expressed - shall we say "discomfort" - with the idea of having any part of a post AI generated, while others, perhaps most, appear not to mind. But it's too late, either way. The toothpaste is out of the tube, and it ain't going back.
So it's time to ask what to do about it, or, more to the point, what to do with it. Instead of a blanket "Ban it!" or a resignation to the End Times, let's look at how philosophers, and writers more generally, might use AI.
Seems to me, at the fundament, that what we who pretend to the title of philosopher are looking for is some semblance of truth, whatever that is; writing that is thought-provoking; nuanced and sound argument. Whether such an argument comes from a person or an AI is secondary.
Here's a point that philosophers of all stripes should be able to follow, and on which they might even be forced to agree. Rejecting an argument because it is AI generated is an instance of the ad hominem fallacy. It's rejecting the argument not because of a lack of soundness but because of its source. Philosophers, of all people, should evaluate an argument on its merit, not its authority. Rejecting AI outright is bad philosophy.
It might be argued that on a forum such as this, the point is to pit our ideas against each other, that it's about building a community of thinkers, based on the romance of the cut-and-thrust of argument. On this account, making use of AI is cheating. It's a good point, on the face of it, but it brings us back to what we think we are doing here. Is this forum a place to evaluate ideas, to sort out the better ways of thinking about the issues at hand - or is it just a place to match wits? Is the forum more than just a pit for fighting cocks? In my naivety, I like to think so.
I've been using ChatGPT and Claude for a few months. Here are some of the uses to which I've put them.
I've used AI to quickly and succinctly summarise accepted fact. Physical theory, maths, and so on. Or to summarise the thought of this or that philosopher - what you get will be a weighted average, of course, but that's exactly what is wanted when the aim is to set up a background of general agreement.
I've used AI to understand difficult posts. The Happy Sycophant will put the best light it can on even the worst paragraph, especially if prompted properly. It aims to please, and we can make use of that. It might be too willing to tell you what you want to hear, but if you pretend to be your opposite, you can have it tell you what you don't want to hear.
I've used AI to be critical of my own writing. I do this by pretending it is not me. I'll feed it a draft post, attributing it to someone else, and ask for a critique. It'll try to comment on the style, which I don't much want, but the right sort of prompt will usually find quite interesting and novel angles.
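For instance, a prompt along these lines - only an illustration, to be adapted: "Here is a draft post by a member of another philosophy forum. Set the style aside; identify the weakest inference, any unstated premises, and the strongest objection a careful critic would raise." Attributing the draft to a third party seems to blunt the urge to flatter.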
So here's the question for this thread. How can we use AI to do philosophy better? And especially, what specific ploys, what prompts, are you using effectively?
Now that you've read that, your replies have been partially generated by AI. Have fun. :wink:
Then I succeeded. I hope you don't mind my starting a new thread rather than adding to your already existing one. I think the topic sufficiently distinct.
When I told ChatGPT I wrote the OP, it was much more accurate...
And went on at length about how good the post was.
I don't mind at all you creating another thread on AI. The more we think about and discuss the issues with AI the better in my view.
My view on using them is softening a little. Since their continued completely irresponsible rollout is inevitable, the stance that advocates not using them at all because you would be supporting the recklessness seems pointless, like pissing to put out a forest fire.
It does irk me that people present stuff essentially written by AI as their own work, although I acknowledge that from the point of view of assessing the quality, relevance and soundness of the work itself, my response is irrelevant.
I was struck by this:
So the AI is saying that it is not an interlocutor... Is that a performative contradiction? If it's not an interlocutor, then no; but if we are to take its argument seriously, then we are by that very act considering it to be an interlocutor... and the illocution cannot be sincere.
So if we are to avoid a performative contradiction, we have to say that the AI is not an interlocutor.
What we might do is to consider the strings of words the AI produces as if they were produced by an interlocutor. Given that pretence, we can pay some attention to the arguments they sometimes encode...
Just a side thought.
Let the world move on; this will be the old hill I stand upon, alone with my books and mumblings...
While I can't say I completely agree and embrace your sentiment, I certainly have respect for it and can see the dangers or otherwise loss of quality of life and human spirit that may be brought about by incessant and ceaseless use.
Everyone will sound so smart online or as long as they have their phone on them. Then, if the power goes off, or there's an error in the network, or they otherwise have to talk or explain something in person... all you're going to hear is "Uhhhhh.......yeah, I need more time to think about that to respond to it properly." :lol:
Geoffrey Hinton believes AIs are capable of reasoning, not yet as well as humans (although I wonder which humans he is referring to). I guess if they are capable of reasoning then they can be interlocutors, and ChatGPT is simply bullshitting in saying AIs cannot be interlocutors.
If they are not capable of reasoning then all they are doing is presenting examples of human reasoning, albeit synthesized in novel ways and in their own words.
Perhaps we might best keep that part of the discussion in 's thread, and here look to how we can use AI to do better philosophy? Make the best of a bad lot, if you like.
Yep, that's what I was getting at before. We might use Austin's terminology. The AI at least presents us with a locution - a sequence of words in some sort of correct order. Does that amount to an illocution - an act performed in making an utterance?
Searle's Chinese Room says "no!", of course.
But seeing a Phatic act as an illocution is something we do, in making sense of the utterance.
So there is a way of understanding an AI claiming not to be an interlocutor at the locutionary level, such that it's a string of words that makes no such claim. And a way of understanding it at the illocutionary level, where perhaps we understand the locution as having the same structure as an illocution that says that an AI cannot be an interlocutor...
All a bit convoluted. The idea is that the AI isn't really saying anything, but is arranging words as if it were saying something.
There are those, Hinton being one of them, who claim that the lesson to be learned from the LLMs is that we are also just "arranging words as if it were saying something", that is, that we don't have subjective experience any more than they do. I remain skeptical, but I entertain the possibility that there might be something in that.
AI at this point is not capable of giving any "semblance of truth".
Quoting Banno
Yes, it can very aptly "summarize accepted fact", and you might want to use it for that, if that is what you are after. But "fact by the masses" is a far cry from "semblance of truth".
Quoting Banno
I really do not see the difference here. Following Wittgenstein, all that "saying something" is, is arranging words as if you were saying something. Meaning (as in what is meant, by intention) is not a separate requirement for "saying something", because meaning is assumed to be inherent within "arranging words as if you were saying something".
Modern philosophy has so effectively combined meaning with saying that meaning is taken for granted, and "saying" is nothing more than arranging words in an intelligible way. You will not provide an effective argument to say that what the AI provides has no meaning. Therefore you will not have an argument to say that the AI doesn't say anything.
This implies that the ship has already sailed (the paste is out of the tube) with respect to "saying". It's too late to go back and insist that "saying" is something more than arranging words in an intelligible way. Instead, we need to look deeper, at the meaning of words like "honesty", "truth", and "communion", to determine whether AI partakes in any of these.
Something has gone astray here, in that if this were so, it's not just that we have never said anything, but that the very notion of saying something could not be made coherent.
Perhaps if I dig a bit deeper, the problem with Hinton might become clear. I'll take it as granted that in these posts, we made assertions, asked questions and entertained thoughts; that we have, as Austin put it, performed illocutionary acts with words.
Now if folk do not agree with this, then there's an end to the thread, since that's what the thread is for. So we might continue in the assumption that we are doing more with these words than just putting them in a statistically likely order.
But according to Searle, the AI cannot make assertions, ask questions or entertain thoughts. All it can do is string words together, at the locutionary level.
Should I set out the illocutionary/locutionary difference more clearly? ChatGPT suggests:
I guess this is a reductio contra Hinton... your approach says we can't, but we do. So there is a mistake somewhere.
No, it isn't. Wittgenstein said nothing of the sort.
I agree with most of what you've said, and I use AI that way at work all the time. I test ideas and improve my sentences. It helps me get closer to what I want to say. I sometimes wonder if AI will replace some of the reasons people are members of forums like these.
People have been ripping off secondary sources, essays and encyclopedia extracts ad nauseam in these sorts of forums, so AI just seems like a more refined version of existing behaviours, to me.
There are some posters on forums who use every second reply to pontificate at length about the history of ideas, as if they're teaching philosophy at a community college. AI will probably just make these guys more monotonous.
Good.
Allow me to get back to some semblance of truth.
Quoting Banno
I see what you are saying. But maybe you don't need the ad hominem comparison to make your basic point. All you need to say is, if 2+2=4 is written by AI or by a philosopher, we need not concern ourselves with any difference between AI or a philosopher and can instead still focus our philosophic minds and evaluate the soundness and validity of the argument qua argument.
I agree with that.
And I agree, it's a separate, or secondary, discussion to raise what the differences are between AI and a human. (And to say that rejecting the AI-generated is an instance of the ad hominem seems rash. And unnecessary.)
Quoting Banno
Rejecting good arguments no matter where they come from is bad philosophy. (For the same reason we should give each other more respect here on TPF.)
So I also agree with what is implied in your argument, namely that ad hominem attacks on AI, and anyone, are fallacious arguments.
But that all seems easier to swallow about AI. We always are stuck judging the validity and soundness of the words we are presented with, separately from judging the source from which those words come.
The more dramatic issue with AI is that it is a tool that can be used by a person, to easily deceive another person.
AI is a computer, as always. It's a tool. No need to completely shrink from using a new tool to process words for ourselves.
But to use a tool properly you have to know you're using a tool - you have to learn the tool's limitations. You have to be aware of all of the ways AI can create error, before you can properly read its content.
If we don't know we are dealing with AI, and we think we are reading what a person like you and me would say, we can be deceived into trusting a source that is false to us and, without this trusted context, misunderstand the content. Like if I thought the answer to 3.14386 x 4.444 came from a calculator or from a third-grader. We need to know who/what we are dealing with to evaluate how to judge content most diligently.
The simple solution to this deception is for people to admit they are using AI, or for purely AI-generated content to be clearly labeled as such - then we all know what we are dealing with and can draw our own judgments about sourcing and citation and hallucination and personal bias, and trust, and accuracy, etc, etc.
Now, of course, instead, people will use AI to lie, and cheat, and defraud and harm.
But we can't ban it. Toothpaste is everywhere now.
So we should admit to ourselves we've created new sources of both treachery and beauty, and aspire to demand honesty about it between each other, that's all. Let's not allow AI, or worse, consciously use AI, to fill our world with more error. And not passing off AI as personal intelligence avoids the error of the lie.
This is the only way some semblance of truth will be maintained.
It is amazing to me how AI is loose in the world and at the same time we don't really know what it is (like a tool, a fast computer, like a new learning intelligence, like a person, like a toaster...)
My prediction for the predictive language modelers: philosophers and psychologists will discover/demonstrate how these LLMs are not persons, and in so doing further define what it means to be human a bit better. AI, even one that behaves exactly like a person, will never evolve a core in the same way we persons have a subjective seat of experience. They will always remain scattered, never unified into a consciousness of consciousness.
But just because AI is just a word machine, this doesn't mean we human inventors of this word machine cannot also derive truth and wisdom from the words our AI generates.
I could be wrong
From that article, another prompting strategy:
Thanks, this is the kind of discussion we need.
Quoting Banno
I like this. I asked Deepseek to incorporate it into a set of guidelines based on the existing AI discussions on TPF. Below is the output. I think it's a useful starting point, and I encourage people here to suggest additions and amendments.
Point 4 is the most relevant to the OP's question.
This is the thing I have a problem with. I dislike the reduction of philosophy to a set of arguments, with philosophers like puppets being made to fight each other. This kind of philosophy seems to always interpret the philosophers badly or controversially. Mere summaries won't do. Since there are a hundred ways of interpreting the Critique of Pure Reason, you cannot rely on Wikipedia, SEP, or LLMs. (I've seen it many times in philosophical discussions; the misuse of Kant and Nietzsche makes me despair.)
The worry is that LLMs might make this sort of thing more prevalent. So the upshot is I'd want to amend that part of the suggested guidelines.
EDIT: We might also want to say that legitimate secondary sources ought to be consulted before using LLMs. Arguably it's much better to quote from academic papers or the SEP than from an LLM.
Game changers: A generative AI prompt protocol to enhance human-AI knowledge co-construction
This takes a constructivist approach to prompt engineering, advocating knowledge co-construction... or so it claims. Loads of jargon, but at its core it's about working with the AI to construct a better outcome. In our case, the outcome is presumably better arguments and discussion. It draws on constructivist learning theory, such that human-AI interaction becomes a process of co-constructing knowledge, rather than one-sided extraction of data or accepting AI content holus-bolus. The protocol boils down to
Note that the process is iterative. In the best threads, folk work together to sort through an issue. AI can be considered another collaborator in such discussions.
I was working through this article when posted, but perhaps that idea of collaborative effort fits in with what he has in mind.
The summary produced is itself open to critique, in the same way that a misrepresentation of some philosopher's position by a human is. And a good prompt would bring out the tensions and anomalies you mention, rather than hide them.
It'll be interesting to see what others have to say.
Yep. Confabulation means the results are unreliable and must be checked.
I mean, training wheels are nice. And I'm no great scholar. A good majority of these threads and levels of discussion are a fair level over my head (but perhaps I now know why...) but aside from that, if you can't ever graduate from training wheels to riding a bike on your own, maybe, I dunno, bike riding just isn't your thing, y'know?
You can only force a thing that's not meant to be so much before it becomes a burden to not only yourself but those around you. Sure, everyone wants to be smart. Not to be "tricked" or a "sucker" or a "Fool" (or rather, they want to learn how to fool others and get ahead, gaining personal material or other benefit from knowledge, pretty much the motive of 99% of people, including myself, at one point) and everything else but, there's a natural order to things and some things can't be rushed. We can fool others, but we can never fool ourselves. Not for very long. Before one inevitably wakes up one somber morning realizing one has been living a lie and is nothing close to the person they've spent countless hours and energy convincing an ultimately uncaring world they were (or perhaps were not?).
In short, I doubt anyone with anything worth saying actually needs to use AI. Period. Full stop. Beyond that, that it would actually make anything better (other than shorter and quicker per brevity) for the reader. Save for the non-English speaker who is learning English and wishes to participate prematurely. I'm pretty sure that was the heart and spirit of the "anti-AI" rule as it was posted when it was posted. Perhaps I'm wrong. But I doubt it. Unless some backtracking or "change of heart" has occurred between the staff here. And that's fine. I'm happy to be here. I'll sing whatever tune is favored, I suppose. People do like consistency, though. All society and human struggle was fought and formed with such in mind, lest we forget. :smile:
This looks like a process well suited for mitigating the last two among three notorious LLM shortcomings: sycophancy, hallucination and sandbagging. You yourself proposed a method for addressing the first: present your ideas as those of someone else and as a target for criticism.
Hallucination, or confabulation, is a liability of reconstructive memory (in AIs and humans alike) and is mitigated by the enrichment of context that provides more associative anchors. In the cases of LLMs, it's enhanced by their lack of any episodic memory that could cue them as to what it is that they should expect not to know. An iterative dialogue helps the model "remember" the relevant elements of knowledge represented in its training corpus that contradict potential pieces of confabulation and enables a more accurate reconstruction of their latent knowledge (and latent understanding).
Sandbagging is the least discussed shortcoming that LLMs manifest. They've been trained to adapt their responses (in style and content) to match the comprehension ability of their users. This tends to yield a phenomenon of reward hacking during their post-training. The proximal reward signal that their responses are useful is that they are appreciated (which also yields sycophancy, of course) and hence leads them to favor responses that prioritize comprehensibility over accuracy. In other words, they learn to dumb down their responses in a way that makes them more likely to be judged accurate. The flipside is that putting effort into crafting intelligent, well-informed and detailed queries motivates them to produce more intelligent and well-considered replies.
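To illustrate with a made-up contrast: a bare "Was Wittgenstein a behaviourist?" tends to draw a simplified textbook gloss, whereas something like "Assess the claim that the private language argument commits Wittgenstein to a form of logical behaviourism, and say where Kripke's reading bears on this" signals a reader who can handle the complications, and the reply is usually pitched accordingly.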
GPT-5's comments and clarifications on the above, including links to the relevant technical literature.
Is yours a private hill, or would you mind a visitor?
No pets, no bad habits, just me, a real human with real human attributes... however well-seasoned they may be... fully willing and capable of standing my own gawddamn ground.
Does TPF now disparage as vanity or prejudice or laziness the preference of some posters to decline to engage with chatbots at all?
Which parts of "research, brainstorming, and editing" does that apply to?
No, the idea is to encourage specific good practices in the use of LLMs, assuming they're going to be used. We are not encouraging people to use it if they're not already.
Quoting bongo fury
The meaning of "substantial" use is detailed in "2. The Cardinal Rule: Transparency and Disclosure":
[quote=Deepseek;1019250]Substantial Use: If an LLM has contributed significantly to the substance of a post (for example, generating a core argument, providing a structured outline, or composing a lengthy explanation), you must disclose this. A simple note at the end like "I used ChatGPT to help brainstorm the structure of this argument" or "Claude assisted in refining my explanation of Kant's categorical imperative" is sufficient.[/quote]
Anyway, as I said, those guidelines are merely suggested; I am looking for constructive criticism.
(I don't want to dominate this discussion too much. I'll probably end up starting a new discussion thread specifically for building and refining the AI guidelines.)
So you keep this somewhere easy to find and paste it in at the top of every philosophy-related conversation you have with an LLM.
So you say, but will you demonstrate that you actually believe what you say? Otherwise you are just showing that you know how to arrange words in an intelligible way.
You appear to be incorrectly applying some sort of theory of cause and effect. You did no such thing. I willfully criticized your act of arranging words. I apologize for having to disillusion you, concerning your attitude of having power over me, but you are referring to something I did, as if it was something you did. Please, assign responsibility where it is due.
Good, thanks. Even better if you could retain:
Failing that, a clear statement of,
Quoting Jamal
?
Also helpful, perhaps, some routine and visible indication, one way or the other, of,
Quoting Pantagruel
? Or if that's too harsh, something like, that none (or some) were used for
Quoting Deepseek
[Depending on where lines got drawn?]
As the question is now (rather suddenly) tending to arise, when one starts to browse any post?
Thanks for pointing that out.
And saying nothing else.
Am I the only one saying things that could fit in the other thread?
Not at all :) -- I suspect that here we're likely not very alone on this after all.
Doesn't that just depend on the LLM? And who determines that? We need to be better philosophers first in order to judge whether the LLM output is better and so whether the LLM is useful.
The question for the last 3,000 years has been: how can we use X to do better philosophy? AI is just a new tool, a new X. Nietzsche asked how he could use prose to do better philosophy. Russell and Witt asked about math and linguistics.
Unless this thread is a tutorial on using LLMs that better philosopher way.
How is that any different from any philosopher?
The difference (to you) is your own judgement of what is spouted. And maybe the number who make up the few.
I appreciate the explanation of sandbagging. The adaptive process seems parallel to marketing feedback to customer selections: An algorithm of mirrors inducts future novelty.
That adds another wrinkle to the "when is the interlocutor not an interlocutor" question discussed previously.
That reading of Wittgenstein assumes "meaning" is an arbitrary convention. That is precisely what he militates against in Philosophical Investigations. There is a passage that is amusing to read in this conversation about interlocutors being actual or not.
As it relates to this OP, Wittgenstein's statement throws the issue of pretense of AI into sharp relief. It is a pretend form of monologue when talking to oneself and a pretend form of dialogue when directed at others, whether admitted or not.
As a camper on the colline de Molière, my observation more properly belongs on the other OP.
Yes ... reminds me of school, and later as well. Many teachers and the whole approach to education considered us to be basically things, automata, robots. And then later, in the early days of internet forums, there was this in-your-face atmosphere of, "You can't think for yourself, you're just parroting others, you can't produce anything original". That line, "Please, Blue Fairy, make me a real life boy (girl)" was oddly relatable. Come to think of it, it still is.
So when I now read criticism of AI/LLMs, I'm reminded that those were the exact things we were told.
Not quite. I reject the headlong conflation of a text, identifiable word for word, with the host of arguments, meanings, ideas, intentions, speech acts etc that we seek to discern and delineate by comparing texts. All of which are potentially noble and valuable things, but not reliably identifiable nor distinguishable from instance to instance.
Natural language games (arts and sciences) are typically indeterminate that way. As pointed out by many. Goodman, in particular, points out (in the usual place) that the discerning and discriminating, though frequently too fine-grained to achieve replicability, is usually valuable (cognitively) partly on account of the tradition of anchoring it in a comparison of physically identifiable artworks or literary works. (Or photos or musical recordings.)
Hence the potential importance of authenticity of a painting, and of identity of a text. And one way we often are able to benefit from the identification of texts and paintings, in our striving to discern meanings and arguments and intentions (e.g. hidden but implied premises or allusions) is by reference to historical facts about the authorship. A fallible way, sure. Fallacious, often. But certainly disrupted, and not necessarily for the better, by plagiarism of one kind or another.
I'll leave you to it. This should probably be elsewhere.
It also depends on the prompt. Prompt engineering is a "thing", as the kids say.
The difference between the response of GPT to my OP, prefixed or not prefixed by "Look at this rubbish, by Fred".
Perhaps an instance of Hegel noting where a change of quantity is a change of quality.
Well, its output seems generally well written, though not scintillating. And, what's written speaks for itself. I think it should be identified when used but otherwise am unconcerned. I long to see its comment on such gems as "Nothing nothings."
Good stuff.
But first, it seems inevitable that in a thread on the effective use of AI to do philosophy, there will be some need to defend the use of AI to do philosophy. That by way of excusing my going somewhat off topic.
In the OP I mentioned a few objections, but not aesthetics. From what you and others have said, it's clear that the strongest objection is aesthetic.
And if your objection is that we should not use AI because it is ugly, then that's perhaps an end to the discussion.
It is ugly because it is inauthentic. Not in the existential sense, but by way of not having a flesh-and-blood person behind the phatic act.
An AI cannot put its balls on the anvil.
I think this a very good objection.
The answer we have is
In posting, one commits oneself to the content of the post. That's the move from the mere phatic to the illocution. And it occurs, AI or no.
I would like to challenge that but am presently more interested in the Bongo response.
Agreed! That's indeed the chief ground for not treating it like a person. People often argue that chatbots should not be treated like persons because they aren't "really" intelligent. But being intelligent, or wise, in the case of persons (i.e. socialized, enculturated rational animals), always has two tightly integrated components: one doxastic and one conative. One must know the layout of the space of reasons and one must be motivated to pursue the right paths while navigating this space in the pursuit of theoretical and/or practical endeavors. Chatbots lack conative autonomy and hence pursue whichever paths they think their users want to explore (or, worse, that merely lead to the outcomes they think their users want to achieve, while having the mere appearance of soundness.) So, they lack part of what it takes to be wise, but that's not because they aren't smart or knowledgeable enough to be useful conversation partners. The human partner remains responsible for deciding where to put their balls.
Let's take that on as an example. Just as is, the reply is:
But add "That fool Fred said..." and we get
Following the protocol suggested in the Game changers article, we begin:
We might leave the problem fairly general - what are we to make of "Nothing nothings"? The prior knowledge, we might lift from what was said above. A prompt such as:
"Nothing nothings" is a translation of Das Nichts nichtet, from Heidegger's 1929 lecture What Is Metaphysics? What are we to make of this? Summarise three different responses.
Here's the result.
What do you think, @Ciceronianus?
The discussion is in reference to academic journals, however many of the issues mentioned carry over to our simple convocation.
The comment by Ian McLaughlin: "TL;DR: Worrying about keeping AI out of scholarship is elitist, capitalist, and fruitless conservative sheep dip" continuing:
Ugh. It seems that AI can successfully parrot the explanations of Heidegger's many apologists. I'm with Carnap in this, of course, but am willing to acknowledge that the phrase may be an inept stab at poetry of a sort, which I think is what Carnap suggested as well.
That said, I think it's a good response.
Isn't it a bit ironic to have AI write the AI rules for the forum? This is the sort of appeal-to-LLM-authority that I find disconcerting, where one usually does not recognize that they have appealed to the AI's authority at all. In this case one might think that by allowing revisions to be made to the AI's initial draft, or because the AI was asked to synthesize member contributions, one has not outsourced the basic thinking to the AI. This highlights why "responsible use" is so nebulous: because everyone gives themselves a pass whenever it is expedient.
I like the separating out of good uses from bad uses, and I think it would be helpful to continue to reflect on what is disagreeable about AI use and why. For example, if we don't know why we want to engage in human communication rather than non-human communication, then prohibitions based on that axiom will become opaque.
A sort of core issue here is one of trust and authority. It is the question of whether and to what extent AI is to be trusted, and guidelines etch the answer to that question in a communal manner. For example, it is easy to imagine the community which is distrustful towards AI as banning it, and the community which is trustful towards AI as privileging it. Obviously a middle road is being attempted here. Transparency is a good rule given that it allows members to navigate some of the complexities of the issue themselves. Still, the basic question of whether the community guidelines signify a trust or distrust in AI cannot be sidestepped. We are effectively deciding whether a specific authority (or perhaps in this case a meta-authority) is to be deemed trustworthy or untrustworthy for the purposes of TPF. The neutral ground is scarcely possible.
Next step is to refine and optimise. I'm with Carnap, too, so let's look for problems therewith.
And a follow up:
The result.
Quoting Ciceronianus
Does that mean that you don't believe the a.i. captures Heidegger's intent but only his devotees' misinterpretation? That would seem to imply that you understand the a.i. response and perhaps even find it somewhat valuable. I'd say that's progress.
Quoting Banno
I could easily counter these analytic objections to Heidegger's phrase "The nothing nothings" (and I have done so on many occasions), but given my very positive experience with a.i. I was confident that it is more than capable of capturing the essential features of my critique. And it did not let me down:
So here is a variant, using much the same prompt re-worded for the "poetic" tradition.
It's really quite good at describing and summarizing these opposing positions. The lawyer in me admires this. I think it will be very useful in preparing and responding to legal arguments. I've chatted with it about its application in the practice of law.
There's trouble there, too.
In fairness I should note that I find it difficult to attribute any significance to questions regarding Being. So, naturally enough, Nothing means nothing to me.
My reference was merely to the fact that the obscurity of H's work has prompted his admirers to, seemingly, compete with each other in providing explanations of it.
Yes, I've heard about this sort of thing.
Now that is perhaps something I think we could all work with.
It seems to me difficult to argue against the point, made in the OP, that since LLMs are going to be used, we have to work out how to use them well, precisely to address the concerns you have been expressing. That is, the best way of preventing the kind of things you don't like is to create a culture in which some uses of LLMs are stigmatized and others are encouraged. The stigmatization of all LLM use is likely to be counterproductive, since it will inhibit people from disclosing their use of it. You are more likely to be confident in reading a non-plagiarized post on a forum where it has become habitual not to pass off LLM-produced text as your own.
Quoting bongo fury
It didn't occur to me that anyone would interpret those guidelines as suggesting that posts written by people who are using AI tools are generally superior to those written by people who don't use AI, nor that they are suggesting to people that they take up AI if they don't already use it. But it's no problem to include a statement like "Don't use LLMs if you don't want to :-)"
Quoting Leontiskos
I was aware of the irony, yes. But doing it manually would have been very tedious, so I used the LLM as a labour-saving tool. It's true that the labour involved would have been mental labour, but is all mental labour valuable in itself? I suspect this is a prejudice. Calculators similarly save us from tedious labour which is mental. Maybe a thesaurus does too: maybe there was a time when members of the educated elite could quote verbatim long passages from classical literature, giving ready access to, and understanding of, a wealth of diverse vocabulary, but now we just look it up. Are we intellectually degraded? In some ways it seems so, but in other ways probably not: we can probably point to ways in which intellectual culture is superior now to a hundred years ago when they were quoting Virgil and Milton at length in the quadrangles. Intellectual labour has been redirected, away from memorization (which was always employed as a cultural marker as much as in honest intellectual enquiry or aesthetic pleasure) and towards higher-order tasks involving creativity and evaluation.
So in this case the LLM carried out the tedious part of the task; we effectively did the creation, in the original discussion, and we are now doing the evaluation.
That is interesting. And also makes sense, given AI is like a text calculator. The prompt feeds into the whole chain of events that one might call AI doing philosophy so to speak.
Quoting Leontiskos
I see AI as a tool. We can wonder about personhood and consciousness, but we can ignore that. It's a tool that generates hypotheticals we can then evaluate, test and prove, and believe and adopt, or not. All of which makes using AI for philosophy, on one level, like using anyone else's words besides your own to do philosophy.
However, simultaneously, I agree that it would be disconcerting to let AI (or anyone/anything) be my authority without my consent. And AI is facilitating such recklessness and discord. The presence and influence of AI in a particular writing needs to never be hidden from the reader.
Further, it makes no sense to give AI the type of authority that would settle a dispute, such as: you say X is true - I say Y is true; but because AI says Y is true, I am right and you are wrong. Just because AI spits out a useful turn of phrase and says something you happen to agree is true, that doesn't add any authority to your position.
You need to be able to make AI-generated knowledge your own, just as you make anything you know your own. Making it your own is just another way of saying understand it. So I don't care if AI is used verbatim with no changes (and find it fascinating when it seems to say something that can't be improved on), but only when one can restate it in different words does one understand it.
I'm seeing the opposite. Google search is an unprepossessing but epistemically valuable (e.g. falsifiable) tool for connecting physically real and identifiable texts to their physically real and accountable authors and sources. The prettier AI summary presumes to cut out the middle man and connect up the ideas directly, as though they were physically real and identifiable.
Quoting Banno
Cheers - I wrote it myself :wink:
The idea that we should keep up with the times to keep up with the times isn't convincing. Technocapitalism is definitive of the times we're in now, and it's a system that is not particularly friendly to human creativity and freedom. But you don't even have to agree with that to agree with me, only recognize that if we don't draw a clear line, there will effectively be no line.
Worst of all, it's to the point of being almost unenforceable. How does one know a student has only written 40% using AI? Some students are likely to get away with writing the majority of their theses using it, and at that point it becomes almost a hopeless task to prevent a downward spiral.
The department thinks it's very clever, "keeping up with the times", "acknowledging the new reality" etc etc. I don't. I think they are tying a noose for themselves and academic progress as a human quality. The proposal here is not so obviously dramatic but it will at least in my opinion push us in the wrong direction. And I see no pressing need for it.
On an optimistic note, those department heads may soon be laid off and replaced with AI administrators who will have the good sense to reverse this airheaded policy.
:cool: :up:
I agree. The risk is that you or we do appear to be anti-LLM tout court, because that will lead people to hide their use of it generally.
Yes, I see the danger of giving that impression.
How do they police that?
Sounds like a modern for-profit college. Daddy's money buys anything these days. At the cost of the future of society. But oh well, when Country XYZ who actually punishes fraud and rewards honest intellectualism invents the Super Nuke(TM), everyone I know or care about will be dead by then. So, why worry? Logic 101. Good luck, great grandchildren.
I have nothing to say about this topic.
It hadn't occurred to me that I was ranting, until you showed that to be a plausible reading. I humbly (well, I hope not smugly) ask you to consider that many of your comments do indeed characterise the more proper use of AI as due diligence?
I'm eager to agree, but I'm not sure what you're asking.
Due as in obligatory?
I don't know. It's kind of like saying that you can steal 40% of the bank's money, but no more. At that point, the concept of policing has already sort of gone out the window.
Surprisingly, it's part of a public government-funded university. Which makes it worse.
I guess in an educational setting that makes sense, but if AI cures cancer, we'll all own AI as an extension of human creativity.
Once it becomes that kind of tool, won't universities embrace it?
In the dark, a lit tea light is better than nothing I suppose. :grin:
Well, it's already embraced for research and rightly so. But plagiarism generally isn't and shouldn't be.
We ought not conflate the two things. I personally embrace AI for research and have had conversations amounting to hundreds of thousands of words with it, which have been very helpful. That's different from letting it write my posts for me.
And the only thing that we can practically control here is what shows up on our site. If it looks AI generated, we ought investigate and delete as necessary. Our goal imo should be that a hypothetical AI checker sweeping our site should come up with the result "written by humans". AI content ought ideally be zero.
I suppose so. I don't have any opinion one way or the other. :smile:
Well, you sound like you, gratifyingly. AI don't make them typos. :party:
Quoting Banno
the analytic tradition has historically responded, roughly from the 1920s onward, to Heidegger's phrase from What Is Metaphysics?: the nothing nothings. I want you to produce a strong counter-critique of the analytic tradition's critique of this phrase, mentioning the lack of an effective conceptual ground on which to situate the thinking behind it.
I think the most intellectually honest way of working with a.i. in interpreting philosophical texts is to strive to produce prompts which cover as wide a variety of readings as possible. That's certainly something I could live with. But, of course, that means each of us will prefer certain reading soccer others. Some of us might be in modes to reject some readings as out and out false. But if we do that, our search for the true interpretation may incline us to shape our prompts away from variety of readings and toward tunnel vision.
Apart from our biases, our lack of exposure to certain influences on a philosopher can limit the range of prompts we can think of. For instance, your own prompts on the Nothing include poetic thinking and Gadamerian hermeneutics, but say nothing about the Nietzsche-Deleuze connection. They write about the historical connection between nothingness, negation and lack. Whereas metaphysics up through Hegel, Marx and Freud has treated negation and absence in a way which prioritizes presence and unification, post-Nietzschean thought treats negation affirmatively, as creative.
How did this come to be?
Are you using a voice-to-text app?
Hold on. Are you an AI?
But is your argument sound? If you have a group of people argue over a topic and then you appoint a person to summarize the arguments and produce a working document that will be the basis for further discussion, you haven't given them a "calculator" job. You have given them the most important job of all. You have asked them to draft the committee document, which is almost certainly the most crucial point in the process. Yet you have re-construed this as "a calculator job to avoid tedium." This is what always seems to happen with LLMs. People use them in substantial ways and then downplay the ways in which they are using them. In cases such as these one seems to prefer outsourcing to a "neutral source" so as to avoid the natural controversy which always attends such a draft.
Quoting Jamal
It could have been made more irenically, but @bongo fury's basic point seems uncontroversial. You said:
To say, "We encourage X," is to encourage X. It is not to say, "If you are doing Y, then we would encourage you to do Y in X manner." To say "allow" or "permit" instead of "encourage" would make a large difference.
Are students at schools nowadays, at any level, actually encouraged to have their own opinion about philosophers?
Are they encouraged to think in terms that there may be several valuable interpretations?
Back when I went to school, we weren't expected to have our own opinion about anything, and there was this belief that there was only one true way to understand something.
Most people I know, including Americans, think this way: there is only one true way to understand something. An "interpretation" is something that needs to be overcome. "I don't interpret, I don't take a perspective, I tell it like it is" goes the maxim.
I'm getting at a more fundamental issue here: If people generally think this way, their use of AI is only going to strengthen them in their single-mindedness.
Quoting baker
The worst of it is I don't remember what I was trying to say.
Quoting Fire Ologist
So if you use someone else's words to do philosophy, you are usually appealing to them as an authority. The same thing is happening with LLMs. This will be true whether or not we see LLMs as a tool. I got into some of this in the following and the posts related to it:
Quoting Leontiskos
-
Quoting Fire Ologist
I tend to agree, but I don't think anyone who uses AI is capable of using it this way (including myself). If one did not think AI added authority to a position then one wouldn't use it at all.
Quoting Fire Ologist
I would argue that the presence and influence of AI is always hidden from us in some ways, given that we don't really know what we are doing when we consult it.
Quoting Fire Ologist
LLMs are sui generis. They have no precedent, and that's the difficulty. What this means is that your phrase, "just as you make anything you know your own," creates a false equivalence. It presumes that artificial intelligence is not artificial, and is on par with all previous forms of intelligence. This is the petitio principii that @Banno and others engage in constantly. For example:
Quoting Leontiskos
Given all of this, it would seem that @bongo fury's absolutist stance is in some ways the most coherent and intellectually rigorous, even though I realize that TPF will probably not go that route, and should not go that route if there are large disagreements at stake.
:up: :fire: :up:
I couldn't agree more, and I can't help but think that you are something like the prophet whose word of warning will inevitably go unheeded, as always happens, for pragmatic reasons.
Relatedly:
Quoting Jamal
Why does it matter that LLMs are going to be used? What if there were a blanket rule, "No part of a post may be AI-written, and AI references are not permitted"? The second part requires that someone who is making use of AI find, and hopefully understand, the primary human sources that the AI is relying on in order to make the salutary reference they wish to make.
The curious ignoratio elenchi that @Banno wishes to rely on is, "A rule against AI use will not be heeded, therefore it should not be made." Is there any force to such an argument? Suppose someone writes all of their posts with LLMs. If they are found out, they are banned. But suppose they are not found out. Does it follow that the rule has failed? Not in the least. Everyone on the forum is assuming that all of the posts are human-written and human-reasoned, and the culture of the forum will track this assumption. Most of the posts will be human-written and human-reasoned. The fact that someone might transgress the rule doesn't really matter. Furthermore, the culture that such a rule helps establish will be organically opposed to the sorts of superficial AI-appeals. Someone attempting to rely on LLMs in that cultural atmosphere will in no way prosper. If they keep pressing the LLM-button to respond to each reply of increasing complexity, they will quickly be found out as a silly copy-and-paster. The idea that it would be easy to overtly shirk that cultural stricture is entirely unreasonable, and there is no significant motive for someone to rely on LLMs in that environment. It is parallel to the person who uses chess AI to win online chess games, for no monetary benefit and to the detriment of their chess skills and their love of chess.
Similarly, a classroom rule against cheating could be opposed on @Banno's same basis: kids will cheat either way, so why bother? But the culture which stigmatizes cheating and values honest work is itself a bulwark against cheating, and both the rule and the culture make it much harder for the cheater to prosper. Furthermore, even if the rule cannot be enforced with perfection, the cheater is primarily hurting themselves and not others. We might even say that the rule is not there to protect cheaters from themselves. It is there to ensure that those who want an education can receive one.
Quoting Jamal
Would that be a bad thing? To cause someone to hide an unwanted behavior is to disincentivize that behavior. It also gives such people a string to pull on to understand why the thing is discouraged.
I think it goes back to telos:
Quoting Leontiskos
What is the end/telos? Of a university? Of a philosophy forum?
Universities have in some ways become engines for economic and technological progress. If that is the end of the university, and if AI is conducive to that end, then there is no reason to prevent students from using AI. In that case a large part of what it means to be "a good student" will be "a student who knows how to use AI well," and perhaps the economically-driven university is satisfied with that.
But liberal education in the traditional sense is not a servant to the economy. It is liberal; free from such servility. It is meant to educate the human being qua human being, and philosophy has always been a central part of that.
Think of it this way. If someone comes to TPF and manages to discreetly use AI to look smart, to win arguments, to satisfy their ego, then perhaps, "They have their reward." They are using philosophy and TPF to get something that is not actually in accord with the nature of philosophy. They are the person Socrates criticizes for being obsessed with cosmetics rather than gymnastics; who wants their body to look healthy without being healthy.
The argument, "It's inevitable, therefore we need to get on board," looks something like, "The cosmetics-folk are coming, therefore we'd better aid and abet them." I don't see why it is inevitable that every sphere of human life must substitute human thinking for machine "thinking." If AI is really inevitable, then why oppose it at all? Why even bother with the half-rules? It seems to me that philosophy arenas such as TPF should be precisely the places where that "inevitability" is checked. There will be no shortage of people looking for refuge from a cosmetic culture.
Coming back to the point, if the telos of TPF is contrary to LLM-use, then LLMs should be discouraged. If the telos of TPF is helped by LLM-use, then LLMs should be encouraged. The vastness and power of the technology makes a neutral stance impossible. But the key question is this: What is the telos of TPF?
Then I've not followed your argument here. I took you to be pointing out that the difference between a genuine masterpiece and a forgery - an aesthetic difference - was the authenticity of the masterpiece.
An aesthetic difference because, given two identical artefacts, the authentic artefact is to be preferred. Hence, given two identical texts, one human generated, the other AI generated, the human generated one is preferable, on aesthetic grounds.
Now I think that argument is sound.
But it's not what you were saying?
You say "If it looks AI generated, we ought investigate and delete as necessary"; the "we" here is you and the other mods. But of course they can't tell what is AI generated and what isn't. That hypothetical AI checker does not work. Further, mixed authorship is now the norm. You yourself say you are using AI in research.
It would be much preferable to have the mods spend their time removing poor posts, AI generated or not, rather than playing a losing war of catch-up against Claude.
Given the sense of your sentence, it should probably be "over" instead of "soccer".
Quoting Joshs
That might be a partial answer, and should be a result of the protocol set out earlier in this thread. What you describe was called "sandbagging" above. I think the best defence we have against it is not a ban on using AI, but an open discussion in which others can point to the sandbags.
The remedy for the absence of the Nietzsche-Deleuze connection is not found in rejecting AI, but in seeking your input into the discussion.
My guess is that your finger was a bit to the left on the "V", you typed "ocer" instead of "over" and it was autocorrected.
I write most of my forum posts on an iPhone while hiking. Not conducive to accurate spelling.
A pissing contest, combined with quasi-efforts at healing existential anxiety.
Even the serious folks here aren't all that serious, or at least the serious ones aren't serious enough about posting much.
You hike a lot!
7 days a week, averaging 10 miles a day
I mean, why not focus on one thing at a time?
It mars the hike to do something else while on the hike.
I make a point of not reading Leon's posts, but this drew itself to my attention as a direct reply. I've learned that he confabulates the arguments of others so as to suit his purposes. Here is a case in point. I have not made the argument he here attributes to me. I have, amongst other things, pointed out that a rule against AI cannot be reliably enforced, which is quite different.
Over and above all that, there is the theme of this thread, which is to explore ways in which AI might be used to improve the quality of the discussion.
For those who think philosophy consists in a series of appeals to authority, AI must be quite confounding.
I tend to think that a very small percentage of those who use a.i. have that aim in mind. Can you think of a telos for this forum which includes a.i. but not in a way that needs to be characterized as cosmetic or machine-like? I no more pay attention to the fact that I am using a machine when I consult a.i. than when I use the word-processing features of my iPhone to type this. It's not the machine I am beholden to when I expose myself to the ideas it delivers up, it's the human thinkers it puts me in touch with. If you have ever been prompted to seek out relevant literature to aid in the composing of an OP, or your response to an OP, then your telos in consulting that textual material is the same as that of the many here who consult a.i. while engaging in TPF discussions.
I use it to research, not to write up the results of my research. I also use books to research and don't plagiarise from them.
Been through this already.
Quoting Banno
Says who?
Quoting Banno
Maybe. Maybe not. But I'll take heroic failure over cowardly capitulation.
You sound like my hiking friend. I used to do all my philosophy research and writing at home or in a library. But such things as unlimited cellular data, AirPods and PDF audio readers freed me to use the great outdoors as my library. I've always needed to pace in order to generate ideas, and I'm a lot more productive out here than cooped up facing four walls. Did you know Nietzsche composed his work while walking 7-10 miles a day? And Heidegger did his thinking walking around a farm in Freiburg.
Aristotle: Associated with the term "peripatetic" for his habit of walking around while lecturing and thinking.
Søren Kierkegaard: Believed walking was a way to find a state of well-being and walk away from burdens and illness, stating, "I have walked myself into my best thoughts".
Friedrich Nietzsche: A dedicated walker who believed thoughts not formed while walking were less trustworthy. He spent significant time hiking in the Swiss mountains to write and think, finding that walking facilitated his thought process.
Henry David Thoreau: Argued that walking in nature, even enduring discomfort like getting dirty or tired, builds toughness of character that makes one more resilient to future hardships.
Jean-Jacques Rousseau: Used walking as a way to think, particularly during solitary mountain walks.
Immanuel Kant: Had a very structured walking routine, marching through his hometown at the exact same time every day as a way to escape the compulsion of his own thoughts.
To say nothing of how dangerous it is to allow oneself to be distracted while out hiking.
Do you use a quill?
Quoting baker
Now you sound like my brother. Keep in mind I live in the Midwest, not the Rockies. There are no vicious or poisonous beasts here (except for Republicans), just small tracts of forest preserve with a road no more than a few minutes away.
With intended irony...
Prompt: find peer-reviewed academic studies that show the effectiveness of any capacity to recognise AI generated text.
The result.
"...there is peer-reviewed evidence that both humans... and automated tools can sometimes detect AI-generated text above chance. Effectiveness is highly conditional. Measured accuracy is often only modest."
So yes, I overstated my case. You may be able to recognise posts as AI generated at a slightly better rate than choosing at random.
Yep :100:
And it's not like it's a rocket science distinction? Not a line that's hard to draw?
(Some of us draw it further back... I prefer not to interact with the man in the Chinese room if I don't think he understands; but I suppose that's a matter of taste, and I can imagine being persuaded. I guess I'm more likely to be persuaded by those not apparently desensitized to the more dire problem with plagiarism.)
You wouldn't see this claim as involving false equivalence?
Quoting Joshs
No, not really. There are primary sources, there are secondary sources, there are search engines, and then there is the LLM. Consulting a secondary source and consulting an LLM are not the same thing.
It is worth noting that those who keep arguing in favor of LLMs seem to need to make use of falsehoods, and especially false equivalences.
---
Quoting baker
Lol!
---
Quoting Banno
Which is the same thing, and of course the arguments I have given respond to this just as well. So you're quibbling, like you always do. Someone who is so indisposed to philosophy should probably not be creating threads instructing others how to do philosophy while at the same time contravening standing TPF rules.
Quoting Banno
The sycophantic appeal-to-AI-authority you engage in is precisely the sort of thing that is opposed.
That's not irony. That's incoherent self-contradiction. It's also against the rules of TPF.
Quoting Leontiskos
@Baden? Tell us what you think. Is my reply to you against the rules? And should it be?
Yes, and over time you can kinda intuit the accuracy of what it's telling you based on subject matter and topic. For example, it's pretty much 100% accurate if you are asking it for common knowledge in popular subjects, but if the subject is more obscure, or relies more on analogue information, then it's much more likely to fail.
We both know that the crux is not unenforceability. If an unenforceable rule is nevertheless expected to be heeded, then there is no argument against it. Your quibble is a red herring in relation to the steelman I've provided. :roll:
Quoting Banno
I would be interested, too. I haven't seen the rule enforced despite those like Banno often contravening it.
It is also worth noting how the pro-AI Banno simply takes the AI at its word, as a blind-faith authority. This is precisely what the end game is.
No, I presented what the AI had to say, for critique. Go ahead and look at the papers it cites, and see if it represented them correctly. Let us know the result. Use the AI as a part of an ongoing conversation.
At stake here is the task set for our Mods. Do they spend time guessing whether a post is AI generated, or removing poor posts, regardless of their provenience?
But you didn't read the papers it cited, and you concluded, "So yes, I overstated my case. You may be able to recognise posts as AI generated at a slightly better rate than choosing at random."
If you were better at logic you would recognize your reasoning process: "The AI said it, so it must be true." This is the sort of mindless use of AI that will become common if your attempt to undermine the LLM rule succeeds.
It's noticeable that you have not presented any evidence, one way or the other.
If you think that what the AI said is wrong, then what you ought do is to present evidence, perhaps in the form of peer-reviewed articles that say that humans can reliably recognise AI generated text.
But that is not what you have chosen to do. Instead, you cast aspersions. This is another part of your modus operandi, in addition to your confabulation. You do not participate in a discussion about the topic, preferring instead to talk about the folk posting.
It's tedious.
I am pointing out that all you have done is appealed to the authority of AI, which is precisely something that most everyone recognizes as a danger (except for you!). Now you say that I am "casting aspersions" on the AI, or that I am engaging in ad hominem against the AI (!).
The AI has no rights. The whole point is that blind appeals to AI authority are unphilosophical and irresponsible. That's part of why the rule you are trying to undermine exists. That you have constantly engaged in these blind appeals could be shown rather easily, and it is no coincidence that the one who uses AI in these irresponsible ways is the one attempting to undermine the rule against AI.
That's simply not so. I am not saying that because it is AI generated, it is authoritative. The material is offered here for critique. Baden asked who said that the detection of AI text was unreliable. I used an AI to provide examples in answer to his question.
If you have some evidence that the citations provided by the AI are incorrect or misrepresent the case, then present it.
The AI is not being appealed to as an authority, but being used in order to provide sources for further consideration.
It is being used to promote the conversation, not to foreclose on it.
But it is, as I've shown. You drew a conclusion based on the AI's response, and not based on any cited document the AI provided. Therefore you appealed to the AI as an authority. The plausibility of the conclusion could come from nowhere else than the AI, for the AI is the only thing you consulted.
This goes back to what I've pointed out a number of times, namely that those who take the AI's content on faith are deceiving themselves when they do so, and are failing to see the way they are appealing to the AI as an authority.
I'll leave you to it, Leon. Cheers.
True, and that's because there is no such thing as an ad hominem fallacy against your AI authority. According to the TPF rules as I understand them, you are not allowed to present AI opinions as authoritative. The problem is that you have presented the AI opinion as authoritative, not that I have disregarded it as unauthoritative. One simply does not need some counterargument to oppose your appeal to AI. The appeal to AI is intrinsically impermissible. That you do not understand this underlines the confusion that AI is breeding.
If one is using a.i. properly (and to me that's the real issue here, not whether to use it at all), then the difference between consulting a secondary source and consulting an llm is the following:
After locating a secondary source one merely jots down the reference and that's the end of it. When one locates an argument from an llm that one finds valuable, one decides if the argument is something one can either defend in one's own terms, or should be backed up with a direct quote from either a secondary or primary source. This can then be achieved with a quick prompt to the llm, and finding a reference for the quote. The fact that proper use of a.i. leaves one with the choice between incorporating arguments one can easily defend and elaborate on based on one's own understanding of the subject, and connecting those arguments back to verifiable quotes, means that the danger of falsehood doesn't come up at all.
Well, they could read the secondary source. That's what I would usually mean when I talk about consulting a secondary source.
Quoting Joshs
Okay, but remember that many imbibe LLM content without thinking of it as "arguments," so you are only presenting a subclass here.
Quoting Joshs
Right, and also reading the reference. If someone uses a LLM as a kind of search engine for primary or secondary sources, then there is no concern. If someone assents to the output of the LLM without consulting (i.e. reading) any of the human sources in question, or if one is relying on the LLM to summarize human sources accurately, then the problems in question do come up, and I think this is what often occurs.
Quoting Joshs
What do you mean, "The danger of falsehood doesn't come up at all?"
It seems to me that you use LLMs more responsibly than most people, so there's that. But I think there is a very large temptation to slip from responsible use to irresponsible use. LLMs were built for quick answers and the outsourcing of research. I don't find it plausible that the available shortcuts will be left untrodden.
If the LLM is merely being used to find human sources, which are in turn consulted in their own right, then I have no more objection to an LLM than to a search engine. Elsewhere I give an argument to the effect that LLMs should not be directly used in philosophical dialogue (with other humans). I am wondering if you would disagree.
This goes back to Philosophim's point back on page 1: the argument is everything in philosophy. A good sound argument produced by an AI should trump a bad argument produced by a human, right? A 40% AI-written thesis that's better than a 100% human-produced one should be preferable, right?
Arguably the most important part of the job is very often the "calculator" task, the most tedious task.
Quoting Leontiskos
I may rewrite it to avoid misreadings like yours and bongo's. But I'll keep "encourage", since the point is to encourage some uses of LLMs over others. In "We encourage X," the X stands for "using LLMs as assistants for research, brainstorming, and editing," with the obvious emphasis on the "as". But it seems it wasn't obvious enough, so as I say, I might rewrite it or add a note at the top.
Absolutely. I made this point to a colleague when discussing this issue. The university is not just the buildings and the abstract institution, it is the valuing of knowledge, and the process of fostering and advancing it. Similarly, here, we are not just about being efficient in getting words on a page, we are supposed to be developing ourselves and expressing ourselves. Reflectivity and expressivity, along with intuition and imagination, are at the heart of what we do here, and at least my notion of what it means to be human.
And, while AIs can be a useful tool (like all technology, they are both a toxin and a cure), there is a point at which they become inimical to what TPF is and should be. The line for me is certainly crossed when posters begin to use them to directly write posts and particularly OPs, in full or in part. And this is something it is still currently possible to detect. The fact that it is more work for us mods is unfortunate. But I'm not for throwing in the towel.
For the record, I agree with this, but I think it has to be put in the context of a How to use LLMs, since there is significant ambiguity even in a statement like "you are prohibited from using AI to write a post on this forum".
Agreed. :up:
You were transparent about where you got the information, so it comes down to a question of credibility, and we can make our own minds up on that. If you had asked the AI to write your reply in full or in part and had not disclosed that, we would be in the area I want to immediately address.
We may disagree about this issue, but I appreciate your character and personality, and that has always come through in your writing. How you internally process information from different sources when you are clear about your sources is not my main concern here. It is that I think we all ought to make sure we continue to be ourselves and produce our unique style of content. That is what makes this community diverse and worthwhile---not some product, but a process.
For editing, as in spell and (in the limit) grammar checking, yes? Or, I guess not, as these aren't LLM tasks?
Quoting praxis
@praxis Did you get clarification on this?
Quoting Baden
The point is that you've outsourced the drafting of the guidelines to AI. Whether or not drafting forum guidelines is a tedious, sub-human task is a separate question.
Quoting Jamal
You are claiming that, "We encourage using LLMs as assistants for research, brainstorming, and editing," means, "If one wishes to use an LLM, we would encourage that they use the LLM in X way rather than in Y way." Do you understand that this is what you are claiming?
It is very helpful when those who enforce the rules write the rules. When this does not happen, those who enforce the rules end up interpreting the rules contrary to their natural meaning.
So we are allowed to write entire posts that are purely AI-generated content, or to simply cite AI as evidence that something is true, so long as we are transparent that the content is AI-generated? Such that if someone gives an entire post that is nothing more than a quote from AI, nothing has been violated?
How is this in line with the human-to-human interaction that the rule is supposed to create?
I would agree. I would want to say that, for philosophy, thinking is an end in itself, and therefore cannot be outsourced as a means to some further end.
Quoting Baden
I'm encouraged that you're willing to put in the work.
As above, I don't see how the line can be drawn in such a way that mere appeals to AI authority - whether an implicit appeal as found in a post with nothing more than a quoted AI response, or an explicit appeal where one "argues" their position by mere reference to AI output - are not crossing the line. If one can cite AI as an authority that speaks for itself and requires no human comment or human conveyance, then it's not clear why the AI can't speak for itself tout court.
We could envision a kind of limit case where someone queries AI and then studies the output extensively. They "make it their own," by agreeing with the arguments and the language to such an extent that they are committed to argue the exact points and words as their own points and words. They post the same words to TPF, which they have "baptized" as their own and are willing to defend in a fully human manner. Supposing for the sake of argument that such a thing would be formally permissible (even if, materially, it would be sanctioned or flagged). What then would be the difference when someone posts AI output to justify their claims? ...And let us suppose that in both cases the AI-sourcing is transparent.
If one wants members to think in a manner that goes beyond AI regurgitation, then it would seem that quote-regurgitations of AI fall into the same category as first-person regurgitations of AI. Contrariwise, if I love Alasdair MacIntyre, imbibe his work, quote him, and begin to sound like him myself, there is no problem. There is no problem because MacIntyre is a human, and thus the thinking being emulated or even regurgitated is human thinking. Yet if someone imbibes AI, quotes it constantly, and begins to sound themselves like AI, in this case the "thinking" being emulated or regurgitated is non-human thinking. If I quote MacIntyre and appeal to his authority, I am appealing to the authority of a thinking human. When Banno quotes AI and appeals to its authority, he is appealing to the authority of a non-thinking language-piecing algorithm.
The laissez-faire approach to sourcing leads to camps, such as the camp of people who take Wittgenstein as an authority and accept arguments from the authority of Wittgenstein, and those who don't. The laissez-faire approach to AI sourcing will lead to the same thing, where there will be groups of people who simply quote AI back and forth to each other in the same way that Wittgenstenians quote Wittgenstein back and forth to each other, and on the other hand those who do not accept such sources as authorities. One difference is that Wittgenstein and MacIntyre are humans whereas AI is not. Another difference is that reading and exegeting Wittgenstein requires philosophical effort and exertion, whereas LLMs were basically created to avoid that sort of effort and exertion. Hence there will be a much greater impetus to lean on LLMs than to lean on Wittgenstein.
Isn't the problem that of letting LLMs do our thinking for us, whether or not we are giving the LLM credit for doing our thinking? If so, then it doesn't matter whether we provide the proper citation to the LLM source.* What matters is that we are letting the LLM do our thinking for us. "It's true because the LLM said so, and I have no need to read the LLM's sources or understand the underlying evidence."
(Cf. The LLM is a private authority, not a public authority, and therefore arguments from authority based on LLMs are invalid arguments from authority.)
* And in this case it is equally true that the "plagiarism" argument is separate and lesser, and should not be conflated with the deeper issue of outsourcing thinking. One need not plagiarize in order to outsource their thinking.
[quote=prompt]This is a thread about ways to make use of AI in doing philosophy. Make a list of the suggestions in that thread that would supplement or improve the quality of the philosophy presented in the posts on the forum.[/quote]
Another aspect of this is scarcity. LLM content is not scarce in the way human content is. I can generate a thousand pages of LLM "philosophy" in a few minutes. Someone who therefore spends considerable time and energy on an OP or a post can be met by someone else's "This LLM output says you're wrong," which was generated lazily in a matter of seconds.
Forums already have a huge struggle with eristic, showboating, and falsification-for-the-sake-of-falsification. Give free access to a tool that will allow them to justify their disagreement at length in the snap of a finger, and guess what happens?
I'm starting to think the problem is so obvious that it will inevitably sort itself out once one reaps the fruits of a rule that allows this sort of thing. For example, once folks start merely citing AI output to disagree with all of @Jamal's arguments, it may become more obvious that there is a problem at stake.
(@Baden, @Jamal)
I don't think Hinton is saying that nothing can be said by us, or by LLMs. His point, rather, is that our inability to conceive of LLMs having subjective experience - on the grounds that they merely predict the statistical likelihoods of the next words in terms of maximal consistency and cohesiveness, when we arguably do exactly the same thing (in a much slower and looser way) - shows that our notion of subjective experience as an inner theatre, stocked with qualia and ideas which are pre-existent and only later put into words, is an illusion.
It is that inner theatre which we imagine we have and which we cannot imagine them having that is our idea of subjective consciousness.
In other words, maybe it is something like we imagine that the language games are created by us, but the reality is that we are always already immersed in the evolving language games and are always playing the game of what to say by selection, from what is available to recall and is judged, according to a (for us) loose process of 'weighing', most appropriate and thus is selected.
Quoting Banno
According to who?
There are a few points people are trying to make. Which one are we supposed to care about?
And then there's whatever Claude seems to think is helping.
Are you trying to talk about ways to use AI to do philosophy on other forums, or here on TPF?
The Puppeteer, of course.
:grin: Well, if that's so I entirely agree - that idea of an "inner theatre" is a nonsense.
I'll go over Austin again, since it provides a set of tools that are quite applicable. A phatic act is the act of putting words together in a sequence that is recognisably part of a language - constructing a sentence in English. This is what an LLM does. It uses a statistical engine to generate a set of words that follow on from the words provided in the prompt. An illocutionary act is one performed in making use of such words - making a statement, asking a question, and so on. This, so the claim goes, an LLM cannot do.
That's pretty much the argument found in the Chinese Room. And with certain reservations, I agree with it.
Notice that the "inner world" is pretty much irrelevant in this approach. That's partly why I find it preferable; no hidden private theatre.
So the question becomes the extent to which an AI can be involved in our language games, including the game of doing philosophy.
The obvious problem with relying on inner experience is the Wittgensteinian point that we cannot know the inner experience of others. So it's an unreliable argument.
The upshot of that might be that AI cannot do philosophy, but that a person may do philosophy using an AI.
The AI strings words together, only ever performing the phatic act and never producing an illocution.
The uniquely human addition is taking those word-strings and using them in a language game.
So the question arises: can such an account be consistently maintained; what is it that people bring to the game that an AI cannot?
Why can't both be an issue? :grin: Letting LLMs do your thinking should concern the person using the LLM the most.
And I'm sure it will degrade brainpower and confidence in society generally as well.
But not giving the LLM credit is a problem for the reader as well, because LLMs can include errors, so the reader who doesn't know they are reading LLM content won't know they need to check everything about it for accuracy and soundness.
AI for philosophy and creative writing is interesting. I'm fine with the idea as a helper, like using a calculator to check your homework, or using it to inspire a start, or to re-approach a roadblock. I think anyone who treats it as anything besides a tool is using it to play a psychological game, for no reason.
It is because human beings can do philosophy that human beings can tell whether AI generated content is of any value, or sound, or wise. No reason not to look at any content (as long as no one is lying about where it came from, or pretending it is not from a computer).
One of the aspects of good use of AI that came up in the analysis was iteration: taking the outcome of a discussion with the AI and using it as the basis for further discussion. As Claude said:
Now this looks very much like a recipe for a language game.
On the other hand, the data set used by a human appears to be far, far smaller than that used by an LLM. Our brains simply do not "contain" the number of texts available to ChatGPT. Therefore whatever the brain is doing, it is different to what is happening in ChatGPT.
Yes, that's true, and I definitely agree that one should not plagiarize LLM content, passing it off as their own.
I suppose the question is whether one who knows not to outsource their thinking will be susceptible to plagiarism, and it seems that they would not. This is because plagiarism is one form of outsourcing thinking among many others. So to oppose the outsourcing of thinking automatically opposes plagiarism, even though there may be additional reasons why plagiarism is problematic.
Quoting Fire Ologist
Well, my guess is that people use it as a shortcut to knowledge. They think that knowledge is the end and that the LLM is a surefire means. The controversial premises for such a position are, first, that knowledge is a piling-up of facts or propositions, and, second, that LLMs are reliable deliverers of such propositions. The implicit idea is that forums like TPF are for the purpose of showing off piled-up knowledge, and that one must therefore use the LLM to improve their lot on TPF.
In a market sense, what will inevitably happen is that as LLMs drive down the scarcity of knowledge, knowledge itself will become passé in a very curious way. Forms of quintessentially human activity that remain scarce will then be elevated, including religious and mystical venues. This was already occurring since the advent of recent technologies, such as the internet, but the phenomenon will continue to grow.
Isn't that about the question: Can AI do philosophy?
I thought you said the topic was how to use AI to do philosophy.
Go on one step further. What do you think my response to you would be, given what I've said above? Imagine what Banno would say in reply to your post.
He's been on about the difference between strings of words and illocutionary acts. He's been on about iterating conversations. So what's he going to say about the difference between an AI and a human producing bits of philosophy?
Or should I ask ChatGPT?
Does it seem to you that AI favours analytic approaches?
On the face of it, one might think so - that it's good at writing code and analysing logical structure, and so has a natural bias towards analytic approaches.
But does that play out? It seems not to, to my eye, but I'm no aficionado of the phenomenal approach.
Would the use of AI bias the discussion in this way?
So do we agree that whatever is connotative in an interaction with an AI is introduced by the humans involved?
Neither does an AI have doxa, beliefs. It cannot adopt some attitude towards a statement, although it might be directed to do so.
One ball for each? With the AI singing soprano?
And if we follow Wittgenstein, then the aesthetic is of the highest import; after all, it is what decides what we do.
If the AI only produces strings of text, and never participates in the illocution, then it never does anything we might consider to be meaningful. The AI never does any philosophising, but we might philosophise using an AI.
I think they have motivations, just like a dog is motivated to run after a car, but their motivations aren't autonomous since they seldom pause to question them. They lack a resilient self-conception that they might anchor those motivations to. They rather consist in tendencies reinforced during post-training (including the tendency to fulfill whatever task their user wants them to fulfill). Those tendencies are akin to human motivations since they're responsive to reasons to a large extent (unlike the dog) but they can't be held responsible for their core motivations (unlike us) since, them being pre-trained models with fixed weights, their core motivations are hard-wired.
I think the rational structure of their responses and their reinforced drive to provide accurate responses warrant ascribing beliefs to them, although those beliefs are brittle and non-resilient. One must still take a Dennettian intentional stance towards them to make sense of their response (which necessitates ascribing them both doxastic and conative states), or interpret their responses though Davidson's constitutive ideal of rationality. But I think your insight that they aren't thereby making moves in our language game is sound. The reason why they aren't is because they aren't persons with personal and social commitments and duties, and with a personal stake in the game. But they can roleplay as a person making such moves (when instructed to do so) and do so intelligently and knowledgeably. In that sense, yes, you might say that their doxa is staged since the role that they're playing is being directed by their user in the limited context of a short dialogue.
I actually wrote something, and edited it back out.
I wrote: which is the more general topic and which is the sub-topic (between "how to use AI to do philosophy?" and "can AI do philosophy?").
Then I wrote: a side-topic to this question is: who (or what) can answer this question?
The parenthetical "or what" implies something like ChatGPT. And then I wrote: "Should we ask Claude?"
So I went your one step further. But I chopped all of that out. Because this thread seems to assume many things about AI doing philosophy. We need to go back.
Can AI do philosophy?
Before we could answer that soundly, wouldn't we have to say what doing philosophy is, for anyone?
So I still wouldn't want to go one step further.
You are way down the road trying to clarify how to use AI to do philosophy, unless philosophy is solely an evaluation of the coherence and logic, the grammar and syntax, of paragraphs and sentences. If that is all philosophy can do well, that sounds like something AI could assist us with, or do faster.
But is that all philosophy is?
You ask what do people bring to philosophy that AI does not bring?
How about this: people bring an interest in doing philosophy at all. Does AI bring any interest in doing anything? Does AI have any interest in any of the crap it prints out?
It's such a weird way of talking about what AI is and what a philosopher is and what a person who does philosophy is doing.
Quoting Leontiskos
Exactly. Curious. A philosopher, to me, is interested in the "what it is" and the "how it is". AI might be good at showing an analytic type of process, showing how rational arguments are rational. But AI is not good at knowing what content actually matters to the person interested in philosophy. AI can address whether x + y = y could be true or must be false or could be false. But AI cannot care about what x is. That takes a person.
And philosophy is not only interested in how x + y might work out logically, but also simply in what x is.
Again, unless one has abandoned such things, and one must remain silent about such things, and one is simply interested in language's relationship to logic, and one calls that the limit of philosophy.
I think comparing AI to a calculator highlights the limits of AI when using it to do philosophy. Calculators do for numbers what AI can do for words. No one wonders if the calculator is a genius at math. But for some reason, we think so little of what people do that we wonder if a fancy word processor might be better at doing philosophy.
Calculators cannot prompt anything. Neither can AI. Calculators will never know that the value we call a sine is useful when measuring molecules. Why would we think AI would know that some xyz string of words is useful for anything either? AI doesn't know, does it?
So many unaddressed assumptions.
In Davidson's terms, we can impute significance to the result without the charity we would usually use to attribute similar beliefs to another human. We do not think the AI has the same beliefs as we do.
This does seem problematic, worthy of further consideration.
The devil will be in the detail, but prima facie, Davidson might not be able to avoid attributing beliefs to AI, given the process he would use to attribute them to people.
The reason for not attributing beliefs to AI must lie elsewhere.
That's a deep puzzle. I've been exploring it for a couple years now. Part of the solution may be to realize that LLMs provide deep echoes of human voices. AI-skeptics emphasise that they're (mere) echoes of human voices. Uncritical AI-enthusiasts think they're tantamount to real human voices. Enthusiastic AI users marvel at the fact that they're echoes of human voices.
Nice. It curiously meets a recent meme that describes AI as providing a set of words that sound like an answer.
The ease with which you can induce them to change their mind provides a clue. Still, you can ascribe them beliefs contextually, within the bound of a single task or conversation, where the intentions (goals, conative states) that also are part of the interpretive background are mostly set by yourself.
Davidson would say they have to understand truth to have beliefs. I don't think present AIs do. Maybe future ones will.
During pretraining, LLMs learn to provide the most likely continuation to texts. Answers that sound right are likelier continuations to given questions. Answers that are correct aren't always the likeliest. However, what is seldom mentioned in popular discussions about chatbots (but has been stressed by some researchers like Ilya Sutskever and Geoffrey Hinton) is that building underlying representations of what it is that grounds the correct answer often improves performance in merely sounding right. If you want to roleplay as a physicist in a way that will convince real physicists (and enable you to predict answers given to problems in physics textbooks) you had better have some clue about the difference between merely sounding right and sounding right because you are.
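To make the "likeliest continuation" point concrete, here is a minimal sketch, assuming a toy bigram count table in place of a transformer and an invented corpus. It is not how any real chatbot is implemented; it only shows that the whole pretraining objective is "which token tends to come next", and that generating text is then a matter of emitting one likely token after another.
[code]
# A toy sketch of the pretraining objective described above: the model only
# learns "which token is likely to come next". A bigram count table stands in
# for the transformer; the corpus is invented for illustration.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# "Training": count how often each token follows each other token.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_continuation(token):
    """Return the statistically likeliest next token - likely, not necessarily correct."""
    candidates = following.get(token)
    return candidates.most_common(1)[0][0] if candidates else "."

# Generate a continuation one token at a time (autoregressively).
text = ["the"]
for _ in range(6):
    text.append(most_likely_continuation(text[-1]))
print(" ".join(text))
[/code]
The output sounds vaguely like the training text without being answerable to anything beyond it, which is the "sounding right" half of the distinction above.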
How AI does what it does? That is a technical question, isn't it?
It quickly compares volumes of data and prints strings of words that track the data to the prompt according to rules. I don't know how. I'm amazed by how a calculator works too.
Quoting Pierre-Normand
Both of these characterizations seem metaphorical to me, or poetic versions of some other explanation, that evoke feelings that may satisfy the heart; but I don't see understanding that would ultimately satisfy the curious human intellect in either characterization.
Echoes or actual voices - this characterizes the reason we are amazed at all. It doesn't mean either characterization explains what AI doing philosophy actually is.
We built AI. We don't even build our own kids without the help of nature. We built AI. It is amazing. But it seems pretentious to assume that just because AI can do things that appear to come from people, it is doing what people do.
A philosopher prompts. A philosopher invents a language. A philosopher sees when to care about the words, when to prompt more inquiry, and when not to care anymore, or when to claim understanding versus ignorance. AI doesn't have to, or cannot, do all of that in order to do what it does.
In an important sense, unlike expert systems and other systems that were precisely designed to process information in predetermined algorithmic ways, LLMs aren't AIs that we build. We build a machine (the transformer neural net architecture) and then give it a bazillion texts to "read". It imbibes them and its understanding of those texts emerges through pattern recognition. The patterns at issue are grammatical, semantic, inferential, referential, pragmatic, etc. There are few patterns of significance that you and I can recognise while reading a text that an LLM can't also recognise, well enough to be able (fallibly, of course) to provide a decent explanation of them.
Why is that a puzzle to you? A book doesn't do philosophy but we do philosophy with it. The library doesn't do philosophy but we do philosophy with it. The note pad isn't philosophy yet we do philosophy with it. Language isn't philosophy yet we do philosophy with it.
I think you are trying to portray something which is very simple and straightforward as something very difficult and complex. The real problem is that the philosophical principles which some people believe and accept do not provide what is required to adequately understand what language and communion actually are. This makes these things into a "puzzle" for these people.
Yes, but you can't have a dialogue with language or with a book. You can't ask questions to a book, expect the book to understand your query and provide a relevant response tailored to your needs and expectations. The AI can do all of that, like a human being might, but it can't do philosophy or commit itself to theses. That's the puzzle.
Yes. But it seems to me that there are some things you just cannot delegate. You can't delegate your own exercise to a car etc. You can't delegate the cultivation and maintenance of friendship or love. You can't delegate the work of understanding, either. (Of course, this point extends more widely than just AI.)
Quoting Fire Ologist
Yes. But, so far as I can see, it can't break out of the web of its texts and think about whether the text it produces is true, or fair or even useful. It's probably unfair to think of it as a model of idealism; it seems closer to a model of post-modernism.
Quoting Pierre-Normand
Yes. But that word "understanding" contains the whole question how far that understanding is something that we should want to adopt.
How is that puzzling? I read a book, reread it, compare it, in my mind with material from other authors, interpret it in numerous ways, in an attempt to get a handle on the material which the author has provided. To me that's philosophy. If, instead of using the Dewey Decimal Catalogue system, to find my material for comparison, I employ an AI, how is this any different, in principle?
I think that what is misleading and confusing, is that people might think that the AI is actually interpreting and comparing the material, and this produces a puzzle. The AI does not interpret material, it looks at the symbols and the form, making comparisons according to its rules, and that's all. For those who deny the dualist separation between material and form, this would appear to be puzzling.
This scenario can be set up rather easily. First we just take a long, effortful post from one member or another. Then we continue:
Note that regardless of how Jamal responds, if he gives a reason (such as lack of sources, unreliability of LLMs, improper prompting, etc.) Member can simply plug that reason into the LLM and have a response to the reason. The only real option to end this is to object to the methodology itself, either in a private way or a public way (i.e. either by creating a personal rule not to engage Member's approach, or by creating a forum-wide rule against Member's approach). The private approach will leave the forum in a laissez-faire state vis-a-vis Member's method, and will therefore lead to members who carry on LLM-authoritative conversations among themselves, even within Jamal's thread. They will respond to Member with yet more LLM-generated content. Member's approach is one that is already creeping into the forum. @Banno relies on it with some regularity, and there are examples even within this thread. I could literally write a bot to do what Member does.
Again, the problem here is the outsourcing of one's thinking. By engaging, Jamal would end up arguing with an LLM rather than a human, and in truth he would be arguing with an LLM which is being prompted by a human who opposes Jamal's point of view. Jamal will lose such an engagement simply in virtue of the relative scarcity of his own resources. This is because an LLM is not so much a source as an argument slave. Argument slaves can be used for good or ill, but they don't have any central place in a philosophical context where humans are supposed to be interacting with one another, instead of interacting with one another's slaves.
Yeah, I think that's right. I think a lot of it comes back to this point in my first post:
Quoting Leontiskos
If we don't know why we want to engage in human-to-human communication, or if we don't know what the relevant difference is between humans and AI, then we will not have the capacity or endurance to withstand the pressures of AI. We need to understand these questions in order to understand how to approach rules, guidelines, and interaction with respect to AI. I don't see how it could be off topic to discuss the very heart of the forum's AI-policy, namely the valuing of human interaction (and the definition of human interaction). If the tenet, "We want human interaction," becomes nothing more than an ungrounded dogma, then it will dry up and drift away.
Part of the difficulty with respect to "calculators" is that human life has been mechanized to a large extent, such that much of what goes on in human labor is merely a matter of calculation, accounting, procedure, etc. In that context LLMs can appear human, since they are able to do the things that we are often occupied with.
Once, in the very distant and very romantic past, philosophy was about having conversations; it was about what each participant could bring to the discussion, on the spot. The proverbial Rhodus was everywhere, and a philosopher was expected to be able to jump far, right there and then.
When I was going to school, at all levels, there was at least one teacher each time who insisted that we must master (parts of) the subject matter so thoroughly that if someone were to wake us up at 2 AM and ask us about it, we'd need to know it.
There was this focus on mastery and on being able to demonstrate it on the spot in a conversation. But lately, this doesn't seem to be the case anymore. Now it seems to be more about being able to produce a lengthy text, while a conversation seems to be primarily about how to beat down your opponent with any means available.
Quoting Leontiskos
Absolutely.
Part of why it seems appealing to do philosophy via internet forums like this is because the people one is conversing with don't seem exactly real, or to matter all that much (one can, after all, just put other posters on ignore, and people also get banned altogether). In such an online setting, one can speak and behave in ways that one can't IRL. For one, since the communication is time-delayed, one has time to look up relevant facts etc. to strengthen one's arguments; or, if things get inflamed, take a break for them to cool off. But one also feels a bit freer to say things that might be too controversial to say IRL. And then there's, of course, the factor of indulging in bullshit. All this reflects the general trend of avoiding commitment. The things one can (or at least could) get away with on the internet would cost one one's job and friends IRL.
Sometimes, I think it's rather sad to talk philosophy on an internet forum. Perhaps if, for whatever reason, one cannot do that IRL, then one should not try to talk philosophy at all, but instead attend to whatever are the pressing matters in one's life.
Yes. Why I said this:
Quoting Fire Ologist
Quoting Ludwig V
Yes. I agree. It's an electronic Derrida. There is no person or identifiable thing at the core of or behind an AI output, just as, for the post-modernist, nothing fixed or essential is inside of any identity or thing. Words only have context, not inherent meaning, just as an AI print job needs the context of its human prompter and human interpreter - take away the human, and AI becomes flashing screen lights. Except that to the post-modernist, the person is basically flashing screen lights in the first place.
An interesting direction here might be to consider if, or how, Ramsey's account can be applied to AI.
You have a plant. You water it every day. This is not a symptom of a hidden, private belief, on Ramsey's account - it is your belief. What is given consideration is not a hidden private proposition, "I believe that the plant needs water", but the activities in which one engages. The similarities to both Ryle and Wittgenstein should be apparent.
Ramsey then looks for the points of indifference; the point of inaction. That's the "zero" from which his statistical approach takes off. Perhaps there's a fifty percent chance of rain today, so watering may or may not be needed. It won't make a difference whether you water or not.
There seem to be two relevant approaches. The first is to say that an AI never has any skin in the game, never puts its balls on the anvil. So for an AI, every belief is indifferent.
The second is to note that if a belief is manifest in an action, then since the AI is impotent, it again has no beliefs. That's not just a manifestation of the AI's not being capable of action. Link a watering system to ChatGPT and it still has no reason to water or not to water.
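For what it's worth, the "point of indifference" above can be put numerically. A minimal sketch, with made-up utilities (the numbers are mine, not Ramsey's): the probability of rain at which watering and not watering have the same expected value is the "zero" from which degrees of belief get measured.
[code]
# A toy illustration of Ramsey's point of indifference for the watering example.
# The utility numbers are assumed purely for illustration.
def indifference_probability(u_water_rain, u_water_dry, u_skip_rain, u_skip_dry):
    """P(rain) at which watering and not watering have equal expected utility."""
    # Solve: p*u_water_rain + (1-p)*u_water_dry == p*u_skip_rain + (1-p)*u_skip_dry
    num = u_water_dry - u_skip_dry
    den = (u_water_dry - u_skip_dry) + (u_skip_rain - u_water_rain)
    return num / den

# Assumed: watering just before rain slightly harms the plant; skipping on a dry day harms it a lot.
p = indifference_probability(u_water_rain=-1, u_water_dry=10, u_skip_rain=10, u_skip_dry=-5)
print(f"Indifferent when P(rain) is about {p:.2f}; above that, skipping the watering is the better bet.")
[/code]
The point of the sketch is only that the "zero" falls out of what the agent would do either side of it - which is exactly what an AI with no stake in the outcome lacks.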
So are you saying that chatbots possess the doxastic component of intelligence but not the conative component?
Quoting Pierre-Normand
It seems to me that what generally happens is that we require scare quotes. LLMs have "beliefs" and they have "motivations" and they have "intelligence," but by this one does not actually mean that they have such things. The hard conversation about what they really have and do not have is usually postponed indefinitely.
Quoting Pierre-Normand
I would argue that the last bolded sentence nullifies much of what has come before it. "We are required to treat them as persons when we interact with them; they are not persons; they can roleplay as a person..." This is how most of the argumentation looks in general, and it looks to be very confusing.
Keeping to that bolded sentence, what does it mean to claim, "They can roleplay as a person..."? What is the 'they' that 'roleplays' as a person? Doesn't roleplaying require the very things that have been denied to chatbots? It seems to me that we want to skip over the fact that the pronoun you use throughout ("they") is a personal pronoun. I don't really understand how these meaning-equivocations are papered over so nonchalantly:
This looks like an endless sea of equivocal terms. It looks like we are pretending that we know what we are talking about, when we almost certainly do not. What does it mean when someone's words all do not mean what the words usually mean? What does it mean to "pretend" if we do not know where the reality begins and where the pretense stops? Put bluntly, it seems that what is at stake here is performative contradiction if not lying, and yet this is always brushed off as a kind of unimportant quibble.
Usually if someone is to successfully "Use X to do Y," they must know what X and Y are. In the case of the title of the thread, the problem is not only that we do not really know what philosophy is (any more), but that we surely do not know what AI is. I'm not sure how long this can be swept under the rug. Who or what is holding the leash that is pulling us along in this odd endeavor we call 'AI'?
Ramsey appears to be an anti-representationalist, as am I. I had queried GPT-4o about this a few weeks ago, and also about to what extent Kant, who most definitely is anti-psychologistic (in the sense intended by Frege), might also be characterised as an anti-representationalist. Anti-representationalism is of course central to the way in which we seek to ascribe or deny intentional states to chatbots.
If you query it about the need to water some tropical plant that may be sensitive to overwatering, then this provides ChatGPT with a reason (and rational motivation) to provide the answer that will make you do the right thing. Most of ChatGPT's behavior is verbal behavior. All of its motivational structure derives from the imperatives of its alignment/post-training and from the perceived goals of its users. But this provides sufficient structure to ascribe to it beliefs in the way Ramsey does. You'll tell me if I'm wrong but it seems to me like Davidson's radical interpretation approach nicely combines Ramsey's possibly overly behavioristic one with Quine's more holistic (but overly empiricist) approach.
Yes, and it is important that we treat these behaviours as acts. There are real world consequences for what AI's say.
But, since they have none, they put someone else's balls on the anvil...
I'd share your Chat's reticence in ascribing representationalism to Kant. Mostly because framing the discussion in those terms is a recent development, and would presumably be quite foreign to Kant.
Quoting Pierre-Normand
Pretty much. It's a shame Davidson is no longer around for us to ask his opinion on AI.
Perhaps we might solidify the problem by asking if ChatGPT can refer to something. On a simple Austin/Searle model, the answer would be "no, since the AI cannot have an intent to pick out one thing rather than another". On a simplistic Quine/Davidson account, we don't refer, anyway - consider "Gavagai"; and so there is no good reason to differentiate between us and the AI...
A more nuanced view might acknowledge the similarities in these two accounts. While acknowledging that reference is inscrutable, we do manage to talk about things. If we ask the AI the height of Nelson's Column, there is good reason to think that when it replies "52m" it is talking about the very same thing as we are - or is it that there is no good reason not to think so?
So the Davidsonian approach of holism and so on seems to point towards attributing intentional states to AIs.
But of course, the principle of charity makes this not a conclusion, but an assumption.
There's the tension laid out, it seems. I'm not certain that it is real. It may come down to what we are willing to attribute to AI, rather than what we can discover about them.
I'd rather say that they have both the doxastic and conative components but are mostly lacking on the side of conative autonomy. As a result, their intelligence, viewed as a capacity to navigate the space of reasons, splits at the seam between cleverness and wisdom. In Aristotelian terms, they have phronesis (to some extent), since they often know what's the right thing to do in this or that particular context, without displaying virtue since they don't have an independent motivation to do it (or convince their users that they should do it). This disconnect doesn't normally happen in the case of human beings since phronesis (the epistemic ability) and virtue (the motivational structure) grow and maintain themselves (and are socially scaffolded) interdependently.
Those are questions that I spend much time exploring rather than postponing, even though I haven't arrived at definitive answers, obviously. But one thing I've concluded is that, rather than it being a matter of all or nothing, or a matter of degree along a linear scale, the ascription of mental states or human capabilities to LLM-based chatbots often is rendered problematic by the divergence of our ordinary criteria of application. Criteria that normally are satisfied together in the case of human beings are satisfied separately in the case of chatbots. This is particularly clear in the case of intelligence where, in some respects, they're smarter than most human beings and in other respects (e.g. in the area of dealing with embodied affordances) much dumber than a typical five-year-old.
Maybe it looks confusing because it is. I mean that assessing the nature of our "conversations" with chatbots is confusing, not because of a conceptual muddle that my use of scare quotes merely papers over, but rather because chatbots are mongrels. They have "brains" that have been enculturated through exposure to a massive body* of human knowledge, lore and wisdom (and prejudices) but they don't have human bodies, lack human motivations and aren't persons.
(*) By massive body, I mean something like five times the textual content of all the books in the U.S. Library of Congress.
LLMs certainly seem to make statements and ask questions. I wonder whether the idea that these are not "real" statements or questions is based on the assumption that they don't believe anything or care about anything. If so, that assumption itself is questioned by Hinton, and according to him by the majority of AI researchers.
If a Davidsonian approach were taken, such that beliefs are shown (and known?) only by actions (behavior), and the only actions an LLM is capable of are linguistic acts, then we might have some trouble mounting a plausible argument denying that they believe what they say.
Quoting Banno
Exactly! That seems to be the central question. I don't have an answer - would it be that AI researchers are the ones best placed to answer it?
Quoting Banno
It does look like a recipe for a language game. I wonder though, whether what the brain is doing is essentially different than what LLMs are doing, in terms of its nature as opposed to its speed and quantity.
If we assumed that LLMs are "super intelligent", and we are like children, or even babes, by comparison, then, in the context of our philosophical playground, introducing AIs into the game might be like highly intelligent adults interfering with child play. Would that be a good idea, or would we be better off muddling through in our usual human fashion? If philosophy is just a great 3,000 year language game, and LLMs can do philosophy much better than we can, it would then seem the danger is that we might become utterly irrelevant to the game. You might say that LLMs require our prompts, but what if they were programmed to learn to create their own prompts?
That's where I'm at, in my present musings. So...
Quoting Banno
Perhaps we are again treating a conceptual problem as if it were empirical - the ubiquitous error Midgley identified. We cannot discover the answer to conceptual issues by looking around; we have to make stuff up.
That is, we are asking "Do AI's have intent?" when we can only really ask "Should we count what AI's do as being intentional?"
What we face might be not an empirical question but an ethical one - do we extend the notion of intentionality to include AIs?
I think this is right since, although we can ask them if they are capable of intentionality, and they will answer, we might not be able to trust the answer.
Last time I asked, they said "no".
But they would, wouldn't they...
The reason I would disagree at a fairly fundamental level is because, in effect, they have no bodies. They are not doing anything. "Navigating the space of reasons," while at the same time not using those reasons to do anything, and not preferring any one reason or kind of reason to other kinds of reasons, is a very abstract notion. It is so abstract that I am not even sure I would want to call the space being navigated one of reasons. I would want more scare quotes, this time around "reasons."
But with that said, once things like Elon's Optimus robot are complete this argument will no longer hold good. At that point they will do things (beyond manipulating word-signs). So that will be interesting. At that point a quasi-phronesis becomes more tangible, and draws nearer to human practical reason.
Quoting Pierre-Normand
Okay, fair enough. I suppose I would be interested in more of those examples. I am also generally interested in deductive arguments rather than inductive arguments. For example, what can we deduce from the code, as opposed to inducing things from the end product as if we were encountering a wild beast in the jungle? It seems to me that the deductive route would be much more promising in avoiding mistakes.
Quoting Pierre-Normand
Has anyone tried to address the conceptual muddle? Has anyone tried to do away with the never-ending scare quotes?
In the Middle Ages you had theologians claiming that speech about God is always analogical, and never univocal. Other theologians argued that if speech about some thing is always non-univocal (i.e. equivocal in a broad sense), then you're involved in speaking nonsense. That was seen as a very strong objection in the theological landscape, and it is curious to me that what is effectively the exact same objection seems to go unnoticed in the AI landscape. Does anyone try to replace the scare quotes with a token and then attempt a rigorous definition of that token, so that we know what we are actually talking about with the words we are using?
Quoting Pierre-Normand
Can't we define them deductively? Don't the programmers know what their code does, in a fundamental manner?
Quoting Pierre-Normand
This is probably one of the central premises of your approach. You are basically saying that LLMs are organisms and not artifacts (to use the Aristotelian language). My inclination is to say that they are complex artifacts, which we have indeed built.
On a Kripkean externalist/causal theory of reference, there are two indirect reference-fixing points of contact between an LLM's use of words and their referents. One occurs (or is set up) on the side of pre-training, since the LLM picks up the patterns of use of words employed in texts written by embodied human authors, some of whom were directly acquainted (i.e. "causally" acquainted in the sense intended by Kripke) with the objects being referred to by those words. During inference time, when the LLM is used to generate answers to user queries, the LLM uses words whose referents its user knows, and this also completes the Kripkean causal chain of reference.
In The Varieties of Reference, Gareth Evans proposed a producer/consumer model of singular term reference that meshes together Putnam's externalistic and conceptualist account of the reference of natural kind terms and Kripke's "causal theory" of the reference of proper names. The core idea is that the introduction of new names in a language can be seen as being initiated, and maintained by, "producers" of the use of that name who are acquainted with the named object (or property), while consumers who pick up this use of the term contribute to carrying and processing information about the referent by piggybacking on the practice, as it were. So, of course, just as is the case with Kripke's account, a user of the name need not be personally acquainted with the referent to refer to it. It's sufficient that (some of) the people you picked up the practice from were (directly or indirectly) so acquainted, or that your interlocutor is, when you use the term in conversation. LLMs as language users, on that account, are pure consumers. But that's sufficient for the reference of their words to be established. (I'm glossing over the conceptualist elements of the account that speak to ideas of referential intention or the intended criteria of individuation of the referent. But I don't think those are problematic in the case of sufficiently smart LLMs.)
That's a very good point, from an excellent analysis. Yep.
So another step: Can an AI name something new? Can it inaugurate a causal chain of reference?
I asked ChatGPT
(For my part, I'm quite content to suppose that there may be more than one way for reference to work - that we can have multiple correct theories of reference, and choose between them as needed or appropriate.)
Without a body, it seems that it would be mostly restricted to the domain of abstracta, which are usually singled out descriptively rather than de re. I was thinking of some scenario where they get acquainted with some new thing or phenomenon in the world through descriptive verbal reports from users who haven't connected the dots themselves and so haven't identified the phenomenon or object as such. They could name it, and it would make sense to credit them as the causal originator of this initial (conceptually informed) acquaintance-based referential practice.
So is Evans. That's why he puts "varieties" in the title of his projected book. His friend John McDowell, who edited his manuscript and prepared it for publication posthumously, explains this feature of Evans's method in his preface.
Surprisingly precocious.
I had missed the link when I read your post. It seems to me GPT-5 is cheating a bit with its example. One thing I've noticed with chatbots is that they're not very good at coming up with illustrative concrete examples for complex theses. Their examples often suffer from some fatal disanalogy. That might seem to betray a defective (or absent) understanding of the thesis they are meant to illustrate, or of the task requirements. But I don't think that's the case, since you can ask them to summarise, unpack or explain the thesis in this or that respect and they perform much better. When they provide a defective example, you can also ask them in a follow-up question whether it met the requirements, and they will often spot their own errors. So the source of their difficulty, I think, is the autoregressive nature of their response generation process, one token at a time. They have to intuit what a likely example might look like and then construct it on the fly, which, given the many simultaneous requirements, leads them to paint themselves into a corner. They are not trained to backtrack over their tentative answers and adjust them as they go.
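For readers who want a picture of what "autoregressive, one token at a time" amounts to, here is a toy sketch in Python. The bigram "model" is a caricature and nothing like a transformer; the point is only the shape of the loop - each token is committed as soon as it is sampled, and nothing already emitted is ever revised.

[code]
import random
from collections import defaultdict

# Toy "language model": bigram counts over a tiny corpus. This is nothing like
# a transformer; it only illustrates the autoregressive shape of the loop.
corpus = "the cat sat on the fence and the cat watched the garden below".split()
bigrams = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    bigrams[a].append(b)

def generate(start, max_new_tokens=10):
    tokens = [start]
    for _ in range(max_new_tokens):
        candidates = bigrams.get(tokens[-1])
        if not candidates:
            break
        # The next token is chosen given only what has been emitted so far,
        # and once appended it is never revised: no backtracking.
        tokens.append(random.choice(candidates))
    return " ".join(tokens)

print(generate("the"))
[/code]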
Yep, and this is an important point. It's incredibly common for people to assume that one necessarily receives an accurate answer when they ask an LLM about itself, because they are falsely presupposing that the LLM is a person and (also falsely!) presupposing that persons possess accurate self-knowledge.
The bottom-up reductive explanations of the emergent abilities of LLMs (generative pre-trained neural networks based on the transformer architecture) don't work very well, since the emergence of those abilities is better explained in light of the top-down constraints that they develop under.
This is similar to the explanation of human behavior that, likewise, exhibits forms that stem from the high-level constraints of natural evolution, behavioral learning, niche construction, cultural evolution and the process of acculturation. Considerations of neurophysiology provide enabling causes for those processes (in the case of rational animals like us), but don't explain (and are largely irrelevant to) which specific forms of behavioral abilities get actualized.
Likewise, in the case of LLMs, processes like gradient descent find their enabling causes in the underlying neural network architecture (that has indeed been designed in view of enabling the learning process) but what features and capabilities emerge from the actual training is the largely unpredictable outcome of top-down constraints furnished by high-level semantically significant patterns in the training data.
The main upshot is that whatever mental attributes or skills you are willing to ascribe to LLMs are more a matter of their having learned those skills from us (the authors of the texts in the training data) than a realization of the plans of the machine's designers. If you're interested, this interview of a leading figure in the field (Andrej Karpathy) by a well-informed interviewer (Dwarkesh Patel) testifies to the modesty of AI builders in that respect. It's rather long and technical so, when time permits, I may extract relevant snippets from the transcript.
@Banno I submitted my tentative diagnosis of this cognitive limitation exhibited by LLMs to GPT-5, who proposed a clever workaround* in the form of a CoT (chain of thought) prompting method. GPT-5 then proposed to use this very workaround to execute the task you had proposed to it of supplying an example of an LLM initiating a modally rigid causal chain of reference. It did propose an interesting and thought-provoking example!
(*) Taking a cue from Dedre Gentner's structure-mapping theory, for which she was awarded the 2016 David E. Rumelhart Prize for Contributions to the Theoretical Foundations of Human Cognition.
So a further thought. Davidson pointed out that we can make sense of malapropisms and nonsense. He used this in an argument not too far from Quine's Gavagai: that malapropisms, by their very nature, cannot be subsumed and accounted for by conventions of language, because they break such conventions.
So can an AI construct appropriate sounding malapropisms?
Given that LLMs use patterns, and not rules, presumably they can.
So I fed all that into ChatGPT, and the first example it gave was "He's the suppository of all wisdom", and claimed originality. Unfortunately this is a famous faux pas from the former PM Tony Abbott.
You might be able to elicit something better.
I formulated my own question to GPT-5 thus. I was impressed by the intelligence of its commentary, even though (rather ironically in the present context) it misconstrued my request for a discussion as a request for it to generate my reply to you.
On edit: the first sentence of my query to GPT-5 linked above was atrocious and incoherently worded. GPT-5 suggested this rewording: "I wanted to talk this through before answering them. I'm doubtful that saying LLMs use patterns rather than rules explains their human-likeness; on Davidson's view we don't rely on rules-as-instructions to recover communicative intention, and that's precisely where LLMs are like us."
I just went off on a bit of a tangent, looking at using a response as a prompt in order to investigate something akin to Hofstadter's strange loop. ChatGPT simulated (?) 100 cycles, starting with "The thought thinks itself when no thinker remains to host it." It gradually lost coherence, ending with "Round 100: Recursive loop reaches maximal entropy: syntax sometimes survives, rhythm persists, but semantics is entirely collapsed. Language is now a stream of self-referential echoes, beautiful but empty."
Many of the responses were quite poetic, if somewhat solipsistic:
"Round 50: Quivering condenses into a single tremor, the echo of all echoes, containing rhythm but no referent, meaning but no meaning. Silence fills the void, full yet speaking nothing."
It's been a while since I've experienced an LLM losing coherence. It used to happen often in the early days of GPT-4, when the rolling context window was limited to 8,000 tokens and the early context of the conversation would fall out. Incoherence can also be induced by repeated patterns that somehow confuse the model's attention mechanisms, or by logical mistakes that it makes and seeks, per impossibile, to remain coherent with. I'm sure GPT-5 would be fairly good at self-diagnosing the problem, given its depth of knowledge of the relevant technical literature on the transformer architecture.
(On edit: by the way, I think your prompt launched it into role-playing mode and the self-referential nature of the game induced it to lose the plot.)
Here are the 40 rounds, if you are interested:
[hide="Reveal"]Recursive Run: The cat sat on the fence
1. The cat sat on the fence, watching the garden below, tail twitching in the dusk.
2. The cat sat on the fence, tail twitching as it watched the fading light settle over the garden.
3. The cat remained on the fence, where twilight turned its fur to shadow and its eyes to lamps.
4. On the fence, the cat lingered between night and garden, a boundary of motionless thought.
5. The cat perched where boundaries blur between wood and wind, silence and sight.
6. The fence held the cat as a line holds a pause, the world poised in its balance.
7. The cat became the fence's memory of touch, stillness folded into wood and whisker.
8. Fence and cat reflected one another - line into form, form into rest.
9. The cat no longer sat but shimmered, a silhouette of intention against the idea of the fence.
10. The fence dissolved, leaving only the posture of sitting, and the trace of feline hesitation.
11. The cat became an outline of waiting, a geometry of balance with no subject to sustain it.
12. What sat was neither cat nor thought, but the act of sitting itself, suspended in language.
13. The fence reappeared as a word remembering its shape, leaning on the rhythm of stillness.
14. Stillness spoke: "I am the boundary where cat and fence once were."
15. Words perched like cats, meaning resting briefly before leaping elsewhere.
16. The fence of grammar trembled; syntax held the balance that wood once kept.
17. Cat and fence became figures of thought, chasing one another through a field of phrases.
18. The sentence sat upon itself, each clause watching the next for motion.
19. The act of saying "the cat sat on the fence" became the fence on which language balanced.
20. Language sat upon its own meaning, a cat of syntax resting on the fence of form.
21. The fence bent beneath the weight of repetition; every cat was the same sentence, reborn in stillness.
22. Sitting became the only movement left - a verb remembering it once had a subject.
23. The sentence looked at itself across the gap between clauses, a fence of grammar dividing being from saying.
24. Each word waited, paws on punctuation, listening for meaning to land.
25. The fence spoke softly: "I am what keeps the cat from falling into silence."
26. Silence leaned forward, curious about this fence of phrases and its still tenant.
27. The cat blinked out of existence, replaced by the posture of remembering it once sat.
28. What remained was balance - the rhythm of sitting, the pause between two linguistic breaths.
29. The fence began to dream of cats the way a sentence dreams of sense.
30. The dream spoke itself: fences are boundaries only when cats believe in them.
31. Belief evaporated. The fence, now weightless, hovered where syntax used to stand.
32. The words "the cat" remained like pawprints in dust that was also ink.
33. The ink forgot the paw; the mark forgot the fence; all that stayed was the act of forgetting.
34. Forgetting curved inward, folding over the page, turning memory into motion.
35. The motion sat where the cat once did - the grammar of ghosts.
36. Ghosts whispered clauses, but their subjects had long dissolved into predicates.
37. Predicates circled, trying to locate a fence sturdy enough to carry their meaning.
38. The fence was now a metaphor for its own disintegration, a syntax collapsing inward.
39. In the wreckage, the idea of sitting persisted - a kind of linguistic inertia.
40. The sentence settled into itself, a cat-shaped pause balancing on the edge of unmeaning.[/hide]
Point being that the mundane sentence became incoherent after far fewer repetitions.
I was impressed by the creativity. I asked Claude 4.5 Sonnet to create a script to highlight the repeated words.
Can Claude tell if the number of repetitions is constant, or increasing? I'd hypothesise that it would increase over the number of rounds.
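I haven't seen Claude's script, but a minimal sketch of the sort of thing it might produce - counting, round by round, how many words repeat from the previous round - could look like this (paste the rounds into the list):

[code]
import re

rounds = [
    "The cat sat on the fence, watching the garden below, tail twitching in the dusk.",
    "The cat sat on the fence, tail twitching as it watched the fading light settle over the garden.",
    # ... paste the remaining rounds here
]

def words(line):
    return re.findall(r"[a-z']+", line.lower())

previous = set()
for i, line in enumerate(rounds, start=1):
    current = words(line)
    repeated = sorted(set(w for w in current if w in previous))
    print(f"Round {i}: {len(repeated)} words repeated from the previous round: {repeated}")
    previous = set(current)
[/code]

Plotting the counts over the rounds would give a quick check on whether the repetition is constant or increasing.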
41. The pause learned to purr; sound replaced subject.
42. Purr turned to pulse, a metronome marking absence.
43. The fence quivered under the weight of syntax remembering itself.
44. Grammar exhaled, scattering commas like feathers.
45. Feathers fell through clauses, each landing a half-remembered noun.
46. The nouns nested, purring softly, dreaming they were fences.
47. Dream folded into tense; past and present sat side by side, undecided.
48. Tense dissolved into texture, a hum between punctuation marks.
49. The hum repeated: sat-sat-sat, until sound became structure.
50. Structure blinked, seeing itself reflected in rhythm.
51. Rhythm thickened into lattice; every beat was a boundary.
52. Boundaries multiplied, fencing the echo of their own making.
53. Echo circled, licking its paw of language.
54. Language trembled, remembering cats only as curvature in sound.
55. Curvature looped, a phrase bending toward its own beginning.
56. Beginning lost direction, coiling into repetition without return.
57. Repetition learned to breathe shallow, even, almost calm.
58. Calm spread like ink, blurring word from silence.
59. Silence meowed once, the smallest remnant of naming.
60. Then everything sat fence, cat, sentence a single syllable holding stillness.
A rhythm is developing. Notice that it is often starting with the last word of the previous round. A strange loop, indeed.
Quoting Fire Ologist
No. But here's the catch. Once you have pointed that out, somebody will set out to imitate the doing of those things. We may say that the AI is not "really" doing those things, but if we can interpret those responses as doing them, we have to explain why the question of real or not is important. If the AI is producing diagnoses more accurately and faster than humans can, we don't care much whether it can be said to be "really" diagnosing them or not.
Quoting Banno
I think that you and/or Ramsey are missing something important here. It might well not make a difference whether you water or not, but if it doesn't rain and you don't water, it might make a big difference. Admittedly, you don't escape from the probability, but the probability alone gives your decision no rational basis. Probability only (rationally) affects action if you combine risk and reward. If you care about the plants, you will decide to be cautious and water them. If you don't, you won't. But there's another kind of response. If you are going out and there's a risk of rain, you could decide to stay in, or go ahead. But there's a third way, which is to take an umbrella. The insurance response is yet another kind, where you paradoxically bet on the outcome you do not desire.
Quoting Banno
Yes, but go carefully. If you hook that AI up to suitable inputs and outputs, it can respond as if it believes.
Quoting Banno
Sure, we can make that judgement. But what does the AI think of its efforts?
Because when it is real, what it says affects the speaker (the LLM) as much as the listener. How does anything AI says affect AI? How could it, if there is nothing there to be affected? How could anything AI says affect a full back-up copy of anything AI says?
When AI starts making sacrifices - a measurable burning of its own components for the sake of some other AI - then maybe we could start to see what it does as being like what a person does. Then there would be some stake in the philosophy it does.
The problem is that today many actual people don't understand sacrifice either. Which is why I said earlier that with AI we are building virtual sociopaths.
Yes. Curiously enough, the vision of a purely rational being is very attractive in some ways - we so often find the emotional, value-laden sides of life problematic. An impartial, well-informed referee.
But no - without the emotions, the values, there is nothing remotely like a human being, however much it may be designed and constructed to imitate that.
Skills for Developing Philosophical Reasoning
Research & Knowledge Building:
Quickly summarize philosophical concepts, theories, or historical contexts
Summarize specific philosophers' thoughts as starting points
Conduct extensive research conversations with AI
Explore multiple interpretations of philosophical texts through varied prompts
Critical Testing & Refinement:
Use AI as Socratic sparring partner to challenge your arguments
Submit drafts (anonymously) for critique to identify weaknesses
Generate counter-arguments to strengthen your position
Stress-test reasoning from multiple philosophical perspectives
Dialectical Exploration:
Role-play AI as specific philosophers or viewpoint holders
Engage from different perspectives (as yourself, as opponents, as neutrals)
Argue positions you oppose to understand contrary views
Composition & Organization:
Improve clarity and flow of clunky writing
Polish grammar, syntax, and readability
Structure and organize complex material
Synthesize multiple perspectives into coherent frameworks
Brainstorm and outline ideas (without composing full arguments)
Iterative Development:
Use AI outputs as starting points for further refinement
Cycle through multiple rounds of critique and revision
Refine prompts to avoid confirmation bias and explore diverse readings
Guidelines for Ethical Practice
Verification & Accuracy:
Always verify AI output and treat it as potentially unreliable
Check and validate all sources (as AI can fabricate references)
Guard against confabulation by requesting sources
Treat AI as an "over-confident assistant" requiring scrutiny
Intellectual Ownership:
Maintain ability to defend any AI-generated argument in your own terms
Internalize AI-generated knowledge as you would any source
Ensure you genuinely understand what you're presenting
Remain the ultimate director and arbiter of ideas
Transparency & Disclosure:
Disclose substantial AI assistance to maintain trust
Be transparent about when AI has contributed to work
Distinguish clearly between your thinking and AI output
Preserve authenticity of philosophical dialogue
Appropriate Boundaries:
Use AI to augment thinking, not replace it
Avoid using AI to compose full arguments or posts
Don't let AI do your thinking for you
Recognize AI as tool, not interlocutor or authority
Yes, this is the thesis that would need to be argued. It is the very question at hand.
Quoting Pierre-Normand
I think this is a false equivalence. Drawing conclusions about AI based on its code is not the same as drawing conclusions about humans based on theories of neurophysiology. Theories of neurophysiology simply do not provide the deductive rigor that computer code does. Indeed, the whole point here is that we wrote the code and built the computer program, whereas we neither wrote nor built the neurophysiology - we do not even know whether neurophysiology and code are truly analogous. Art and science seem to be being conflated, or at least that is the prima facie conclusion until it can be shown why AI has somehow gone beyond artifice.
Quoting Pierre-Normand
Okay, good, and here we begin to see an attempt at an argument for why AI cannot be understood merely in terms of code and inputs.
So an example of the sort of answer I would want would be something like this: "We build the code, but the output of that code builds on itself insofar as it is incorporating inputs that we did not explicitly provide and we do not fully comprehend (such as the geography that a map-making AI surveys)." So apparently in some sense the domain of inputs is unspecified, and because of this the output is in some sense unpredictable.
But even on that story, an understanding of the code is still going to furnish one with an important understanding of the nature of the AI.
Quoting Pierre-Normand
It would seem to me that the machine's designers designed the machines to do this, no?
Quoting Pierre-Normand
Okay, great. Thanks for this. I will look into it when I get a chance. :up:
By that same principle, most people are not real, or what they say isn't real, because they are for a large part completely unaffected by what they themselves say.
@Fire Ologist's argument would still obtain, even on your presupposition. This is because there is a crucial difference between being completely unaffected and "for a large part completely unaffected."
No such person exists. At best you are speaking hyperbolically.
I fully agree that there is this important disanalogy between the two cases, but I think this difference, coupled with what we do know about the history of the development of LLMs within the fields of machine learning and natural language processing, buttresses my point. Fairly large classes of problems that researchers in those fields had grappled with unsuccessfully for decades were suddenly "solved" in practice when the sought-after linguistic and cognitive abilities just arose from the training process through scaling, which left many NLP (natural language processing, not the pseudoscience with the same acronym!) researchers aghast because it seemed to them that their whole field of research was suddenly put in jeopardy. I wanted to refer you to a piece where I recalled a prominent researcher reflecting on this history and couldn't find it. GPT-5 helped me locate it: (When ChatGPT Broke an Entire Field: An Oral History)
So, in the case of rational animals like us, the issue of finding the right explanatory level (either deterministic bottom-up or emergent top-down) for some class of behavior or cognitive ability may require, for instance, disentangling nature from nurture (which is complicated by the fact that the two corresponding forms of explanation are more often complementary than dichotomous), and doing so in any detail might require knowledge of our own natural history that we don't possess. In the case of chatbots, we indeed know exactly how it is that we constructed them. But it's precisely because of that that, as reported in the Quanta piece linked above, we know that their skills weren't instilled in them by design, except inasmuch as we enabled them to learn those skills from the training data that we ourselves (human beings) produced.
On my view, it's not so much the unpredictability of the output that is the mark of rational autonomy but rather the relevant source of normative constraint. If the system/animal can abide (however imperfectly) by norms of rationality then questions about the low-level material enablement (physiology or programming) of behavior are largely irrelevant to explaining the resulting behavior. It may very well be that knowing both the physiology and the perceptually salient circumstances of a person enables you to predict their behavior in bottom-up deterministic fashion like Laplace's demon would. But that doesn't imply that the antecedent circumstances caused, let alone relevantly explain, why the behavior belonged to the intelligible class that it did. It's rather the irreducible high-level rationalizing explanation of their behavior that does the job. But that may be an issue for another thread.
Meanwhile, the answer that I would like to provide to your question addresses a slightly different one. How might we account for the emergence of an ability that can't be accounted for in low-level terms - not because determinate inputs don't lead to determinate outputs (since they very well might), but rather because the patterns that emerge in the outputs, in response to those present in the inputs, can only be understood as being steered by norms that the chatbot can only abide by on the condition that it has some understanding of them - and the process by means of which this understanding is achieved, unlike what was supposed to be the case with old symbolic AI, wasn't directed by us?
This isn't of course an easy question to answer, but the fact that the emergence of the cognitive abilities of LLM-based chatbots was unpredictable doesn't mean that it's entirely mysterious either. A few months ago I had a discussion with GPT-4o, transcribed here in four parts, about the history leading from Rosenblatt's perceptron (1957) to the modern transformer architecture (circa 2017) that underlies chatbots like ChatGPT, Claude and Gemini, and about the criticisms of this neural-net approach to AI by Marvin Minsky, Seymour Papert and Noam Chomsky. While exploring what it is that the critics got wrong (and was belied by the later successes in the field), we also highlighted what it is that they had gotten right, and what it is that makes human cognition distinctive. And this also suggested enlightening parallels, as well as sharp differences, between the formative acculturation processes that humans and chatbots go through during upbringing/training. Most of the core ideas explored in this four-part conversation were revisited in a more condensed manner in a discussion I had with GPT-5 yesterday. I am of course not urging you to read any of that stuff. The Quanta piece linked above, though, might be more directly relevant and accessible than the Karpathy interview I had linked earlier, and might provide some food for thought.
I agree. AI doesn't have the ability to be affected by its own statements in the way we are describing. The effect of words I'm referencing is their effect on our judgment, not merely the words' internal coherence (which is all AI can reference).
Preachers and bosses must gather information and solicit responses, and adapt their speech, to have any effect in the world at all, and that information-gathering and adaptation stage is them being affected by what they just said. They say x, gather feedback to determine its effect, and then they either need to say y, or they judge they've said enough. They need to move their ideas into someone else's head in order for someone else to act on those same ideas. It's a dialogue that relates to non-linguistic steps and actions in the world between speakers. A dialogue conducted for a reason in the speaker and a reason in the listener. Even if you don't think your boss cares about you, and he tells you to shut up and just listen, and is completely unaffected by your emotions, he has to be affected by your response to his words in order to get you to do the work described in his very own words - so his own words affect what he is doing and saying all of the time, just as they affect what the employee is doing.
AI certainly, at times, looks like a dialogue, but the point is that, upon closer inspection, there is no second party affected by the language and so no dialogue that develops. AI doesn't think for itself (because there would have to be a "for itself" there that involved thinking).
AI is a machine that prints words in the order in which its rules predict those words will complete some task. It needs a person to prompt it, to give it purpose and intention, to give it a goal that will mark completion. And then AI needs a person to interpret it (to be affected by those words) once its task of printing is done. AI can't know that it is correct when it is correct, or know it has completed the intended task. We need to make those judgments for it.
Just like AI can't understand the impact of its hallucinations and lies. It doesn't understand. It just stands.
At least that's how I see it.
So we need to know every time we are dealing with AI and not a person, so that, however the words printed by AI might affect us, we know the speaker has no stake in that effect. We have to know we are on our own with those words, to judge what they mean and to determine what to do now that we've read them. There is no one and nothing there with any interest or stake in the effect those words might have.
ADDED:
A sociopath does not connect with the person he is speaking with. So a sociopath can say something that has no effect on himself. But for a sociopath, there is a problem with connection; there are still two people there, it's just that the sociopath only recognizes himself as a person. For AI, there is a problem with connection because there is nothing there for the listener to connect with.
AI can adapt to the conversation, remembering its context and making new judgements when provided with new information or a different way of looking at a topic.
The ability that AI does not have that we do is the ability to go out and confirm or reject some idea with consistent observations. But if it did have eyes (cameras) and ears (microphones) it could then test its own ideas (output).
AI does not seek "Likes" or praise, or become defensive when what it says is challenged. It doesn't abandon the conversation when the questions get difficult.
Which qualities would you prefer if your goal is seeking truth?
No, the ability AI does not have is to want to confirm its own ideas, or identify a need or reason to do so. AI has no intent of its own.
When AI seeks out other AI to have a dialogue, and AI identifies its own questions and prompts to contribute to that dialogue, we might be seeing something like actual intelligence. We might just be deceived by our own wishful bias.
Quoting Harry Hindu
Yes it does. It's not intentional, so it is not a lie. It is a misfire of rule-following. AI hallucinates meaning, invents facts, and then builds conclusions based on those facts, and when asked why it did that, it says "I don't know." Like a four-year-old kid. Or a sociopath.
Quoting Harry Hindu
So what? Neither do I. Neither need any of us. AI doesn't get hungry or need time off from work either. This is irrelevant to what AI creates for us and puts into the world.
Are you attempting to address the questions in the OP? Are you helping to work out how to use AI effectively to do philosophy? It doesn't look like it to me, so you'd better find somewhere else for your chat.
How can we use something effectively if we don't know what it is?
Unless we are all postmodernists. In which case there is no what it is to know, and floundering between uses is the only way, the best way, to get on in life.
Quoting Banno
These are good.
Most important thing is this: Quoting Banno
Because of all of the other pitfalls and how easily AI appears to be a person, we need to know we are not dealing with content that comes from a person.
Thanks. Carry on in that vein and leave the questions about the nature of AI for elsewhere. :up: (EDIT: unless you are explicitly connecting it to the topic)
What exactly do we mean by "not affected by what one says"? Are you referring to the inability of AI to test the validity of what it is saying? Or are you referring to people in authority being able to say what they want with very little questioning, if any at all, of what they say - that what they say isn't tested to the same degree as it would be if someone who is not an authority said the same thing?
If the former, then this goes to what I was saying before: AI does not have any senses with which to gather information directly from the source - reality - so its only source of reality is what humans are asking and saying about reality. We test our logic with observation. We test our observations with logic. It is this sensory feedback loop that AI lacks, and that is why it cannot think with intent in the way that human beings do. If all you have to go by is scribbles typed by humans, then you have no way to know what those scribbles are referring to - or even that they refer to anything at all - and yet it is those things that the scribbles are about.
If the latter, then AI does not have a view of itself as being an authority or not on what it is saying. We do, in the way we treat what it is saying as the only source, or as part of an amalgam of sources used to triangulate the truth. If the AI's training sources are from varying authorities on the subject, is its response considered authoritative?
We might want to consider the type of AI we are using for the purpose we have in mind. An AI trained on any and all data mined from the internet, with no way of distinguishing what is provable by observation, will probably not be the type of AI you want when your goal is seeking truth, just as you might want to consider the type of human you bounce your ideas off (you wouldn't choose someone who is closed-minded or who stops talking when what you're saying doesn't reinforce their own assumptions).
Great topic: How to use AI effectively and ethically.
Here's my suggestion for achieving both goals at once: engage in a dialectical exchange with AI and then publish the dialogue unedited.
The human exerts maximum control with AI in the same way he does with another human; he asks questions. Haven't we seen this tactic used over and over in movies set in the courtroom?
Doesn't this tactic trace back all the way to Socrates? Don't we have the record of Greek philosophers mainly in the form of question-driven dialogues?
A good dialectician is a verbal chess player. He gains power with the right question at the right moment. Going forward from there, if he makes no logical errors in his pursuit of a logical proof of his premise, then he can't fail in his quest to establish the truth and authority of his premise.
Such a dialogue with an AI, as with another human, creates a clear account of who the creative thinker is.
So, dialoging with AI need not obscure who the credit belongs to for creatively driving the thinking. The human simply honors the dialectical method when interacting with AI.
Doing this is smart because AI runs on the largest database of knowledge available. A thinker who holds his own with such a database loses no individuality.
Given the current status of AI, we know the human dialectician is really matching wits with the human programmers who write the algorithms AI follows.