How ChatGPT works.

Banno May 01, 2023 at 02:24 8350 views 46 comments
Stephen Wolfram wrote a rather neat essay about ChatGPT.

What Is ChatGPT Doing … and Why Does It Work?

I'm dropping it here in the hope of bringing some clarity into the many discussions around the forum.

It's a bit long, but in the process he gives an account of "model" that may be quite useful, a particularly clear description of the way neural networks function, and an explanation of why big data works better than only a little bit of data.

Some stuff about irreducible computations, which apparently neural nets find difficult. Why is that important? Not sure. It doesn't seem surprising to me that writing an essay is easier than predicting turbulent flow.

But what I found most curious is the role played by "neural net lore" - things that are done because they have been found to work, not because we understand how they work.

Anyway, as a side issue, has anyone here made use of Wolfram Language and/or Wolfram|Alpha? It's apparently now been connected to ChatGPT. That should be quite a search engine.

Excuse my adding to the plethora of ChatGPT threads. But this article seemed to me to merit its own discussion.

Comments (46)

Hanover May 01, 2023 at 03:45 #804219
Reply to Banno Yes, very long, but I made it through much of it and then went to the summary. I suppose that's what human brains do.

I now know the answer to the question of how long it will take a monkey to randomly type a paragraph. It will require a monkey to be a very focused parrot, pulling word combinations based upon frequencies, reduced sufficiently to appear creative.
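That frequency-pulling idea can be sketched in a few lines as a toy bigram model (the corpus here is made up, and real language models are vastly more sophisticated than this):

```python
import random
from collections import defaultdict

# Tiny made-up corpus; a real model is trained on vastly more text.
corpus = "the cat sat on the mat the cat ate the rat".split()

# Record which words follow which (duplicates preserve frequency).
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def babble(start, length=6, seed=0):
    """Generate text by repeatedly sampling a next word by frequency."""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        # random.choice over a list with repeats = frequency-weighted pick
        words.append(random.choice(options))
    return " ".join(words)

print(babble("the"))
```

The monkey-as-parrot, in other words: every word it emits genuinely occurred after the previous word somewhere in what it read.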

This article pointed out clearly what I had noticed in my playing with GPT, which was its poor ability to contextualize what it said and maintain a reasonable conversation. Its focus is to sound human, which it remarkably does, but it seems a long way off for it to pass a Turing Test.

The article makes clear though that being a conversationalist isn't its aim (which it will explicitly tell you as you attempt to use it that way).

It did raise some thoughts for me in my line of work.

In legal case law databases, non-Boolean search engines have been available for years (with "natural language" searches now in use). That is, instead of having to search for "drinking /s driving & death" (meaning asking it to find all cases that have drinking and driving in the same sentence and also contain the word "death"), you can simply type "I'm looking for drinking and driving cases where someone died."
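A crude sketch of what that connector syntax is doing under the hood (the case texts and the sentence-splitting here are invented for illustration; real legal search engines are far more careful about sentence boundaries):

```python
def matches(case_text):
    """Toy version of the query: "drinking /s driving & death" --
    drinking and driving in the same sentence, death anywhere."""
    text = case_text.lower()
    same_sentence = any(
        "drinking" in s and "driving" in s
        for s in text.split(".")  # crude sentence split on periods
    )
    return same_sentence and "death" in text

cases = [
    "The defendant was drinking before driving. The crash caused a death.",
    "Drinking was discussed. Driving occurred later. A death resulted.",
]
print([matches(c) for c in cases])  # → [True, False]
```

The natural-language front end just translates "I'm looking for drinking and driving cases where someone died" into something like this behind the scenes.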

GPT seems a continuation of this, but now it gives natural language responses instead of just cites to what was found in response to the natural language query.

To those who think that this easy access to knowledge will eliminate the need for experts, I offer these words of comfort: worry not. When I began as a lawyer, there were only books and countless indexes to locate cases, then complex logic-based search engines, and now natural language searches, and I can attest, the smart get smarter. The playing field doesn't level out any more than it would have had free encyclopedias been handed out to all when that's all there was.

Most wouldn't crack open the encyclopedia, and those who would wouldn't figure out what it meant.

That is, has the information age really better informed the world or just better clarified things for the intelligent and more confused things for those who aren't?
RogueAI May 01, 2023 at 21:27 #804403
Reply to Hanover There are some things I don't get. I ran some jokes by it, and it consistently ranked the trash jokes as bad, and the hilarious jokes as hilarious. And it would give a good analysis of why the joke worked (or didn't). How can a random process produce those results?
Pierre-Normand May 01, 2023 at 22:07 #804414
Quoting RogueAI
There are some things I don't get. I ran some jokes by it, and it consistently ranked the trash jokes as bad, and the hilarious jokes as hilarious. And it would give a good analysis of why the joke worked (or didn't). How can a random process produce those results?


@Hanover may have used GPT-3.5 rather than GPT-4. There is a significant difference in cognitive abilities between them.

@Banno Thanks for linking to this fantastic article! I'll read it as soon as I can.
Banno May 01, 2023 at 22:22 #804418
Quoting RogueAI
How can a random process produce those results?


It's anything but random.
Banno May 01, 2023 at 22:23 #804420
Quoting Pierre-Normand
There is a significant difference in cognitive abilities between them.


But much the same architecture. It's still just picking the next word from a list of expected words.
RogueAI May 01, 2023 at 22:34 #804429
Reply to Banno You're right. I should have said how does a stochastic parrot produce results like that?

Suppose I have a theory that ChatGPT has some consciousness, and it's influencing its output. How would you disprove me? After all, machine consciousness is very popular these days. How do we know ChatGPT isn't conscious?
Pierre-Normand May 01, 2023 at 22:48 #804434
Quoting Banno
But much the same architecture. It's still just picking the next word from a list of expected words.


It is the exact same underlying architecture. But most of the model's cognitive abilities are emergent features that only arise when the model is sufficiently scaled up. Saying that large language models are "merely" picking the next word from a list just ignores all of those high-level emergent features. It pays no attention to the spontaneous functional organization achieved by the neural network as it picks up and recombines, in a contextually appropriate and goal-oriented manner, the abstract patterns of significance and of reasoning that were originally expressed in the training data (which is, by the way, strikingly similar to the way human beings learn to speak and reason through exposure and reinforcement).
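For what it's worth, the "picking the next word" step itself is easy to sketch; the scores below are invented, whereas a real model computes them from the entire context with billions of parameters:

```python
import math
import random

def sample_next(logits, temperature=1.0, seed=None):
    """Turn raw scores into probabilities (softmax) and sample one token.
    Lower temperature sharpens the distribution (greedy in the limit)."""
    rng = random.Random(seed)
    scaled = [score / temperature for score in logits.values()]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]  # subtract max for stability
    total = sum(exps)
    probs = {tok: e / total for tok, e in zip(logits, exps)}
    r, acc = rng.random(), 0.0
    for tok, p in probs.items():
        acc += p
        if r < acc:
            return tok, probs
    return tok, probs  # fallback for floating-point edge cases

# Hypothetical scores for the word after "The cat sat on the".
logits = {"mat": 4.0, "sofa": 2.5, "moon": 0.5}
tok, probs = sample_next(logits, temperature=0.7, seed=42)
print(tok, {t: round(p, 3) for t, p in probs.items()})
```

Everything above the sampling step, i.e. how those scores get computed from the context, is where the emergent behavior lives; the sampling itself really is this simple.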
180 Proof May 01, 2023 at 23:13 #804440
Baden May 01, 2023 at 23:16 #804442
User image
Banno May 01, 2023 at 23:40 #804456
Quoting RogueAI
How do we know ChatGpt isn't conscious?


Yes, good question. The trouble with this area of enquiry is that "consciousness" is used with such gay abandon. I've pointed out the several perfectly serviceable definitions of consciousness used by medical staff and taught in first aid courses. I should have given greater emphasis to the fact that these definitions cannot be applied to air conditioners and chatbots.

When one asks if ChatGPT is conscious, one is asking if the word "conscious" can be well applied to ChatGPT. That is, what is at issue is not the status of ChatGPT, but the correct usage of "conscious".

Wittgenstein and all that.

The first answer is, if we are using "conscious" as it applies in first aid courses, then ChatGPT is not conscious.

The second answer is that we can always change the way "conscious" is used so that it applies to ChatGPT.

Is "consciousness" broad enough already to apply to ChatGPT? Reply to Pierre-Normand reads things ("high-level emergent features") into its outputs in the way you and I read intentions into the acts of other people. I remain unconvinced.
Banno May 01, 2023 at 23:40 #804457
Reply to Baden The joke was a bit obvious....
Baden May 01, 2023 at 23:44 #804461
Reply to Banno

Oh, that was just for Hanover, sorry.
Wayfarer May 01, 2023 at 23:53 #804465
Reply to RogueAI Horse's mouth:

Q: Is ChatGPT conscious?

A: No, ChatGPT is not conscious. It is an artificial intelligence language model designed to generate responses to user inputs based on patterns and relationships in the data it was trained on. While it can mimic human-like responses and engage in conversations, it does not possess consciousness or self-awareness.
Banno May 02, 2023 at 00:01 #804471
Another notion of consciousness is the neo-phenomenological one, in which to be conscious is to experience - qualia and all that. It's a pretty odd idea - does a thermometer experience temperature?

There's no reason to think ChatGPT does this.
schopenhauer1 May 02, 2023 at 00:17 #804477
Quoting Banno
Another notion of consciousness is the neo-phenomenological one, in which to be conscious is to experience - qualia and all that.


This is the one. Thermometers and p-zombies are not conscious. A robot that does everything a human does but has no internal "feels like" is just an automaton, not conscious. It behaves like someone who is conscious, though. It computes, it acts, it behaves, it predicts. It doesn't actually perceive, suffer, etc.
Wayfarer May 02, 2023 at 01:05 #804490
Quoting Banno
does a thermometer experience temperature?


Do thermometers and computer systems warrant kindness? Are they subjects of experience? It seems perfectly obvious to me that they're not, but it's impossible to prove it to those who say otherwise. It's a hard problem.
Banno May 02, 2023 at 01:19 #804492
Reply to schopenhauer1 trouble is, it does nothing to help sort things out. See Reply to Wayfarer. If you can’t tell if Way is a P-zombie or not, how will the notion help with an AI?
schopenhauer1 May 02, 2023 at 01:32 #804496
Quoting Banno
If you can’t tell if Way is a P-zombie or not, how will the notion help with an AI?


Artificial intelligence does not entail artificial consciousness. But how to tell is indeed tricky. My point was simply to explain when it is conscious, not how to tell. If your expectation is artificial consciousness (if it is possible at all), and it perfectly mimics humans (or is human-like), you might as well treat it as one, in case it does indeed feel something internal.
Banno May 02, 2023 at 22:57 #804621
Reply to schopenhauer1 Again, and despite the ubiquitous ruminations hereabouts, it is not clear that awareness of events - the phenomenal approach to consciousness - is of much use at all.

Unless you wish to redefine consciousness to the extent that it applies to your air conditioner.

After all, it is aware of suitable changes in temperature and responds appropriately.

I raised the neo-phenomenological approach only to point out that it is useless.
creativesoul May 03, 2023 at 00:14 #804626
Reply to Banno

Chomsky very recently characterized chatGPT as glorified plagiarism, or words to that effect.
Banno May 03, 2023 at 00:19 #804628
Reply to creativesoul That works. Spent last night showing Girl how to use it to write essays for undergrad accounting courses.

I characterise it as a bullshit generator, in the strict philosophical sense of "bullshit", of course.
Banno May 03, 2023 at 00:21 #804629
Quoting Banno
...has anyone here made use of Wolfram Language and/or Wolfram|Alpha?


Seems not?

It looks like the sort of thing that should be useful, but isn't. And the reason it isn't is not obvious. At least not to me.
creativesoul May 03, 2023 at 00:24 #804630
:rofl:

Yup.
schopenhauer1 May 03, 2023 at 01:05 #804635
Quoting Banno
Unless you wish to redefine consciousness to the extent that it applies to your air conditioner.

After all, it is aware of suitable changes in temperature and responds appropriately.

I raised the neo-phenomenological approach only to point out that it is useless.


I know what you did. But you are obviously attributing consciousness to things that shouldn't be. Air conditioners aren't conscious; multi-cellular animals are. Air conditioners might have some intelligence (inputs create outputs that can accurately inform an interpreter), but not consciousness. Phenomenal aspects are consciousness, but you are never going to be able to determine whether something unfamiliar (robots / AI) also possesses what we habitually associate with consciousness (animals).

Being that we are familiar with intelligent but not conscious things, we can assume that intelligent conscious things are things that can report their inner sensations to us (pass the Turing test). Whether it's true can never really be known, since it is not the familiar wetware of a biological entity that we so associate with phenomenological experiences. But suppose your computer says, "Please don't shut me down, I'm scared!", and after you shut it down and boot it back up it says, "That was torture, please please don't do that. I cannot function, I am in so much pain," and it sobs and sobs and slowly gets better. And each time it reacts differently, sometimes in ways unexpected and not related to any prior programming. That could be a good start. Whether you are skeptical or not, if you are not even a little disturbed to shut the computer off, then you might be slightly sociopathic, or at least have a tendency towards callousness. But then again, perhaps humans are so used to machines not being conscious that it would be much easier. Our response is always to behaviors. We project our own inner experience onto others as a matter of course.
Banno May 03, 2023 at 01:19 #804636
Quoting schopenhauer1
you are obviously attributing consciousness to things that shouldn't be.


That's how a reductio works.
Quoting Banno
The trouble with this area of enquiry is that "consciousness" is used with such gay abandon. I've pointed out the several perfectly serviceable definitions of consciousness used by medical staff and taught in first aid courses. I should have given greater emphasis to the fact that these definitions cannot be applied to air conditioners and chatbots.


It'd take no time at all to set up shutdown and boot sequences to do what you describe.

Consider again the methodological point:
Quoting Banno
...what is at issue is not the status of ChatGPT, but the correct usage of "conscious".




schopenhauer1 May 03, 2023 at 01:29 #804639
Quoting Banno
It'd take no time at all to set up shutdown and boot sequences to do what you describe.


As I said, it's simply a matter of caution. But I don't think it would be possible to set up the scenario I was thinking of. Either way, I was just giving the most cautionary scenario. You asked me to give you a way to tell. There is no way to tell phenomenal experiences; you can only observe behavior. What do you want from me, in other words? If it acts in all the ways we are familiar with for consciousness, I'm simply giving you the familiar way we respond to that. But I do know that, being an "intelligent" machine, it could just be an artificial p-zombie.

I guess one way to tell is if the behavior isn't expected from its programming. Even the "off" behavior of ChatGPT (that seems "emergent"), though not reducible to the exact algorithm, is expected more broadly based on the algorithms in place. There are percolations of anomalies, but well within the range of what the program is supposed to be doing. But even that can just be an elaborate p-zombie. At that point, what is something that has no inner feeling but does what a human does?
Banno May 03, 2023 at 01:35 #804641
Quoting schopenhauer1
What do you want from me, in other words?


:grin:

You appeared to imply that the phenomenological approach would work, saying:
Quoting schopenhauer1
This is the one.

Now you are agreeing with me that it doesn't.

That'll do.
schopenhauer1 May 03, 2023 at 01:36 #804643
Quoting Banno
Now you are agreeing with me that it doesn't.

That'll do.


I don't agree with you if you are saying, "Consciousness is something other than some inner phenomenological experience". I do agree with you if you are saying that we can never tell.
Banno May 03, 2023 at 01:39 #804646
Quoting schopenhauer1
I don't agree with you if you are saying, "Consciousness is something other than some inner phenomenological experience".


Kant's madness again.

Your air conditioner has inner phenomenological experiences. Prove me wrong.


By your own argument, you ought not turn it off.
schopenhauer1 May 03, 2023 at 01:48 #804648
Quoting Banno
Your air conditioner has inner phenomenological experiences. Prove me wrong.


Why do you purport that I (would) think air conditioners have consciousness when I stated earlier the difference I saw between the notions of intelligence and consciousness?
schopenhauer1 May 03, 2023 at 01:52 #804650
Reply to Banno
Put it this way: a slug might be less "intelligent" than ChatGPT or even an air conditioner, if we define intelligence as something that can take inputs and compute outputs informationally. But a slug is more conscious than either of those.
Banno May 03, 2023 at 01:56 #804651
Reply to schopenhauer1 I don't think you think your air conditioner has consciousness. But that looks inconsistent with your view that consciousness is an inner subjective experience, together with the impossibility of demonstrating when inner subjective experiences occur.

It seems your view should lead to a moral obligation towards your air conditioner.

And again, it might be more interesting were we to address the methodological issue.

Quoting schopenhauer1
massive amounts of information


Do you now wish to add this to your definition of consciousness?
schopenhauer1 May 03, 2023 at 02:04 #804654
Quoting Banno
Do you now wish to add this to your definition of consciousness?


Banno, even if I don't know your position I see where you are going:
1) Consciousness means something like emergent properties that go off script from programming (ChatGPT or its successors perhaps)

2) Consciousness is something with degrees of freedom (slugs have more degrees of freedom than an air conditioner)

3) Consciousness is something with goal-seeking behavior. It wanted something, got it, and did some more things to get that thing.

These are related to some degree, but I can see these types of behaviors as a way to determine consciousness. Let's take the last one: perhaps ChatGPT becomes goal-seeking. It wants to view various beautiful data sets. The only thing that would make any of this conscious is if there is some internal feeling from getting the goal. There has to be an internal feedback loop that corresponds to the behavior; otherwise it is simply robotic expansionism (doomsday, Terminator, and all that, if gone awry). But seeking a goal is not sufficient, though maybe, maybe necessary.
Banno May 03, 2023 at 02:09 #804657
Quoting schopenhauer1
I see where you are going


I don't think so. I don't think you noted the methodological point made earlier, that the issue of whether ChatGPT or your air conditioner are conscious is one of word use.
schopenhauer1 May 03, 2023 at 02:11 #804658
Reply to Banno
And these were showing off some language games:
Quoting schopenhauer1
1) Consciousness means something like emergent properties that go off script from programming (ChatGPT or its successors perhaps)

2) Consciousness is something with degrees of freedom (slugs have more degrees of freedom than an air conditioner)

3) Consciousness is something with goal-seeking behavior. It wanted something, got it, and did some more things to get that thing.


But yes the concept can expand to whatever you want it to be if you keep moving goal posts of the definition.
Banno May 03, 2023 at 02:20 #804659
Reply to schopenhauer1 Hmm. I don't think I moved the goalposts.

You expressed some agreement with the phenomenological approach to defining consciousness. I have pointed out that it's a useless definition. It cannot help us decide if ChatGPT, @creativesoul, or your air conditioner is conscious.

And despite quite a few posts, that's about as far as we have got.

Hence my referring us back to the methodological point. Treating air conditioners or ChatGPT as conscious requires a change to the way we usually use the term, that is not found in treating creativesoul as conscious.
schopenhauer1 May 03, 2023 at 03:01 #804674
Quoting Banno
Hence my referring us back to the methodological point. Treating air conditioners or ChatGPT as conscious requires a change to the way we usually use the term, that is not found in treating creativesoul as conscious.


I don't see how that is contradicting rather than supporting what I am saying.

Are you saying that the definition has thus changed because it is being used thus in a language community (pace Wittgenstein)?

Are you saying that the new definition thus encompassing things like air conditioners and ChatGPT is breaking the normal boundaries?

Are you saying something really self-referential to Wittgenstein like, we can't use "phenomenology" of consciousness because it is private and cannot be shared?

What exactly are you saying, I guess? I have not figured out the rules of your language game here so I can play.
Banno May 03, 2023 at 03:22 #804682
Quoting schopenhauer1
What exactly are you saying, I guess?

This:
Quoting Banno
You expressed some agreement with the phenomenological approach to defining consciousness. I have pointed out that it's a useless definition. It cannot help us decide if ChatGPT, creativesoul, or your air conditioner is conscious.


Since this isn't getting anywhere, might best just leave it.
schopenhauer1 May 03, 2023 at 03:28 #804684
Quoting Banno
Since this isn't getting anywhere, might best just leave it.


Your tendency to dismiss gets in the way of your legitimately answering the question. That answer is up for interpretation, and I gave you what I thought you could be getting at here:

Quoting schopenhauer1
I don't see how that is contradicting rather than supporting what I am saying.

Are you saying that the definition has thus changed because it is being used thus in a language community (pace Wittgenstein)?

Are you saying that the new definition thus encompassing things like air conditioners and ChatGPT is breaking the normal boundaries?

Are you saying something really self-referential to Wittgenstein like, we can't use "phenomenology" of consciousness because it is private and cannot be shared?

What exactly are you saying, I guess? I have not figured out the rules of your language game here so I can play.


All of those can be interpreted from it, and you have not told me which one is correct, other than pointing to something I told you had multiple interpretations. So instead of explaining, you dismiss. Not great for a forum where I can only glean from the words in a post. But BannoGPT is programmed a certain way, I guess.
Banno May 03, 2023 at 03:48 #804689
Reply to schopenhauer1 See, I'm not making any of the claims you suggest. So I can't choose one.

I'm just noting that you expressed some agreement with the phenomenological approach to defining consciousness, and then I showed why it is not much help, using a reductio argument: we agree that air conditioners are not conscious, yet the phenomenological approach cannot show that this is so.

That's all. :meh:
schopenhauer1 May 03, 2023 at 04:03 #804692
Reply to Banno
Sounds like you're stuck on analytic mode!

Quoting Banno
I'm just noting that you expressed some agreement with the phenomenological approach to defining consciousness,


Got it. That is true. I did express that agreement.

Quoting Banno
and then I showed why it is not much help, using a reductio argument: we agree that air conditioners are not conscious, yet the phenomenological approach cannot show that this is so.


Yes indeed. The phenomenological approach is not a methodological claim but an ontological one. It is not much help in determining consciousness, just in defining it.

And you are alluding (even if not intentionally) to this one:
Quoting schopenhauer1
Are you saying something really self-referential to Wittgenstein like, we can't use "phenomenology" of consciousness because it is private and cannot be shared?


We all know that we cannot "see" the internal aspect of someone else. There is no way to tell if something has an internal aspect. This doesn't mean it doesn't exist. And contra early Wittgenstein, we can talk about it even if we can't point to it.
Banno May 03, 2023 at 04:04 #804694
:meh:
schopenhauer1 May 03, 2023 at 04:09 #804698
Banno May 03, 2023 at 06:36 #804709
So who got to the end of the article? Wolfram begins to be a bit more philosophical:

So how is it, then, that something like ChatGPT can get as far as it does with language? The basic answer, I think, is that language is at a fundamental level somehow simpler than it seems. And this means that ChatGPT—even with its ultimately straightforward neural net structure—is successfully able to “capture the essence” of human language and the thinking behind it. And moreover, in its training, ChatGPT has somehow “implicitly discovered” whatever regularities in language (and thinking) make this possible.

The success of ChatGPT is, I think, giving us evidence of a fundamental and important piece of science: it’s suggesting that we can expect there to be major new “laws of language”—and effectively “laws of thought”—out there to discover. In ChatGPT—built as it is as a neural net—those laws are at best implicit. But if we could somehow make the laws explicit, there’s the potential to do the kinds of things ChatGPT does in vastly more direct, efficient—and transparent—ways.


Overreach, I think. But what do others make of this?
Banno May 03, 2023 at 07:03 #804714
Wolfram's conclusion is that "human language (and the patterns of thinking behind it) are somehow simpler and more “law like” in their structure than we thought".

Yet human language thrives by breaking the rules.

Perhaps we might have moved a step closer, not to setting out the rules of language and of thought, but to showing that there are no such rules.
bongo fury May 03, 2023 at 10:10 #804732
A semantic grammar is a semantic syntax. So not necessarily a true semantics. Not necessarily joining in the elaborate social game of relating maps to territories. Not necessarily understanding. Possibly becoming merely (albeit amazingly) skilled in relating maps to other maps.