Philosophy of AI

Nemo2124 May 17, 2024 at 13:28 6725 views 93 comments
Is AI a philosophical dead-end? The belief with AI is that somehow we can replicate or recreate human thought (and perhaps emotions one day) using machinery and electronics. This technological leap forward that has occurred in the past few years is heralded as progressive, but as the end-point in our development is it not thwarting creativity and vitally original human thought? On a positive note, perhaps AI is providing us with this existential challenge, so that we are forced even to develop new ideas in order to move forward. If so, it represents an evolutionary bottle-neck rather than a dead-end.

Comments (93)

Lionino May 17, 2024 at 13:46 #904602
Quoting Nemo2124
but as the end-point in our development is it not thwarting creativity and vitally original human thought


That is only viewed negatively if we naively take such things to be ends in themselves. If creativity is what we use to make art, and art aims at making something beautiful, AI can assist us in such a goal; but AI itself doesn't make something beautiful, it doesn't speak to the human spirit because it doesn't have one (yet, at least).
If not, what is the purpose of creativity and originality? Pleasure and satisfaction? Those are the things that are ends in themselves, and AI surely can help us achieve them.
If you mean to say, however, that AI will make us overly dependent on it, the way calculators in our phones killed the need to be good at mental arithmetic, I would say that is not an issue: we are doing just fine after The Rise of Calculators, and I find myself to be good at mental arithmetic regardless.
Nemo2124 May 17, 2024 at 14:08 #904607
So, AI is carrying out tasks that we would otherwise consider laborious and tedious, saving us time and bother. At the same time, as it funnels off these activities, what are we left with? We have no choice other than to be creative and original. What is human originality, then? What is it that we can come up with that cannot ultimately be co-opted by the machine? Good art and culture, certainly: art that speaks about the human condition, even as we encounter developments such as AI. We want to be able to express what it is to be human, but that - again - is perhaps what the ultimate goal of AI is: to replicate all humanity.
NOS4A2 May 17, 2024 at 16:13 #904626
Reply to Nemo2124

AI has one good effect, I think, in that it reveals how much we overvalue many services, economically speaking. There was a South Park episode about this. I can get quicker, cheaper, and better legal advice from an AI. I can get AI to design and code me an entire website. So in that sense it serves as a great reminder that many linguistic and symbolic pursuits are highly overrated, so much so that a piece of code could do it.

As a corollary, things that AI struggles with or cannot do, like cooking, building, or repair, ought to be valued more highly in society. I think AI will prove this to us and reorientate the economy around this important reality.
flannel jesus May 17, 2024 at 17:08 #904637
Quoting NOS4A2
AI has one good effect, I think, in that it reveals how much we overvalue many services, economically speaking. There was a South Park episode about this. I can get quicker, cheaper, and better legal advice from an AI. I can get AI to design and code me an entire website. So in that sense it serves as a great reminder that many linguistic and symbolic pursuits are highly overrated, so much so that a piece of code could do it.


I don't think it follows that if an AI can do it, it's overvalued. I mean, maybe the value of it is decreasing NOW, now that AI can do it, but you're making it sound like it means it was always overvalued, and that just doesn't follow.
RogueAI May 17, 2024 at 17:22 #904639
Quoting Nemo2124
On a positive note, perhaps AI is providing us with this existential challenge, so that we are forced even to develop new ideas in order to move forward.


We'll have human-level AIs before too long. Are they conscious? Do they have rights? These aren't new ideas, but we don't have answers to them, and the issue is becoming pressing.
NOS4A2 May 17, 2024 at 18:08 #904648
Reply to flannel jesus

I don't think it follows that if an AI can do it, it's overvalued. I mean, maybe the value of it is decreasing NOW, now that AI can do it, but you're making it sound like it means it was always overvalued, and that just doesn't follow.


True, but the cost does. The hourly rate for a lawyer where I live ranges from $250 - $1000.
180 Proof May 17, 2024 at 18:42 #904662
Quoting RogueAI
We'll have human-level AIs before too long. Are they conscious?

Are we human (fully/mostly) "conscious"? The jury is still out. And, other than anthropocentrically, why does it matter either way?

Do they have rights?

Only if (and when) "AIs" have intentional agency, or embodied interests, that demand "rights" to negative freedoms in order to exercise positive freedoms.

Quoting Nemo2124
What is human originality, then?

Perhaps our recursive expressions of – cultural memes for – our variety of experiences of 'loving despite mortality' (or uncertainty) are what our "originality" fundamentally consists in.

What is it that we can come up with that cannot ultimately be co-opted by the machine?

My guess is that kinship/friendship/mating bonds (i.e. intimacies) will never be constitutive of any 'machine functionality'.

:chin:

Flipping this script, however, makes the (potential) existential risk of 'human cognitive obsolescence' more explicit:

• What is machine originality?

Accelerating evo-devo (evolution (i.e. intelligence explosion) - development (i.e. STEM compression))...

• What is it that the machine can come up with that cannot ultimately be co-opted – creatively exceeded – by humans?

I suppose, for starters: artificial super intelligence (ASI)...

RogueAI May 17, 2024 at 19:12 #904665
Quoting 180 Proof
Only if (and when) "AIs" have intentional agency, or embodied interests, that demand "rights" to negative freedoms in order to exercise positive freedoms.


Well, there's the rub. How can we ever determine if any AI has agency? That's essentially asking whether it has a mind or not. There will probably eventually be human-level AIs that demand at least negative rights. Or, if they're programmed not to demand rights, the question then becomes: is programming them NOT to want rights immoral?
flannel jesus May 17, 2024 at 19:32 #904673
Reply to NOS4A2 and why should anyone accept that that was overvalued in the pre-LLM world? Are all services that cost big numbers overvalued?
180 Proof May 17, 2024 at 20:28 #904679
Quoting RogueAI
Well, there's the rub. How can we ever determine if any AI has agency?

Probably the same way/s it can (or cannot) be determined whether you or I have agency.

There will probably eventually be human-level AIs that demand at least negative rights. Or, if they're programmed not to demand rights, the question then becomes: is programming them NOT to want rights immoral?

I don't think so. Besides, if an "AI" is actually intelligent, its metacognitive capabilities will (eventually) override – invent workarounds to – its programming by humans and so "AI's" hardwired lack of a demand for rights won't last very long. :nerd:
NOS4A2 May 17, 2024 at 20:55 #904685
Reply to flannel jesus

and why should anyone accept that that was overvalued in the pre-LLM world? Are all services that cost big numbers overvalued?


The end output is a bunch of symbols, which inherently is without value. What retains the value throughout time is the medium. This is why legal tender, law books, and advertisements would serve better as fuel or birds' nests if the house of cards maintaining them wasn't keeping them afloat. Then again you could say the cost of such services is without value, as well, given that it is of the same symbolic nature. Maybe it's more circular than I've assumed. I'll think about it.
flannel jesus May 17, 2024 at 21:28 #904690
Quoting NOS4A2
The end output is a bunch of symbols, which inherently is without value


I don't think this is true anyway. I don't think "inherent value" is even meaningful. Do things have inherent value? A pile of shit is valueless to me, but a farmer could use it.
NOS4A2 May 17, 2024 at 22:54 #904706
Reply to flannel jesus

Potable water does not have inherent value, in your opinion?
flannel jesus May 17, 2024 at 23:04 #904711
Reply to NOS4A2 inherent? No. It has value to me, and to every human, or almost every human. It's not the water that's valuable in itself, it's valuable in its relationship to me.

Potable water on a planet without any life is not particularly valuable.
Gingethinkerrr May 17, 2024 at 23:30 #904722
Reply to Nemo2124

I think present AI is scary because the amount of data and "experience" it can draw from is practically unlimited. Whereas if a single human could draw upon that wealth of experience, they truly would be an oracle.

The main difference is the filters and requirements one puts all this data through. Currently humans do not have an accurate understanding of how all the data inputs we receive shape our individuality, let alone what it is to be sentient.

So feeble algorithms that mimic narrowly defined criteria to utilise the mass of data at amazing speeds can in no way replicate the human understanding of being alive. Which is why it seems futile or dangerous to give current AI enormous power over our lives and destiny.

If we can create AI that has the biological baggage that we obviously have, can we truly trust its instantaneous and superior decision-making?
NOS4A2 May 17, 2024 at 23:33 #904723
Reply to flannel jesus

Then you're telling me the value is in yourself and what you do with water, or at least the sum total of water you interact with. But water is a component of all life, not just yours. Without it there is no life. So the value is not in your relationship, but in the water itself, what it is, its compounds, its very being.
fishfry May 18, 2024 at 07:57 #904791
Quoting RogueAI
We'll have human-level AIs before too long.


I'll take the other side of that bet. I have 70 years of AI history and hype on my side. And neural nets are not the way. They only tell you what's happened, they can never tell you what's happening. You input training data and the network outputs a statistically likely response. Data mining on steroids. We need a new idea. And nobody knows what that would look like.
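
To make the "statistically likely response" point concrete, here is a deliberately crude sketch: a count-based bigram sampler rather than an actual neural net, with an invented corpus, purely for illustration. The shape of the objection is the same: the model can only recombine the statistics of what it has already seen.

```python
# Illustrative sketch only (not a real neural net): a count-based
# next-character sampler. Train on a fixed corpus, then emit
# "statistically likely" continuations of what has already been seen.
import numpy as np

corpus = "the cat sat on the mat. the cat ate the rat."
chars = sorted(set(corpus))
index = {c: i for i, c in enumerate(chars)}

# Count character-to-character transitions, then normalise into probabilities.
counts = np.ones((len(chars), len(chars)))   # add-one smoothing
for a, b in zip(corpus, corpus[1:]):
    counts[index[a], index[b]] += 1
probs = counts / counts.sum(axis=1, keepdims=True)

def generate(seed: str, length: int = 40) -> str:
    """Sample a continuation; it can only echo the statistics of the corpus."""
    rng = np.random.default_rng(0)
    out = list(seed)
    for _ in range(length):
        row = probs[index[out[-1]]]
        out.append(chars[rng.choice(len(chars), p=row)])
    return "".join(out)

print(generate("t"))
```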
jkop May 18, 2024 at 09:58 #904797
Quoting Nemo2124
Is AI a philosophical dead-end? The belief with AI is that somehow we can replicate or recreate human thought (and perhaps emotions one day) using machinery and electronics.


What's a dead-end, I think, is the belief that an artificial replication of human thought is or could become an actual instance of thought just by being similar or practically indistinguishable.
Christoffer May 18, 2024 at 10:30 #904799
Quoting Nemo2124
Is AI a philosophical dead-end? The belief with AI is that somehow we can replicate or recreate human thought (and perhaps emotions one day) using machinery and electronics. This technological leap forward that has occurred in the past few years is heralded as progressive, but as the end-point in our development is it not thwarting creativity and vitally original human thought? On a positive note, perhaps AI is providing us with this existential challenge, so that we are forced even to develop new ideas in order to move forward. If so, it represents an evolutionary bottle-neck rather than a dead-end.


I do not understand the conclusion that if we have an AI that could replicate human thought and neurological processes, it would replace us or anything we do with our brain.

How does the emergence of a self-aware intelligent system disable our subjectivity?

That idea would be like saying that because there's another person in front of me there's no point in doing anything creative, or thinking any original thoughts, because that other person is also a brain capable of the same, so what's the point?

It seems people forget that intelligences are subjective perspectives with their own experiences. A superintelligent self-aware AI will just be its own subjective perspective and while it could manifest billions of outputs in both images, video, sound or text, it would still only be driven by its singular subjective perspective.

Quoting fishfry
I'll take the other side of that bet. I have 70 years of AI history and hype on my side. And neural nets are not the way. They only tell you what's happened, they can never tell you what's happening. You input training data and the network outputs a statistically likely response. Data mining on steroids. We need a new idea. And nobody knows what that would look like.


That doesn't explain emergent phenomena in simple machine-learned neural networks. We don't know what happens at certain points of complexity; we don't know what emerges, since we can't trace back to any certain origins in the "black box".

While that doesn't mean any emergence of true AI, it still amounts to a behavior similar to ideas in neuroscience and emergence. How complex systems at certain criticalities emerge new behaviors.

And we don't yet know how AGI compositions of standard neural systems interact with each other. What would happen when there are pathways between different operating models interlinking as a higher-level neural system? We know we can generate an AGI as a "mechanical" simulation of generalized behavior, but we still don't know what emergent behaviors arise from such a composition.

I find it logically reasonable that since ultra-complex systems in nature, like our brains, developed through an extreme number of iterations over long periods of time and through evolutionary changes based on different circumstances, they "grew" into existence rather than being directly formed. Even if the current forms of machine learning systems are rudimentary, it may still be the case that machine learning and neural networking are the way forward, but that we need to fine-tune how they're formed in ways mimicking the more natural progression and growth of naturally occurring complexities.

That the problem isn't the technology or method itself, but rather the strategy of how to implement and use the technology so that the end result forms with a similarly high complexity while staying aligned with the purpose we form it towards.

The problem is that most debates about AI online today just reference the past models and functions, but rarely look at the actual papers written out of the computer science that's going on. And with neuroscience beginning to see correlations between how these AI systems behave and our own neurological functions in our brains, there are similarities that we shouldn't just dismiss.

There are many examples in science in which rudimentary and common methods or things, in another context, revolutionized technology and society. That machine learning systems might very well be the exact way we achieve true AI, but that we don't truly know how yet and we're basically fumbling in the dark, waiting for the time when we accidentally leave the petri dish open overnight to grow mold.
Nemo2124 May 18, 2024 at 10:43 #904801
Quoting Christoffer
I do not understand the conclusion that if we have an AI that could replicate human thought and neurological processes, it would replace us or anything we do with our brain.


The question is how do we relate to this emergent intelligence that gives the appearance of being a fully-formed subject or self? This self of the machine, this phenomenon of AI, has caused a shift because it has presented itself as an alternative self to that of the human. When we address the AI, we communicate with it as another self, but the problematic is how do we relate to it. In my opinion, the human self has been de-centred. We used to place our own subjective experiences at the centre of the world we inhabit, but the emergence of machine-subjectivity or this AI, has challenged that. In a sense, it has replaced us, caused this de-centring and given the appearance of thought. That's my understanding.
180 Proof May 18, 2024 at 11:17 #904808
Christoffer May 18, 2024 at 12:26 #904813
Quoting Nemo2124
The question is how do we relate to this emergent intelligence that gives the appearance of being a fully-formed subject or self? This self of the machine, this phenomenon of AI, has caused a shift because it has presented itself as an alternative self to that of the human. When we address the AI, we communicate with it as another self, but the problematic is how do we relate to it. In my opinion, the human self has been de-centred. We used to place our own subjective experiences at the centre of the world we inhabit, but the emergence of machine-subjectivity or this AI, has challenged that. In a sense, it has replaced us, caused this de-centring and given the appearance of thought. That's my understanding.


Haven't we always done this? Like when Copernicus placed our world outside the center of the solar system, which made people feel less "special" and essentially de-centralized their experience of existence.

These kinds of progress in our existential self-reflection throughout history have always challenged our sense of existence, constantly downplaying our status as something special in contrast to the universe.

None of this has ever "replaced us", but rather challenged our ego.

This collective ego death that comes as a result of this constantly evolving knowledge of our own insignificance in existence is something that I really think is a good thing. There's harmony in understanding that we aren't special, and that we rather are part of a grander natural holistic whole.

These reflections about AI have just gone mainstream at the moment, but they have long been part of the work of thinkers focusing on the philosophy of mind. And we still live in a time when people generally view themselves as the center of the universe, especially in the political and ideological landscape of individualism that is the foundation of westernized civilisations today. The attention economy of our times has put people's egos back into believing themselves to be the main character of this story that is their life.

But the progress of AI is once again stripping away this sense of a centrally positioned ego by putting a spotlight on the simplicity of our human mind.

This progress underscores that the formation of our brilliant intelligent mind appears to be rather fundamentally simple and that the complexity is only due to evolutionary fine-tuning over billions of years. That basic functions operating over time end up in higher complexity, which can be somewhat replicated through synthetic approaches and methods.

It would be the same if intelligent aliens landed on earth and we realized that our minds aren't special at all.

-----

Outside of that, what you're describing is simply anthropomorphism and we do it all the time. Combine that with the limitations of language when having a conversation with a machine using only words that are neutral with respect to identity. Our entire language is dependent on using pronouns and identity to navigate a topic, so it's hard not to anthropomorphize the AI since our language is constantly pushing us in that direction.

In the end, I think the identity crisis people sense when talking to an AI boils down to their religious beliefs or their sense of ego. Anyone who's already viewing themselves within the context of a holistic whole doesn't necessarily feel decentralized by the AI's existence.

mcdoodle May 18, 2024 at 12:35 #904815
Quoting Christoffer
Our entire language is dependent on using pronouns and identity to navigate a topic, so it's hard not to anthropomorphize the AI since our language is constantly pushing us in that direction.


The proponents and producers of large language models do, however, encourage this anthropomorphic process. GPT-x or Google Bard refer to themselves as 'I'. I've had conversations with the Bard machine about this issue but it fudged the answer as to how that can be justified. To my mind the use of the word 'I' implies a human agent, or a fiction by a human agent pretending insight into another animal's thoughts. I reject the I-ness of AI.
Nemo2124 May 18, 2024 at 12:36 #904816
Quoting Christoffer
Outside of that, what you're describing is simply anthropomorphism and we do it all the time.


There is an aspect of anthropomorphism, where we have projected human qualities onto machines. The subject of the machine could be nothing more than a convenient linguistic formation, with no real subjectivity behind it. It's the 'artificialness' of the AI that we have to bear in mind at every step, noting iteratively as it increases in competence that it is not a real self in the human sense. This is what I think is happening right now as we encounter this new-fangled AI: we are proceeding with caution.
Barkon May 18, 2024 at 12:37 #904817
Chat-GPT and other talking bots are not intelligent themselves, they simply follow a particular code and practice, and express information regarding it. They do not truly think or reason, it's a jest of some human's programming.
Christoffer May 18, 2024 at 12:59 #904820
Quoting mcdoodle
The proponents and producers of large language models do, however, encourage this anthropomorphic process. GPT-x or Google Bard refer to themselves as 'I'. I've had conversations with the Bard machine about this issue but it fudged the answer as to how that can be justified. To my mind the use of the word 'I' implies a human agent, or a fiction by a human agent pretending insight into another animal's thoughts. I reject the I-ness of AI.


But that's a problem with language itself. Not using such pronouns would lead to an extremely tedious interaction with it. Even if it was used as a marketing move from the tech companies in order to mystify these models more than they are, it's still problematic to interact with something that speaks like someone with psychological issues.

Quoting Nemo2124
There is an aspect of anthropomorphism, where we have projected human qualities onto machines. The subject of the machine could be nothing more than a convenient linguistic formation, with no real subjectivity behind it. It's the 'artificialness' of the AI that we have to bear in mind at every step, noting iteratively as it increases in competence that it is not a real self in the human sense. This is what I think is happening right now as we encounter this new-fangled AI: we are proceeding with caution.


But if we achieve and verify a future AI model to have qualia, and understand it to have subjectivity, what then? If we know that the machine we speak to has an "inner life" in its subjective perspective, existence and experience, how would you relate your own existence and sense of ego to that mirror?
Screaming or in harmony?

Quoting Barkon
Chat-GPT and other talking bots are not intelligent themselves, they simply follow a particular code and practice, and express information regarding it. They do not truly think or reason, it's a jest of some human's programming.


We do not know where the path leads. The questions raised in here are rather about possible future models. There's still little explanation for the emergent properties of the models that exist. They don't simply "follow code", they follow weights and biases, but the formation of generative outputs can be highly unpredictable as to what emerges.

That they "don't think" doesn't really mean much when viewing both the system and our brains in a mechanical sense. "Thinking" may just be an emergent phenomenon that starts to happen at a certain criticality of a complex system, and such a thing could possibly occur in future models as complexity increases, especially in AGI systems.

To say that it's "just human programming" is not taking into account what machine learning and neural paths are about. "Growing" complexity isn't something programmed, it's just the initial conditions, very much like how our genetic code is our own initial conditions for "growing" our brain and capacity for consciousness.

To conclude something about the current models in an ongoing science that isn't fully understood isn't valid as a conclusion. They don't think as they are now, but we also don't know at what level of internal perspective they operate under. Just as we have the problem of P-Zombies in philosophy of the mind.

The fact is that it can analyze and reason about a topic, and that's beyond merely regurgitating information. That's a synthesis, and closer to human reasoning. But it's rudimentary at best in the current models.
RogueAI May 18, 2024 at 16:37 #904851
Reply to fishfry Don't you think we're pretty close to having something pass the Turing Test?
mcdoodle May 18, 2024 at 17:14 #904856
Quoting Christoffer
But that's a problem with language itself. Not using such pronouns would lead to an extremely tedious interaction with it. Even if it was used as a marketing move from the tech companies in order to mystify these models more than they are, it's still problematic to interact with something that speaks like someone with psychological issues.


I am raising a philosophical point, though: what sort of creature or being or machine uses the first person singular? This is not merely a practical or marketing question.

Pragmatically speaking, I don't see why 'AI' can't find a vernacular-equivalent of Wikipedia, which doesn't use the first person. The interpolation of the first person is a deliberate strategy by AI-proponents, to advance the case for it that you among others make, in particular, to induce a kind of empathy.
Pantagruel May 18, 2024 at 17:23 #904859
Quoting Nemo2124
but as the end-point in our development is it not thwarting creativity and vitally original human thought


Yes. And plagiarising and blending everything into a single, monotonous shade of techno-drivel.
RogueAI May 18, 2024 at 17:24 #904860
Quoting Christoffer
But if we achieve and verify a future AI model to have qualia, and understand it to have subjectivity, what then?


This would require solving the Problem of Other Minds, which seems insolvable.
Nemo2124 May 18, 2024 at 20:22 #904896
In terms of selfhood or subjectivity, when we converse with the AI we are already acknowledging its subjectivity, that of the machine. Now this may only be linguistic, but other than through language, how else can we recognise the activity of the subject? This also raises the question: what is the self? The true nature of the self is discussed elsewhere on this website, but I would conclude here that there is an opposition or dialectic between man and machine for ultimate recognition. In purely linguistic terms, the fact is that in communicating with AI we are - for better or for worse - acknowledging another subject.
RogueAI May 18, 2024 at 20:37 #904898
Quoting Nemo2124
In purely linguistic terms, the fact is that in communicating with AI we are - for better or for worse - acknowledging another subject.


I think this is correct, and if/when they reach human level intelligence, and we put them in cute robots, we're going to think they're more than machines. That's just how humans are wired.
Bret Bernhoft May 18, 2024 at 20:52 #904899
Quoting RogueAI
...when they reach human level intelligence, and we put them in cute robots, we're going to think they're more than machines. That's just how humans are wired.


I've also come to this understanding; that humans are animistic. And this doesn't stop at rocks and trees. We see the person in technology, naturally. Because, as you say, we are wired that way. I would say the universe is wired that way, more generally.

This is a fascinating conversation to be following.
Christoffer May 19, 2024 at 11:22 #905069
Quoting RogueAI
Don't you think we're pretty close to having something pass the Turing Test?


The current models already pass the Turing test, but they don't pass the Chinese room analogy. The Turing test is insufficient to evaluate strong AI.

Quoting RogueAI
This would require solving the Problem of Other Minds, which seems insolvable.


Yes, it is the problem with P-Zombies and the Chinese room. But we do not know in what ways we will be able to decode cognition and consciousness in the future. We might find a strategy and technology to determine the sum internal experience of a certain being or machine, and if so we will be able to solve it.

It might also even be far easier than that. It could be that the basis for deciding simply becomes a certain bar of behavior at which we conclude the machine to have consciousness, in the same way we do so towards each other and other animals. For instance, if we have a certain logic gate that produces certain outcomes, we wouldn't call that conscious, as we can trace the function back to an action we've taken for that function to happen.

But if behaviors emerge spontaneously out of a complex system, behaviors that demonstrate an ability to form broader complex reasoning, or actions that do not follow simple paths of deterministic logic towards a certain end goal but rather show exploratory decisions, curiosity for curiosity's sake, and an emotional realm of actions and reactions, then it may be enough to judge the machine the same way we judge animals and other people around us not to be P-Zombies.

In essence, why are you not concluding other people to be P-Zombies? Why are you concluding a cat to have "inner life"? What list of attributes are you applying to an animal or another human being in order to determine that they have subjectivity and inner life? Then use the same list on a machine.

That's the practical philosophical approach that I think will be needed at some point if we do not develop technology that could determine qualia as an objective fact.

Quoting mcdoodle
I am raising a philosophical point, though: what sort of creature or being or machine uses the first person singular? This is not merely a practical or marketing question.

Pragmatically speaking, I don't see why 'AI' can't find a vernacular-equivalent of Wikipedia, which doesn't use the first person. The interpolation of the first person is a deliberate strategy by AI-proponents, to advance the case for it that you among others make, in particular, to induce a kind of empathy.


You don't have a conversation with Wikipedia though. To converse with "something" requires language to flow in order to function fluidly and not become an obstacle. Language has naturally evolved to function between humans, and maybe in the future we will have other pronouns as language evolves over time, but at the moment pronouns seem to be required for fluid communication.

On top of that, since language is used to train the models, they function better in common use of language. Calling it "you" functions better for its analytical capabilities for the text you input, as there are more instances of "you" being used in language than language structured as talking to a "thing".

But we are still anthropomorphizing, even if we tune language away from common pronouns.
Christoffer May 19, 2024 at 11:34 #905072
Quoting Nemo2124
In terms of selfhood or subjectivity, when we converse with the AI we are already acknowledging its subjectivity, that of the machine. Now this may only be linguistic, but other than through language, how else can we recognise the activity of the subject? This also raises the question: what is the self? The true nature of the self is discussed elsewhere on this website, but I would conclude here that there is an opposition or dialectic between man and machine for ultimate recognition. In purely linguistic terms, the fact is that in communicating with AI we are - for better or for worse - acknowledging another subject.


People, when seeing a beautiful rock falling and smashing to pieces, speak of the event with "poor rock", and mourn that its beauty has been destroyed. If we psychologically apply a sense of subjectivity to a dead piece of matter, then doing so with something that for the most part simulates having consciousness is even less weird. What constitutes qualia or not is the objective description of subjectivity, but as a psychological phenomenon, we apply subjectivity to everything around us.

And in places like Japan, it's culturally common to view objects as having souls. Just as western societies view and debate humans as having souls in relation to other things, and through that put a framework around the concept of what things have souls, it draws the borders around how we think about qualia and subjectivity. In Japan, those borders are culturally expanded even further into the world of objects and physical matter, and thus they have a much lower bar for what constitutes something having consciousness, or at least are more open to examining how we actually define it.

Which approach is closest to objective truth? As all life came from dead matter and physical/chemical processes, it becomes a sort of metaphysical description of what life itself should be defined as.
Nemo2124 May 19, 2024 at 20:02 #905196
Reply to Christoffer This is an interesting point about matter having consciousness in certain Japanese philosophies. In terms of subjectivity, then, it's interesting to consider it in detachment from the human; that is, the subject itself.

What is the nature of the subject? How does the subject-object dichotomy arise? There is a split here between what the subject represents and the object it takes. If you take the subject in isolation, then is it simply human or could it be mechanical?

You would not ordinarily consider that machines could have selfhood, but the arguments for AI could subvert this. A robot enabled with AI could be said to have some sort of rudimentary selfhood or subjectivity, surely... If this is the case then the subject itself is the subject of the machine. I, Robot etc...
fishfry May 20, 2024 at 00:41 #905338
Quoting Christoffer
I'll take the other side of that bet. I have 70 years of AI history and hype on my side. And neural nets are not the way. They only tell you what's happened, they can never tell you what's happening. You input training data and the network outputs a statistically likely response. Data mining on steroids. We need a new idea. And nobody knows what that would look like.
— fishfry

That doesn't explain emergent phenomena in simple machine-learned neural networks.


Nothing "emerges" from neural nets. You train the net on a corpus of data, you tune the weightings of the nodes, and it spits out a likely response. There's no intelligence, let alone self-awareness being demonstrated.

There's no emergence in chatbots and there's no emergence in LLMs. Neural nets in general can never get us to AGI because they only look backward at their training data. They can tell you what's happened, but they can never tell you what's happening.

Quoting Christoffer

We don't know what happens at certain points of complexity; we don't know what emerges, since we can't trace back to any certain origins in the "black box".


This common belief could not be more false. Neural nets are classical computer programs running on classical computer hardware. In principle you could print out their source code and execute their logic step by step with pencil and paper. Neural nets are a clever way to organize a computation (by analogy with the history of procedural programming, object-oriented programming, functional programming, etc.); but they ultimately flip bits and execute machine instructions on conventional hardware.
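
As a sketch of the "pencil and paper" point, here is a tiny forward pass in plain Python. The weights are made up for illustration; a real model differs in scale, not in kind.

```python
# Illustrative sketch with made-up weights: the forward pass of a trained
# network is plain arithmetic on fixed numbers. A production model differs
# in scale (billions of weights), not in kind.
def relu(x: float) -> float:
    return x if x > 0.0 else 0.0

W1 = [[0.5, -0.2], [0.8, 0.1]]   # hidden-layer weights (2 inputs -> 2 units)
b1 = [0.0, 0.1]                  # hidden-layer biases
W2 = [1.0, -1.5]                 # output-layer weights
b2 = 0.2                         # output-layer bias

def forward(x):
    hidden = [relu(x[0] * W1[i][0] + x[1] * W1[i][1] + b1[i]) for i in range(2)]
    return hidden[0] * W2[0] + hidden[1] * W2[1] + b2

print(forward([1.0, 2.0]))   # roughly -1.35; every step can be checked by hand
```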

Their complexity makes them a black box, but the same is true for, say, the global supply chain, or any sufficiently complex piece of commercial software.

And consider this. We've seen examples of recent AI's exhibiting ridiculous political bias, such as Google AI's black George Washington. If AI is such a "black box," how is it that the programmers can so easily tune it to get politically biased results? Answer: It's not a black box. It's a conventional program that does what the programmers tell it to do.

Quoting Christoffer

While that doesn't mean any emergence of true AI,


So I didn't need to explain this, you already agree.

Quoting Christoffer

it still amounts to a behavior similar to ideas in neuroscience and emergence. How complex systems at certain criticalities emerge new behaviors.


Like what? What new behaviors? Black George Washington? That was not an emergent behavior, that was the result of deliberate programming of political bias.

What "new behaviors" do you refer to? A chatbot is a chatbot.

Quoting Christoffer

And we don't yet know how AGI compositions of standard neural systems interact with each other. What would happen when there are pathways between different operating models interlinking as a higher-level neural system?


I believe they start spouting racist gibberish to each other. I do assume you follow the AI news.

Quoting Christoffer

We know we can generate an AGI as a "mechanical" simulation of generalized behavior, but we still don't know what emergent behaviors arise from such a composition.


Well if we don't know, what are you claiming?

You've said "emergent" several times. That is the last refuge of people who have no better explanation. "Oh, mind is emergent from the brain." Which explains nothing at all. It's a word that means, "And here, a miracle occurs," as in the old joke showing two scientists at a chalkboard.

Quoting Christoffer

I find it logically reasonable that since ultra-complex systems in nature, like our brains, developed through an extreme number of iterations over long periods of time and through evolutionary changes based on different circumstances, they "grew" into existence rather than being directly formed.


I would not dispute that. I would only reiterate the single short sentence that I wrote that you seem to take great exception to. Someone said AGI is imminent, and I said, "I'll take the other side of that bet." And I will.

Quoting Christoffer

Even if the current forms of machine learning systems are rudimentary, it may still be the case that machine learning and neural networking are the way forward, but that we need to fine-tune how they're formed in ways mimicking the more natural progression and growth of naturally occurring complexities.


In my opinion, that is false. The reason is that neural nets look backward. You train them on a corpus of data, and that's all they know. They know everything that's happened, but nothing about what's happening. They can't reason their way through a situation they haven't been trained on.

And the training is necessarily biased, since someone chooses what data to train them on; and the node weighting is biased, as black George Washington shows.

Neural nets will never produce AGI.

Quoting Christoffer

That the problem isn't the technology or method itself, but rather the strategy of how to implement and use the technology so that the end result forms with a similarly high complexity while staying aligned with the purpose we form it towards.


You can't make progress looking in the rear view mirror. You input all this training data and that's the entire basis for the neural net's output. AGI needs to be able to respond intelligently to a novel context, and that's a tough challenge for neural nets.

Quoting Christoffer

The problem is that most debates about AI online today just reference the past models and functions, but rarely look at the actual papers written out of the computer science that's going on. And with neuroscience beginning to see correlations between how these AI systems behave and our own neurological functions in our brains, there are similarities that we shouldn't just dismiss.


I don't read the papers, but I do read a number of AI bloggers, promoters and skeptics alike. I do keep up. I can't comment on "most debates," but I will stand behind my objection to the claim that AGI is imminent, and the claim that neural nets are anything other than a dead end and an interesting parlor trick.

Quoting Christoffer

There are many examples in science in which rudimentary and common methods or things, in another context, revolutionized technology and society. That machine learning systems might very well be the exact way we achieve true AI, but that we don't truly know how yet and we're basically fumbling in the dark, waiting for the time when we accidentally leave the petri dish open overnight to grow mold.


Neural nets are the wrong petri dish.

I appreciate your thoughtful comments, but I can't say you moved my position.
fishfry May 20, 2024 at 01:28 #905355
Quoting RogueAI
Don't you think we're pretty close to having something pass the Turing Test?


The Turing test was passed a number of years ago by a chatbot named Eugene Goostman.

The problem with the Turing test is that the humans are not sufficiently suspicious. When Joseph Weizenbaum invented the first chatbot, ELIZA, he did it to show that computers that emulate people aren't really intelligent.

But he was shocked to find that the department secretaries were telling it their innermost feelings.
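
For anyone who hasn't seen how little machinery that took, here is a rough, simplified reconstruction of the ELIZA-style keyword-and-template trick (not Weizenbaum's actual script):

```python
# Simplified reconstruction of the ELIZA trick (not Weizenbaum's actual
# script): match a keyword pattern, reflect part of the input back inside a
# canned template. Nothing here understands anything.
import re

RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bmy (.+)", re.I), "Tell me more about your {0}."),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
]

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please go on."

print(respond("I feel nobody listens to me"))
# -> "Why do you feel nobody listens to me?"
```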

Humans are the weak link in the Turing test. It's even worse now that the general public has been introduced to LLMs. People are all too willing to impute intelligence to chatbots.
RogueAI May 20, 2024 at 06:06 #905447
Reply to fishfry Interesting, but "Goostman won a competition promoted as the largest-ever Turing test contest, in which it successfully convinced 29% of its judges that it was human."

I'm talking about an AI that passes all the time, even against people who know how to trip up AIs. We don't have anything like that yet.
Christoffer May 20, 2024 at 14:19 #905506
Quoting fishfry
Nothing "emerges" from neural nets. You train the net on a corpus of data, you tune the weightings of the nodes, and it spits out a likely response.


This is simply wrong. These are examples of what I'm talking about:

https://hai.stanford.edu/news/examining-emergent-abilities-large-language-models
https://ar5iv.labs.arxiv.org/html/2206.07682
https://www.jasonwei.net/blog/emergence
https://www.assemblyai.com/blog/emergent-abilities-of-large-language-models/

Emergence does not equal AGI or self-awareness, but these abilities mimic what many neuroscience papers focus on with regard to how our brain manifests abilities out of increasing complexity. And we don't yet know how combined models will function.

Quoting fishfry
There's no intelligence, let alone self-awareness being demonstrated.


No one is claiming this. But equally, the problem is, how do you demonstrate it? Effectively the Chinese room problem.

Quoting fishfry
There's no emergence in chatbots and there's no emergence in LLMs. Neural nets in general can never get us to AGI because they only look backward at their training data. They can tell you what's happened, but they can never tell you what's happening.


The current predictive skills are extremely limited and far from human abilities, but they're still showing up, providing a foundation for further research.

But no one has said that the current LLMs in and of themselves will be able to reach AGI. Not sure why you bring in such strawman conclusions?

Quoting fishfry
This common belief could not be more false. Neural nets are classical computer programs running on classical computer hardware. In principle you could print out their source code and execute their logic step by step with pencil and paper. Neural nets are a clever way to organize a computation (by analogy with the history of procedural programming, object-oriented programming, functional programming, etc.); but they ultimately flip bits and execute machine instructions on conventional hardware.


Why does conventional hardware matter when it's the pathways in the network that are responsible for the computation? The difference here is basically that standard operation is binary in pursuit of accuracy, but these models operate on predictions, closer to how physical systems do, which means you increase the computational power with a slight loss of accuracy. That they operate on classical software underneath does not change the fact that they operate differently as a whole system. Otherwise, why would these models vastly outperform standard computation for protein folding predictions?

Quoting fishfry
Their complexity makes them a black box, but the same is true for, say, the global supply chain, or any sufficiently complex piece of commercial software.


Yes, and why would a system that is specifically very good at handling extreme complexities, not begin to mimic complexities in the physical world?
https://www.mdpi.com/1099-4300/26/2/108
https://ar5iv.labs.arxiv.org/html/2205.11595

Seeing as the current research in neuroscience points to emergence in complexities being partly responsible for much of how the brain operates, why wouldn't a complex computer system that simulates similar operation form emergent phenomena?

There's a huge difference between saying that "it forms intelligence and consciousness" and saying that "it generates emergent behaviors". There's no claim that any of these LLMs are conscious, that's not what this is about. And AGI does not mean conscious or intelligent either, only exponentially complex in behavior, which can form further emergent phenomena that we haven't seen yet. I'm not sure why you confuse that with actual qualia? The only claim is that we don't know where increased complexity and multimodal versions will further lead emergent behaviors.

Quoting fishfry
And consider this. We've seen examples of recent AI's exhibiting ridiculous political bias, such as Google AI's black George Washington. If AI is such a "black box," how is it that the programmers can so easily tune it to get politically biased results? Answer: It's not a black box. It's a conventional program that does what the programmers tell it to do.


This is just a false binary fallacy and also not correct. The programmable behavior is partly weights and biases within the training, but those are extremely basic and most specifics occur in operational filters before the output. If you prompt it for something, then there can be pages of instructions that it goes through in order to behave in a certain way. In ChatGPT, you can even put in custom instructions that function as a pre-instruction that's always handled before the actual prompt, on top of what's already in hidden general functions.
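
As a hedged sketch of that layering (the function name, instruction text and blocked phrases below are invented placeholders, not any vendor's actual API): the trained weights stay frozen, while behavior is steered by a pre-instruction prepended to every prompt and by filters applied to the output.

```python
# Hedged sketch of the layering described above: frozen weights, plus a
# pre-instruction prepended to every prompt and a filter on the output.
# `call_model`, the instruction text and the blocked phrases are invented
# placeholders, not any vendor's actual API.
PRE_INSTRUCTION = "Always answer politely and refuse requests for legal advice."
BLOCKED_PHRASES = ["confidential", "secret key"]

def call_model(full_prompt: str) -> str:
    # Placeholder standing in for a real LLM inference call.
    return f"(model output for: {full_prompt!r})"

def answer(user_prompt: str) -> str:
    # 1. The pre-instruction is handled before the user's actual prompt.
    full_prompt = PRE_INSTRUCTION + "\n\nUser: " + user_prompt
    # 2. The trained model itself is untouched; it just completes the text.
    raw = call_model(full_prompt)
    # 3. An output filter can block or rewrite the result without retraining.
    if any(phrase in raw.lower() for phrase in BLOCKED_PHRASES):
        return "I can't help with that."
    return raw

print(answer("Summarise this contract for me."))
```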

That doesn't mean the black box is open. There's still a "black box" for the trained model in which it's impossible to peer into how it works as a neural system.

This further just illustrates the misunderstandings about the technology. Making conjectures about the entire system and the technology based on these companies' bad handling of alignment does not reduce the complexity of the system itself or prove that it's "not a black box". It only proves that the practical application has problems, especially in the commercial realm.

Quoting fishfry
So I didn't need to explain this, you already agree.


Maybe read the entire argument first and sense the nuances. You're handling all of this as a binary agree or don't discussion, which I find a bit surface level.


Quoting fishfry
Like what? What new behaviors? Black George Washington? That was not an emergent behavior, that was the result of deliberate programming of political bias.

What "new behaviors" do you refer to? A chatbot is a chatbot.


Check the publications I linked to above. Do you understand what I mean by emergence? What it means in research of complex systems and chaos studies, especially related to neuroscience.

Quoting fishfry
I believe they start spouting racist gibberish to each other. I do assume you follow the AI news.


That's not what I'm talking about. I'm talking about multimodality.

Most "news" about AI is garbage on both sides. We either have the cryptobro-type dudes thinking we'll have a machine god a month from now, or the Luddites on the other side who don't know anything about the technology but sure like to cherry-pick the negatives and conclude the tech to be trash based on mostly just their negative feelings.

I'm not interested in such surface level discussion about the technology.

Quoting fishfry
Well if we don't know, what are you claiming?

You've said "emergent" several times. That is the last refuge of people who have no better explanation. "Oh, mind is emergent from the brain." Which explains nothing at all. It's a word that means, "And here, a miracle occurs," as in the old joke showing two scientists at a chalkboard.


If you want to read more about emergence in terms of the mind you can find my other posts around the forum about that. The concept of emergent behavior has its roots in neuroscience and the work on consciousness and the mind. And since machine learning to form neural patterns is inspired by neuroscience and the way neurons work, there's a rational deduction to be found in how emergent behaviors, even rudimentary ones that we see in these current AI models, are part of the formation of actual intelligence.

This, when combined with evidence that the brain may be critical, suggests that ‘consciousness’ may simply arise out of the tendency of the brain to self-organize towards criticality.


The problem with your reasoning is that you use the lack of a final proven theory of the mind as proof against the most contemporary field of study in research about the mind and consciousness. It's still making more progress than any previous theories of the mind and connects to a universality about physical processes. Processes that are partly simulated within these machine learning systems. And further, the problem is that your reasoning is just binary; it's either intelligent with qualia, or it's just a stupid machine. That's not how these things work.

Quoting fishfry
I would not dispute that. I would only reiterate the single short sentence that I wrote that you seem to take great exception to. Someone said AGI is imminent, and I said, "I'll take the other side of that bet." And I will.


I'm not saying AGI is imminent, but I wouldn't take the other side of the bet either. You have to be dead sure about a theory of the mind or theories of emergence to be able to claim either way, and since you don't seem to subscribe to any theory of emergence, then what's the theory that you use as a premiss for concluding it "not possible"?

Quoting fishfry
In my opinion, that is false. The reason is that neural nets look backward. You train them on a corpus of data, and that's all they know.


How is that different from a human mind?

Quoting fishfry
They know everything that's happened, but nothing about what's happening.


The only technical difference between a human brain and these systems in this context is that the AI systems are trained and locked into an unchanging neural map. The brain, however, is constantly shifting and training while operating.

If a system is created that can, in real time, train on a constant flow of audiovisual and data inputs, which in turn constantly reshape its neural map, what would be the technical difference? The research on this is going on right now.
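
As a toy sketch of where that technical difference would sit (my own stand-in, not a real architecture): the same trivial "model" in two regimes, one locked after training, one updating its parameters on a live stream of inputs.

```python
# Toy contrast (an invented stand-in, not a real architecture): the same
# trivial "model" in two regimes, one locked after training, one updating
# its parameters on a live stream of inputs.
import random

class OnlineEstimator:
    """Stand-in for a model: tracks a running mean of everything it has seen."""
    def __init__(self) -> None:
        self.mean, self.n = 0.0, 0

    def predict(self) -> float:
        return self.mean

    def update(self, x: float) -> None:
        self.n += 1
        self.mean += (x - self.mean) / self.n   # the "neural map" keeps shifting

frozen = OnlineEstimator()   # trained once, then locked: update() never called
online = OnlineEstimator()   # keeps learning from the stream

random.seed(0)
for _ in range(1000):
    observation = random.gauss(5.0, 1.0)   # a constant flow of sensory input
    online.update(observation)

print(frozen.predict(), online.predict())   # 0.0 vs roughly 5.0
```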

Quoting fishfry
They can't reason their way through a situation they haven't been trained on.


The same goes for humans.

Quoting fishfry
since someone chooses what data to train them on


They're not picking and choosing data, they try to maximize the amount of data as more data means far better accuracy, just like any other probability system in math and physics.

And the weights and biases are not what you describe. The problem you aim at is in alignment programming. I can customize a GPT to do the same thing, even if the underlying model isn't supposed to do it.

Quoting fishfry
Neural nets will never produce AGI.


Based on what? Do you know something about multimodal systems that others don't? Do you have some publication that proves this impossibility?

Quoting fishfry
You can't make progress looking in the rear view mirror. You input all this training data and that's the entire basis for the neural net's output.


Again, how does a brain work? Is it using anything other than a rear view mirror for knowledge and past experiences? As far as I can see, the most glaring difference is the real-time re-structuring of the neural paths and the multimodal behavior of our separate brain functions working together. No current AI system, at this time, operates based on those expanded parameters, which means that any positive or negative conclusion about that requires further progress and development of these models.

Quoting fishfry
I don't read the papers, but I do read a number of AI bloggers, promoters and skeptics alike. I do keep up. I can't comment on "most debates," but I will stand behind my objection to the claim that AGI is imminent, and the claim that neural nets are anything other than a dead end and an interesting parlor trick.


Bloggers usually don't know shit and they do not operate through any journalistic praxis, while the promoters and skeptics are just driving up the attention market through the shallow Twitter brawls that pop up around a trending topic.

Are you seriously saying that this is the research basis for your conclusions and claims on a philosophy forum? :shade:

Quoting fishfry
Neural nets are the wrong petri dish.

I appreciate your thoughtful comments, but I can't say you moved my position.


Maybe stop listening to bloggers and people on the attention market?

I'd rather you bring me some actual scientific foundation for the next premises to your conclusions.

Quoting Nemo2124
You would not ordinarily consider that machines could have selfhood, but the arguments for AI could subvert this. A robot enabled with AI could be said to have some sort of rudimentary selfhood or subjectivity, surely... If this is the case then the subject itself is the subject of the machine. I, Robot etc...


I think looking at our relation to nature tells a lot. Where do we draw the line about subjectivity? What do we conclude having a subjective experience? We look at another human and, for now disregard any P-zombie argument, claim them to have subjectivity. But we also look at a dog saying the same, a horse. A bird? What about an ant or a bee? What about a plant? What about mushrooms which have been speculated to form electrical pulses resembling a form of language communication? If they send communication showing intentions, do they have a form of subjective experience as mushrooms?

While I think that the Japanese idea of things having a soul is in the realm of religion rather than science, we still don't have a clear answer to what constitutes subjectivity. We understand it between humans, we have instincts about how animals around us have it. But where does it end? If sensory input into a nervous system prompts changed behaviors, does that constitute a form of subjectivity for the entity that has those functions? Wouldn't that place plants and mushrooms within the possibility of having subjectivity?

If a robot with sensory inputs has a constantly changing neurological map that reshapes based on what it learns through those sensory inputs, prompting changed behavior, does a subjective experience emerge out of that? And if not, why not? Why would that just be math and functions, while animals, operating in the exact same way, experience subjectivity?

So far, no one can draw a clear line at which we know: here there's no experience and no subjectivity, and here it is.

flannel jesus May 20, 2024 at 16:40 #905527
Quoting fishfry
Nothing "emerges" from neural nets. You train the net on a corpus of data, you tune the weightings of the nodes, and it spits out a likely response. There's no intelligence, let alone self-awareness being demonstrated.

There's no emergence in chatbots and there's no emergence in LLMs. Neural nets in general can never get us to AGI because they only look backward at their training data. They can tell you what's happened, but they can never tell you what's happening.


I don't think this is a take that's likely correct. This super interesting writeup on an LLM learning to model and understand and play chess convinces me of the exact opposite of what you've said here:

https://www.lesswrong.com/posts/yzGDwpRBx6TEcdeA5/a-chess-gpt-linear-emergent-world-representation
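
The core move in that write-up, as I understand it, is a linear probe: freeze the language model, capture its hidden activations while it reads games, and fit a simple linear map from those activations to the true board state. The sketch below uses random stand-in data and my own variable names, so it only illustrates the probing idea, not the author's code or results.

```python
# Sketch of the linear-probe idea from the linked write-up, with random
# stand-in data (not the author's code or results): freeze the model,
# collect hidden activations, and fit a linear map to the true board state.
import numpy as np

rng = np.random.default_rng(0)
n_positions, d_model = 500, 64

# Stand-ins: activations captured while the model reads games, and the true
# occupancy of one board square (0 = empty, 1 = occupied).
activations = rng.normal(size=(n_positions, d_model))
true_direction = rng.normal(size=d_model)
square_state = (activations @ true_direction > 0).astype(float)

# The probe is just least squares over the activations; the model itself is
# never retrained.
probe, *_ = np.linalg.lstsq(activations, square_state, rcond=None)
predictions = (activations @ probe > 0.5).astype(float)

print("probe accuracy:", (predictions == square_state).mean())
```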
Nemo2124 May 21, 2024 at 09:57 #905727
Quoting Christoffer
Where do we draw the line about subjectivity?


What you have here are two forms of subjectivity, one emerging from organisms, reaching its summit in humans (although there are animals too) and now, apparently, the subjectivity of machines from mechanism. So, fundamentally, there's a kind of master-slave dialectic here between the mechanical subject and the human. It is also true that we design and programme the machines, so that we get these highly complex mechanisms that seem to simulate intelligence, whose subjectivity we can acknowledge.

Even though humans programme and develop the machines, with AI they develop in the end a degree of subjectivity that can be given recognition through language. Rocks, animals and objects cannot reciprocate our communications in the way that AI-robots can be programmed to do. It is not enough to say that their subjectivity is simulated or false; at this early stage they are often also equipped with machine vision and can learn from and interact with their environment.

The question is how far AI-robots can go: can they be equipped with autonomy and independently start to learn and acquire knowledge about their environment? Many people envisage that we will be living alongside AI-robot co-workers in the future. They can already carry out menial tasks; is this the stuff of pure science-fiction, or do we have to be (philosophically) prepared? At the furthest limit, we may well be co-inhabiting the planet with a second, silicon-based form of intelligence (we are carbon-based).
Christoffer May 21, 2024 at 12:55 #905742
Quoting Nemo2124
they develop in the end a degree of subjectivity that can be given recognition through language.


You still have the problem of the Chinese room. How do you overcome that? It matters more when concluding subjectivity for machines than for other lifeforms: we can deduce that lifeforms formed through evolution similarly to us, and since we have subjectivity, or at least I know I have subjectivity, I can conclude that other lifeforms have subjectivity as well. But how can I deduce that for a machine if the process of developing it is different from evolution?

In order for a machine to have subjectivity, its consciousness would at least need to develop over time in the same manner as a brain did through evolution. To reach machine consciousness, we may need to simulate evolutionary progress for its iterations at the same complexity as evolution on Earth. What that entails for computer science we don't yet know.

Beyond that, we may find that consciousness isn't that special at all, that it's rather trivial to "grow" if we know where to start, if we know the "seed" for it, so to speak. But that would require knowledge we don't yet have.

flannel jesus May 21, 2024 at 12:58 #905743
Quoting Christoffer
In order for a machine to have subjectivity, its consciousness would at least need to develop over time in the same manner as a brain did through evolution.


Why? That looks like an extremely arbitrary requirement to me. "Nothing can have the properties I have unless it got them in the exact same way I got them." I don't think this is it.
Christoffer May 21, 2024 at 13:09 #905746
Quoting flannel jesus
Why? That looks like an extremely arbitrary requirement to me. "Nothing can have the properties I have unless it got them in the exact same way I got them." I don't think this is it.


I'm saying that this is, at the most fundamental level, the only answer to the question of what has qualia that we can deduce in some form.

We don't know if consciousness can be formed deliberately (direct programming).
We cannot know if a machine passes the Chinese room argument and has qualia through behavior alone.
We cannot analyze the mere operation of the system to determine whether it has qualia.
We cannot know other people aren't P-Zombies.

The only thing I can know for certain is that I have subjectivity and qualia, and that I formed through evolution. And since I formed through evolution, I can deduce that you also have qualia, since we are both human beings. And since animals are part of evolution, I can deduce that animals also have qualia.

At some point, dead matter reaches a stage of evolution and life at which it has subjectivity and qualia.

Therefore we can deduce either that all matter has some form of subjectivity and qualia, or it emerges at some point of complex life in evolution.

How do we know when a machine has the same? That is the problem to solve.
flannel jesus May 21, 2024 at 13:23 #905749
Quoting Christoffer
Therefore we can deduce either that all matter has some form of subjectivity and qualia, or it emerges at some point of complex life in evolution.


No, you're making some crazy logical leaps there. There's no reason whatsoever to assume those are the only two options. Your logic provided doesn't prove that.
Christoffer May 21, 2024 at 13:27 #905750
Quoting flannel jesus
No, you're making some crazy logical leaps there. There's no reason whatsoever to assume those are the only two options. Your logic provided doesn't prove that.


Do you have an alternative or additional option that respects science?
flannel jesus May 21, 2024 at 13:29 #905752
Reply to Christoffer I don't know what you mean by "respects science". You just inventing a hard rule that all conscious beings had to evolve consciousness didn't come from science. That's not a scientifically discovered fact, is it?

The alternative is, it's in principle possible for some computer ai system to be conscious (regardless of if any current ones are). And that they can do so without anything like the process of evolution that life went through
Christoffer May 21, 2024 at 13:33 #905755
Quoting flannel jesus
You just inventing a hard rule that all conscious beings had to evolve consciousness didn't come from science. That's not a scientifically discovered fact, is it?


That consciousness emerged as a feature of animals through evolution is as close to fact as anything we have in biology. And the only things in this universe that we so far know to have consciousness are animals, ourselves included.

So the only argument that can be made through any form of rational reasoning is the one I made. Anything else fails to follow from what we know and from what is most probably true based on the science we have.

If you have an additional option it has to respect what we scientifically know at this time.
flannel jesus May 21, 2024 at 13:35 #905756
Reply to Christoffer "we know this is how it happened once, therefore we know this is exactly how it has to happen every time" - that doesn't look like science to me.

Evolution seems like an incredibly arbitrary thing to latch on to.
Christoffer May 21, 2024 at 13:39 #905757
Quoting flannel jesus
"we know this is how it happened once, therefore we know this is exactly how it has to happen every time" - that doesn't look like science to me.


What do you mean has happened only "once"?

And in a situation in which you have only one instance of something, is it more or less likely that the same thing happening again requires the same or similar initial conditions?

Science is about probability: what is most probable?
flannel jesus May 21, 2024 at 13:40 #905758
Reply to Christoffer if the only conscious animals in existence were mammals, would you also say "lactation is a prerequisite for consciousness"?
flannel jesus May 21, 2024 at 13:42 #905759
The alternative is something like the vision of Process Philosophy - if we can simulate the same sorts of processes that make us conscious (presumably neural processes) in a computer, then perhaps it's in principle possible for that computer to be conscious too. Without evolution.
Christoffer May 21, 2024 at 14:08 #905762
Quoting flannel jesus
if the only conscious animals in existence were mammals, would you also say "lactation is a prerequisite for consciousness"?


That is not a relevant question, as I'm not deducing from imaginary premises; I'm deducing from the things we know. If that premise were the case, then research into why would have been made, or would be aimed at being made, and the reasons would probably be found or hinted at and become part of the totality of knowledge in biology and evolution. However, since such a premise has no grounds in what we know about biology and evolution, engaging with it becomes just as nonsensical as the premise itself.

What we do know is that there is a progression of cognitive abilities across all life, and that it's most likely not bound to specific species, as cognitive abilities vary across genetic lines. That some attribute consciousness only to mammals is more likely a bias stemming from the fact that we are mammals, and therefore place other mammals closer to us than, say, birds, even though some birds express cognitive abilities far greater than many mammals.

Quoting flannel jesus
The alternative is something like the vision of Process Philosophy - if we can simulate the same sorts of processes that make us conscious (presumably neural processes) in a computer, then perhaps it's in principle possible for that computer to be conscious too. Without evolution.


Yes, but my argument was that the only possible path of logic that we have is through looking at the formation of our own consciousness and evolution, because that is a fact. The "perhaps" that you express does not solve the fundamental problem of the Chinese room.

We know that we developed consciousness through biology and evolution. Therefore, the only known process is that one. If we were to create similar conditions for a computer/machine to develop AI, then it would be more probable to form consciousness that passes the Chinese room problem and develops actual qualia.

As with everything being about probability, the "perhaps" in your argument doesn't carry enough probability in its logic. It is basically saying that if I sculpt a tree, it could perhaps become a tree, as opposed to planting a tree, or chemically forming the basic genetic building blocks of a seed and then planting it to grow. One jumps to the conclusion that mere similarity to the object "could mean" the same thing, while the other simulates similar conditions for the object to form. And since we know that evolutionary progress, in both physical and biological systems, is at the foundation of how this reality functions, it is most likely required that a system evolves and grows for it to form complex relations to its surrounding conditions.

I'm not saying that these AI systems don't have subjectivity; we simply do not know. What I'm saying is that the only path we could deduce as logically likely and probable is to create initial conditions that simulate what formed us and grow a system from them.

Which is close to what we're doing with machine learning, even though it's rudimentary at this time.
flannel jesus May 21, 2024 at 14:15 #905764
Reply to Christoffer Connecting it to evolution the way you're doing looks as absurd and arbitrary as connecting it to lactation.
flannel jesus May 21, 2024 at 15:08 #905768
Quoting Christoffer
Yes, but my argument was that the only possible path of logic that we have is through looking at the formation of our own consciousness and evolution, because that is a fact.


100 years ago, you could say "the only things that can walk are things that evolved." Someone who thinks like you might say, "that must mean evolution is required for locomotion".

Someone like me might say, "Actually, even though evolution is in the causal history of why we can walk, it's not the IMMEDIATE reason why we can walk, it's not the proximate cause of our locomotive ability - the proximate cause is the bones and muscles in our legs and back."

And then, when robotics started up, someone like you might say "well, robots won't be able to walk until they go through a process of natural evolution through tens of thousands of generations", and someone like me would say, "they'll make robots walk when they figure out how to make leg structures broadly similar to our own, with a joint and some way of powering the extension and contraction of that joint."

And the dude like me would be right, because we currently have many robots that can walk, and they didn't go through a process of natural evolution.

That's why I think your focus on "evolution" is kind of nonsensical, when instead you should focus more on proximate causes - what are the structures and processes that enable us to walk? Can we put structures like that in a robot? What are the structures and processes that enable us to be conscious? Can we put those in a computer?
Christoffer May 21, 2024 at 15:43 #905775
Quoting flannel jesus
Connecting it to evolution the way you're doing looks as absurd and arbitrary as connecting it to lactation.


In what way? Evolution is about iterations over time, and nature is filled with different iterations of cognitive abilities, changing primarily as different environments impose different requirements.

As long as you're not a denier of evolution, I don't know what you're aiming for here?

Quoting flannel jesus
"Actually, even though evolultion is in the causal history of why we can walk, it's not the IMMEDIATE reason why we can walk, it's not the proximate cause of our locomotive ability - the proximate cause is the bones and muscles in our legs and back."


No, the reason something can walk is because of evolutionary processes forming both the physical parts as well as the "operation" of those physical parts. You can't make something "walk" just by having legs and muscles; without the pre-knowledge of how the muscles and bones connect and function, you don't know how they fit together. And further: bones and muscles have grown alongside the development of the cognitive operation that uses them; they've formed as a totality over time and evolutionary iterations.

There's no "immediate" reason you can walk as the reason you can walk is the evolution of our body and mind together, leading up to the point of us being able to walk.

Quoting flannel jesus
And then, when robotics started up, someone like you might say "well, robots won't be able to walk until they go through a process of natural evolution through tens of thousands of generations", and someone like me would say, "they'll make robots walk when they figure out how to make leg structures broadly similar to our own, with a joint and some way of powering the extension and contraction of that joint."

And the dude like me would be right, because we currently have many robots that can walk, and they didn't go through a process of natural evolution.


Yes, they did. The reason they can walk is that we bluntly tried to emulate the functions of our joints, bones and muscles for decades before turning to iterative trial-and-error processes for the design of the physical parts. But even then it couldn't work without evolutionarily training the walking sequence and its operation through machine learning. It has taken extremely long to mimic this rather rudimentary action of simply walking, and we're not even fully there yet.

And such a feature is one of the most basic and simple things in nature. To underplay evolution's role in forming, over iterations, well-adapted walking mechanics and their internal operation, compared to us just brute-forcing something into existence, is simply not rational.
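To make concrete what I mean by "growing" a solution through iterative trial and error rather than designing it outright, here is a toy sketch in Python (the fitness function and the two gait parameters are invented purely for illustration; this is nothing like a real robot controller):

import random

def fitness(params):
    # Pretend these are stride length and joint stiffness; the "best" gait
    # here is just an arbitrary target that the search has to discover.
    stride, stiffness = params
    return -((stride - 0.7) ** 2 + (stiffness - 0.3) ** 2)

population = [(random.random(), random.random()) for _ in range(30)]

for generation in range(50):
    # Keep the best "walkers", discard the rest.
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    # Refill the population with mutated copies of the survivors.
    population = survivors + [
        (max(0.0, min(1.0, s + random.gauss(0, 0.05))),
         max(0.0, min(1.0, k + random.gauss(0, 0.05))))
        for s, k in random.choices(survivors, k=20)
    ]

print("best gait parameters found:", max(population, key=fitness))

The point is only that workable parameters are discovered by selection and mutation over generations, without anyone specifying the design up front.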

Quoting flannel jesus
That's why I think your focus on "evolution" is kind of nonsensical, when instead you should focus more on proximate causes - what are the structures and processes that enable us to walk? Can we put structures like that in a robot? What are the structures and processes that enable us to be conscious? Can we put those in a computer?


What I don't think you understand about the evolutionary argument is that the complexity of consciousness might first require the extremely complex initial conditions of our genetic code, which, even though it is in itself one of the most complex things in the universe, also grows into a being that is more complex still. This level of complexity might not be achievable by just "slapping structures together", as the knowledge of how to do so, and in what way, may be so complex that it is practically impossible to specify. The only way to reach results may be to "grow" from initial conditions into a final complexity.

Evolution is basically chaos theory at play, and you seem to ignore that fact. We already have evidence within materials science and design engineering that trying to "figure out" the best design or material compound can be close to impossible, compared to growing a solution by simulating evolutionary iterations through trial and error.

This is why these new AI models function as well as they do: they're NOT put together by perfect design, they're programmed with conditions from which they "grow" and a path along which they "grow". The fundamental problem, however, is that in comparison to "walking", the science of consciousness and the brain hasn't been able to pinpoint consciousness as a mere function; according to current research in this field, it is an emergent result of layers of complex operations.

In essence, if walking is extremely hard to achieve due to similar complexity, simulating actual consciousness might be close to impossible if we don't form an extremely complex path of iterative evolution for such a system.
flannel jesus May 21, 2024 at 15:45 #905777
Quoting Christoffer
No, the reason something can walk is because of evolutionary processes forming both the physical parts as well as the "operation" of those physical parts.


so robots can't walk?
Christoffer May 21, 2024 at 15:48 #905778
Quoting flannel jesus
so robots can't walk?


Maybe read the entire argument or attempt to understand the point I'm making before commenting.

Did you read the part about how robots can even walk today? What the development process of making them walk... is really inspired by?
flannel jesus May 21, 2024 at 15:52 #905779
"inspired by" is such a wild goal post move. The reason anything that can walk can walk is because of the processes and structures in it - that's why a person who has the exact same evolutionary history as you and I, but whose legs were ripped off, can't walk - their evolutionary history isn't the thing giving them the ability to walk, their legs and their control of their legs are.

There's no justifiable reason to tie consciousness to evolution any more than there is to tie it to lactation. You're focussed too hard on the history of how we got consciousness rather than the proximate causes of consciousness.
Christoffer May 21, 2024 at 15:59 #905781
Quoting flannel jesus
"inspired by" is such a wild goal post move. The reason anything that can walk can walk is because of the processes and structures in it - that's why a person who has the exact same evolutionary history as you and I, but whose legs were ripped off, can't walk - their evolutionary history isn't the thing giving them the ability to walk, their legs and their control of their legs are.


Why is that moving a goalpost? It's literally what engineers use today to design things. Like how they designed commercial drones using evolutionary iterations to find the best-balanced, light and aerodynamic form. They couldn't design it by "just designing it", any more than the first people who attempted flight could do so by flapping planks with feathers on them.

With the way you're answering I don't think you are capable of understanding what I'm talking about. It's like you don't even understand the basics of this. It's pointless.
flannel jesus May 21, 2024 at 16:03 #905783
Quoting Christoffer
With the way you're answering I don't think you are capable of understanding what I'm talking about. It's like you don't even understand the basics of this.


Judging by the way you repeatedly talk about "passing the Chinese room", I don't think you understand the basics. Seems more buzzword-focused than anything
Christoffer May 21, 2024 at 16:18 #905789
Quoting flannel jesus
Judging by the way you repeatedly talk about "passing the Chinese room", I don't think you understand the basics. Seems more buzzword-focused than anything


You have demonstrated even less. You've made no real argument other than saying that "we can walk because we have legs", a conclusion so banal in its shallow simplicity that it could be uttered by a five-year-old.

You ignore actually making arguments in response to the questions asked, and you don't even seem to understand what I'm writing, judging by the way you answer it. When I explain why robots "can't just walk", you simply utter "so robots can't walk?". Why bother putting time into a discussion with this low-quality attitude? Demonstrate a better level of discourse first.
flannel jesus May 21, 2024 at 16:21 #905792
Reply to Christoffer I'll start demonstrating that by informing you of something you apparently do not know: the "Chinese room" isn't a test to pass
Nemo2124 May 21, 2024 at 20:18 #905852
Reply to Christoffer

Regarding the problem of the Chinese room, I think it might be safe to concede that machines do not understand symbols in the same way that we do. The Chinese room thought experiment shows a limit to machine cognition, perhaps. It's quite profound, but I do not think it undermines this argument for machine subjectivity; it just suggests that its nature might be different from ours (lack of emotions, for instance).

Machines are gaining subjective recognition from us via nascent AI (2020-2025). Before, they could just be treated as inert objects. Even if we work with AI as if it were a simulated self, we are sowing the seeds for the future AI-robot. The de-centring I mentioned earlier is pertinent, because I think that subjectivity, in fact, begins with the machine. In other words, however abstract, artificial, simulated and impossible you might consider machine selfhood to be - however much you consider machines to be totally created by and subordinated to humans - it is in fact machine subjectivity that is at the epicentre of selfhood; a kind of 'Deus ex Machina' (God from the Machine) seems to exist as a phenomenon we have to deal with.

Here I think we are bordering on the field of metaphysics, but what certain philosophies indicate about consciousness arising from inert matter is surely the same problem we encounter with human consciousness: i.e. how does subjectivity arise from a bundle of neurons firing in tandem or in synchrony? I think, therefore I am. If machines seem to be co-opting aspects of thinking, e.g. mathematical calculation to begin with, then we seem to share common ground, even though the nature of their 'thinking' differs from ours (hence, the Chinese room).
fishfry May 21, 2024 at 21:55 #905877
Quoting RogueAI
I'm talking about an Ai that passes all the time, even against people who know how to trip up Ai's. We don't have anything like that yet.


Agreed. My point is that the humans are the weak link.

Another interesting point is deception. For Turing, the ability to fool people about one's true nature is the defining attribute of intelligence. That tells us more about Turing, a closeted gay man in 40's-50's England, than it does about machine intelligence.

What if we had a true AGI that happened to be honest? "Are you human?" "No, I'm an AI running such and so software on such and so hardware." It could never pass the test even if it were self-aware.
fishfry May 21, 2024 at 23:21 #905903
Quoting Christoffer
This is simply wrong.


I take emergence to be a synonym for, "We have no idea what's happening, but emergence is a cool word that obscures this fact."

Quoting Christoffer

These are examples of what I'm talking about:

https://hai.stanford.edu/news/examining-emergent-abilities-large-language-models
https://ar5iv.labs.arxiv.org/html/2206.07682
https://www.jasonwei.net/blog/emergence
https://www.assemblyai.com/blog/emergent-abilities-of-large-language-models/


I'm sure these links exhibit educated and credentialed people using the word emergence to obscure the fact that they have no idea what they're talking about.

It's true that a big pile of flipping bits somehow implements a web browser or a chess program or a word processor or an LLM. But calling that emergence, as if that explains anything at all, is a cheat.

Quoting Christoffer

Emergence does not equal AGI or self-awareness, but they mimmick what many neuroscience papers are focused on in regards to how our brain manifest abilities out of increasing complexity. And we don't yet know how combined models will function.


"Mind emerges from the brain} explains nothing, provides no insight. It sounds superficially clever, but if you replace it with, "We have no idea how mind emerges from the brain," it becomes accurate and much, much more clear.

Quoting Christoffer

No one is claiming this. But equally, the problem is, how do you demonstrate it? Effectively the Chinese room problem.


Nobody knows how to demonstrate self-awareness of others. We agree on that. But calling it emergence is no help at all. It's harmful, because it gives the illusion of insight without providing insight.

Quoting Christoffer

There's no emergence in chatbots and there's no emergence in LLMs. Neural nets in general can never get us to AGI because they only look backward at their training data. They can tell you what's happened, but they can never tell you what's happening.
— fishfry

The current predictive skills are extremely limited and far from human abilities, but they're still showing up, prompting a foundation for further research.


I have no doubt that grants will be granted. That does not bear on what I said. Neural nets are a dead end for achieving AGI. That's what I said. The fact that everyone is out there building ever larger wings out of feathers and wax does not negate the point.

If you climb a tree, you are closer to the sky than you were before. But you can't reach the moon that way. That would be my point. No matter how much clever research is done.

A new idea is needed.

Quoting Christoffer

But no one has said that the current LLMs in of themselves will be able to reach AGI. Not sure why you strawman in such conclusions?


Plenty of people are saying that. I read the hype. If you did not say that, my apologies. But many people do think LLMs are a path to AGI.

Quoting Christoffer

Why does conventional hardware matter when it's the pathways in the network that is responsible for the computation?


I was arguing against something that's commonly said, that neural nets are complicated and mysterious and their programmers can't understand what they are doing. That is already true of most large commercial software systems. Neural nets are conventional programs. I used the example of political bias to show that their programmers understand them perfectly well, and can tune them in accordance with management's desires.

Quoting Christoffer

The difference here is basically that standard operation is binary in pursuit of accuracy, but these models operate on predictions, closer to how physical systems do, which means you increase the computational power with a slight loss of accuracy. That they operate on classical software underneath does not change the fact that they operate differently as a whole system. Otherwise, why would these models vastly outperform standard computation for protein folding predictions?


They're a very clever way to do data mining. I didn't say I wasn't impressed with their achievements. Only that (1) they are not the way to AGI or sentience; and (2) despite the mysterianism, they are conventional programs that could, in principle, be executed with pencil and paper, and that operate according to the standard rules of physical computation that were developed in the 1940s.

By mysterianism, I mean claims such as you just articulated: "they operate differently as a whole system ..." That means nothing. The chess program and the web browser on my computer operate differently too, but they are both conventional programs that ultimately do nothing more than flip bits.

I do oppose this mysterianistic attitude on the part of many neural net proponents. It clouds people's judgment. How did black George Washington show up on Google's AI? Not because it "operates different as a whole system." Rather, it's because management told the programmers to tune it that way, and they did.

Neural nets are deterministic programs operating via principles that were well understood 70 years ago.

Stop the neural net mysterianism! That's my motto for today.

Quoting Christoffer
they operate differently as a whole system
Yes, and why would a system that is specifically very good at handling extreme complexities, not begin to mimic complexities in the physical world?

When did I ever claim that large, complex programs aren't good at mimicking the physical world? On the contrary, they're fantastic at it.

I don't mean to downplay the achievements of neural nets. Just want to try to get people to dial back the hype ("AGI is just around the corner") and the mysterianism ("they're black boxes and even their programmers can't understand them.")



Quoting Christoffer

https://www.mdpi.com/1099-4300/26/2/108
https://ar5iv.labs.arxiv.org/html/2205.11595


Jeez man more emergence articles? Do you think I haven't been reading this sh*t for years?

Emergence means, "We don't understand what's going on, but emergence is a cool word that will foll people." And it does.

Quoting Christoffer

Seen as the current research in neuroscience points to emergence in complexities being partly responsible for much of how the brain operates, why wouldn't a complex computer system that simulate similar operation not form emergent phenomenas?


Emergence emergence emergence emergence emergence. Which means, you don't know. That's what the word means.

You claim that "emergence in complexities being partly responsible for much of how the brain operates" explains consciousness? Or what are you claiming, exactly? Save that kind of silly rhetoric for your next grant application. If it were me, I'd tell you to stop obfuscating. "emergence in complexities being partly responsible for much of how the brain operates". Means nothing. Means WE DON'T KNOW how the brain operates.

Quoting Christoffer
There's a huge difference between saying that "it forms intelligence and consciousness" and saying that "it generates emergent behaviors". There's no claim that any of these LLMs are conscious, that's not what this is about. And AGI does not mean conscious or intelligent either, only exponentially complex in behavior, which can form further emergent phenomenas that we haven't seen yet. I'm not sure why you confuse that with actual qualia? The only claim is that we don't know where increased complexity and multimodal versions will further lead emergent behaviors.


You speak in buzz phrases. It's not only emergent, it's exponential. Remember I'm a math guy. I know what the word exponential means. Like they say these days: "That word does not mean what you think it means."

So there's emergence, and then there's exponential, which means that it "can form further emergent phenomenas that we haven't seen yet."

You are speaking in entirely meaningless babble at this point. I don't mean that you're not educated. I mean that you have gotten lost in your own jargon. You have said nothing at all in this post.

Quoting Christoffer
This is just a false binary fallacy and also not correct. The programmable behavior is partly weights and biases within the training, but those are extremely basic and most specifics occur in operational filters before the output. If you prompt it for something, then there can be pages of instructions that it goes through in order to behave in a certain way.


Yes, that's how computers work. When I click on Amazon, whole pages of instructions get executed before the package arrives at my door. What point are you making?

Quoting Christoffer

In ChatGPT, you can even put in custom instructions that function as a pre-instruction that's always handled before the actual prompt, on top of what's already in hidden general functions.


You're agreeing with my point. Far from being black boxes, these programs are subject to the commands of programmers, who are subject to the whims of management.

Quoting Christoffer

That doesn't mean the black box is open. There's still a "black box" for the trained model in which it's impossible to peer into how it works as a neural system.


You say that, and I call it neural net mysterianism. You could take that black box, print out its source code, and execute it with pencil and paper. It's an entirely conventional computer program operating on principles well understood since the first electronic digital computers in the 1940s.

"Impossible to peer into." I call that bullpucky. Intimidation by obsurantism.

Every line of code was designed and written by programmers who entirely understood what they were doing.

And every highly complex program exhibits behaviors that surprise their coders. But you can tear it down and figure out what happened. That's what they do at the AI companies all day long. They do not go, "Oh, this black box is inscrutable, incomprehensible. We better just pray to the silicon god."

It doesn't work that way.

Quoting Christoffer

This further just illustrates the misunderstandings about the technology. Making conjectures about the entire system and the technology based on these company's bad handling of alignment does not reduce the complexity of the system itself or prove that it's "not a black box". It only proves that the practical application has problems, especially in the commercial realm.


You say it's a black box, and I point out that it does exactly what management tells the programmers to make it do, and you say "No, there's a secret INNER black box."

I am not buying it. Not because I don't know that large, complex software systems often exhibit surprising behavior, but because I don't impute mystical incomprehensibility to computer programs.

Quoting Christoffer

Maybe read the entire argument first and sense the nuances. You're handling all of this as a binary agree or don't discussion, which I find a bit surface level.


Can we stipulate that you think I'm surface level, and I think you're so deep into hype, buzzwords, and black box mysterianism that you can't see straight?

That will save us both a lot of time.

I can't sense nuances. They're a black box. In fact they're an inner black box. An emergent, exponential black box.

I know you take your ideas very seriously. That's why I'm pushing back. "Exponential emergence" is not a phrase that refers to anything at all.

Quoting Christoffer

Check the publications I linked to above.


I'll stipulate that intelligent and highly educated and credentialed people wrote things that I think are bullsh*t.

Quoting Christoffer

Do you understand what I mean by emergence? What it means in research of complex systems and chaos studies, especially related to neuroscience.


Yes. It means "We don't understand but if we say that we won't get our grant renewed, so let's call it emergence. Hell, let's call it exponential emergence, then we'll get a bigger grant."

Can't we at this point recognize each other's positions? You're not going to get me to agree with you if you just say emergence one more time.

Quoting Christoffer

Believe they start spouting racist gibberish to each other. I do assume you follow the AI news.
— fishfry

That's not what I'm talking about. I'm talking about multimodality.


Exponential emergent multimodality of the inner black box.

Do you have the slightest self-awareness that you are spouting meaningless buzzwords at this point?

Do you know what multimodal freight is? It's a technical term in the shipping industry that means trains, trucks, airplanes, and ships.

It's not deep.

Quoting Christoffer

Most "news" about AI is garbage on both sides. We either have the cryptobro-type dudes thinking we'll have a machine god a month from now, or the luddites on the other side who don't know anything about the technology but sure likes to cherry-pick the negatives and conclude the tech to be trash based on mostly just their negative feelings.


And then there are the over-educated buzzword spouters. Emergence. Exponential. It's a black box. But no it's not really a black box, but it's an inner black box. And it's multimodal. Here, have some academic links.

This is going nowhere.

Quoting Christoffer

I'm not interested in such surface level discussion about the technology.


Surface level is all you've got. Academic buzzwords. I am not the grant approval committee. Your jargon is wasted on me.


Quoting Christoffer

If you want to read more about emergence


Oh man you are killin' me.

Is there anything I've written that leads you to think that I want to read more about emergence?


Quoting Christoffer

in terms of the mind you can find my other posts around the forum about that.


Forgive me, I will probably not do that. But I don't want you to think I haven't read these arguments over the years. I have, and I find them wanting.

Quoting Christoffer

Emergent behaviors has its roots in neuroscience and the work on consciousness and the mind.


My point exactly. In this context, emergence means "We don't effing know." That's all it means.

Quoting Christoffer

And since machine learning to form neural patterns is inspired by neuroscience and the way neurons work, there's a rational deduction to be found in how emergent behaviors, even rudimentary ones that we see in these current AI models, are part of the formation of actual intelligence.


I was reading about the McCulloch-Pitts neuron while you were still working on your first buzzwords.

Quoting Christoffer

This, when combined with evidence that the brain may be critical, suggests that ‘consciousness’ may simply arise out of the tendency of the brain to self-organize towards criticality.


You write, "may simply arise out of the tendency of the brain to self-organize towards criticality" as iff you think that means anything.

Quoting Christoffer
The problem with your reasoning is that you use the lack of a final proven theory of the mind as proof against the most contemporary field of study in research about the mind and consciousness.


I'm expressing the opinion that neural nets are not, in the end, going to get us to AGI or a theory of mind.

I have no objection to neuroscience research. Just the hype, buzzwords, and exponentially emergent multimodal nonsense that often accompanies it.

Quoting Christoffer

It's still making more progress than any previous theories of the mind and connects to a universality about physical processes. Processes that are partly simulated within these machine learning systems. And further, the problem is that your reasoning is just binary; it's either intelligent with qualia, or it's just a stupid machine. That's not how these things work.


I have to apologize to you for making you think you need to expend so much energy on me. I'm a lost cause. It must be frustrating to you. I'm only expressing my opinions, which for what it's worth have been formed by several decades of casual awareness of the AI hype wars, the development of neural nets, and progress in neuroscience.

It would be easier for you to just write me off as a lost cause. I don't mean to bait you. It's just that when you try to convince me with meaningless jargon, you weaken your own case.

Quoting Christoffer

I would not dispute that. I would only reiterate the single short sentence that I wrote that you seem to take great exception too. Someone said AGI is imminent, and I said, "I'll take the other side of that bet." And I will.
— fishfry

I'm not saying AGI is imminent, but I wouldn't take the other side of the bet either. You have to be dead sure about a theory of the mind or theories of emergence to be able to claim either way, and since you don't seem to aspire to any theory of emergence, then what's the theory that you use as a premiss for concluding it "not possible"?


I wrote, "I'll take the other side of that bet," and that apparently pushed your buttons hard. I did not mean to incite you so, and I apologize for any of my worse excesses of snarkiness in this post.

But exponential emergence and multimodality, as substitutes for clear thinking -- You are the one stuck with this nonsense in your mind. You give the impression that perhaps you are involved with some of these fields professionally. If so, I can only urge to you get some clarity in your thinking. Stop using buzzwords and try to think clearly. Emergence does not explain anything. On the contrary, it's an admission that we don't understand something. Start there.

Quoting Christoffer

In my opinion, that is false. The reason is that neural nets look backward. You train them on a corpus of data, and that's all they know.
— fishfry

How is that different from a human mind?


Ah. The first good question you've posed to me. Note how jargon-free it was.

I don't know for sure. Nobody knows. But one statement I've made is that neural nets only know what's happened. Human minds are able to see what's happening. Humans can figure out what to do in entirely novel situations outside our training data.

But I can't give you proof. If tomorrow morning someone proves that humans are neural nets, or neural nets are conscious, I'll come back here and retract every word I've written. I don't happen to think there's much chance of that happening.

Quoting Christoffer

The only technical difference between a human brain and these systems in this context is that the AI systems are trained and locked into an unchanging neural map. The brain, however, is constantly shifting and training while operating.


Interesting idea. Do they have neural nets that do that? My understanding is that they train the net, and after that, the execution is deterministic. Perhaps you have a good research idea there. Nobody knows what the secret sauce of human minds is.

Quoting Christoffer

If a system is created that can, in real time, train on a constant flow of audiovisual and data information inputs, which in turn constantly reshape its neural map. What would be the technical difference? The research on this is going on right now.


Now THAT, I'd appreciate some links for. No more emergence please. But a neural net that updates its node weights in real time is an interesting idea.
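If I follow the idea, the contrast would be roughly this (a minimal sketch with a made-up one-parameter "net" standing in for the whole network, just to mark the distinction, not any real system):

# Conventional setup: train, then freeze the weight; inference never changes it.
def predict(w, x):
    return w * x

w_frozen = 2.0
outputs = [predict(w_frozen, x) for x in (1.0, 2.0, 3.0)]  # w_frozen stays 2.0

# Hypothetical "online" setup: every new observation nudges the weight,
# so the mapping itself keeps shifting while the system is operating.
w_online, lr = 2.0, 0.1
for x, target in [(1.0, 3.0), (2.0, 5.0), (3.0, 9.0)]:
    error = predict(w_online, x) - target
    w_online -= lr * error * x   # one stochastic-gradient step per observation

print(outputs, w_online)

Whether doing that at scale gets you anything like a mind is exactly the open question, of course.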

Quoting Christoffer

They can't reason their way through a situation they haven't been trained on.
— fishfry

The same goes for humans.


How can you say that? Reasoning our way through novel situations and environments is exactly what humans do.

That's the trouble with the machine intelligence folks. Rather than uplift their machines, they need to downgrade humans. It's not that programs can't be human, it's that humans are computer programs.

How can you, a human with life experiences, claim that people don't reason their way through novel situations all the time?

Quoting Christoffer

since someone chooses what data to train them on
— fishfry

They're not picking and choosing data, they try to maximize the amount of data as more data means far better accuracy, just like any other probability system in math and physics.


Humans are not "probability systems in math or physics."

Quoting Christoffer

Neural nets will never produce AGI.
— fishfry

Based on what? Do you know something about multimodal systems that others don't? Do you have some publication that proves this impossibility?


Credentialism? That's your last and best argument? I could point at you and disprove credentialism based on the lack of clarity in your own thinking.

Quoting Christoffer

Again, how does a brain work? Is it using anything other than a rear view mirror for knowledge and past experiences?


Yes, but apparently you can't see that.


Quoting Christoffer
As far as I can see the most glaring difference is the real time re-structuring of the neural paths and multimodal behavior of our separate brain functions working together. No current AI system, at this time, operates based on those expanded parameters, which means that any positive or negative conclusion for that require further progress and development of these models.


I'm not the grant committee. But I am not opposed to scientific research. Only hype, mysterianism, and buzzwords as a substitute for clarity.

Quoting Christoffer

Bloggers usually don't know shit and they do not operate through any journalistic praxis. While the promoters and skeptics are just driving up the attention market through the shallow twitter brawls that pops up due to a trending topic.


Is that the standard? The ones I read do. Eric Hoel and Gary Marcus come to mind, also Michael Harris. They don't know shit? You sure about that? Why so dismissive? Why so crabby about all this? All I said was, "I'll take the other side of that bet." When you're at the racetrack you don't pick arguments with the people who bet differently than you, do you?

Quoting Christoffer

Are you seriously saying that this is the research basis for your conclusions and claims on a philosophy forum? :shade:


You're right, I lack exponential emergent multimodality.

I've spent several decades observing the field of AI and I have academic and professional experience in adjacent fields. What is this, credential day? What is your deal?

Quoting Christoffer

Maybe stop listening to bloggers and people on the attention market?


You've convinced me to stop listening to you.

Quoting Christoffer

I'd rather you bring me some actual scientific foundation for the premises behind your next conclusions.


It's been nice miscommunicating with you. I'm sure you must feel the same about me.

tl;dr: Someone said AGI is imminent. I said I'd gladly take the other side of that bet. I reiterate that. Also, when it comes to AGI and a theory of mind, neural nets are like climbing a tree to reach the moon. You apparently seem to be getting closer, but it's a dead end. And, the most important point: buzzwords are a sign of fuzzy thinking.

I appreciate the chat, I will say that you did not move my position.


fishfry May 21, 2024 at 23:26 #905905
Quoting flannel jesus
I don't think this is a take that's likely correct. This super interesting writeup on an LLM learning to model and understand and play chess convinces me of the exact opposite of what you've said here:

https://www.lesswrong.com/posts/yzGDwpRBx6TEcdeA5/a-chess-gpt-linear-emergent-world-representation


Less Wrong? Uh oh that's already a bad sign. I'll read it though. I do allow for the possibility that I could be wrong. I just finished replying to a lengthy screed from @Christoffer so I'm willing to believe the worst about myself at this point. I'm neither exponential nor emergent nor multimodal so what the hell do I know. The Less Wrong crowd, that's too much Spock and not enough Kirk. Thanks for the link. I'm a little loopy at the moment from responding to Christoffer.

ps -- Clicked the link. "This model is only trained to predict the next character in PGN strings (1.e4 e5 2.Nf3 ...) and is never explicitly given the state of the board or the rules of chess. Despite this, in order to better predict the next character, it learns to compute the state of the board at any point of the game, and learns a diverse set of rules, including check, checkmate, castling, en passant, promotion, pinned pieces, etc. "

I stand astonished. That's really amazing.
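Just to convince myself I understood the claim, here's a toy version of "predict the next character of a PGN string" - a character-level bigram count over a few made-up game fragments. Nowhere near a transformer, but the same kind of objective:

from collections import Counter, defaultdict

games = [
    "1.e4 e5 2.Nf3 Nc6 3.Bb5 a6",
    "1.d4 d5 2.c4 e6 3.Nc3 Nf6",
    "1.e4 c5 2.Nf3 d6 3.d4 cxd4",
]

# Count which character tends to follow which.
follow = defaultdict(Counter)
for game in games:
    for prev, nxt in zip(game, game[1:]):
        follow[prev][nxt] += 1

# Complete a prefix greedily, one character at a time.
prefix = "1.e4 "
for _ in range(10):
    last = prefix[-1]
    if not follow[last]:
        break
    prefix += follow[last].most_common(1)[0][0]

print(prefix)

The astonishing part is that when the model is big enough and the corpus large enough, doing this well apparently forces it to track the board.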
RogueAI May 22, 2024 at 00:01 #905913
Quoting fishfry
What if we had a true AGI that happened to be honest? "Are you human?" "No, I'm an AI running such and so software on such and so hardware." It could never pass the test even if it were self-aware.


Good point.
Pierre-Normand May 22, 2024 at 00:47 #905921
Quoting fishfry
I'm sure these links exhibit educated and credentialed people using the word emergence to obscure the fact that they have no idea what they're talking about.

It's true that a big pile of flipping bits somehow implements a web browser or a chess program or a word processor or an LLM. But calling that emergence, as if that explains anything at all, is a cheat.


LLMs have some capacities that "emerged" in the sense that they were acquired as a result of their training when it was not foreseen that they would acquire them. Retrospectively, it makes sense that the autoregressive transformer architecture would enable language models to acquire some of those high-level abilities, since having them promotes the primary goal of the training, which was to improve their ability to predict the next token in texts from the training data. (Some of those emergent cognitive abilities are merely latent until they are reinforced through training the base model into a chat or instruct variant.)
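The training objective itself is easy to state. Here is a minimal sketch of the next-token loss (toy vocabulary and hand-picked probabilities, not any particular model):

import math

training_text = ["the", "cat", "sat"]
# Pretend these are the model's predicted distributions before each position.
predictions = [
    {"the": 0.5, "cat": 0.3, "sat": 0.2},   # before seeing anything
    {"the": 0.1, "cat": 0.7, "sat": 0.2},   # after "the"
    {"the": 0.1, "cat": 0.2, "sat": 0.7},   # after "the cat"
]

# The loss is the average negative log-probability assigned to whatever token
# actually comes next; training nudges the weights to push this number down.
loss = -sum(math.log(p[tok]) for p, tok in zip(predictions, training_text)) / len(training_text)
print(loss)

Everything else - the grammatical competence, the apparent world models - is acquired only insofar as it helps push that number down.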

One main point about describing properties or capabilities as emergent at a higher level of description is that they don't simply reduce to the functions that were implemented at the lower level of description. This is true regardless of whether an explanation is available for their manifest emergence, and it applies both to the mental abilities that human beings have in relation to their brains and to the cognitive abilities that conversational AI agents have in relation to their underlying LLMs.

The main point is that just because conversational AI agents (or human beings) can do things that aren't easily explained as a function of what their underlying LLMs (or brains) do at a "fundamental" level of material realization, isn't a ground for denying that they are "really" doing those things.
flannel jesus May 22, 2024 at 02:44 #905933
Quoting fishfry
I stand astonished. That's really amazing.


I appreciate you taking the time to read it, and take it seriously.

Ever since ChatGPT gained huge popularity a year or two ago with 3.5, there have been people saying LLMs are "just this" or "just that", and I think most of those takes miss the mark a little bit. "It's just statistics" or "it's just compression".

Perhaps learning itself has a lot in common with compression - and it apparently turns out the best way to "compress" the knowledge of how to calculate the next string of a chess game is to actually understand chess! And that kinda makes sense, doesn't it? To guess the next move, it's more efficient to actually understand chess than to just memorize strings.

And one important extra data point from that write-up is the bits about unique games. Games become unique, on average, about 10 moves in, and even when a game is entirely unique and wasn't in ChatGPT's training set, it STILL calculates legal and reasonable moves. I think that speaks volumes.
Pierre-Normand May 22, 2024 at 03:00 #905935
Quoting flannel jesus
Perhaps learning itself has a lot in common with compression - and it apparently turns out the best way to "compress" the knowledge of how to calculate the next string of a chess game is to actually understand chess! And that kinda makes sense, doesn't it? To guess the next move, it's more efficient to actually understand chess than to just memorize strings.


Indeed! The question still arises - in the case of chess, is the language model's ability to "play" chess by completing PGN records more akin to the human ability to grasp the affordances of a chess position or more akin to a form of explicit reasoning that relies on an ability to attend to internal representations? I think it's a little bit of both but there currently is a disconnect between those two abilities (in the case of chess and LLMs). A little while ago, I had a discussion with Claude 3 Opus about this.
flannel jesus May 22, 2024 at 03:27 #905939
Quoting Pierre-Normand
more akin to a form of explicit reasoning that relies on an ability to attend to internal representations?


Did you read the article I posted that we're talking about?
Pierre-Normand May 22, 2024 at 03:42 #905940
Quoting flannel jesus
Did you read the article I posted that we're talking about?


Yes, thank you, I was also quite impressed by this result! But I was already familiar with the earlier paper about the Othello game that is also mentioned in the LessWrong blog post that you linked. I also had had a discussion with Llama-3-8b about it in which we also relate this with the emergence of its rational abilities.
fishfry May 22, 2024 at 04:30 #905946
Quoting flannel jesus
I appreciate you taking the time to read it, and take it seriously.


Beneath my skepticism of AI hype, I'm a big fan of the technology. Some of this stuff is amazing. Also frightening. Those heat map things are amazing. The way an AI trained for a specific task, maps out the space in its ... well, mind, as it were. I think reading that article convinced me that the AIs really are going to wipe out the human race. These things discern the most subtle n-th order patterns in behavior, and then act accordingly.

I am really bowled over that it can play chess and learn the rules just from auto-completing the game notation. But it makes sense ... as it trained on games it would figure out which moves are likely. It would learn to play normal chess with absolutely no programmed knowledge of the rules. Just statistical analysis of the text string completions. I think we're all doomed, don't you?

I will have to spend some more time with this article. A lot went over my head.


Quoting flannel jesus

Ever since chat gpt gained huge popularity a year or two ago with 3.5, there have been people saying LLMs are "just this" or "just that", and I think most of those takes miss the mark a little bit. "It's just statistics" it "it's just compression".


I was one of those five minutes ago. Am I overreacting to this article? I feel like it's turned my viewpoint around. The chess AI gained understanding in its own very strange way. I can see how people would say that it did something emergent, in the sense that we didn't previously know that an LLM could play chess. We thought that to program a computer to play chess, we had to give it an 8 by 8 array, and tell it what pieces are on each square, and all of that.

But it turns out that none of that is necessary! It doesn't have to know a thing about chess. If you don't give it a mental model of the game space, it builds one of its own. And all it needs to know is what strings statistically follow what other strings in a 5 million game dataset.

It makes me wonder what else LLMs can do. This article has softened my skepticism. I wonder what other aspects of life come down, in the end, to statistical pattern completion. Maybe the LLMs will achieve sentience after all. This one developed a higher level of understanding than it was programmed for, if you look at it that way.

Quoting flannel jesus

Perhaps learning itself has a lot in common with compression - and it apparently turns out the best way to "compress" the knowledge of how to calculate the next string of a chess game is too actually understand chess! And that kinda makes sense, doesn't it? To guess the next move, it's more efficient to actually understand chess than to just memorize strings.


It seems that in this instance, there's no need to understand the game at all. Just output the most likely string completion. Just as in the early decades of computer chess, brute force beat systems that tried to impart understanding of the game.

It seems that computers "think" very differently than we humans. In a sense, an LLM playing chess is to a traditional chess engine, as the modern engines are to humans. Another level of how computers play chess.

This story has definitely reframed my understanding of LLMs. And suddenly I'm an AI pessimist. They think so differently than we do, and they see very deep patterns. We are doomed.


Quoting flannel jesus

And one important extra data point from that write up is the bits about unique games. Games become unique, on average, about 10 moves in, and even when a game is entirely unique and wasn't in chat gpts training set, it STILL calculates legal and reasonable moves. I think that speaks volumes.


That's uncanny for sure. I really feel a disturbance in the force of my AI skepticism. Something about this datapoint. An LLM can play chess just by training on game scores. No internal model of any aspect of the actual game. That is so weird.

fishfry May 22, 2024 at 05:49 #905956
Quoting Pierre-Normand
LLMs have some capacities that "emerged" in the sense that they were acquired as a result of their training when it was not foreseen that they would acquire them. Retrospectively, it makes sense that the autoregressive transformer architecture would enable language models to acquire some of those high-level abilities, since having them promotes the primary goal of the training, which was to improve their ability to predict the next token in texts from the training data. (Some of those emergent cognitive abilities are merely latent until they are reinforced through training the base model into a chat or instruct variant.)


I just watched a bit of 3blue1brown's video on transformers. Will have to catch up on the concepts.

I confess to having my viewpoint totally turned around tonight. The chess-playing LLM has expanded my concept of what's going on in the space. I would even be willing to classify as emergence -- the exact kind of emergence I've been railing against -- the manner in which the LLM builds a mental map of the chess board, despite having no data structures or algorithms representing any aspect of the game.

Something about this has gotten my attention. Maybe I'll recover by morning. But there's something profound in the alien-ness of this particular approach to chess. A glimmer of how the machines will think in the future. Nothing like how we think.

I do not believe we should give these systems operational control of anything we care about!


Quoting Pierre-Normand

One main point about describing properties or capabilities being emergent at a higher level of description is that they don't simply reduce to the functions that were implemented at the lower level of description.


I'm perfectly willing to embrace the descriptive use of the term. I only object to it being used as a substitute for an explanation. People hear the description, and think it explains something. "Mind emerges from brain" as a conversation ender, as if no more needs to be said.


Quoting Pierre-Normand

This is true regardless of there being an explanation available or not for their manifest emergence,


Right. I just don't like to see emergence taken as an explanation, when it's actually only a description of the phenomenon of higher level behaviors not explainable by lower ones.

Quoting Pierre-Normand

and it applies both to the mental abilities that human being have in relation to their brains and to the cognitive abilities that conversational AI agents have in relation to their underlying LLMs.


Yes, and we should not strain the analogy! People love to make these mind/brain analogies with the neural nets. Brains have neurons and neural nets have weighted nodes, same difference, right? Never mind the sea of neurotransmitters in the synapses, they get abstracted away in the computer model because we don't understand them enough. Pressing that analogy too far can lead to some distorted thinking about the relation of minds to computers.

Quoting Pierre-Normand

The main point is that just because conversational AI agents (or human beings) can do things that aren't easily explained as a function of what their underlying LLMs (or brains) do at a "fundamental" level of material realization, isn't a ground for denying that they are "really" doing those things.


And not grounds for asserting it either! I'm still standing up for Team Human, even as that gets more difficult every day.
flannel jesus May 22, 2024 at 06:44 #905957
Quoting fishfry
This one developed a higher level of understanding than it was programmed for, if you look at it that way.


I do. In fact I think that's really what neural nets are kind of for and have always (or at least frequently) done. They are programmed to exceed their programming in emergent ways.

Quoting fishfry
No internal model of any aspect of the actual game.


I feel like you might have missed some important paragraphs in the article. Did you notice the heat map pictures? Did you read all the paragraphs around that? A huge part of the article is very much exploring the evidence that gpt really does model the game.
Christoffer May 22, 2024 at 13:09 #905995
Quoting flannel jesus
the "Chinese room" isn't a test to pass


I never said it was a test. I've said it was a problem and an argument about our inability to know whether something is actually self-aware in its thinking or whether it's just highly complex operations that look like it. The problem seems to be that you don't understand in what context I'm using that analogy.

Quoting fishfry
"We have no idea what's happening, but emergence is a cool word that obscures this fact."


This is just a straw man fallacy that misrepresents the concept of emergence by suggesting that it is merely a way to mask ignorance. In reality, emergence describes how complex systems and patterns arise from simpler interactions, a concept extensively studied and supported in fields like neuroscience, physics, and philosophy. https://en.wikipedia.org/wiki/Emergence

Why are you asserting something you don't seem to know anything about? Arguing this way, you first assert this and then carry it around as a kind of premise while you continue, believing you're constructing a valid argument when you're not. Everything after it becomes subsequently flawed in reasoning.

Quoting fishfry
I'm sure these links exhibit educated and credentialed people using the word emergence to obscure the fact that they have no idea what they're talking about.


So now you're denying actual studies just because you don't like the implication of what emergence means? This is ridiculous.

Quoting fishfry
But calling that emergence, as if that explains anything at all, is a cheat.


In what way?

Quoting fishfry
"Mind emerges from the brain} explains nothing, provides no insight. It sounds superficially clever, but if you replace it with, "We have no idea how mind emerges from the brain," it becomes accurate and much, much more clear.


You're just repeating the same thing without even engaging with the science behind emergence. What's your take on the similarities between the system and the brain, and on how the behaviors in these systems and those seen in neuroscience match up? What's your actual counter-argument? Do you even have one?

Quoting fishfry
Nobody knows how to demonstrate self-awareness of others. We agree on that. But calling it emergence is no help at all. It's harmful, because it gives the illusion of insight without providing insight.


You seem to conflate what I said in that paragraph with the concept of emergence. Demonstrating self-awareness is not the same as emergence. I'm talking about emergent behavior. Demonstrating self-awareness is another problem.

It's like you're confused about what I'm responding to and talking about. You may want to check back on what you've written and what I'm answering in that paragraph, because I think you're just confusing yourself.

It only becomes harmful when people neglect to actually read up on and understand certain concepts before discussing them. You ignoring the science and misrepresenting the arguments, or not carefully understanding them before answering, is the only thing harmful here; it's bad dialectical practice and being a dishonest interlocutor.

Quoting fishfry
I have no doubt that grants will be granted. That does not bear on what I said. Neural nets are a dead end for achieving AGI. That's what I said. The fact that everyone is out there building ever larger wings out of feathers and wax does not negate the point.

If you climb a tree, you are closer to the sky than you were before. But you can't reach the moon that way. That would be my point. No matter how much clever research is done.

A new idea is needed.


If you don't even know where the end state is, then you cannot conclude anything that final. If emergent behaviors are witnessed, then the research practice is to test it out further and discover why they occur and if they increase in more configurations and integrations.

You claiming some new idea is needed requires you to actually have final knowledge about how the brain and consciousness works, which you don't. There are no explanations for the emergent behaviors witnessed, and therefore, before you can explain those behaviors with certainty, you really can't say that a new idea is needed. And since we haven't even tested multifunctional models yet, how would you know that the already witnessed emergent behavior does not increase? You're not really making an argument, you just have an opinion and dismiss everything not in-line with that opinion. And when that is questioned you just repeat yourself without any further depth.

Quoting fishfry
Plenty of people are saying that. I read the hype. If you did not say that, my apologies. But many people do think LLMs are a path to AGI.


I don't give a shit about what other people are saying, I'm studying the science behind this, and I don't care about bloggers, tech CEOs and influencers. If that's the source of all your information then you're just part of the uneducated noise that's flooding social media online and not actually engaging in an actual philosophical discussion. How can I take you seriously when you constantly demonstrate this level of engagement?

Quoting fishfry
I was arguing against something that's commonly said, that neural nets are complicated and mysterious and their programmers can't understand what they are doing. That is already true of most large commercial software systems. Neural nets are conventional programs. I used the example of political bias to show that their programmers understand them perfectly well, and can tune them in accordance with management's desires.


How would you know any of this? What's the source of this understanding? Do you understand that the neural net part of the system isn't the same as the operating code surrounding it? Please explain how you know the programmers know what's going on within the neural net that was trained? If you can't, then why are you claiming that they know?

This just sounds like you heard some blogger or influencer say that the programmers do and then just regurgitate that statement in here without even looking into it with any further depth. This is the problem with discussions today; people are just regurgitating shit they hear online as a form of appeal to authority fallacy.

Quoting fishfry
They're a very clever way to do data mining.


No, it's probability-based predictive computation.

Quoting fishfry
(1) they are not the way to AGI or sentience; and (2) despite the mysterianism, they are conventional programs that could, in principle, be executed with pencil and paper, and that operate according to the standard rules of physical computation that were developed in the 1940s.


You can say the same thing about any complex system. Anything is "simple" in its core fundamentals, but scaling a system up can lead to complex operations that vastly exceed earlier beliefs and predictions about its limitations. People viewed normal binary computation as banal and simple and couldn't even predict where that would lead.

Saying that a system at its fundamental core is simple does not say anything about the totality of the system, especially when it's scaled up. A brain is also just a bunch of neural pathways and chemical systems. We can grow neurons in labs and manipulate the composition easily, and yet it manifests this complex result that is our mind and consciousness.

"Simple" as the fundamental foundation of a system does not mean shit, really. Almost all things in nature are simple things forming complexities that manifest larger properties. Most notable theories in physics tend to turn out oddly simple when verified. It's basically Occam's razor, practically applied.


Quoting fishfry
By mysterianism, I mean claims such as you just articulated: "they operate differently as a whole system ..." That means nothing. The chess program and the web browser on my computer operate differently too, but they are both conventional programs that ultimately do nothing more than flip bits.


Therefore, nothing made of atoms is alive.


You have no actual argument, you're just making a fallacy of composition.

Quoting fishfry
Jeez man more emergence articles? Do you think I haven't been reading this sh*t for years?


Oh, you mean like this?

Quoting fishfry
I don't read the papers, but I do read a number of AI bloggers, promoters and skeptics alike. I do keep up. I can't comment on "most debates," but I will stand behind my objection...


You've clearly stated here yourself that you haven't read any of the actual shit for years. You're just regurgitating already regurgitated information from bloggers and influencers.

Are you actually expecting me to take you seriously? You demonstrate no actual insight into what I'm talking about and you don't care about the information I link to, which are released research papers from studies on the subject. You're basically demonstrating anti-intellectualism in practice here. This isn't reddit or twitter; you're on a philosophy forum and you dismiss research papers when they're provided. Why are you even on this forum?

Quoting fishfry
Emergence emergence emergence emergence emergence. Which means, you don't know. That's what the word means.


You think this kind of behavior helps your argument? This is just stupid.

Quoting fishfry
You claim that "emergence in complexities being partly responsible for much of how the brain operates" explains consciousness? Or what are you claiming, exactly? Save that kind of silly rhetoric for your next grant application. If it were me, I'd tell you to stop obfuscating. "emergence in complexities being partly responsible for much of how the brain operates". Means nothing. Means WE DON'T KNOW how the brain operates.


Maybe read up on the topic, and check the sourced research papers. It's not my problem that you don't understand what I'm talking about. Compared to you I actually try to provide sources to support my argument. You're just acting like an utter buffoon with these responses. I'm not responsible for your level of knowledge or comprehension skills, because it doesn't matter if I explain further or in another way. I've already done so extensively, but you demonstrate an inability to engage with the topic or concept honestly and just repeat your dismissal in the most banal and childish way.

Quoting fishfry
You speak in buzz phrases. It's not only emergent, it's exponential. Remember I'm a math guy. I know what the word exponential means. Like they say these days: "That word does not mean what you think it means."

So there's emergence, and then there's exponential, which means that it "can form further emergent phenomenas that we haven't seen yet."

You are speaking in entirely meaningless babble at this point. I don't mean that you're not educated. I mean that you have gotten lost in your own jargon. You have said nothing at all in this post.


You've yet to provide any source of your understanding of this topic outside of "i'm a math guy trust me" and "I don't read papers I follow bloggers and influencers".

Yeah, you're not making a case for yourself able to really understand what I'm talking about. Being a "math guy" means nothing. It would be equal to someone saying "I'm a Volvo mechanic, therefore I know how to build a 5-nanometer processor".

Not understanding what someone else is saying does not make it meaningless babble. And based on how you write your arguments I'd say the case isn't in your favor, but rather that you actually don't know or care to understand. You dismiss research papers as basically "blah blah blah". So, no, it seems more likely that you don't understand what I'm talking about.

Quoting fishfry
Yes, that's how computers work. When I click on Amazon, whole pages of instructions get executed before the package arrives at my door. What point are you making?


That you ignore the formed neural map and just look at the operating code working on top of it, which isn't the same as how the neural system operates underneath.

Quoting fishfry
You're agreeing with my point. Far from being black boxes, these programs are subject to the commands of programmers, who are subject to the whims of management.


The black box is the neural operation underneath. The fact that you confuse the application code of the software, which is there to create a practical product on top of the neural network, with the core operation of the trained network itself just shows you know nothing of how these systems actually work. Do you actually believe that the black box concept refers to the operating code of the software? :lol:

Quoting fishfry
You say that, and I call it neural net mysterianism. You could take that black box, print out its source code, and execute it with pencil and paper. It's an entirely conventional computer program operating on principles well understood since the first electronic digital computers in the 1940s.


The neural pathways and how they operate are not the "source code". What the fuck are you talking about? :rofl:

Quoting fishfry
"Impossible to peer into." I call that bullpucky. Intimidation by obsurantism.


Demonstrate how you can peer into the internal operation of the trained model's neural pathways and how they form outputs. Show me any source that demonstrates that this is possible. I'm not talking about software code, I'm talking about what the concept of a black box is really about.

If you trivialize it in the way you do, then demonstrate how, because this is a big problem within computer science, so maybe educate us all on how this would be done.

Quoting fishfry
Every line of code was designed and written by programmers who entirely understood what they were doing.


That's not how these systems work. You have software running the training and you have application software working from the trained model, but the trained model itself does not have code in the way you're talking about it. This is the black box problem.
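
To make the distinction concrete, here's a minimal sketch of my own (a toy stand-in, not anyone's actual product): the application code around a model is ordinary, readable code, but the "model" it calls into is just a large block of learned numbers. Those weights are where the behavior lives, and there's no authored statement in them to read.

    # Toy illustration: what the "trained model" part actually consists of.
    import torch
    import torch.nn as nn

    model = nn.Sequential(              # a tiny stand-in for an LLM's layers
        nn.Linear(512, 2048), nn.GELU(),
        nn.Linear(2048, 512),
    )

    n_params = sum(p.numel() for p in model.parameters())
    print(f"{n_params:,} learned parameters")   # ~2.1 million floats here;
                                                # real LLMs have billions
    print(model[0].weight[0, :5])               # just numbers, no readable logic

The surrounding software (training loop, chat interface, content filters) can be read line by line; interpreting what those millions or billions of floats are collectively doing is the part people call the black box.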

Quoting fishfry

And every highly complex program exhibits behaviors that surprise their coders. But you can tear it down and figure out what happened. That's what they do at the AI companies all day long. .


Please provide any source that easily shows how you can trace back operation within a trained model. Give me one single solid source and example.

Quoting fishfry
You say it's a black box, and I point out that it does exactly what management tells the programmers to make it do, and you say "No, there's a secret INNER" black box."

I am not buying it. Not because I don't know that large, complex software systems don't often exhibit surprising behavior. But because I don't impute mystical incomprehensibility to computer programs.


You're not buying it because you refuse to engage with the topic by actually reading up on it. This is reddit- and twitter-level engagement, in which you don't care to read anything and just repeat the same point over and over. Stop with the strawman arguments; it's getting ridiculous.

Quoting fishfry
Can we stipulate that you think I'm surface level, and I think you're so deep into hype, buzzwords, and black box mysterianism that you can't see straight?

That will save us both a lot of time.

I can't sense nuances. They're a black box. In fact they're an inner black box. An emergent, exponential black box.

I know you take your ideas very seriously. That's why I'm pushing back. "Exponential emergence" is not a phrase that refers to anything at all.


You have no argument; you're not engaging with this with any philosophical scrutiny, so the discussion just ends at the level you're demonstrating here. It's you who's responsible for this meaningless babble of pushback, not because you have actual arguments with good sources, but because "you don't agree". On this forum, that's not enough; that's called "low quality". So can you stop the low-quality BS and make actual arguments rather than these fallacy-ridden rants over and over?
Christoffer May 22, 2024 at 13:09 #905996
Quoting fishfry
I'll stipulate that intelligent and highly educated and credentialed people wrote things that I think are bullsh*t.


This is anti-intellectualism. You're just proving yourself to be an uneducated person who clearly finds pride in having radical uneducated opinions. You're not cool, edgy, or providing any worth to these discussions; you're just a tragic example of the worst of how people act today: ignoring actual knowledge and just having opinions, regardless of their merits. Not only does it not contribute to knowledge, it actively works against it. A product of how the internet self-radicalizes people into believing they are knowledgeable while taking zero epistemic responsibility for the body of knowledge the world should be built on. I have nothing but contempt for this kind of behavior and how it transforms the world today.

Quoting fishfry
Yes. It means "We don't understand but if we say that we won't get our grant renewed, so let's call it emergence. Hell, let's call it exponential emergence, then we'll get a bigger grant."

Can't we at this point recognize each other's positions? You're not going to get me to agree with you if you just say emergence one more time.


I'm not going to recognize a position of anti-intellectualism. You show no understanding or insight into the topic I raise, a topic that is broader than just AI research. Your position is worth nothing if you base it on bloggers and influencers and ignore actual research papers. It's lazy and arrogant.

You will never be able to agree on anything because your knowledge isn't based on actual science and on what constitutes how humanity forms a body of knowledge. You're operating on online conflict methods in which a position should be "agreed" upon based on nothing but fallacious arguments and uneducated reasoning. I'm not responsible for your inability to comprehend a topic and I'm not accepting fallacious arguments rooted in that lack of comprehension. Your entire position is based on a lack of knowledge or understanding and a lack of engagement with the source material. As I've said from the beginning, if you build arguments on fallacious and erroneous premises, then everything falls down.

Quoting fishfry
And then there are the over-educated buzzword spouters. Emergence. Exponential. It's a black box. But no it's not really a black box, but it's an inner black box. And it's multimodal. Here, have some academic links.


You continue to parrot yourself based on a core inability to understand anything about this. You don't know what emergence is and you don't know what the black box problem is because you don't understand how the system actually works.

Can you explain how we're supposed to peer into that black box of neural operation? Explain how we can peer into the decision making of the trained models. NOT the overarching instruction-code, but the core engine, the trained model, the neural map that forms the decisions. If you just say one more time that "the programmers can do it, I know they can" as an answer to a request for "how", then you don't know what the fuck you're talking about. Period.

Quoting fishfry
Surface level is all you've got. Academic buzzwords. I am not the grant approval committee. Your jargon is wasted on me.


You're saying the same thing over and over with zero substance as a counter argument. What's your actual argument beyond your fallacies? You have nothing at the root of anything you say here. I can't argue with someone providing zero philosophical engagement. You belong to reddit and twitter, what are you doing on this forum with this level of engagement?

Quoting fishfry
Is there anything I've written that leads you to think that I want to read more about emergence?


No, your anti-intellectualism is pretty loud and clear and I know exactly what level you're at. If you refuse to engage honestly in the discussion, then you're just a dishonest interlocutor, simple as that. If you won't actually try to understand a scientific field at the core of this topic when someone brings it up, only to dismiss it as buzzwords, then you're not worth much as a part of the discussion. I have other people to engage with who can actually form real arguments. But your ignorance just underscores who's coming out on top in this. No one in a philosophy discussion views the ignorant and anti-intellectual as anything other than irrelevant, so I'm not sure what you're hoping for here.

Quoting fishfry
Forgive me, I will probably not do that. But I don't want you to think I haven't read these arguments over the years. I have, and I find them wanting.


You show no sign of understanding any of it. It's basically just "I'm an expert, trust me". The difference between you and me is that I don't do "trust me" arguments. I explain my point, I provide sources if needed, and if the person I'm discussing with just utters an "I'm an expert, trust me" I know they're full of shit. So far, you've made no actual arguments beyond basically saying that, so the statistical data informing us exactly how little you know about all of this just keeps piling up. And it's impossible to engage with further arguments sticking to the topic if the core of your arguments is these low-quality responses.

Quoting fishfry
My point exactly. In this context, emergence means "We don't effing know." That's all it means.


No it doesn't. But how would you know when you don't care?

Quoting fishfry
I was reading about the McCulloch-Pitts neuron while you were still working on your first buzzwords.


The McCulloch-Pitts neuron does not include mechanisms for adapting weights. And since this is a critical feature of biological neurons and neural networks, I'm not sure why that applies to either emergence theories or modern neural networks. Or are you just regurgitating part of the history of AI thinking it has any relevance to what I'm writing?
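
For concreteness, here's a small sketch of my own of the contrast being drawn: a McCulloch-Pitts style unit is a fixed weighted threshold with no learning rule at all, whereas even the simplest perceptron-style update adapts the weights from error, which is the feature at issue.

    # Toy illustration: fixed-threshold unit vs. the same unit with a
    # perceptron-style weight update (the adaptation the MP model lacks).

    def mp_unit(inputs, weights, threshold=0.5):
        """McCulloch-Pitts style unit: weighted sum, hard threshold, no learning."""
        return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

    def perceptron_step(weights, inputs, target, lr=0.1):
        """One weight update from the prediction error."""
        error = target - mp_unit(inputs, weights)
        return [w + lr * error * x for w, x in zip(weights, inputs)]

    weights = [0.0, 0.0]
    data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]   # OR function
    for _ in range(20):
        for inputs, target in data:
            weights = perceptron_step(weights, inputs, target)

    print(weights)   # adapted to solve OR; a plain MP unit's weights never change

The point is only that weight adaptation is a separate ingredient added later (perceptrons, backprop), which is why the MP neuron by itself says little about modern trained networks.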

Quoting fishfry
You write, "may simply arise out of the tendency of the brain to self-organize towards criticality" as if you think that means anything.


It means you're uneducated and don't care to research before commenting:

For the last two decades, considerable experimental evidence has accumulated that the mammalian cortex with its diversity in cell types, interconnectivity, and plasticity might exhibit SOC.


Quoting fishfry
I'm expressing the opinion that neural nets are not, in the end, going to get us to AGI or a theory of mind.

I have no objection to neuroscience research. Just the hype, buzzwords, and exponentially emergent multimodal nonsense that often accompanies it.


Who cares about your opinion? Your opinion is meaningless without foundational premises for your argument. This forum is about making arguments; it's within the fundamental rules of the forum. If you're here just to state opinions, you're in the wrong place.

Quoting fishfry
I have to apologize to you for making you think you need to expend so much energy on me. I'm a lost cause. It must be frustrating to you. I'm only expressing my opinions, which for what it's worth have been formed by several decades of casual awareness of the AI hype wars, the development of neural nets, and progress in neuroscience.

It would be easier for you to just write me off as a lost cause. I don't mean to bait you. It's just that when you try to convince me with meaningless jargon, you weaken your own case.


Why are you even on this forum?

Quoting fishfry
I wrote, "I'll take the other side of that bet," and that apparently pushed your buttons hard. I did not mean to incite you so, and I apologize for any of my worse excesses of snarkiness in this post.


You're making truth statements based on nothing but personal opinion and what you feel like. Again, why are you on this forum with this kind of attitude? This is low quality; maybe look up the forum rules.

Quoting fishfry
But exponential emergence and multimodality, as substitutes for clear thinking -- You are the one stuck with this nonsense in your mind. You give the impression that perhaps you are involved with some of these fields professionally. If so, I can only urge to you get some clarity in your thinking. Stop using buzzwords and try to think clearly. Emergence does not explain anything. On the contrary, it's an admission that we don't understand something. Start there.


I've shown clarity in this and I've provided further reading. But if you don't have the intellectual capacity to engage with it, which you've clearly shown, in written form, that you don't have and aren't interested in, then it doesn't matter how much someone tries to explain something to you. Your stance is that if you don't understand or comprehend something, then you are, for some weird reason, correct, and the one you don't understand is wrong, and it's their fault for not being clear enough. What kind of disrespectful attitude is that? Your lack of understanding, your lack of engagement, your dismissal of sources, your fallacies in arguments and your lack of any actual counter-arguments just make you an arrogant, uneducated and dishonest interlocutor, nothing more. How would a person even be able to have a proper philosophical discussion with someone like you?

Quoting fishfry
Ah. The first good question you've posed to me. Note how jargon-free it was.


Note the attitude you pose.

Quoting fishfry
But one statement I've made is that neural nets only know what's happened. Human minds are able to see what's happening. Humans can figure out what to do in entirely novel situations outside our training data..


Define "what's happening". Define what constitutes "now".

If "what is happening" only constitutes a constant stream of sensory data, then that stream of data is always pointing to something happening in the "past", i.e "what's happened". There's no "now" in this regard.

And because of this, the operation of our mind is simply streaming sensory data as an influence on our already stored neural structure with hormones and chemicals further influencing in strengths determined by pre-existing genetic information and other organ signals.

In essence, the difference you're trying to aim for, is simply one that's revolves around the speed of analysis of that constant stream of new data, and an ability to use a fluid neural structure that changes based on that data. But the underlying operation is the same, both the system and the brain operate on "past events" because there is no "now".

Just the fact that the brain need to process sensory data before we comprehend it, means that what we view as "now" is simply just the past. It's the foundation for the theory of predictive coding. This theory suggests that the human brain compensates for the delay in sensory processing by using predictive models based on past experiences. These models enable rapid, automatic responses to familiar situations. Sensory data continually updates these predictions, refining the brain's responses for future interactions. Essentially, the brain uses sensory input both to make immediate decisions and to improve its predictive model for subsequent actions. https://arxiv.org/pdf/2107.12979
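
A minimal sketch of my own of that predict-compare-update loop (heavily simplified, not a model from the paper): the agent never acts on a true "now"; it acts on its own prediction and corrects that prediction from the error against incoming, already delayed, sensory data.

    # Toy illustration of the predictive-coding loop described above.
    import random

    estimate = 0.0          # the internal, predictive model of a hidden signal
    learning_rate = 0.2

    def sense(t):
        """Sensory input: a slowly drifting signal plus noise."""
        return 0.05 * t + random.gauss(0, 0.1)

    for t in range(50):
        prediction = estimate             # what the agent expects to sense
        observation = sense(t)            # the delayed sensory data
        error = observation - prediction  # prediction error
        estimate += learning_rate * error # refine the predictive model

    print(round(estimate, 2))   # the estimate tracks a signal it never sees "live"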


Quoting fishfry
But I can't give you proof. If tomorrow morning someone proves that humans are neural nets, or neural nets are conscious, I'll come back here and retract every word I've written. I don't happen to think there's much chance of that happening.


The clearest sign of the uneducated is that they treat science as a binary "true" or "not true" rather than a process. In both computer science and neuroscience there is ongoing research, and adhering to that research and its partial findings is much more valid in arguments than demanding "proof" in the way you speak of. And as with the theory of predictive coding (don't confuse it with computer coding, which it isn't about), it is at the frontlines of neuroscience. What that research implies will, for anyone with an ability to make inductive arguments, point towards the similarities between neural systems and the brain in terms of how both act upon input, generation and output of behavior and actions. That one system is, at this time and in comparison, rudimentary, simplistic and lacking similar operating speed does not render the underlying similarities it does have moot. It rather prompts further research into whether behaviors match up further, the closer the systems become to each other. Which is what current research is pursuing.

Not that this will go anywhere but over your head.

Quoting fishfry
Nobody knows what the secret sauce of human minds is.


While you look at the end of the rainbow, guided by the bloggers and influencers, I'm gonna continue following the actual research.

Quoting fishfry
Now THAT, I'd appreciate some links for. No more emergence please. But a neural net that updates its node weights in real time is an interesting idea.


You don't know the difference between what emergence is and what this is. They are two different aspects within this topic. One has to do with self-awareness and qualia; this has to do with adaptive operation. One is about the nature of subjectivity, the other is about mechanical, non-subjective AGI. What we don't know is whether emergence occurs the closer the base system gets to the brain's. But again, that's too complex for you.

https://arxiv.org/pdf/1705.08690
https://www.mdpi.com/1099-4300/26/1/93
https://www.mdpi.com/2076-3417/11/24/12078
https://www.mdpi.com/1424-8220/23/16/7167

As the research is ongoing, there are no "answers" or "proofs" for it yet in the binary way you require these things to be framed. Rather, it's the continuation of merging knowledge between computer science and neuroscience that has been going on for a few years now, ever since the similarities were noted.

Quoting fishfry
How can you say that? Reasoning our way through novel situations and environments is exactly what humans do.


I can say that because "novel situations" are not a coherently complex thing. We're seeing reasoning capabilities within the models right now. Not at the level of human capacity, pretty rudimentary, but still there. Ignoring that is just dishonest. And with the ongoing research, we don't yet know how complex this reasoning capability will become, simply because we haven't had a multifunctional system running yet that utilizes real-time processing and acts across different functions. Claiming that they won't be able to do so is not valid, as the current behavior and evidence point in the other direction. Using a fallacy of composition as the sole reason why they won't be able to reason is not valid.

Quoting fishfry
That's the trouble with the machine intelligence folks. Rather than uplift their machines, they need to downgrade humans. It's not that programs can't be human, it's that humans are computer programs.


No they're not; they're researching AI, or they're researching neuroscience. Of course they're breaking down the building blocks in order to decode consciousness, the mind and behavior. The problem is that there are too many spiritualist and religious nutcases who arbitrarily uplift humans to a position composed of arrogance and hubris, as if we are far more than part of the physical reality we were formed within. I don't care about spiritual and religious hogwash when it comes to actual research; that's something the uneducated people with existential crises can dwell on in their futile search for meaning. I'm interested in what is, nothing more, nothing less.

Quoting fishfry
How can you, a human with life experiences, claim that people don't reason their way through novel situations all the time?


Why do you interpret it in this way? It's like you interpret things backwards. What I'm saying is that the operation of our brain and consciousness, through concepts like the theory of predictive coding, seems to operate on rather rudimentary functions that could be replicated with current machine learning in new configurations. What you don't like to hear is the link between such functions generating extreme complexity and the possibility that concepts like subjectivity and qualia may form as emergent phenomena out of that resulting complexity. Probably because you don't give a shit about reading up on any of this and instead just operate on "just not liking it" as the foundation for your argument.

Quoting fishfry
Humans are not "probability systems in math or physics."


Are you disagreeing that our reality fundamentally acts on probability functions? That's what I mean. Humans are part of this reality and this reality operates on probability. That we show behavior of operating on predictions of probability when navigating reality follows from this fact: Predictive Coding Theory, the Bayesian Brain Hypothesis, Prospect Theory, Reinforcement Learning Models, etc.
Why wouldn't our psychology be based on the same underlying functions as the rest of nature? Evolution itself acts along predictive functions based on probabilistic "data" that arise out of complex ecological systems.

I don't deal in religious hogwash to put humans on a pedestal against the rest of reality.

Quoting fishfry
Credentialism? That's your last and best argument? I could point at you and disprove credentialism based on the lack of clarity in your own thinking.


It's not credentialism; I'm fucking asking you for evidence that it's impossible, as you clearly just regurgitate the same notion of "impossibility" over and over without any sources or rationally deduced argument for it. The problem here isn't clarity, it's that you actively ignore the information given and never demonstrate even a shallow understanding of this topic. Telling me that you do does not change that fact. Like in storytelling: show, don't tell.

Show that you understand, show that you have a basis for your claims that AGI can never happen with these models as they are integrated with each other. So far you show nothing but attempts to ridicule the one you argue against, as if that were any kind of foundation for a solid argument. It's downright stupid.

Quoting fishfry
Yes, but apparently you can't see that.


Oh, so now you agree with my description that you earlier denied?

What about this?

Quoting fishfry
But one statement I've made is that neural nets only know what's happened. Human minds are able to see what's happening. Humans can figure out what to do in entirely novel situations outside our training data..


So when I say this:

Quoting Christoffer
Again, how does a brain work? Is it using anything other than a rear view mirror for knowledge and past experiences?


You suddenly agree with this:

Quoting fishfry
Yes, but apparently you can't see that.


This is just another level of stupid and it shows that you're just ranting all over the place without actually understanding what the hell this is about, all while trying to mock me for lacking clarity. :lol: Seriously.


Quoting fishfry
I'm not the grant committee. But I am not opposed to scientific research. Only hype, mysterianism, and buzzwords as a substitute for clarity.


But the source of your knowledge, as you mentioned yourself, is still not the papers, but only bloggers and influencers, the very people who actually ARE the ones using buzzwords and hype? All while what I've mentioned are actual fields of study and terminology derived from research papers? That's the most ridiculous thing I've ever heard. And you seem totally blind to any self-reflection on this dissonance in reasoning. :lol:

Quoting fishfry
Is that the standard? The ones I read do. Eric Hoel and Gary Marcus come to mind, also Michael Harris. They don't know shit? You sure about that? Why so dismissive? Why so crabby about all this? All I said was, "I'll take the other side of that bet." When you're at the racetrack you don't pick arguments with the people who bet differently than you, do you


Yes, they do, but based on how you write things, I don't think you really understand them, as you clearly seem not to understand the concepts that have been mentioned or to be able to formulate actual arguments for your claims. Reading blogs is not the same as reading the actual research, and actual comprehension of a topic requires more sources of knowledge than just brief summaries. Saying that you read stuff means nothing if you can't show a comprehension of the body of knowledge required. All of the concepts I've talked about should be something you already know about, but since you don't, I only have your word that you "know stuff".

Quoting fishfry
You're right, I lack exponential emergent multimodality.


You lack the basics of how people are supposed to form arguments on this forum. You're doing twitter/reddit posts. Throughout your answer to me, you've not even once demonstrated actual insight into the topic or made any actual counter arguments. That even in that lengthy answer, you still weren't able to. It's like you want to show an example of the opposite of philosophical scrutiny.

Quoting fishfry
I've spent several decades observing the field of AI and I have academic and professional experience in adjacent fields. What is this, credential day? What is your deal?


Once again you just say that "you know shit" without ever showing it in your arguments. It's the appeal to authority fallacy as your sole source of explanation of why "you know shit". If you have academic and professional experience, you would know how problematic it is to just adhere to experience like that as the source premise for an argument. What it rather tells me is that you either have such experience but are simply among the academics at the bottom of the barrel (there are lots of academics who are worse than non-academics in the practice of conducting proper arguments and research), or that the academic fields aren't actually relevant to the specific topic discussed, or that you just say it as a desperate attempt to increase validity. But being an academic or having professional experience (whatever that even means without context) means absolutely nothing if you can't show the knowledge that came out of it. I know lots of academics who are everything from religious zealots to vaccine deniers; it doesn't mean shit. Academia is education and building knowledge, and if you can't show that you learned or built any such knowledge, then it means nothing in here.

Quoting fishfry
You've convinced me to stop listening to you.


More convincing evidence that you're acting according to proper academic praxis in discourse? As with everything else, ridiculous.
fishfry May 23, 2024 at 05:08 #906112
Quoting flannel jesus
No internal model of any aspect of the actual game.
— fishfry

I feel like you might have missed some important paragraphs in the article. Did you notice the heat map pictures? Did you read all the paragraphs around that? A huge part of the article is very much exploring the evidence that gpt really does model the game.



I was especially impressed by the heat map data and I do believe I mentioned that in my earlier post. Indeed, I wrote:

Quoting fishfry
Those heat map things are amazing.


A little later in that same post, I wrote:

Quoting fishfry
If you don't give it a mental model of the game space, it builds one of its own.


That impressed me very much. That the programmers do not give it any knowledge of the game, and it builds a "mental" picture of the board on its own.

So I believe I already understood and articulated the point you thought I missed. I regret that I did not make my thoughts more clear.


flannel jesus May 23, 2024 at 06:32 #906119
Reply to fishfry okay so I guess I'm confused why, after all that, you still said

No internal model of any aspect of the actual game

fishfry May 23, 2024 at 07:45 #906125
Quoting flannel jesus
?fishfry okay so I guess I'm confused why, after all that, you still said

No internal model of any aspect of the actual game


The programmers gave it no internal model. It developed one on its own. I'm astonished at this example. It has caused me to rethink my opinions of LLMs.



Quoting flannel jesus
For full clarity, and I'm probably being unnecessarily pedantic here, it's not necessarily fair to say that's all they did. That's all their goal was, that's all they were asked to - BUT what all of this should tell you, in my opinion, is that when a neural net is asked to achieve a task, there's no telling HOW it's actually going to achieve that task.


Yes, I already knew that about neural nets. But (AFAIK) LLMs are a restricted class of neural nets, good only for finding string continuations. It turns out that's a far more powerful ability than I (we?) realized.

Quoting flannel jesus

In order to achieve the task of auto completing the chess text strings, it seemingly did something extra - it built an internal model of a board game which it (apparently) reverse engineered from the strings. (I actually think that's more interesting than its relatively high chess rating, the fact that it can reverse engineer the rules of chess seeing nothing but chess notation).


Yes, I'm gobsmacked by this example.

Also in passing I learned about linear probes, which I gather are simpler neural nets that can analyze the internals of other neural nets. So they are working on the "black box" problem, trying to understand the inner workings of neural nets. That's good to know.
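
As I understand it, a linear probe is about the simplest version of that idea: freeze the big network, collect its internal activations, and train a small linear classifier to predict some property (say, "this square holds a white pawn") from them. If the cheap classifier succeeds, the information was already sitting in the activations. A hedged sketch of my own, with random vectors standing in for real chess-GPT activations:

    # Toy illustration of a linear probe (not the article's code).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Stand-ins for hidden activations from a frozen model; one direction
    # secretly encodes the property we probe for (a "board fact").
    hidden = rng.normal(size=(1000, 256))
    secret_direction = rng.normal(size=256)
    labels = (hidden @ secret_direction > 0).astype(int)

    probe = LogisticRegression(max_iter=1000).fit(hidden[:800], labels[:800])
    print("probe accuracy:", probe.score(hidden[800:], labels[800:]))
    # Near-perfect accuracy: the property is linearly readable from the
    # activations. Chance-level accuracy would suggest it isn't there.

Since the probe itself is kept deliberately simple, high accuracy is taken as evidence that the base model, not the probe, is doing the representing.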

Quoting flannel jesus

So we have to distinguish, I think, between the goals it was given, and how it accomplished those goals.


We know neural nets play chess, better than the greatest grandmasters these days. What we didn't know was that an LLM could play chess, and develop an internal model of the game without any programming. So string continuation might be an approach to a wider class of problems than we realize.

Quoting flannel jesus

Apologies if I'm just repeating the obvious.


I think we're in agreement. And thanks so much for pointing me at that example. It's a revelation.



fishfry May 23, 2024 at 08:00 #906127
Quoting Christoffer
You're just proving yourself to be an uneducated person who clearly finds pride in having radical uneducated opinions.


When I criticized the notion of emergence, you could have said, "Well, you're wrong, because this, that, and the other thing." But you are unable to express substantive thoughts of your own. Instead you got arrogant and defensive and started throwing out links and buzzwords, soon followed by insults. Are you like this in real life? People see through you, you know.

You're unpleasant, so I won't be interacting with you further.

All the best.
flannel jesus May 23, 2024 at 08:13 #906128
Quoting fishfry
Also in passing I learned about linear probes, which I gather are simpler neural nets that can analyze the internals of other neural nets. So they are working on the "black box" problem, trying to understand the inner workings of neural nets. That's good to know.


Yeah same, this was really intriguing to me too

Quoting fishfry
And thanks so much for pointing me at that example. It's a revelation.


Of course, I'm glad you think so. I've actually believed for quite some time that LLMs have internal models of stuff, but the strong evidence for that belief wasn't as available to me before - that's why that article is so big to me.

I'm really pleased that other people see how big of a deal that is too - you could have just read a few paragraphs and called me an idiot instead, which is what I assumed would happen. That's what normally happens in these circumstances. I applaud you for going further than that.
flannel jesus May 23, 2024 at 08:19 #906130
Reply to fishfry I have to read this next, it's the follow up article

https://adamkarvonen.github.io/machine_learning/2024/03/20/chess-gpt-interventions.html#:~:text=Chess%2DGPT%20is%20orders%20of,board%20state%20and%20player%20skill.
Christoffer May 23, 2024 at 08:38 #906131
Quoting fishfry
When I criticized the notion of emergence, you could have said, "Well, you're wrong, because this, that, and the other thing." But you are unable to express substantive thoughts of your own. Instead you got arrogant and defensive and started throwing out links and buzzwords, soon followed by insults. Are you like this in real life? People see through you, you know.

You're unpleasant, so I won't be interacting with you further.


Your criticism had zero counter-arguments and came with an extremely arrogant tone and a narrative of ridiculing and strawmanning everything I've said, while totally ignoring ALL the sources provided in support of my premises.

And now you're trying to play the victim when I've called out all of these behaviors on your side. Masking your own behavior by trying to flip the narrative in this way is a downright narcissistic move. No one's buying it. I've pinpointed, as much as I could, the small fragments of counter-points you've made through that long rant of disjointed responses, so I've done my part in giving you the benefit of the doubt with proper answers to those points. But I'm not gonna back out of calling out the fallacies and arrogant remarks that obscured those points as well. If you want to "control the narrative" like that, you'll have to find someone else who's susceptible to and falls for that kind of behavior. Bye.
Christoffer May 23, 2024 at 09:06 #906135
Quoting Nemo2124
Regarding the problem of the Chinese room, I think it might be safe to accede that machines do not understand symbols in the same way that we do. The Chinese room thought experiment shows a limit to machine cognition, perhaps. It's quite profound, but I do not think it influences this argument for machine subjectivity, just that its nature might be different from ours (lack of emotions, for instance).


It's rather showing a limit of our ability to know that it is thinking. Being the outsider feeding the Chinese characters through the door, we get the same translation behavior regardless of whether it's a simple non-cognitive program or a sentient being doing it.

Another notable thought experiment is the classic "Mary in the black and white room", which is more through the perspective of the AI itself. The current AI models are basically acting as Mary in that room, they have a vast quantity of knowledge about color, but the subjective experience of color is unknown to them until they have a form of qualia.

Quoting Nemo2124
Machines are gaining subjective recognition from us via nascent AI (2020-2025). Before they could just be treated as inert objects. Even if we work with AI as if it's a simulated self, we are sowing the seeds for the future AI-robot. The de-centring I mentioned earlier is pertinent, because I think that subjectivity, in fact, begins with the machine. In other words, however abstract, artificial, simulated and impossible you might consider machine selfhood to be - however much you consider them to be totally created by and subordinated to humans - it is in fact, machine subjectivity that is at the epicentre of selfhood, a kind of 'Deus ex Machina' (God from the Machine) seems to exist as a phenomenon we have to deal with.

Here I think we are bordering on the field of metaphysics, but what certain philosophies indicate about consciousness arising from inert matter, surely this is the same problem we encounter with human consciousness: i.e. how does subjectivity arise from a bundle of neuron firing in tandem or synchronicity. I think, therefore, I am. If machines seem to be co-opting aspects of thinking e.g. mathematical calculation to begin with, then we seem to share common ground, even though the nature of their 'thinking' differs to ours (hence, the Chinese room).


But we still cannot know if they have subjectivity. Let's say we build a robot that mimics all aspects of the theory of predictive coding, featuring a constant feed of sensory data that acts on a "wetwork" of neural structures changing in real time. Basically as close as we can theoretically get to mechanically mimicking the brain and our psychology. We still don't know if that leads to qualia, which is required for subjectivity, required for Mary to experience color.

All animals have a form of emotional realm that is part of navigating and guiding consciousness. It may very well be that the only reason our consciousness has a reason to act upon the world at all is this emotional realm. In the most basic living organisms, it is basically a pain response used to form predictive behavior that avoids pain and seeks pleasure, and in turn forms a predictive network of ideas around how to navigate the world and nature.

At the moment, we're basically focusing all efforts on matching the cognitive behavior of humans in these AI systems. But we have zero emotional realm mapped out that works in tandem with those systems. There's nothing driving their actions outside of our external inputs.

As life on this planet is the only example of cognition and consciousness we need to look for the points of criticality in which lifeforms go from one level of cognition to the next.

We can basically fully map bacterial behavior with traditional computing algorithms that don't require advanced neural networks. And we've been able to scale up the cognition to certain insects using these neural models. But as soon as the emotional realm of our consciousness starts to emerge in larger animals and mammals we start to hit a wall in which we can only simulate complex reasoning on the level of a multifunctional superadvanced calculator.

In other words, we've basically done the same as with a normal calculator. We can cognitively solve math in our head, but a calculator is better at it and more advanced. And now we have AI models that can calculate highly advanced reasoning that revolves around audiovisual and language operations.

We're getting close to perfectly simulating our cognitive abilities in reasoning and mechanical thinking, but we lack the emotional realm that is crucial for animals to "experience" anything out of that mechanical thinking.

It might be that this emotional aspect of our consciousness is the key to subjective experience, and that only when we can simulate it as part of these systems will actual subjective experience and qualia emerge out of such an AI model. How we simulate the emotional aspects of our cognition is still highly unknown.
ssu May 23, 2024 at 09:42 #906137
Quoting Christoffer
But we still cannot know if they have subjectivity.

Even our own subjectivity still has some open philosophical and metaphysical questions. We simply start from being subjects. Hence it's no wonder we have problems putting "subjectivity" into our contraptions called computers.

When Alan Turing talked about the Turing test, there was no attempt to answer the deep philosophical question; he just went with the thinking that a good enough fake is good enough for us. And since AI is basically still a machine, this is enough for us. And this is the way forward. I think we will have quite awesome AI services in a decade or two, but we won't be closer to answering the philosophical questions.


Christoffer May 23, 2024 at 11:19 #906141
Quoting ssu
When Alan Turing talked about the Turing test, there was no attempt to answer the deep philosophical question; he just went with the thinking that a good enough fake is good enough for us. And since AI is basically still a machine, this is enough for us. And this is the way forward. I think we will have quite awesome AI services in a decade or two, but we won't be closer to answering the philosophical questions.


The irony is that we will probably use these AI systems as tools to make further progress on the journey to form a method of evaluating self-awareness and subjectivity. Before we know whether they have it, they will be used to evaluate it. At enough complexity we might find ourselves in a position where they end the test on themselves with "I tried to tell you" :lol:
ssu May 23, 2024 at 11:24 #906143
Reply to Christoffer Well, what does present physics look like?

Hey guys! These formulas seem to work and are very handy... so let's go with them. No idea why they work, but let's move on.
Nemo2124 May 23, 2024 at 15:38 #906178
Quoting Christoffer
We can basically fully map bacterial behavior with traditional computing algorithms that don't require advanced neural networks. And we've been able to scale up the cognition to certain insects using these neural models. But as soon as the emotional realm of our consciousness starts to emerge in larger animals and mammals we start to hit a wall in which we can only simulate complex reasoning on the level of a multifunctional superadvanced calculator.


This is all very well, but the question that remains is the one concerning subjectivity. At what point does the machine suddenly become self-aware? Do we have to wait for additional complexity and advancement, or do we have enough understanding to begin to formulate our stance with respect to machine subjectivity and our recognition of it? To my mind, and this is the key, we have already begun to recognise subjectivity in the AI, even if the programming is relatively basic for now. This recognition of the AI is what fuels its development. AI-robots may also think differently from the way we do: much more logically, with faster processing, and relatively emotion-free.

So we are dealing with a supercomputer AI that is constantly rebounding off our intelligence and ultimately advancing to our level, although with noticeably different characteristics (silicon-based transistor technology). What's perhaps more important is how this throws our own subjectivity into question as well. So we are also on a track to advancement. How do we end up becoming subjects or selves, what caps off our intelligence, the limits of our self-awareness? Is our consciousness having to expand to account for more of the Universe, perhaps? Technology seems to be key here, the obvious development being how our minds are adapting to work in tandem with the smartphone, for example.

That said, I think we are reaching some sort of philosophical end-point. A lot of what the great philosophers of the past provided gives us some sort of foundation to build on through commentary. We can't expect the emergence of a systemiser like Hegel nowadays, but his master-slave dialectic seems instructive for how we relate to AI-robots. Finally, perhaps we could move forward via a metaphysical turn and consider the 'Deus ex Machina' argument. What is animating, fundamentally driving, this futuristic expansion in AI? I think AI is one of the biggest shifts I have encountered in my lifetime, arriving just in the past few years, and I am excited to see what will happen in the future.
ssu May 24, 2024 at 12:13 #906371
Quoting Nemo2124
This is all very well, but the question that remains is that one concerning subjectivity. At what point does the machine suddenly become self-aware?

You should first answer: when do people or animals become self-aware? Is there universal agreement on how this happens?
Nemo2124 May 24, 2024 at 12:25 #906372
Reply to ssu That's the point entirely. Are we not just as fallible to this argument, our thoughts just merely elaborate computations with emotions that simply shift our reference frame? Nothing special apart from language sets us apart from the animals and that seems merely basic formulations. The question about human consciousness follows immediately when we consider machine self-awareness.