On the existence of options in a deterministic world

MoK March 08, 2025 at 18:38 6000 views 78 comments
This is a problem that has bothered me for a long time, several years if not more! To elaborate, consider that you are in a maze and come to a fork. You immediately realize that there are two options available to you, namely the left and the right path. This realization is due to neural processes in the brain. Neural processes, however, are deterministic. So I am wondering how deterministic processes can lead to the realization of options.

I am sure we come to realize two objects in infancy only after we realize one object. But first, how do we realize one object? I think that happens in the early stage of infancy. If you present an unmoving object to an infant, she/he cannot realize it unless that object was realized and memorized in the past. So the infant needs a moving object in order to pick the object out from the background; otherwise she/he just perceives an image, and although the image has texture, the infant cannot realize different objects within it. The separation of an object from the background could also result from minor movements of the eyeballs. So here is the first question: what happens at the neural level when the infant realizes the object and distinguishes it from the background?

The next step is when the infant is presented with two identical objects. I don't know at which stage of life she/he realizes the difference between one object and two, but knowledge of one object is necessary to understand two identical objects. So here is the second question: what happens at the neural level when an infant realizes two identical objects?

I would like to invite @Pierre-Normand here since he is very knowledgeable in AI. Hopefully, he can help us answer these questions or give us some insight.

Comments (78)

PoeticUniverse March 08, 2025 at 20:08 #974729
Quoting MoK
The realization of an object from the background


BECOMING

We humans mirror and recapitulate
All of evolution while growing in our mother’s womb,
Racing through the stages in which life evolved.

During this nine months and even beyond that
We move from mindlessness to shadowy awareness
To consciousness of the world around us
Onto consciousness of the self
And then even to becoming conscious
Of consciousness itself.

For the first two and one-half years of life
The inexplicable holistic world
Is experienced less and less holistically
As the child discovers the
Bounds of discrete objects.

[hide="Reveal"]The holistic right brain remains of course
For us to take in the overall view,
While the logical left brain is also there to recognize
The detailed relationships between objects.

As such, so goes the universe,
Since we are formed in its image.
So then this gives us a clue
To the nature of the universe.

Seeing that the brain is
Divided into two hemispheres,
Each with their own
Characteristic mode of thought,
Which can communicate with each other,
Means that we are looking very deeply
Into the way that reality itself is constructed.

These two complementary aspects
To the cosmos are thus absolutely essential,
One being of the whole:
The apparently indivisible,
Continuous fluid entity
Although discrete at unnoticeable levels,
The other being the interrelationships of the parts.

Each interpretation may not appear
At exactly the same time,
But the Yin ever gives way to Yang
And ever then back to Yin, and so on,
The rounded life of the mind
Thus continuing to fully roll,
As the cycle of this symmetry
Turns and returns;
If not, one either gets totally lost
In the details or prematurely halts
After but an apparent whole.

The holistic right brain mode is unfocused,
As we see in some people
Who are unconcerned with details,
It always building the scene in parallel
To form a single entity;
Whereas, the focused left side of brain
Isolates a target of interest and tracks it
And its derivatives sequentially and serially.

Yet the two sides of the overall brain
Are connected to each other
And so the speed of the juggling act
Can meld them together
Into a complete balance like that
Portrayed by the revolving Yin-Yang symbol,
Each ever receding and giving rise to the other
Such does the universe go both ways too,
Its separate parts implicated
With everything else in the whole.

During conscious observation
The ‘hereness’ and ‘nowness’
Of reality crystalizes and remains,
We establishing what that reality is to some extent.

We define and refine the nature of reality
That leads to the mind’s outlook.

Counterintuitive? Cyclical?

Yes but it is the universe in dialog with itself;
The wave functions and yet the function waves.

The universe supplies the means of its own creation,
Its possibilities supplying the avenues
And the probability and workability
That carve out the paths leading to success.

So here we are, then and now,
The rains of change falling everywhere,
The streams being carved out,
The water rising back up to the sky,
The rain then falling everywhere,
The streams recarving and meandering
Toward more meaning and so on.[/hide]
Zebeden March 08, 2025 at 22:10 #974751
Reply to MoK

Can't comment on neurological development, but from how I understand what the option is, I would say that an option always requires another option for it to be an option. Only if I know that I can also take the left path, does the right path become an option. Otherwise, it's just a path. Or rather, the path, I should say.

I would create a primary single object from the two options taken as a whole. And you yourself offered a fork in a maze as a starting point. The fork is this single object. You realize it is a fork, a decision-point, the sudden blur of the way further. And then you Quoting MoK
immediately realize that there are two options available for you, namely the left and right path.


So my answer would be that the fork always precedes the options. To understand an option, one first experiences a moment of unclarity. And then grasps the elements that make up this unclarity, like how a wave has ascending and descending parts.

As the single fork precedes the multiple options, you get this single object as a starting point. A single option is just a part of a fork, which you always extract later, already knowing there will be multiple options as it comes from recognizing the fork as being a fork.

And because options are always multiple by the very nature of a fork, it doesn't matter if the process of picking an option is always deterministic. They are still options because they are multiple.
Pierre-Normand March 09, 2025 at 06:27 #974798
Quoting MoK
I would like to invite Pierre-Normand here since he is very knowledgeable in AI. Hopefully, he can help us answer these questions or give us some insight.


I'd like to comment but I'm a bit unclear on the nature of the connection that you wish to make between the two issues that you are raising in your OP. There is the issue of reconciling the idea of there being a plurality of options available to an agent in a deterministic world, and the issue of the cognitive development (and the maturation of their visual system) of an infant whereby they come to discriminate two separate objects from the background and from each other. Can you make your understanding of this connection more explicit?
MoK March 09, 2025 at 10:08 #974810
Reply to PoeticUniverse
I'm sorry, but I can't follow you. Could you please write your opinion in plain English so I can understand what you're discussing?
MoK March 09, 2025 at 10:26 #974812
Quoting Zebeden

Can't comment on neurological development, but from how I understand what the option is, I would say that an option always requires another option for it to be an option. Only if I know that I can also take the left path, does the right path become an option. Otherwise, it's just a path. Or rather, the path, I should say.

I say that you have only one option available when there is only one path available to you.

Quoting Zebeden

So my answer would be that the fork always precedes the options. To understand an option, one first experiences a moment of unclarity.

Could you please elaborate here?
Zebeden March 09, 2025 at 11:45 #974816
Reply to MoK
Quoting MoK
I say that you have only one option available when there is only one path available to you.

Then, I would say that I have no options at all. I think that the only option is not an option, but rather a mere necessity.

Quoting MoK
Could you please elaborate here?

First, you experience a situation that requires decision-making. Once you're in such a situation, only then do you start examining options. Before that, everything was clear and certain (I was just going forward on this single path), and now I'm weighing my options at the crossroads, hence the uncertainty.
MoK March 09, 2025 at 12:10 #974818
Quoting Pierre-Normand

I'd like to comment but I'm a bit unclear on the nature of the connection that you wish to make between the two issues that you are raising in your OP.

Ok, I will try to make things clearer for you.

Quoting Pierre-Normand

There is the issue of reconciling the idea of there being a plurality of options available to an agent in a deterministic world,

First, I have to say that the de Broglie–Bohm interpretation of quantum mechanics is correct since it is paradox-free. The motion of particles in this theory is deterministic, though. By deterministic I mean that, given the state of the system at one point in time, the state of the system at any later time is uniquely defined by the former state. So, accepting de Broglie–Bohm theory, the motions of particles in the brain are deterministic as well. What bothers me is that we know for sure that options are real. We also know for sure that the existence of options is due to neural processes in the brain. Neural processes, however, are deterministic, so I am wondering how options can possibly result from neural processes in the brain. I think we can resolve a big problem in the philosophy of mind, namely the hard determinists' claim that options cannot be real. Of course, the hard determinists cannot be right in this case, since we can obviously distinguish between a situation in which there is only one object and another in which there are two objects. I studied neural networks in good depth in the past. My memory of neural networks is very rusty now, but I would be happy to have your understanding of this topic if you can explain it in terms of neural networks as well. Can we train a neural network to distinguish between one and two objects and give outputs 1 and 2 respectively? If yes, what happens at the neural level when it is trained to recognize two objects?
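As an aside, the notion of determinism used here (the state at one time uniquely fixes the state at any later time) can be sketched with a toy update rule; this is a minimal illustration with arbitrary made-up dynamics, not a model of any physical theory:

```python
# Toy illustration of determinism: the state at time t uniquely
# fixes the state at time t+1, so identical initial states always
# yield identical trajectories.

def step(state):
    """A deterministic update rule: the next state is a fixed function of the current one."""
    x, v = state
    return (x + v, v - x // 2)  # arbitrary but fixed dynamics

def trajectory(state, n):
    """Run the update rule n times and record every state along the way."""
    states = [state]
    for _ in range(n):
        state = step(state)
        states.append(state)
    return states

run_a = trajectory((1, 1), 10)
run_b = trajectory((1, 1), 10)
assert run_a == run_b  # same initial state, same history: no room for divergence
```

Any two runs from the same initial state are indistinguishable, which is exactly the property at issue.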

Quoting Pierre-Normand

and the issue of the cognitive development (and the maturation of their visual system) of an infant whereby they come to discriminate two separate objects from the background and from each other.

Please let's focus on one object first. If we accept that Hebbian theory is the correct theory of learning, then we can explain how an infant realizes one object. Once the infant is presented with two objects, she/he can recognize each object very quickly since she/he has already memorized the object. How she/he realizes that there are two separate objects is, however, very tricky, and that is the part I don't understand well. I have seen that smartphones can recognize faces when I try to take a photo. I don't think they can recognize that there are two or more faces, though.

Quoting Pierre-Normand

Can you make your understanding of this connection more explicit?

I tried to elaborate as best I could. Please let me know what you think and ask questions if you have any.
MoK March 09, 2025 at 12:32 #974821
Quoting Zebeden

First, you experience a situation that requires decision-making. Once you're in such a situation, only then do you start examining options. Before that, everything was clear and certain (I was just going forward on this single path), and now I'm weighing my options at the crossroads, hence the uncertainty.

I am interested to know what happens at the neural level when we realize that there are two paths.
noAxioms March 09, 2025 at 20:46 #974912
Quoting MoK
So I am wondering how can deterministic processes lead to the realization of options.
This is trivially illustrated with the simplest code.

Take a step.
Count the ways forward (don't include the way you came).
If 0, it's a dead end; the only option is to turn around.
If 1, continue that one way.
Otherwise, there are multiple options.

It's that easy. The realization of multiple options is as simple as counting, and there are even multiple options in case 1, since a good maze-following program might conclude that it is not productive to follow the current path to its unseen end.
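The steps above can be sketched in runnable form; this is a minimal illustration assuming a grid maze where a set of blocked cells stands in for walls (the maze representation and all names here are hypothetical):

```python
# A deterministic walker "realizes" its options simply by counting the
# open ways forward from its current cell, excluding the way it came.

def open_neighbours(cell, came_from, walls):
    """Return the passable neighbouring cells, minus the one we arrived from."""
    x, y = cell
    candidates = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [c for c in candidates if c not in walls and c != came_from]

def classify(cell, came_from, walls):
    """Dead end, forced move, or genuine fork - decided by a simple count."""
    n = len(open_neighbours(cell, came_from, walls))
    if n == 0:
        return "dead end: only option is to turn around"
    if n == 1:
        return "one way forward: continue"
    return f"fork: {n} options realized"

walls = {(1, 2)}                        # hypothetical maze fragment
print(classify((1, 1), (0, 1), walls))  # → fork: 2 options realized
```

The classification is fully deterministic, yet the program distinguishes a forced move from a genuine fork, which is all that "realizing options" amounts to here.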

Almost all computer programs are fully deterministic and are great models to simplify what might otherwise be a complex subject.

What you need to worry about is not the realization of options, but how determinism always results in the same choice given the exact same initial state. So our program might be crude and use the right-hand rule, in which case it doesn't even count options; it just takes the first rightmost valid path and doesn't even notice whether there are other options. A better program would be more optimal than that, but then complexity is required, and it still does the same thing given the same initial state.

So the realization of options is one thing, but no matter how many options there are, only one choice can ultimately be made, even if determinism is not the case. You can follow a choice in the maze, and if it dead-ends, you go back and take the other way, which is 'doing otherwise'. Even the right-hand robot can do otherwise in that sense.


As for the infant process of neural development, that's an insanely complex issue that likely requires a doctorate in the right field to discuss the current view of how all that works. It seems irrelevant to the topic of determinism and options.


Quoting MoK
First, I have to say that De Broglie–Bohm's interpretation of quantum mechanics is correct since it is paradox-free.

All the interpretations are paradox-free. None of them has been falsified (else they'd not be valid interpretations); some of them posit fundamental randomness, but several don't.

I don't like Bohmian mechanics because it requires FTL causality and even retro-causality, forbidden by the principle of locality, but that principle is denied by that interpretation. That makes it valid, but it doesn't make me willing to accept it.
Banno March 09, 2025 at 20:58 #974915
Quoting MoK
This realization is due to neural processes in the brain.


Not quite. That realisation is neural processes in the brain. It is not separate from, yet caused by, those neural processes.

And a babe's brain is pre-wired to recognise faces and areola.

Pierre-Normand March 10, 2025 at 06:17 #975012
Quoting MoK
Please let's focus on one object first. If we accept the Hebbian theory is the correct theory for learning then we can explain how an infant realizes one object. Once the infant is presented with two objects, she/he can recognize each object very quickly since she/he already memorized the object. How she/he realizes that there are two separate objects is however very tricky and is the part that I don't understand well. I have seen that smartphones can recognize faces when I try to take a photo. I however don't think they can recognize that there are two or more faces though.


Hebbian mechanisms contribute to explaining how object recognition skills (and reconstructive episodic memory) can be enabled by associative learning. Being presented with (and/or internally evoking) a subset of a set of normally co-occurring stimuli yields the activation of the currently missing stimuli from the set. Hence, for instance, the co-occurring thoughts of "red" and "fruit" might evoke in you the thought of an apple or tomato, since the stimuli provided by this subset evoke the missing elements (including their names) of the co-occurring stimuli (or represented features) of familiar objects.
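This pattern-completion idea can be sketched with a toy Hebbian associative memory; the ±1 activity patterns below are illustrative assumptions, not a model of any actual neural circuit:

```python
# A tiny Hebbian associative memory ("cells that fire together, wire
# together"): weights are built from co-activation, and presenting part
# of a stored pattern re-activates the missing part.

def train(patterns):
    """Hebbian rule: w[i][j] accumulates the co-activity of units i and j."""
    n = len(patterns[0])
    w = [[0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, cue):
    """Each unit takes the sign of its weighted input, so a partial cue completes itself."""
    return [1 if sum(w[i][j] * cue[j] for j in range(len(cue))) >= 0 else -1
            for i in range(len(cue))]

stored = [1, 1, -1, -1, 1]          # a hypothetical "familiar object" pattern
w = train([stored])
cue = [1, 1, 0, 0, 0]               # only part of the pattern is presented
print(recall(w, cue))               # → [1, 1, -1, -1, 1]: the missing units are evoked
```

Presenting a subset of the stored pattern drives the remaining units back to their stored values, which is the associative completion described above.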

In the case of the artificial neural networks that undergird large language models like ChatGPT, the mechanism is different but has some functional commonalities. As GPT-4o once put it: "The analogy to Hebbian learning is apt in a sense. Hebbian learning is often summarized as "cells that fire together, wire together." In the context of a neural network, this can be likened to the way connections (weights) between neurons (nodes) are strengthened when certain patterns of activation occur together frequently. In transformers, this is achieved through the iterative training process where the model learns to associate certain tokens with their contexts over many examples."

I assume what you are driving at when you ponder over the ability to distinguish qualitatively similar objects in the visual field is the way in which those objects are proxies for alternative affordances for action, as your initial example of two alternative paths in a maze suggests. You may be suggesting (correct me if I'm wrong) that those two "objects" are being discriminated as signifying or indicating alternative opportunities for action and you wonder how this is possible in view of the fact that, in a deterministic universe, only one of those possibilities will be realized. Is that your worry? I think @Banno and @noAxioms both proposed compatibilist responses to your worry, but maybe you have incompatibilist intuitions that make you inclined to endorse something like Frankfurt's principle of alternate possibilities. Might that be the case?
MoK March 10, 2025 at 10:19 #975044
Quoting noAxioms

This is trivially illustrated with the most simple code.

I agree that one can write code to help a robot count the number of unmoving dots in its visual field. But I don't think a person can write code to help a robot count the number of objects or moving dots.

Quoting noAxioms

As for the infant process of neural development, that's an insanely complex issue that likely requires a doctorate in the right field to discuss the current view of how all that works.

I searched the internet to death but I didn't find anything useful.

Quoting noAxioms

It seems irrelevant to the topic of determinism and options.

It is relevant.

Quoting noAxioms

All the interpretations are paradox free.

The Copenhagen interpretation, for example, suffers from the Schrödinger's cat paradox. It cannot explain John Wheeler's delayed-choice experiment, etc. Anyway, I am not interested in getting into a debate on quantum mechanics in this thread since it is off-topic. All I wanted to say is that, for the purposes of this thread, the motion of particles in the brain is deterministic.
MoK March 10, 2025 at 10:55 #975048
Quoting Banno

Not quite. That realisation is neural processes in the brain. It is not seperate from yet caused by those neural processes.

We have a slight difference here. I am a substance dualist and it seems to me that you are a physicalist. But please let's focus on the topic of the thread and put this difference in view aside.

Quoting Banno

And a babe's brain is pre-wired to recognise faces and areola.

Do you have any argument, or know of any study, to support this claim? I am asking how an infant can distinguish between one object and two objects, so I would be interested to know how an infant's brain is pre-wired. Merely saying that an infant's brain is pre-wired does not help us better understand what is happening in her/his brain when she/he realizes one object or two objects.
MoK March 10, 2025 at 12:01 #975052
Quoting Pierre-Normand

I assume what you are driving at when you ponder over the ability to distinguish qualitatively similar objects in the visual field is the way in which those objects are proxies for alternative affordances for action, as your initial example of two alternative paths in a maze suggests. You may be suggesting (correct me if I'm wrong) that those two "objects" are being discriminated as signifying or indicating alternative opportunities for action and you wonder how this is possible in view of the fact that, in a deterministic universe, only one of those possibilities will be realized. Is that your worry?

Yes. I am wondering how we can realize two objects which look the same as a result of neural processes in the brain accepting that the neural processes are deterministic.

Quoting Pierre-Normand

I think @Banno and @noAxioms both proposed compatibilist responses to your worry

@noAxioms suggests that we are counting objects. I don't think that is the case when we are presented with two objects. We immediately realize two objects as a result of neural processes in the brain. We however need to count when we are presented with many objects.

@Banno suggests that an infant's brain is pre-wired. That could be true. But that does not answer how an infant could possibly realize two objects since it does not address how the brain is pre-wired.

Quoting Pierre-Normand

but maybe you have incompatibilist intuitions that make you inclined to endorse something like Frankfurt's principle of alternate possibilities. Might that be the case?

Yes. We are morally responsible only if we could have done otherwise. That means that we have at least two options to choose from. The options, however, are mental objects, like to steal or not to steal, which are slightly harder to discuss, but I think we are dealing with the same category whether we realize two objects in our visual field or two mental objects. So I think we can resolve all the discussions related to the reality of options if we first understand how the brain can distinguish two objects in its visual field.
noAxioms March 10, 2025 at 20:48 #975180
Quoting Pierre-Normand
I think Banno and @noAxioms both proposed compatibilist responses to your worry,

A compatibilist says that free will and determinism are compatible with each other, but I would need both words more precisely defined were I to agree with that.


Quoting MoK
noAxioms suggests that we are counting objects.
I was showing the counting of options, not objects.
Quoting MoK
I agree that one can write code to help a robot count the number of unmoving dots in its visual field.
You are complicating a simple matter. I made no mention of the fairly complex task of interpreting a visual field. The average maze runner doesn't even have a visual field at all, though some do.
All I am doing is showing the utterly trivial task of counting options, a task easily performed by a deterministic entity, which answers your statement "So I am wondering how can deterministic processes lead to the realization of options".

The solution is to count the options (in the maze example, the paths away from the current location): if there is more than one, options have been realized; if there is but one, it isn't optional. The means by which these options are counted is a needless complication that is beside the point.

But I don't think a person can write code to help a robot count the number of objects or moving dots.
I wrote code that did exactly that. It would look at a bin of parts and decide on the next one to pick up, and would determine the angle at which to best do that. This was 45 years ago when this sort of thing was still considered innovative.

Copenhagen interpretation for example suffers from the Schrodinger's cat paradox.
Nonsense. Just because you don't know how it explains a scenario doesn't mean it doesn't explain it. Copenhagen was developed as an epistemological interpretation, which means the observer outside the box doesn't know the cat's state (the wave function describes knowledge of the state), while the observer inside has a more collapsed wave-function state. Super easy.
Sure, off topic, so I'll leave off the delayed-choice thingy.
But your assertion that Bohmian mechanics is the only valid interpretation (a deterministic one) is on topic, and thus the falsification of the other interpretations is very much on topic.

Again, I counted six kinds of determinism, and some of those are almost certainly the case and some of them are almost certainly not the case. Bohmian mechanics was number 2.

Quoting MoK
We are morally responsible if we could do otherwise. That means that we at least have two options to choose from.
Moral responsibility is far more complicated than that, as illustrated by counterexamples, but the core is correct: there must be more than one course of action available, and it is very hard to come up with an example where that is not the case. Say I am in a maze but find myself embedded in the concrete walls instead of the paths between them. I have no options, and thus am not responsible for anything I do there.

The options are however mental objects, like to steal or not to steal
Stealing and not stealing are physical actions, not mental objects. Bearing moral responsibility for one's mental objects is a rare thing, but they did it to Jimmy Carter, about as moral a person as they come.

The fallacy seems to be in the assertion that determinism somehow takes away choice, which of course is nonsense since we'd not have evolved large (and very expensive) brains if not to make better choices. I cannot think of a single way that a choice can be made better by a non-deterministic process than by a similar but deterministic process. I invite such an example, but a deterministic algorithm implemented on a non-deterministic information processor is still a deterministic process.
MoK March 11, 2025 at 10:48 #975313
Quoting noAxioms

I was showing the counting of options, not objects.

We don't count options if a few are presented to us. We just realize the number of options right away as a result of neural processes in the brain. I am interested in understanding what is happening in the brain when we are performing such a simple task.

Quoting noAxioms

You are complicating a simple matter. I made no mention of the fairly complex task of interpreting a visual field. The average maze runner doesn't even have a visual field at all, but some do.
All I am doing is showing the utterly trivial task of counting options, which is a task easily performed by a determinsitic entity, answering your seeming inability to realize this when you state "So I am wondering how can deterministic processes lead to the realization of options".

The solution is to count the options (in the maze example, paths away from current location) and if there is more than one, options have been realized. If there is but one, it isn't optional. The means by which these options are counted is a needless complication that is besides the point.

No, you take the existence of options for granted and then offer code that is supposed to count them. Thanks, but that is not what I am looking for.

Quoting noAxioms

Stealing and not stealing are physical actions, not mental objects. Bearing moral responsibility for one's mental objects is a rare thing, but they did it to Jimmy Carter, about a moral person as they come.

I am talking about available options to a thief before committing the crime.
Relativist March 11, 2025 at 14:51 #975331
Quoting MoK
What does happen at the neural level when the infant realizes the object, and distinguishes it from the background?

I imagine it entails pattern recognition: seeing the same image pattern against a relatively constant background. Artificial neural networks learn patterns, and they are considerably simpler than biological neural networks because they lack neuroplasticity (the growing of new neurons and synapses).


Quoting MoK
So I am wondering how can deterministic processes lead to the realization of options.

Options that are before us lead us to mentally deliberate to develop a choice. If we could wind the clock back, could we actually have made a different choice? Clearly, if determinism is true, then we could not. But if determinism is false- why think our deliberation would have led to a different outcome? The same mental factors would have been in place.

Patterner March 11, 2025 at 16:02 #975339
It seems to me you are talking about the Hard Problem of Consciousness. Photons hit retina; signals go to the brain; pattern recognition shows that there are two possible paths; stored information of past encounters with similar patterns are triggered; on and on and on. But, unlike the Roomba, I realize I have options.
MoK March 11, 2025 at 20:01 #975404
Quoting Relativist

I imagine it entails pattern recognition: seeing the same image pattern against a relatively constant background. Artificial neural networks learn patterns, and they are considerably simpler that biological neural networks because they lack neuroplasticity (the growing of new neurons and synapses).

I did an extensive search and found many methods for object recognition; two main ones are CNNs and YOLO. Granted that objects are recognized, I am interested in methods for counting objects. I did an extensive search on the net and got lost, since the literature on this topic is very rich! The current focus of research is finding the best method for counting very dense collections of objects, where objects may overlap, for example. There is a review article that discusses CNN methods for crowd counting. I am interested in a simple neural network that can count a limited number of isolated objects, though. I will continue the search and let you know if I find anything useful.
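For isolated objects, counting doesn't even require a learned network: a classic connected-components pass over a binary image does it deterministically. A minimal sketch, assuming a toy grid of 0s and 1s stands in for a segmented image:

```python
# Counting isolated objects in a binary image by flood-filling connected
# components - a deterministic procedure that "realizes" how many
# distinct objects are present.

def count_objects(image):
    """Number of 4-connected blobs of 1s in a grid of 0s and 1s."""
    rows, cols = len(image), len(image[0])
    seen = set()

    def flood(r, c):
        # Iterative flood fill: mark every cell of one blob as seen.
        stack = [(r, c)]
        while stack:
            r, c = stack.pop()
            if (r, c) in seen or not (0 <= r < rows and 0 <= c < cols):
                continue
            if image[r][c] != 1:
                continue
            seen.add((r, c))
            stack.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])

    count = 0
    for r in range(rows):
        for c in range(cols):
            if image[r][c] == 1 and (r, c) not in seen:
                count += 1       # a new, previously unseen object
                flood(r, c)
    return count

two_objects = [
    [1, 1, 0, 0],
    [1, 0, 0, 1],
    [0, 0, 0, 1],
]
print(count_objects(two_objects))  # → 2
```

This is essentially what library routines such as scipy's `ndimage.label` do; the hard part you point to (dense, overlapping objects) is precisely where such simple segmentation breaks down and the learned methods take over.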

Quoting Relativist

Options that are before us lead us to mentally deliberate to develop a choice. If we could wind the clock back, could we actually have made a different choice? Clearly, if determinism is true, then we could not. But if determinism is false- why think our deliberation would have led to a different outcome? The same mental factors would have been in place.

I am not interested in discussing the decision here. I am interested in understanding how we realize two objects so swiftly. If I show you two objects, you realize, without any counting, that there are two objects in your visual field. The same applies when you are in a maze: you realize that there are two paths available to you without counting. The mechanism is completely deterministic, though. Two objects, two paths in a maze, etc.: we are dealing with the same topic, and although the mechanism is fully deterministic, we can recognize two options. So that part of the puzzle is solved for me.
MoK March 11, 2025 at 20:15 #975406
Quoting Patterner

It seems to me you are talking about the Hard Problem of Consciousness.

No, here I am interested in understanding how we realize objects/options in our visual fields. Please read the previous post if you are interested.
MoK March 11, 2025 at 21:15 #975417
@Relativist Ok, after a long search, I found an interesting thesis that deals with a neural network that can count. I read up to chapter 3. It is late now and time to go to bed! :wink:
Relativist March 11, 2025 at 21:25 #975423
Reply to MoK It looks interesting. I'll read it when I get a chance. The bibliography also lists some references that may also be helpful.
Pierre-Normand March 12, 2025 at 01:18 #975504
Quoting MoK
Yes. I am wondering how we can realize two objects which look the same as a result of neural processes in the brain accepting that the neural processes are deterministic.


I think you may be using the word "realize" to mean "resolve" (as used in photography, for instance, to characterise the ability to distinguish between closely adjacent objects). Interestingly, the polysemy of the word "resolve", which can characterise either an ability to visually discriminate or the firmness of one's intention to pursue a determinate course of action, suggests that the two senses are semantically related, the first being a metaphorical extension of the second.
noAxioms March 12, 2025 at 02:00 #975520
I know you're talking about mental processing of visual data, but that's far more complex than anybody here is qualified to answer, so I am instead picking statements that seem to be falsified by a simple, understandable model.

Quoting MoK
No, you consider the existence of options granted

We were considering a fork in the path of a maze. Are they not a pair of options?
Sure, one cannot choose to first go down both. Of the options, only one can be chosen, and once done, choosing otherwise cannot be done without some sort of retrocausality. They show this in time travel fictions where you go back to correct some choice that had unforeseen bad consequences.

I guess I don't know what you consider to be options.

I am talking about available options to a thief before committing the crime.
So you do grant the existence of multiple options before choosing one of them. What part of the maze example then is different than the crime example?


Quoting Patterner
But, unlike the Roomba, I realize I have options.

A Roomba wouldn't work if it didn't realize options. If there are two paths to choose from, it needs to know that. If it always picked the left path, there would be vast swaths of floor never visited. It needs awareness of alternative places to go.

What fundamentally do you do that a Roomba doesn't? If you mean it is not remote controlled, I'll agree. It makes its own choices. The RC car on the other hand is remote controlled and has no free will of its own. That's a fundamental distinction between the Roomba and the RC car, but I ask about the Roomba and you, because I suspect you're the RC car, a puppet of another.
Patterner March 12, 2025 at 02:47 #975532
Quoting noAxioms
But, unlike the Roomba, I realize I have options.
— Patterner
A Roomba wouldn't work if it didn't realize options.
How about wording it this way:
[I]A Roomba wouldn't work if it didn't realize it has options.[/I]


I'm afraid you've lost me, regarding the puppet.
MoK March 12, 2025 at 10:05 #975570
Reply to Relativist
Yes, the bibliography also lists the references which could be useful. I will go through them after I finish the thesis.
MoK March 12, 2025 at 10:29 #975572
Reply to Pierre-Normand
I am unsure whether we first realize two objects and then distinguish/resolve them from each other upon further investigation, or first distinguish/resolve them from each other and then count them and realize that there are two objects. The counting convolutional neural network works based on the latter.
Pierre-Normand March 12, 2025 at 10:42 #975575
Quoting MoK
I am unsure whether we first realize two objects and then distinguish/resolve them from each other upon further investigation, or first distinguish/resolve them from each other and then count them and realize that there are two objects. The counting convolutional neural network works based on the latter.


We can indeed perceive a set of distinct objects as falling under the concept of a number without there being the need to engage in a sequential counting procedure. Direct pattern recognition plays a role in our recognising pairs, trios, quadruples, quintuples of objects, etc., just like we recognise numbers of dots on the faces of a die without counting them each time. We perceive them as distinctive Gestalten. But I'm more interested in the connection that you are making between recognising objects that are actually present visually to us and the prima facie unrelated topic of facing open (not yet actual) alternatives for future actions in a deterministic world.
Patterner March 12, 2025 at 11:24 #975578
Quoting Pierre-Normand
We can indeed perceive a set of distinct objects as falling under the concept of a number without there being the need to engage in a sequential counting procedure. Direct pattern recognition plays a role in our recognising pairs, trios, quadruples, quintuples of objects, etc., just like we recognise numbers of dots on the faces of a die without counting them each time. We perceive them as distinctive Gestalten.
I would think there's a limit to this. We might recognize the number of dots on a die because of the specific arrangements that we've seen so many times. Would we do as well with five or six randomly arranged objects? Or ten or fifteen?
MoK March 12, 2025 at 11:45 #975582
Quoting noAxioms

I know you're talking about mental processing of visual data, but that's far more complex than anybody here is qualified to answer, so I am instead picking statements that seem to be falsified by a simple, understandable model.

I found this useful thesis about counting objects by a convolutional neural network.

Quoting noAxioms

We were considering a fork in the path of a maze. Are they not a pair of options?

Sure they are.

Quoting noAxioms

Sure, one cannot choose to first go down both. Of the options, only one can be chosen, and once done, choosing otherwise cannot be done without some sort of retrocausality. They show this in time travel fictions where you go back to correct some choice that had unforeseen bad consequences.

The point is that both paths are real and accessible, as we can recognize them. However, the process of recognizing paths is deterministic. This is something that hard determinists deny. The decision is a separate topic though. I don't think that the decision results from the brain's neural processes; the decision is due to the mind. That is true since any deterministic system halts when you present it with options. A deterministic system always goes from one state to another unique state. If a deterministic system reaches a situation where two states are available to it, it cannot choose between them, and therefore it halts. When we are walking in a maze, our conscious mind is always aware of the different situations. If there is one path available, then we simply proceed. If we reach a fork, we realize the options available to us, namely the left and right paths. That is when the conscious mind comes into play: it realizes the paths in its experience and chooses one of them. The subconscious mind then becomes aware of the decision and acts accordingly.

Quoting noAxioms

I guess I don't know what you consider to be options.

By options, I mean a set of things that are real and accessible and we can choose from.

Quoting noAxioms

So you do grant the existence of multiple options before choosing one of them. What part of the maze example then is different than the crime example?

In the example of the maze, the options are presented to the person's visual field. In the case of robbery, the options are mental objects.
MoK March 12, 2025 at 12:12 #975585
Quoting Pierre-Normand

We can indeed perceive a set of distinct objects as falling under the concept of a number without there being the need to engage in a sequential counting procedure. Direct pattern recognition plays a role in our recognising pairs, trios, quadruples, quintuples of objects, etc., just like we recognise numbers of dots on the faces of a die without counting them each time. We perceive them as distinctive Gestalten.

Correct. There is however a limit on the number of things that we can realize without counting. I think it is related to working memory and it is at most five to six items.

Quoting Pierre-Normand

But I'm more interested in the connection that you are making between recognising objects that are actually present visually to us and the prima facie unrelated topic of facing open (not yet actual) alternatives for future actions in a deterministic world.

I was interested in a neural network that can realize the number of objects. I found this thesis, which deals exactly with the problem of realizing the number of objects that I was interested in. The author does not explain what exactly happens at the neural level when the neural network is presented with many objects and realizes their number, as I think it is a complex phenomenon. I think we are dealing with the same phenomenon when we face two options in the example of the maze, the left and right paths. So, although neural processes, whether in our brain or in an artificial neural network, are deterministic, they can lead to the realization of options. By options, I mean things that are real and accessible to us, of which we can choose one or more depending on the situation.
MoK March 12, 2025 at 12:14 #975586
Quoting Patterner

I would think there's a limit to this. We might recognize the number of dots on a die because of the specific arrangements that we've seen so many times. Would we do as well with five or six randomly arranged objects? Or ten or fifteen?

I think it depends on the working memory of the person which is at most 5 to 6 items.
Patterner March 12, 2025 at 12:29 #975590
Reply to MoK
Makes sense.
Patterner March 12, 2025 at 12:32 #975593
Reply to MoK
Strike that. What do you mean by working memory? I'm thinking someone could glance at, say, a max of 10 randomly arranged items, and immediately know there are 10, without counting. Someone else might only be able to do that with up to 5 items.
MoK March 12, 2025 at 12:56 #975594
Reply to Patterner
Working memory is the memory of the conscious mind, which is temporary.
Patterner March 12, 2025 at 13:02 #975596
Quoting MoK
Working memory is the memory of the conscious mind, which is temporary.
Right. I'm thinking this specific thing is less about working memory than what the ability to recognize numbers of randomly arranged objects is called. No?
MoK March 12, 2025 at 13:17 #975598
Quoting Patterner

Right. I'm thinking this specific thing is less about working memory than what the ability to recognize numbers of randomly arranged objects is called. No?

I think it is related. You can realize a few objects in your visual field immediately without counting. These objects are registered in your working memory. If the number of objects surpasses the size of your working memory, then you cannot immediately report the number of objects and you have to count them. You might find this study interesting.
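The subitize-versus-count split described above can be caricatured as a simple threshold rule. A minimal sketch follows; the limit value is illustrative, not a measured constant, and `report_count` is a hypothetical name:

```python
# Caricature of subitizing vs. counting.
# The threshold is illustrative; typical estimates are around 3-5 items.
SUBITIZING_LIMIT = 4

def report_count(objects):
    """Report how many objects there are, and whether the number would be
    grasped at a glance (subitized) or require sequential counting."""
    n = len(objects)
    mode = "subitized" if n <= SUBITIZING_LIMIT else "counted"
    return n, mode

print(report_count(["a", "b", "c"]))   # -> (3, 'subitized')
print(report_count(list(range(10))))   # -> (10, 'counted')
```

Note the rule itself is fully deterministic, which is the thread's point: a deterministic mechanism can still "realize" a number of items.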
noAxioms March 12, 2025 at 17:40 #975615
Quoting Patterner
How about wording it this way:
A Roomba wouldn't work if it didn't realize it has options.

I'm fine with that.

You didn't answer the question asked "What fundamentally do you do that a Roomba doesn't?" when you imply that a Roomba doesn't realize options.


Quoting MoK
We were considering a fork in the path of a maze. Are they not a pair of options? — noAxioms

Sure they are.

Sure, one cannot choose to first go down both. Of the options, only one can be chosen, and once done, choosing otherwise cannot be done without some sort of retrocausality. They show this in time travel fictions where you go back to correct some choice that had unforeseen bad consequences. — noAxioms

The point is that both paths are real and accessible, as we can recognize them. However, the process of recognizing paths is deterministic. This is something that hard determinists deny.

How can a determinist deny that some physical process is deterministic? Do you have a reference for this denial by 'hard determinists'?
I mean, even in a non-deterministic universe, the process of recognizing paths (biological or machine) is deterministic. I cannot think of a non-deterministic way to implement it.

I don't think that the decision results from the brain's neural process. The decision is due to the mind.
Ah, so you think that this 'mind' is separate from neural processes. You should probably state assumptions of magic up front, especially when discussing how neural processes do something that you deny is done by the neural processes. Or maybe the brain actually has a function after all besides just keeping the heart beating and such.

since any deterministic system halts when you present it with options.
Tell that to Roomba or the maze runner, neither of which halts at all.

A deterministic system always goes from one state to another unique state. If a deterministic system reaches a situation where there are two states available for it it cannot choose between two states therefore it halts.
No, it makes a choice between them. Determinism helps with that, not hinders it. Choosing to halt is a decision as well, but rarely made. You make a lot of strawman assumptions about deterministic systems, don't you?


In the example of the maze, the options are presented to the person's visual fields. In the case of rubbery the options are mental objects.
The maze options are also 'mental' objects, where 'mental' is defined as the state of the information-processing portion of the system. A difference in how the choice comes to be known is not a fundamental difference to the choice existing.
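noAxioms' claim that a deterministic system picks rather than halts can be sketched in a few lines. This is a hypothetical toy, not actual Roomba firmware: a fixed rule scores the options and a fixed tie-breaker guarantees a choice is always made.

```python
# Toy deterministic chooser (hypothetical; not actual Roomba firmware).
# Given several options it always selects exactly one: prefer the
# least-visited path, and break ties by list order. It never halts.

def choose_path(options, visit_counts):
    """Return the least-visited option; ties go to the earliest-listed one."""
    return min(options, key=lambda o: (visit_counts.get(o, 0), options.index(o)))

print(choose_path(["left", "right"], {"left": 3, "right": 1}))  # -> right
print(choose_path(["left", "right"], {}))                       # -> left (tie-break)
```

The point of the sketch: presenting two options to a deterministic rule does not make it halt; the rule plus the current state uniquely determines which option is taken.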



Patterner March 12, 2025 at 18:47 #975621
Quoting noAxioms
How about wording it this way:
A Roomba wouldn't work if it didn't realize it has options.
— Patterner
I'm fine with that.

You didn't answer the question asked "What fundamentally do you do that a Roomba doesn't?" when you imply that a Roomba doesn't realize options.
I did not. I was waiting to see if we were thinking of things the same way.

The difference is I am aware that I have options. The Roomba goes one way or the other at the command of its programming, never aware of [I]how[/I] the decision was made; [I]that[/I] a decision was made; or even that there are options. It has no concept of options. It does not think about the choice it made two minutes ago, and wonder if it might have been better to have gone the other way. And it certainly doesn't regret any choice it ever made.
MoK March 12, 2025 at 19:31 #975623
Quoting noAxioms

How can a determinist deny that some physical process is determisitic? You have a reference for this denial by 'hard determinists'?

I wanted to say that determinists deny the existence of options rather than determinism.

Quoting noAxioms

Ah, so you think that this 'mind' is separate from neural processes.

Sure, I think that the mind is separate from neural processes. To me, physical processes in general are not possible without an entity that I call the Mind. I have two threads on this topic. In one of the threads entitled "Physical cannot be the cause of its own change" I provide two main arguments against the physicalist worldview. In another thread entitled "The Mind is the Uncaused Cause", I discuss the nature of causality as vertical rather than horizontal. So no Mind, no physical processes, no neural processes.

Quoting noAxioms

You should probably state assumptions of magic up front, especially when discussing how neural processes do something that you deny are done by the neural processes.

I am not denying the role of neural processes at all. It is due to neural processes that we can experience things all the time. The existence of options also is due to the existence of neural processes. Neural processes, however, cannot lead to direct experience, because of the so-called Hard Problem of Consciousness. So, to have a coherent view we need to include the mind as an entity that experiences. The Mind experiences and causes/creates the physical, whereas the mind, such as the conscious mind, experiences ideas, ideas such as the simulation of reality, generated by the subconscious mind. The conscious mind only intervenes when it is necessary, for example when there is a conflict of interests in a situation.

Quoting noAxioms

Or maybe the brain actually has a function after all besides just keeping the heart beating and such.

Sure. No brain, no neural processes, no experience in general, whether the experience is a feeling, the simulation of reality, thoughts, etc.

Quoting noAxioms

Tell that to Roomba or the maze runner, neither of which halts at all.

That is because a Roomba acts based on the instructions that a human wrote for it. We don't act based on preprogrammed instructions. We are constantly faced with options, and these options have different features that we have never experienced before. We normally go through a very complex process of giving weights to options. Once the process of giving weights to options is performed, we are faced with two situations: either the options do not have the same weight, or they do. We normally choose the option that has the higher weight most of the time, but we can always choose otherwise. When the options have the same weight we can still freely choose the option we please. In both cases, it is the conscious mind that makes the final decision freely by choosing one of the options.

Quoting noAxioms

No, it makes a choice between them. Determinism helps with that, not hinders it. Choosing to halt is a decision as well, but rarely made. You make a lot of strawman assumptions about deterministic systems, don't you?

Not at all. Please see above.

Quoting noAxioms

The maze options are also 'mental' objects, where 'mental' is defined as the state of the information-processing portion of the system. A difference in how the choice comes to be known is not a fundamental difference to the choice existing.

The maze options become mental objects if you think about them otherwise they are just something in your visual field.
MoK March 12, 2025 at 19:32 #975625
Reply to Patterner
:100: :up:
noAxioms March 13, 2025 at 00:58 #975691
Quoting Patterner
The difference is I am aware that I have options.
Both are. The Roomba would not be able to choose an option of which it was unaware. So maybe the left path has been visited less recently, but if it didn't know left was an option, it would just go to the one path it does know about and clean the same spot over and over. Not very good programming.

The Roomba goes one way or the other at the command of its programming
The programming is part of the Roomba, same as your programming is part of you (maybe; opinions differ on the latter). You make it sound like a program at the factory is somehow remote-controlling the device. It could work that way, but it doesn't.

never aware of how the decision was made
Also true of both.

Quoting Patterner
It has no concept of options.
As I said, the device couldn't operate if it wasn't aware of options. It has sensory inputs. It uses them to determine options, including the option to seek the charging station, just like you do.

It does not think about the choice it made two minutes ago
Actually it does, but I do agree that some devices don't retain memory of past choices. How is that a fundamental difference? You also don't remember all choices made in the past, even 2 minutes old. The Roomba doesn't so much remember the specific choices (which come at the rate of several per second, possibly thousands), but rather remembers the consequences of them.

Quoting Patterner
And it certainly doesn't regret any choice it ever made.
Got me there. The human emotion of regret probably does not enhance its functionality, so they didn't include that. The recent chess playing machines do definitely have regret (its own kind, not the human kind), something necessary for learning, but Roombas are not learning things.



Quoting MoK
I wanted to say that determinists deny the existence of options rather than determinism.
If they do that, they're using a very different definition of 'options' than are you.

Your definition (OM): the available paths up for choice. There are usually hundreds of options, but in a simplified model, you come to a T intersection in a maze. [Left, right, back the way you came, just sit there, pause and make a mark] summarize most of the main categories. Going straight is not an option because there's a wall there.
I am putting words in your mouth, so if I'm wrong, then call it ON (Option definition, Noaxioms) and then give your own definition with clear examples of what is and is not an option.

OK, says the hard determinist with the alternate definition OD: the possible subsequent states that lead from a given initial state. If determinism is true, there is indeed only one of those, both for the Roomba and for you. There is no distinction.

Thing is, there is no empirical way to figure out if determinism is the case or not. The experience is the same. If you want to go left, you go left. If you want to go right, you go right. That's true, determinism or not, and it's true regardless of which definition of 'options' is used.

Side note: Using OD, there is one option only with types 2, 5, and 6, but 1, 4, and 6 are not especially considered 'hard determinism'.


Quoting MoK
Sure, I think that the mind is separate from neural processes.
OK. Then it's going to at some point need to make a physical effect from its choice. If you choose to punch your wife in the face, your choice needs at some point to cause your arm to move, something that cannot happen if the subsequent state is solely a function of the prior physical state. So your view is compatible only with type 6 determinism, and then only in a self-contradictory way, but self-contradiction is what 6 is all about.

To me, physical processes in general are not possible without an entity that I call the Mind.
Fine. Work out the problem I identified just above. If you can't do that, then you haven't thought things through. Do you deny known natural law? If not, your beliefs fail right out of the gate. If you do deny it, where specifically is it violated?

How is the Roomba mind fundamentally different than yours? It's a physical process, and you assert above that such process is not possible without a mind. A rock cannot fall without a mind.

I suppose that works under idealism, but determinism (or lack of it) has pretty much no meaning under idealism.

Patterner March 13, 2025 at 04:01 #975744
Reply to noAxioms
So Roombas are the mental equals of humans? The only thing separating us is emotion?
MoK March 13, 2025 at 11:57 #975783
Quoting noAxioms

Your definition (OM): the available paths up for choice. There are usually hundreds of options, but in a simplified model, you come to a T intersection in a maze. [Left, right, back the way you came, just sit there, pause and make a mark] summarize most of the main categories. Going straight is not an option because there's a wall there.

By options, I mean things that are real and accessible to us and we can choose one or more of them depending on the situation.

Quoting noAxioms

OK, said hard determinist with the alternate definition OD: The possible subsequent states that lead from a given initial state. If determinism is true, there is indeed only one of those, both for the Roomba and for you. There is no distinction.

Well, I disagree. This thread's whole purpose is to understand how options can exist and be real for entities such as humans with brains. I was just looking to understand how we could realize options as a result of neural processes in the brain. I did an extensive search on the internet and found many methods for object recognition. I also found a thesis that deals with a neural network that can realize the number of objects presented to it. So the existence of options is well established even in the domain of artificial neural networks.

Quoting noAxioms

OK. Then it's going to at some point need to make a physical effect from it's choice. If you choose to punch your wife in the face, your choice needs to at some point cause your arm to move, something that cannot happen if the subsequent state is solely a function of the prior physical state.

The mind can only intervene when options are available to it. Once the decision is made, it becomes an observer and follows the chain of causality until the next point where options become available again.

Quoting noAxioms

Fine. Work out the problem I identified just above. If you can't do that, then you haven't thought things through. Do you deny known natural law?

Sure, I agree with the existence of physical laws.

Quoting noAxioms

If not, your beliefs fail right out of the gate.

Not at all. The Mind is in constant charge of keeping things in motion; in this motion, the intrinsic properties of particles are preserved, for example. The physical laws are manifestations of particles having certain intrinsic properties.
noAxioms March 13, 2025 at 15:57 #975796
Quoting Patterner
So Roombas are the mental equals of humans? The only thing separating us is emotion?

Ask MoK. He's the one who said that "physical processes in general are not possible without an entity that I call the Mind", which implies that a Roomba is not possible without a mind. It's apparently how he explains the action resulting from an immaterial decision.


Quoting MoK
By options, I mean things that are real and accessible to us and we can choose one or more of them depending on the situation.
I think that pretty much matches the wording I gave. It works great for the Roomba too.
MoK March 13, 2025 at 16:23 #975800
Quoting noAxioms

I think that pretty much matches the wording I gave. It works great for the Roomba too.

The difference between a human and a Roomba is that a human has a conscious mind that makes the decisions whereas, in the case of a Roomba, all decisions related to different situations are preprogrammed.
javra March 13, 2025 at 17:11 #975809
Quoting MoK
Neural processes however are deterministic. So I am wondering how can deterministic processes lead to the realization of options.


I deem this the crucial premise in the OP that needs to be questioned.

IFF a world of causal determinism, then sure: “neural processes are deterministic” (just as much as a Roomba). However, if the world is not one of causal determinism, then on what grounds, rational or empirical, can this affirmation be concluded?

A living brain is after all living, itself composed of individual, interacting living cells, of which neurons are likely best known via empirical studies. As individual living cells, neurons too can be deemed to hold some sort of sentience – this in parallel to that sentience (else mind) that can be affirmed of single-celled eukaryotic organisms, such as ameba. Other than personal biases, there are no rational grounds to deny sentience (mind) to one and not the other. And, outside a stringent conviction in our world being one of causal determinism, there is no reason to conclude that an ameba, for example, behaves in fully deterministic manners. The same then applies to the behaviors of any individual neuron. Each neuron seeks both sustenance and stimulation via its synaptic connections so as to optimally live. It’s by now overwhelmingly evidenced that neuroplasticity in fact occurs. Such that it is more than plausible that both synaptic reinforcement and synaptic decay (as well as the creation of new synaptic connections) will occur based on the (granted, very minimal) volition of individual neurons’ attempts to best garner sustenance and stimulation so as to optimize their own individual lives as living cells.

And all this can well be in tune with the stance that neural processes are in fact not deterministic (here, this in the sense of a causal determinism).

To this effect, linked here is an article regarding the empirically evidenced intelligence, or else sentience, of individual cohorts of neurons grown in a petri dish which learned how to play Pong (which can be argued to require a good deal of forethought (prediction) to successfully play). Some highlights from the article:

Quoting https://neurosciencenews.com/organoid-pong-21625/
Summary: Brain cells grown in a petri dish can perform goal-directed tasks, such as learning to play a game of Pong.

[....]

“But in truth we don’t really understand how the brain works.”

By building a living model brain from basic structures in this way, scientists will be able to experiment using real brain function rather than flawed analogous models like a computer.

[...]

To perform the experiment, the research team took mouse cells from embryonic brains as well as some human brain cells derived from stem cells and grew them on top of microelectrode arrays that could both stimulate them and read their activity.

Electrodes on the left or right of one array were fired to tell Dishbrain which side the ball was on, while distance from the paddle was indicated by the frequency of signals. Feedback from the electrodes taught DishBrain how to return the ball, by making the cells act as if they themselves were the paddle.

[...]

Kagan says one exciting finding was that DishBrain did not behave like silicon-based systems. “When we presented structured information to disembodied neurons, we saw they changed their activity in a way that is very consistent with them actually behaving as a dynamic system,” he says.

“For example, the neurons’ ability to change and adapt their activity as a result of experience increases over time, consistent with what we see with the cells’ learning rate.”


Again, if one insists on the world being one of causal determinism, then all this is itself determinate in all respects. Fine. But if not, empirical studies such as this strongly indicate that neural processes are indeed indeterministic, aka not deterministic.

The inquiry into options available and the act of choice making itself would then follow suit.
MoK March 13, 2025 at 19:08 #975832
Quoting javra

I deem this the crucial premise in the OP that needs to be questioned.

IFF a world of causal determinism, then sure: “neural processes are deterministic” (just as much as a Roomba). However, if the world is not one of causal determinism, then on what grounds, rational or empirical, can this affirmation be concluded?

In this thread, I really didn't want to get into a debate about whether the world at the microscopic level is deterministic or not. There is one interpretation of quantum mechanics, namely the De Broglie–Bohm interpretation, that is paradox-free and deterministic. Accepting this interpretation, it follows that a neuron is also a deterministic entity. What happens when we have a set of neurons may be different though. Could a set of neurons work together in such a way that this collaboration results in the existence of options? We know for a fact that this is the case in the human brain. But what about when we have only a few neurons? To answer that, let's put the real world aside and look at artificial neural networks (ANNs) for a moment. Could an ANN realize and count different objects? It seems that is the case. So options are realizable even to an ANN, while the neurons in such a system function in a purely deterministic way.
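For what it's worth, a fully deterministic procedure can "realize the number of objects" without any learning at all: a flood fill over a binary image counts distinct blobs. This is a sketch for illustration only, not the CNN from the thesis being discussed:

```python
# Sketch: count distinct "objects" (connected blobs of 1s) in a binary grid
# with a deterministic flood fill. Not the thesis's CNN; just an illustration
# that a purely deterministic process can yield a count of options.

def count_objects(grid):
    rows, cols = len(grid), len(grid[0])
    seen = set()

    def flood(r, c):
        # Iterative flood fill marking every cell of one blob as seen.
        stack = [(r, c)]
        while stack:
            i, j = stack.pop()
            if (i, j) in seen or not (0 <= i < rows and 0 <= j < cols):
                continue
            if grid[i][j] == 0:
                continue
            seen.add((i, j))
            stack.extend([(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)])

    count = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1 and (r, c) not in seen:
                count += 1  # found a blob not yet visited
                flood(r, c)
    return count

two_paths = [
    [1, 0, 1],
    [1, 0, 1],
]
print(count_objects(two_paths))  # -> 2
```

Two separated columns of 1s stand in for the two paths of the fork: the procedure deterministically arrives at "two options".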

Quoting javra

A living brain is after all living, itself composed of individual, interacting living cells, of which neurons are likely best known via empirical studies. As individual living cells, neurons too can be deemed to hold some sort of sentience – this in parallel to that sentience (else mind) that can be affirmed of single-celled eukaryotic organisms, such as ameba.

An ameba is a living organism and can function on its own. A neuron, although a living entity, depends in its function on the function of other neurons. For example, the strengthening and weakening of a synapse is the result of whether the neurons that are connected by the synapse fire in synchrony or not, the so-called Hebbian theory. So there is a mechanism for the behavior of a few neurons, and it seems that this is the basic principle for memory, and I would say for other complex phenomena, even thinking.
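The Hebbian mechanism mentioned here ("neurons that fire together wire together") is itself a deterministic update rule. A toy sketch, with illustrative numbers rather than a biophysical model:

```python
# Toy Hebbian update (illustrative numbers, not a biophysical model):
# a synapse strengthens when pre- and post-synaptic neurons fire together,
# and decays slightly otherwise.

def hebbian_update(w, pre, post, lr=0.1, decay=0.01):
    """Return the new synaptic weight given binary firing states."""
    if pre and post:
        return w + lr          # synchronous firing: potentiation
    return w * (1 - decay)     # otherwise: slow decay

w = 0.5
for pre, post in [(1, 1), (1, 1), (1, 0), (0, 0)]:
    w = hebbian_update(w, pre, post)
print(round(w, 4))  # -> 0.6861
```

Given the same firing history, the same weight always results; the rule is deterministic, which is the point at issue in the thread.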

Quoting javra

Other than personal biases, there are no rational grounds to deny sentience (mind) to one and not the other. And, outside a stringent conviction in our world being one of causal determinism, there is no reason to conclude that an ameba, for example, behaves in fully deterministic manners. The same then applies to the behaviors of any individual neuron. Each neuron seeks both sustenance and stimulation via its synaptic connections so as to optimally live.

I would say that an ameba has a mind, can learn, etc. but I highly doubt that a single neuron has a mind and can freely decide as it seems that the functioning of a neuron is not independent of other neurons. Please see the previous comment.

Quoting javra

It’s by now overwhelmingly evidenced that neuroplasticity in fact occurs. Such that it is more than plausible that both synaptic reinforcement and synaptic decay (as well as the creation of new synaptic connections) will occur based on the (granted, very minimal) volition of individual neurons’ attempts to best garner sustenance and stimulations so as to optimize its own individual life as a living cell.

Neuroplasticity, to the best of our knowledge, is the result of neurons firing together. Please see my comment on the Hebbian theory.

Quoting javra

To this effect, linked here is an article regarding the empirically evidenced intelligence, or else sentience, of individual cohorts of neurons grown in a petri dish which learned how to play Pong (which can be argued to require a good deal of forethought (prediction) to successfully play).

That was an interesting article to read. But there are almost 800,000 cells in the DishBrain. I don't understand the relevance of this study to the behavior of a single neuron, or to the question of whether a neuron is a deterministic entity.
javra March 13, 2025 at 19:50 #975834
Quoting MoK
In this thread, I really didn't want to get into a debate about whether the world at the microscopic level is deterministic or not.


My bad then.

Quoting MoK
To answer that, let's put the real world aside and look at artificial neural networks (ANN) for a moment.


In other words, look at silicon-based systems rather than life-based systems in order to grasp how life-based systems operate. Not something I'm myself into. But it is your OP, after all.

Quoting MoK
As individual living cells, neurons too can be deemed to hold some sort of sentience – this in parallel to that sentience (else mind) that can be affirmed of single-celled eukaryotic organisms, such as ameba. — javra

An ameba is a living organism and can function on its own. A neuron, although is a living entity, its function depends on the function of other neurons. For example, the strengthening and weakening of a synapse is the result of whether the neurons that are connected by the synapse fire in synchrony or not, so-called Hebbian theory. So there is a mechanism for the behavior of a few neurons, and it seems that is the basic principle for memory, and I would say for other complex phenomena even such as thinking.


I'll only point out that all of your reply addresses synapses - which are the connections between neurons, not the neurons themselves.

So none of this either rationally or empirically evidences that an individual neuron is not of itself a sentience-endowed lifeform - one that engages in autopoiesis, including homeostasis and metabolism as an individual lifeform, just as much as any self-sustaining organism does; one that seeks out stimulation via both dendritic and axonal growth just as much as any self-sustaining organism seeks out and requires stimulation; one which perceives stimuli via its dendrites and acts, else reacts, via its axon; etc.

As I was previously mentioning, there are no rational or empirical grounds to deny sentience to the individual neuron (or most any somatic cell for that matter - with nucleus-lacking red blood cells as a likely exception) when ascribing sentience to self-sustaining single-celled organisms such as amebas. Again, the explanation you've provided for neurons not being in some manner sentient falls short in part for the reasons just mentioned: in short, synapses are not neurons, but the means via which neurons communicate.

But back to the premise of neural processes being deterministic ...
MoK March 13, 2025 at 20:33 #975843
Quoting javra

My bad then.

I am sorry. But I elaborated a little on quantum mechanics in my reply to your post. I had hoped that was enough.

Quoting javra

In other words, look at silicon-based systems rather than life-based systems in order to grasp how life-based systems operate. Not something I'm myself into. But it is your OP, after all.

As I mentioned, I was interested in understanding whether a few neurons could work together such that the system can realize options. I think it would be extremely difficult to build such a setup from living neurons. That is why I suggested focusing on artificial neural networks.

Quoting javra

I'll only point out that all of your reply addresses synapses - which are the connections between neurons, not the neurons themselves.

That is a very important part when it comes to the neuroplasticity of the brain. A neuron mainly just fires when it is depolarized to a certain extent.

Quoting javra

So none of this either rationally or empirically evidences that an individual neuron is not of itself a sentience-endowed lifeform - one that engages in autopoiesis, including homeostasis and metabolism as an individual lifeform, just as much as any self-sustaining organism does; one that seeks out stimulation via both dendritic and axonal growth just as much as any self-sustaining organism seeks out and requires stimulation; one which perceives stimuli via its dendrites and acts, else reacts, via its axon; etc.

As I was previously mentioning, there are no rational or empirical grounds to deny sentience to the individual neuron (or most any somatic cell for that matter - with nucleus-lacking red blood cells as a likely exception) when ascribing sentience to self-sustaining single-celled organisms such as amebas. Again, the explanation you've provided for neurons not being in some manner sentient falls short in part for the reasons just mentioned: in short, synapses are not neurons, but the means via which neurons communicate.

I highly doubt that a neuron has a mind. But let's assume so for the sake of argument. Where in a neuron is the information about what the neuron experienced in the past stored? How could a neuron realize options? How could a group of neurons work coherently if each is free?
javra March 13, 2025 at 21:03 #975850
Quoting MoK
That is a very important part when it comes to the neuroplasticity of the brain. A neuron mainly just fires when it is depolarized to a certain extent.


This overlooks the importance of dendritic input, which culminates in the neuron's nucleus. As to neuroplasticity, it can be rather explicitly understood to consist of new synaptic connections created by new outreachings of dendrites and axons. Otherwise the brain would remain permanently hardwired, so to speak, with the neural connections it has from birth until death. And I distinctly remember the latter being the exact opposite of neuroplasticity in the neuroscience circles I once partook of. So understood, neuroplasticity is contingent on individual neurons growing their dendrites and axons (most likely via trial and error) toward new sources of synapse-resultant stimulation.

Quoting MoK
I highly doubt that a neuron has a mind. But let's assume so for the sake of the argument. In which location in a neuron is the information related to what the neuron experienced in the past stored? How could a neuron realize options?


The same questions can be asked with equal validity of any individual ameba, for example. The point being, if you allow for "mind in life" as it pertains to an ameba, there is no reason not to allow the same for a neuron. The as yet unknown detailed mechanism of how all this occurs in a lifeform devoid of a central nervous system is completely irrelevant to the issue at hand.

Quoting MoK
How could a group of neurons work coherently if each is free?


Free from what? All I said is that an individual neuron can well be maintained to be sentient, hence holding a volition and mind (utterly minuscule in comparison to our own though it would then be). As to the issue of how a plurality of sentient lifeforms can work "coherently" - assuming that by "coherently" you meant cooperatively - I'm not sure what you're expecting here. How does a society of humans work cooperatively? A multitude of hypotheses could be offered, one of which is that of maximizing one's own well-being via cooperation with others. Besides, just as liver cells are built to work cooperatively in the liver as an organ, neurons are built to work cooperatively in the CNS as an organ.
Patterner March 14, 2025 at 03:25 #975954
Reply to javra
What is a mind? What does a mind do? This is from [I]Journey of the Mind: How Thinking Emerged from Chaos[/I], by Ogi Ogas and Sai Gaddam:
Ogas and Gaddam: A mind is a physical system that converts sensations into action. A mind takes in a set of inputs from its environment and transforms them into a set of environment-impacting outputs that, crucially, influence the welfare of its body. This process of changing inputs into outputs—of changing sensation into useful behavior—is thinking, the defining activity of a mind.

Accordingly, every mind requires a minimum of two thinking elements:
• A sensor that responds to its environment
• A doer that acts upon its environment
They talk about the amoeba, which has the required elements.

Obviously, these definitions of mind and thinking are as basic as can be. But it's where it all starts.
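Their sensor/doer definition is simple enough to render as a toy program (my own sketch; the gradient-climbing agent below is a made-up stand-in for the amoeba, not anything from the book):

```python
# Minimal "mind" per the sensor/doer definition: a sensor that reads
# the environment and a doer that acts on it. The agent climbs a food
# gradient on a one-dimensional strip.

def sense(environment, position):
    # Sensor: compare the food concentration one step ahead with the
    # concentration here (clamped at the right edge).
    ahead = environment[min(position + 1, len(environment) - 1)]
    return ahead - environment[position]

def act(position, gradient):
    # Doer: move toward higher concentration; stay put when flat.
    if gradient > 0:
        return position + 1
    if gradient < 0:
        return position - 1
    return position

food = [0, 1, 2, 3, 4, 5]  # concentration rises to the right
pos = 0
for _ in range(10):
    pos = act(pos, sense(food, pos))
print(pos)  # -> 5: the agent settles at the richest spot
```

Inputs are transformed into welfare-improving outputs, which is all the definition asks for.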

Can a neuron be said to have a mind, to think, by these definitions?

Or do you say a neuron has a mind because of some other definition?
javra March 14, 2025 at 04:17 #975958
Quoting Patterner
Accordingly, every mind requires a minimum of two thinking elements:
• A sensor that responds to its environment
• A doer that acts upon its environment — Ogas and Gaddam

They talk about the amoeba, which has the required elements.

Obviously, these definitions of mind and thinking are as basic as can be. But it's where it all starts.

Can a neuron be said to have a mind, to think, by these definitions?


I don't see why not.

The sensor aspect of thought so defined: the neuron via its dendrites senses in its environment of fellow neurons their axonal firings (axons of other neurons to which the dendrites of the particular neuron are connected via synapses) and responds to its environment of fellow neurons by firing its own axon so as to stimulate other neurons via their own dendrites.

The doer aspect of thought so defined: the neuron's growth of dendrites and axon (which is requisite for neural plasticity) occurs with the, at least apparent, purpose of finding, or else creating, new synaptic connections via which to be stimulated and stimulate - this being a neuron's doing in which the neuron acts upon its environment in novel ways.

To me, it seems to fit the definitions of mind offered just fine.

javra March 14, 2025 at 05:06 #975963
Reply to Patterner

BTW, so it's known, what I just wrote is a simplified model of the average neuron.

Different neurons will have different physiology. Some neurons, for example, do not have an axon, at least not one that can be differentiated from their dendrites. (reference) Other neurons have over 1000 dendritic branches and a single axon. (reference) Still, they all (to my knowledge) sense dendritic input and act upon their environment in fairly blatant manners - thereby staying accordant with the definition of mind you've provided.

Also: in fairness, my own general understanding of mind follows E. Thompson's understanding pretty closely, which he explains in great detail in his book "Mind in Life: Biology, Phenomenology, and the Sciences of Mind". The first paragraph from the book's preface gives the general idea:

Quoting https://lchc.ucsd.edu/MCA/Mail/xmcamail.2012_03.dir/pdf3okBxYPBXw.pdf
THE THEME OF THIS BOOK is the deep continuity of life and mind. Where there is life there is mind, and mind in its most articulated forms belongs to life. Life and mind share a core set of formal or organizational properties, and the formal or organizational properties distinctive of mind are an enriched version of those fundamental to life. More precisely, the self-organizing features of mind are an enriched version of the self-organizing features of life. The self-producing or “autopoietic” organization of biological life already implies cognition, and this incipient mind finds sentient expression in the self-organizing dynamics of action, perception, and emotion, as well as in the self-moving flow of time-consciousness.


But the definitions of mind you've provided are far easier to express and to me work just fine.
MoK March 14, 2025 at 11:47 #976014
Quoting javra

Same questions can be placed with equal validity of any individual ameba, for example. Point being, if you allow for "mind in life" as it would pertain to an ameba, there is no reason to not then allow the same for a neuron. The as of yet unknown detailed mechanism of how all this occurs in a lifeform devoid of a central nervous system being completely irrelevant to the issue at hand.

I think that amoebas evolved in such a way as to function as single organisms. Neurons, however, are different entities and function together. Moreover, scientific evidence shows that a single amoeba can learn and remember. To my knowledge, no scientific evidence exists that a single neuron can learn or remember.
javra March 14, 2025 at 14:57 #976065
Quoting MoK
I think that amoebas evolved in such a way to function as a single organism. Neurons are however different entities and they function together.


Yes, but I don't see how that is significant to neurons being or not being sentient.

Quoting MoK
Moreover, scientific evidence shows that a single amoeba can learn and remember. To my knowledge, no scientific evidence exists that a single neuron can learn or remember.


Here's an article from Nature to the contrary: Neurons learn by predicting future activity.
javra March 14, 2025 at 18:05 #976082
I thought this could be of interest, or at least further clarify the position I currently hold:

Quoting javra
Also: in fairness, my own general understanding of mind follows E. Thompson's understanding pretty closely,


I should edit this as follows: this is so for certain aspects of mind – such as those pertaining to single-celled lifeforms, be they somatic cells (e.g. neurons) or individual organisms (e.g. amebas) – and somewhat less so for others: I find far more complexity than the book offers in relation to the workings of a human mind, for example (which we’ve previously briefly discussed in another thread).

As one good example of this approach in regard to the sentience of an organism and that of its individual constituent cells:

Most – including in academic circles – will acknowledge that a plant is sentient (some discussing the issue of plant intelligence to boot): it, after all, can sense sunlight and gravity such that it grows its leaves toward sunlight and its roots toward gravity. But, although this sensing of environment will be relatively global to the plant, I for the life of me can’t fathom how a plant might then have a centralized awareness and agency along the lines of what animals most typically have – such that in more complex animals it becomes the conscious being.

I instead envision a plant’s sentience to generally be the diffuse sum product of the interactions between its individual constituent cells, such that each cell – with its own specialized functions - holds its own (utterly minuscule) sentience as part of a cooperative we term the organism, in this case the plant. This, in some ways, in parallel to how a living sponge as organism – itself being an animal – is basically just a communal cooperation between individual eukaryotic cells which feed together via the system of openings: with no centralized awareness to speak of.

This general outlook then fits with the reality that some plants have no clear boundaries as organisms – despite yet sensing, minimally, sunlight and gravity - with grass as one such example: a field of grass of the same species is typically intimately interconnected underground as one organism, yet a single blade of grass and its root can live just fine independently as an individual organism if dug up and planted in a new area. I thereby take the plant to be sentient, but only as a cooperative of individual sentience-endowed plant cells whose common activities result in the doings of the plant as a whole organism: doing in the form of both sensing its environment and acting upon it (albeit far slower than most any animal). I don’t so far know of a better way of explaining a plant’s sentience given all that we know about plants.

Whereas in animals such as humans, the centralized awareness and agency which we term consciousness plays a relatively central role in our total mind's doings – obviously, with the unconscious aspects of our mind being not conscious to us; and with the latter in turn resulting from the structure and functioning of our physiological CNS, which itself holds different zones of activity (from which distinct agencies of the unconscious mind might emerge) and which we consider body rather than mind. So once one entertains the sentience of neurons, one here thereby addresses the constituents of one's living body, rather than of one's own mind per se.

Reply to MoK

My bad if this is too off-topic. I won't post anymore unless there's reason to reply.

MoK March 14, 2025 at 18:19 #976085
Quoting javra

Here's an article from Nature to the contrary: Neurons learn by predicting future activity.

That was an interesting article to read. However, I have a serious objection as to whether it is a collection of neurons that learns and adapts itself, or whether each single neuron has such a capacity. Of course, if you assume that each neuron has such a capacity and plug it into the equation, then you obtain that a collection of neurons also has the same capacity, but the opposite is not necessarily true. I don't think they had access to individual neuron activity in the experiments either (although they mention neuron activity in the discussion of Figures 4 and 5). So I stick to what I think is more correct: a collection of neurons can learn, but individual neurons cannot.
javra March 14, 2025 at 18:24 #976086
Reply to MoK I understand you disagree and can find alternative explanations for a single neuron learning. One could do the same for amebas if one wanted to play devil's advocate.

If you're willing, what are the "serious objections" that you have to the possibility that individual neurons can learn from experience?
MoK March 14, 2025 at 19:00 #976091
Quoting javra

Most – including in academic circles – will acknowledge that a plant is sentient (some discussing the issue of plant intelligence to boot): it, after all, can sense sunlight and gravity such that it grows its leaves toward sunlight and its roots toward gravity. But, although this sensing of environment will be relatively global to the plant, I for the life of me can’t fathom how a plant might then have a centralized awareness and agency along the lines of what animals most typically have – such that in more complex animals it becomes the conscious being. I instead envision a plant’s sentience to generally be the diffuse sum product of the interactions between its individual constituent cells, such that each cell – with its own specialized functions - holds its own (utterly minuscule) sentience as part of a cooperative we term the organism, in this case the plant. This, in some ways, in parallel to how a living sponge as organism – itself being an animal – is basically just a communal cooperation between individual eukaryotic cells which feed together via the system of openings: with no centralized awareness to speak of. This general outlook then fits with the reality that some plants have no clear boundaries as organisms – despite yet sensing, minimally, sunlight and gravity - with grass as one such example: a field of grass of the same species is typically intimately interconnected underground as one organism, yet a single blade of grass and its root can live just fine independently as an individual organism if dug up and planted in a new area. I thereby take the plant to be sentient, but only as a cooperative of individual sentience-endowed plant cells whose common activities result in the doings of the plant as a whole organism: doing in the form of both sensing its environment and acting upon it (albeit far slower than most any animal). I don’t so far know of a better way of explaining a plant’s sentience given all that we know about plants.

I read about plant intelligence a long time ago and I was amazed. Not only can they recognize up from down, etc., they are also capable of communicating with each other. I can find those articles and share them with you if you are interested.

Quoting javra

Whereas in animals such as humans, the centralized awareness and agency which we term consciousness plays a relatively central role in our total mind's doings – obviously, with the unconscious aspects of our mind being not conscious to us; and with the latter in turn resulting from the structure and functioning of our physiological CNS, which itself holds different zones of activity (from which distinct agencies of the unconscious mind might emerge) and which we consider body rather than mind.

To me, what you call the unconscious mind (what I call the subconscious mind) is conscious. Its activity is most of the time absent from the conscious mind, though. But you can tell that the subconscious mind and the conscious mind are constantly working with each other when you reflect on a complex process of thought, for example. Although it is the conscious mind that is the thinking entity, it needs a constant flow of information about what was experienced and thought in the past. This information is of course registered in the subconscious mind's memory. The amount of information registered in the subconscious mind's memory is huge, however, so the subconscious mind has to be very selective about the type of information that should be passed to the conscious mind, depending on the conscious mind's subject of focus. Therefore, the subconscious mind is an intelligent entity as well. I also think that what we call intuition is due to the subconscious mind!

Quoting javra

So once one entertains the sentience of neurons, one here thereby addresses the constituents of one's living body, rather than of one's own mind per se.

I cannot follow what you are trying to say here.
MoK March 14, 2025 at 19:53 #976095
Quoting javra

I understand you disagree and can find alternative explanations for a single neuron learning. One could do the same for amebas if one wanted to play devil's advocate.

I don't understand how, in the case of amebas, they could possibly interact and learn collectively.

Quoting javra

If you're willing, what are the "serious objections" that you have to the possibility that individual neurons can learn from experience?

I try to be minimalistic whenever I try to explain complex phenomena. The behavior of an electron is lawful and deterministic to me. The same applies to larger entities such as atoms and molecules. I try to be minimalistic even in the case of a neuron, unless I face a phenomenon that cannot be explained. If I find myself in a troublesome situation where I cannot explain a phenomenon, then I dig from top to bottom, questioning the assumptions I made and trying to see which assumption is at fault. I would even question the assumption I made for electrons if necessary.

In regards to the subject of this thread, the existence of options in a deterministic world, I found there is a simple explanation for the phenomenon once I consider a set of neurons, each being a simple and deterministic entity.
javra March 14, 2025 at 21:05 #976101
Quoting MoK
I read about plant intelligence a long time ago and I was amazed. They cannot only recognize between up and down, etc. they also are capable of communicating with each other. I can find those articles and share them with you if you are interested.


I'm relatively well aware of this. Thank you. :up: It gets even more interesting when considering that, from what we know, subterranean communication between plants seems to require their communal symbiosis with fungi species. In a very metaphorical sense, their brains are underground, and communicate via a potentially wide web of connections.

Quoting MoK
To me what you call the unconscious mind (what I call the subconscious mind) is conscious.


I agree in many ways. I would rather state that the unconscious mind - which I construe to not always be fully unified in its agencies - is "aware and volition-endowed". So, in this sense, it could be said to be conscious in its own way (here, to my mind, keeping things simple and not addressing the plurality of agencies that could therein occur), but we as conscious agents are unconscious of most of its awareness and doings. This is why I still term it the unconscious mind: we as conscious beings are, again, typically not conscious of its awareness and doings.

Quoting MoK
So once one entertains the sentience of neurons, one here thereby addresses the constituents of one's living body, rather than of one's own mind per se. — javra

I cannot follow what you are trying to say here.


I basically wanted to express that, if one allows that neurons are sentient, their own sentience is part and parcel of our living brain's total physiology, as an aspect of our living bodies. Whereas, for us as mind-endowed conscious beings, our own sentience is not intertwined with that pertaining to the individual neurons of our CNS. Rather, they do their thing within the CNS for the benefit of their own individual selves relative to their community of fellow neurons, which in turn results in certain neural-web firings within our brain, which in turn results in the most basic aspects of our own unconscious mind supervening on these neural-web firings, with these most basic aspects of our unconscious mind then in one way or another ultimately combining to form the non-manifold unity of the conscious human being. A consciousness which on occasion interacts with various aspects of its unconscious mind, such as when thinking about (questioning, judging the value of, etc.) concepts and ideas - as you've mentioned.

Hope that makes what I previously said clearer.

Quoting MoK
I understand you disagree and can find alternative explanations for a single neuron learning. One could do the same for amebas if one wanted to play devil's advocate. — javra

I don't understand how in the case of Ameba they could possibly interact and learn collectively.


I haven't claimed that amebas can act collectively. Here, I was claiming that the so-called "problem of other minds" can be readily applied to the presumed sentience of amebas. This in the sense that just because something looks and sounds like a duck doesn't necessitate that it be one. Hence, just because an ameba looks and acts as though it is sentient, were one to insist on it, one could argue that the ameba might nevertheless be perfectly insentient all the same. This is what you seem to currently maintain for individual neurons. But this gets heavily into issues of epistemology and into what might constitute warranted vs. unwarranted doubts. (If it looks and sounds like a duck, it most likely is.)

Quoting MoK
In regards to the subject of this thread, the existence of options in a deterministic world, I found there is a simple explanation for the phenomenon once I consider a set of neurons each being a simple entity and deterministic.


No worries there. But why would allowing for neurons holding some form of sentience then disrupt this general outlook regarding the existence of options? The brain would still do what it does - this irrespective of how one explains the (human) mind-brain relationship. Or so I so far find.
MoK March 15, 2025 at 11:41 #976200
Quoting javra

I'm relatively well aware of this. Thank you. :up: It gets even more interesting when considering that, from what we know, subterranean communication between plants seems to require their communal symbiosis with fungi species. In a very metaphorical sense, their brains are underground, and communicate via a potentially wide web of connections.

Cool! :wink:

Quoting javra

I agree in many ways. I would rather state that the unconscious mind - which I construe to not always be fully unified in its agencies - is "aware and volition-endowed". So, in this sense, it could be said to be conscious in its own way (here, to my mind, keeping things simple and not addressing the plurality of agencies that could therein occur), but we as conscious agents are unconscious of most of its awareness and doings. This is why I still term it the unconscious mind: we as conscious beings are, again, typically not conscious of its awareness and doings.

Correct.

Quoting javra

I basically wanted to express that, if one allows that neurons are sentient, their own sentience is part and parcel of our living brain's total physiology, as an aspect of our living bodies. Whereas, for us as mind-endowed conscious beings, our own sentience is not intertwined with that pertaining to the individual neurons of our CNS. Rather, they do their thing within the CNS for the benefit of their own individual selves relative to their community of fellow neurons, which in turn results in certain neural-web firings within our brain, which in turn results in the most basic aspects of our own unconscious mind supervening on these neural-web firings, with these most basic aspects of our unconscious mind then in one way or another ultimately combining to form the non-manifold unity of the conscious human being. A consciousness which on occasion interacts with various aspects of its unconscious mind, such as when thinking about (questioning, judging the value of, etc.) concepts and ideas - as you've mentioned.

A neuron is a living cell. Whether it is sentient and can learn is a subject of discussion. I believe a neuron could become sentient if this provided an advantage for the organism. This is however very costly, since it requires the neuron to be a complex entity. Such a neuron not only needs more food but also a sort of training before it can function properly within a brain where all neurons are complex entities. So, let's say that you have a single neuron, call it X, which can perform a function, call it Z - learning, for example. Now let's assume a collection of neurons, call them Y, which can perform the same function Z, although no individual neuron among them is capable of performing Z. The question is whether it is more economical for the organism to have X or Y. That is a very hard question. It is possible to find an organism that does not have many neurons and in which each neuron can perform Z. That however does not mean we can generalize such an ability to the neurons of other organisms that have plenty of neurons. The former organism may, due to evolution, have gained such a capacity, while that capacity is neither necessary nor economical for the latter organism.

Quoting javra

Hope that makes what I previously said clearer.

Thanks for the elaboration.

Quoting javra

I haven't claimed that amebas can act collectively.

I said that for amebas to learn collectively, as neurons do, they would need to interact.

Quoting javra

Here, I was claiming that the so-called "problem of other minds" can be readily applied to the presumed sentience of amebas. This in the sense that just because something looks and sounds like a duck doesn't necessitate that it be one. Hence, just because an ameba looks and acts as though it is sentient, were one to insist on it, one could argue that the ameba might nevertheless be perfectly insentient all the same. This is what you seem to currently maintain for individual neurons. But this gets heavily into issues of epistemology and into what might constitute warranted vs. unwarranted doubts. (If it looks and sounds like a duck, it most likely is.)

I agree.

Quoting javra

No worries there. But why would allowing for neurons holding some form of sentience then disrupt this general outlook regarding the existence of options? The brain would still do what it does - this irrespective of how one explains the (human) mind-brain relationship. Or so I so far find.

I agree that considering neurons to be sentient and capable of learning may not disrupt the function of the brain, but I think it might become very costly for the organism when a small set of simpler neurons can perform the same function, learning for example.
javra March 15, 2025 at 16:59 #976223
Quoting MoK
A neuron is a living cell. Whether it is sentient and can learn is a subject of discussion. I believe a neuron could become sentient if this provided an advantage for the organism. This is, however, very costly, since it requires the neuron to be a complex entity. Such a neuron not only needs more food but also a sort of training before it can function properly within a brain where all neurons are complex entities. So, let's say that you have a single neuron, call it X, which can perform a function, call it Z, learning for example. Now let's assume a collection of neurons, call them Y, which together can perform Z although no single neuron among them can. The question is whether it is economical for the organism to have X or Y. That is a very hard question. It is possible to find an organism that does not have many neurons and in which each neuron can perform Z. That, however, does not mean that we can generalize such an ability to the neurons of other organisms that have plenty of neurons. The former organism may, due to evolution, have gained such a capacity, while that capacity is neither necessary nor economical for the latter organism.


Alright. While I still disagree with neurons being insentient, I can now better understand your reasoning. Thanks. If it's worth saying, neurons do in fact require a lot of energy to live, and learning can very well be a largely innate faculty of at least certain lifeforms. But for my part, I'll leave things as they are. It was good talking with you!
MoK March 15, 2025 at 17:19 #976234
Reply to javra
It was very nice chatting with you too! :wink:
MoK March 15, 2025 at 19:15 #976241
@javra @Pierre-Normand By the way, I found a simple neural network that can perform a simple sum.

User image

The weights are all 1 and the inputs are 0 or 1. I thought that you might be interested so I shared it with you.
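
In case the image doesn't come through, here is a minimal sketch in Python of one reading of the network: a single linear unit with all weights equal to 1 and binary inputs, whose output is simply the sum (i.e. the count) of the active inputs. The function name and structure here are my own illustration, not necessarily the network in the image.

```python
# A minimal sketch (an assumption, since the original image is unavailable):
# a single linear unit with all weights equal to 1 and inputs that are 0 or 1.
# With unit weights and no bias, the output is simply the number of
# active (1-valued) inputs, i.e. their sum.

def linear_neuron(inputs, weights=None, bias=0.0):
    """Weighted sum of the inputs plus a bias; no activation function."""
    if weights is None:
        weights = [1.0] * len(inputs)  # all weights 1, as described
    return sum(w * x for w, x in zip(weights, inputs)) + bias

# Binary inputs, as described above:
print(linear_neuron([0, 1, 1]))     # 2.0
print(linear_neuron([1, 1, 1, 0]))  # 3.0
```

Note that such a unit only counts active inputs; it does not, for example, perform binary addition of two multi-bit numbers, which would require a hidden layer (the sum bit is an XOR, which is not linearly separable).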
Pierre-Normand March 15, 2025 at 21:16 #976250
Quoting MoK
By the way, I found a simple neural network that can perform a simple sum.


Thank you, but the image isn't displayed. You may need to link it differently.
Patterner March 16, 2025 at 12:46 #976344
Quoting javra
Accordingly, every mind requires a minimum of two thinking elements:
• A sensor that responds to its environment
• A doer that acts upon its environment — Ogas and Gaddam

They talk about the amoeba, which has the required elements.

Obviously, these definitions of mind and thinking are as basic as can be. But it's where it all starts.

Can a neuron be said to have a mind, to think, by these definitions?
— Patterner

I don't see why not.

[I]The sensor aspect of thought so defined[/I]: the neuron via its dendrites senses in its environment of fellow neurons their axonal firings (axons of other neurons to which the dendrites of the particular neuron are connected via synapses) and responds to its environment of fellow neurons by firing its own axon so as to stimulate other neurons via their own dendrites.

[I]The doer aspect of thought so defined[/I]: the neuron's growth of dendrites and axon (which is requisite for neural plasticity) occurs with the, at least apparent, purpose of finding, or else creating, new synaptic connections via which to be stimulated and stimulate - this being a neuron's doing in which the neuron acts upon its environment in novel ways.

To me, it seems to fit the definitions of mind offered just fine.
I have a tough time seeing it your way. I think an autonomous entity has - [I]is[/I] - a mind. Archaea, bacteria, and amoeba live on their own. Neurons do not. I think neurons are part of a mind; part of the chain connecting the sensor and doer. In the archaea, being single celled, that chain is made of molecules. We couldn't (at least I couldn't) say any of the molecules are minds. And I think the neurons in a hydra are more complex links in the hydra's chain, rather than each being a mind within the mind of the hydra.

However, Ogas and Gaddam seem to agree with you:
Ogas and Gaddam:There are sensor neurons and doer neurons, which play the same roles as sensors and doers in molecule minds. Each neuron is composed of molecular thinking elements, including molecular doers (which release neurotransmitters into a synapse, for instance) and molecular sensors (which detect the voltage on the neuron membrane, for instance). Functionally, [I]every neuron is a self-contained molecule mind.[/I]
The italics are theirs, and the phrase is a link to a quote from [I]The Computational Brain[/I], by Patricia Churchland and Terrence Sejnowski:
Churchland and Sejnowski:Research on the properties of neurons shows that they are much more complex processing devices than previously imagined. For example, dendrites of neurons are themselves highly specialized, and some parts can probably act as independent processing units.


I think my difficulty lies in the fact that I haven't been at any of this for very long. I always took mind and consciousness to be pretty much the same thing. Intellectuality, I see a difference. But my feeling that they are the same still intrudes at times. I'm working on it. :grin:
javra March 16, 2025 at 16:29 #976365
Quoting Patterner
I have a tough time seeing it your way. I think an autonomous entity has - is - a mind. Archaea, bacteria, and amoeba live on their own. Neurons do not. I think neurons are part of a mind; part of the chain connecting the sensor and doer. In the archaea, being single celled, that chain is made of molecules. We couldn't (at least I couldn't) say any of the molecules are minds. And I think the neurons in a hydra are more complex links in the hydra's chain, rather than each being a mind within the mind of the hydra.


I feel like I get it. Thanks for the explanation.

Maybe this is worth expressing as a follow-up. Especially when considering the dire need humans have for nurture in the formative years after birth - without which we either perish or at best become insane and then perish on our own - humans too require a community of fellow humans in order to live. This, though, doesn’t take away from the individuality of human minds. In certain respects only, the same roundabout situation could be potentially claimed of neurons.

In terms of molecules and minds, I certainly wouldn’t claim that individual organic molecules are minds either. Going by the notion of “autopoiesis” which I’ve previously pointed out indirectly, the very life of any single-celled lifeform (to include metabolism, awareness, and sentient doings) in a sense supervenes on the structure and functioning of the single-celled lifeform’s organic molecules. Take away one lipid from an ameba and the ameba will continue living and doing what it does just fine. However, take enough individual lipids away from an ameba and the ameba will cease living. As an ameba’s life supervenes on the organization and functioning of this bundle of organic molecules, so too then will the ameba’s mind so supervene. The same could then be potentially claimed of a neuron’s sentience.

As to hydras, they’re weird, in no small part due to being virtually immortal as far as we know – this of course barring environmental mishaps – with extreme regenerative abilities (including the ability to regenerate their heads). Yet even here, I presume that the activities of their nervous system – though far, far less complex than that of a mammal’s (having a few thousand neurons tops) – will be that upon which the hydra’s mind supervenes. Such that the hydra’s mind will not of itself be conjoined with the sentience of the hydra’s individual neurons – but will instead supervene upon the totality of its nervous system’s doings (if not a totality resulting from other somatic cells as well).

But yea, this perspective maintaining that neurons are not insentient is by no means a common staple in today's world. So I get why it can be very hard to entertain.

Quoting Patterner
I think my difficulty lies in the fact that I haven't been at any of this for very long. I always took mind and consciousness to be pretty much the same thing. Intellectuality, I see a difference. But my feeling that they are the same still intrudes at times. I'm working on it. :grin:


Yea, it's common practice around these parts to address mind and consciousness as though they were the same thing. I'm thinking maybe it's in part because one sense of "consciousness" is that of "awareness" and all aspects of mind, the unconscious very much included, are aware in one way or another. But, yes, if (at least our human understanding of) consciousness is contrasted to a co-occurring unconscious mind upon which consciousness is dependent, then consciousness can't be equivalent to a mind in total - for it excludes the far larger portion of mind which we are not conscious of. Whereas I don't find reason to believe that something like an ameba (or a neuron :wink: ) has any such dichotomy of mind to speak of.

Quoting MoK
By the way, I found a simple neural network that can perform a simple sum.


I too am interested. The link or image however is still not displaying.

MoK March 17, 2025 at 10:01 #976473
@javra @Pierre-Normand Unfortunately, after some thought, I realized that the suggested network does not work as I thought it should. Please disregard my previous post.
ENOAH March 20, 2025 at 01:13 #977176
Quoting MoK
how can deterministic processes lead to the realization of options.


Maybe the "options" are an illusion.

The determinism in neural processes seems obvious to us since science has constructed that Narrative and it is conventional; i.e., synapses are triggered by xyz, and there is no moment of an agent choosing to take a certain path.

But the same could go for the so-called Mind, where the illusion of option exists. Even a decision seeming as free as which road to take at a fork was ultimately the last domino to fall in a series of autonomously structured triggers. To oversimplify, a thought emerges, "the heart is on the left"--like I said, oversimplified--all the way to "eeny meeny miny moe"; structures and structures of signifiers of constructed meaning snap like dominoes until you move. The positive feeling in the body that is triggered by the "settlement," or what we think of as "belief," we also call a choice.

For each individual mind the result is different, but not owing to a free agent making a choice out of options; rather, it is owing to the conditioned process of signifier structuring at each specific locus in History where these triggers are built. Some might not think of the left as superior but the right, because it is the hand that's raised. All of these pieces of data stored at various loci in History act in accordance with a highly evolved system of conditioning. If not, find the moment of choice that did not involve a thought, image, language, a final trigger which is silent. That could just be that feeling in the body, designed to end the dialectic; also a conditioned response. And if you deliberately "choose" to defy the triggers and go the opposite way, it was just those antithetical triggers that got you there, triggered by something daring you to defy it, releasing a positive feeling because your locus is conditioned by History that way. And so on.

Ultimately that suggests that, if so-called decisions are autonomous movements of stimulus and conditioned response, the self has no free will. But actually further, there is no self. Body is an organic process; Mind is a process functioning with images.
MoK March 20, 2025 at 11:04 #977223
Quoting ENOAH

Maybe the "options" are an illusion.

Options cannot be an illusion. If I show you two balls that look similar, you will realize that there are two balls and that they look identical. There are even artificial neural networks that can count similar objects.

Quoting ENOAH

The determinism in neural processes seem obvious to us since science has constructed that Narrative and it is conventional; i.e., that synapses are triggered by xyz, and there is no moment of an agent choosing to take a certain path.

I am not talking about decisions in this thread.
JuanZu March 21, 2025 at 08:53 #977450
Reply to MoK

The existence of possibilities is that which follows from the fact that any course of action is not given in advance. That is, that in a sense the world is always in play. No matter how well our expectations or predictions are fulfilled there is always something not given in becoming. We can foresee that the sun will die in X years, but nevertheless it is not given. To the extent that there is something not given, thought is able to think of possibilities, there is always something left over that escapes prediction.

The determinist has to explain how the future is given. But that is something that cannot be done, since predictions are always possibilities and are representations of becoming. How does a prediction turn out to be true? Even if it turns out to be true, it is still a representation of becoming and not becoming itself. That is why we cannot say that things are determined, because they are only determined in the representation but not in becoming itself.

MoK March 21, 2025 at 11:56 #977470
Quoting JuanZu

The existence of possibilities is that which follows from the fact that any course of action is not given in advance. That is, that in a sense the world is always in play. No matter how well our expectations or predictions are fulfilled there is always something not given in becoming. We can foresee that the sun will die in X years, but nevertheless it is not given. To the extent that there is something not given, thought is able to think of possibilities, there is always something left over that escapes prediction.

The standard model was confirmed experimentally and it is a deterministic model. The experiments are performed very carefully, so we are sure about how particles interact with each other. It is, however, true that when it comes to a system we cannot know the exact location of its parts, so we cannot predict the future state of the system for sure, but that is not what I am talking about. I am mostly interested in understanding how we can realize options given that any physical system, for example the brain, is a deterministic entity. I am sure that the realization of options is due to the existence of neurons in the brain, but it is still unclear to me how neural processes in the brain can lead to the realization of options.

Quoting JuanZu

The determinist has to explain how the future is given. But that is something that cannot be done, since predictions are always possibilities and are representations of becoming. How does a prediction turn out to be true? Even if it turns out to be true, it is still a representation of becoming and not becoming itself. That is why we cannot say that things are determined, because they are only determined in the representation but not in becoming itself.

We can say for sure that physical systems are deterministic since physicists closely examine the motion and interaction of elementary particles. Anyway, the purpose of this thread was not to discuss determinism but to understand how we can realize options given the fact that we have a brain.
JuanZu March 21, 2025 at 16:25 #977549
Reply to MoK

I think you have missed my point. If you tell me that there is a deterministic system that will end up in X state you are making a prediction. But if the system is not in its state X the system prediction cannot be confused with reality. That is, the prediction is a representation not reality itself. The prediction is one possibility among others, even if it is confirmed. And this is due to the non-givenness of becoming. We could only be absolute determinists if all the processes of reality were already given. But that is not the case. No matter how many experiments you do, predictions will always be imagined representations of what will happen, i.e. possibilities among others. And reality will always be in a state of not-given. Basically, this is the problem of induction.
MoK March 22, 2025 at 10:46 #977733
Quoting JuanZu

I think you have missed my point. If you tell me that there is a deterministic system that will end up in X state you are making a prediction.

I am saying that given the system in the state of X and the laws of nature, one always predicts and finds the system in the state of Y later.

Quoting JuanZu

But if the system is not in its state X the system prediction cannot be confused with reality.

I don't understand why you assume the system is not in the state of X. The system cannot be in any state other than X, which was predicted.

Quoting JuanZu

That is, the prediction is a representation not reality itself.

The prediction is about what is going to happen in reality and the system always ends up in Y given X in a deterministic system.

Quoting JuanZu

The prediction is one possibility among others, even if it is confirmed.

There is no other possibility in reality. Determinism has been tested to great accuracy.

Quoting JuanZu

And this is due to the non-givenness of becoming. We could only be absolute determinists if all the processes of reality were already given.

We don't need to test all processes of reality to make sure that reality is deterministic, and doing so is not possible anyway.

Quoting JuanZu

No matter how many experiments you do, predictions will always be imagined representations of what will happen, i.e. possibilities among others.

We couldn't possibly do any science if this statement were true. For example, the computer you are using right now always works in a certain way. It doesn't work one way one day and another way the next.
JuanZu March 22, 2025 at 17:55 #977805
Reply to MoK

Scientific work also works with possibilities, but the scientist believes that what is represented in the imagination is going to happen. This implies that one thinks in possibilities precisely because the becoming is not given. The fact that the becoming is not given is the opportunity to be right or wrong in predictions. But a prediction is never a given. They are ontologically different things.


We would have to say the opposite of what you say (ad consequentiam, btw): the fact that becoming is not given is what obliges us to do science, with the difference that we must believe in the uniformity of nature; but this is a belief that can never be confirmed universally, because becoming is never given. No matter how many experiments we do, the possibility of failure is always there. It is a possibility, like that of succeeding in our predictions.

MoK March 23, 2025 at 11:44 #977986
Quoting JuanZu

Scientific work also works with possibilities, but the scientist believes that what is represented in the imagination is going to happen. This implies that one thinks in possibilities precisely because the becoming is not given. The fact that the becoming is not given is the opportunity to be right or wrong in predictions. But a prediction is never a given. They are ontologically different things.

Physical behavior has been the subject of careful examination for almost 400 years. To date, there has been a fantastic correlation between physical theories and experiments/observations. Moreover, nature has always behaved in a deterministic way; without this, no form of life would have been possible.

Quoting JuanZu

We would have to say the opposite of what you say (ad consequentiam, btw): the fact that becoming is not given is what obliges us to do science, with the difference that we must believe in the uniformity of nature; but this is a belief that can never be confirmed universally, because becoming is never given. No matter how many experiments we do, the possibility of failure is always there. It is a possibility, like that of succeeding in our predictions.

Ok, so let's wait for that day!