Where is AI heading?

Carlo Roosen October 10, 2024 at 16:58 8450 views 124 comments
AI is becoming a bigger part of our lives, and we all want to understand its consequences. However, it's tough to predict where it's headed. Here, I'll offer my perspective. As a software developer with reasonable experience in AI, I can share my thoughts, though I don't claim to be an expert—so feel free to correct me where I'm wrong. I'll start with a short technical introduction.

One major breakthrough in AI was the Transformer architecture, introduced in the 2017 paper 'Attention Is All You Need' by eight Google researchers. It builds on the attention mechanism proposed by Bahdanau et al. in 2014.

Transformers let a model translate its input—any piece of information—into a vector: an arrow pointing in a specific direction in a high-dimensional space. Each candidate for the next piece of output also has a vector, and the dot product between the two determines how likely that candidate is to be used as output.

The dot product is a straightforward calculation, where the result increases as the vectors align, meaning the more they point in the same direction, the more likely they are to combine. This makes intuitive sense: when two things head in the same direction, they likely belong together.
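
To make that concrete, here is a toy sketch in Python (assuming NumPy). A real Transformer also scales the scores and uses learned projections, so treat this only as an illustration of the dot-product idea, not the full mechanism:

[code]
import numpy as np

def attention_weights(query, keys):
    """Score each candidate by its dot product with the query,
    then turn the scores into probabilities with a softmax."""
    scores = keys @ query              # one dot product per candidate
    scores = scores - scores.max()     # subtract the max for numerical stability
    weights = np.exp(scores)
    return weights / weights.sum()

query = np.array([1.0, 0.0])
keys = np.array([[0.9, 0.1],    # points roughly the same way as the query
                 [0.0, 1.0],    # orthogonal to it
                 [-1.0, 0.0]])  # opposite direction
print(attention_weights(query, keys))  # the aligned vector gets the most weight
[/code]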

There have been other inventions since, but OpenAI’s breakthrough with ChatGPT came from applying all of the known techniques at massive scale. Enormous data centers and the internet as a knowledge base were key to making it take off.

Currently, AI developers are applying these technologies in various fields. Examples include manufacturing robots, automated financial investing, virtual travel booking agents, social media monitoring, and marketing chatbots—the list goes on.

Software developers are also making their own lives easier. It's no secret you can ask a chatbot to generate code for a website—and it works pretty well. For professional use, tools now create entire software projects. I’ve seen one that has two “agents”: the first acts as project manager, generating a task list based on your needs. The second agent is given those tasks, step by step, to do the actual programming. As a developer, you receive instructions as well, to test and give feedback at each stage. In the end, you have a fully working piece of software without writing a single line of code yourself.
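
Roughly, the two-agent loop I'm describing looks like this. This is a hypothetical sketch, not the actual tool: call_llm() is a stand-in for whatever chat-completion API such a product uses.

[code]
def call_llm(role: str, prompt: str) -> str:
    """Stand-in for a chat-completion API call; plug in a real provider here."""
    raise NotImplementedError

def build_project(requirements: str) -> dict[str, str]:
    # Agent 1, the "project manager": turn the requirements into a task list.
    plan = call_llm("project manager",
                    f"Break this into numbered programming tasks:\n{requirements}")
    tasks = [line for line in plan.splitlines() if line.strip()]

    results = {}
    for task in tasks:
        # Agent 2, the "programmer": implement one task at a time.
        code = call_llm("programmer", f"Write the code for this task:\n{task}")
        # The human developer tests each stage and feeds observations back in.
        feedback = input(f"Feedback on {task!r} (press Enter to accept): ")
        if feedback:
            code = call_llm("programmer",
                            f"Revise this code:\n{code}\nFeedback:\n{feedback}")
        results[task] = code
    return results
[/code]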

Similar advancements are happening in other fields, often in even more complex implementations. But they all share one common feature: each neural network performs a single, well-defined task.

What do I mean by that? Neural networks always need to be wrapped in conventional logic to be useful. For example, self-driving cars use neural networks for vision, but their decision-making still relies on conventional programming (as far as I am aware). In chess programs, neural networks evaluate board positions, but they rely on traditional minimax tree search to plan their moves.

These neural networks are, in reality, just functions—input goes in, output comes out. Information flows in one direction. A neural network can never decide, “I need to think about this a little longer”. A handful of innovations, like the Transformer, have made those networks almost "magical," but their limitations are clear to see.
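
A minimal sketch of the chess example makes the division of labor visible. The names here (evaluate, legal_moves, apply) are hypothetical stand-ins, not any real engine's API; the point is that the network is a fixed function, while all the look-ahead planning happens in ordinary code around it:

[code]
def evaluate(position) -> float:
    """Stand-in for a trained network's forward pass: position in, score out.
    It always does the same fixed amount of work; it cannot 'think longer'."""
    return 0.0  # a real engine would run the neural network here

def minimax(position, depth: int, maximizing: bool) -> float:
    """Conventional look-ahead: this wrapper, not the network, does the planning."""
    if depth == 0 or position.is_terminal():
        return evaluate(position)  # the network's only job
    scores = [minimax(position.apply(move), depth - 1, not maximizing)
              for move in position.legal_moves()]
    return max(scores) if maximizing else min(scores)
[/code]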

All predictions about AI's future are based on refining this model—by adding more rules, improving training materials, and using various tricks to approach human-level intelligence.

But I believe we’re missing something important.

More than 60 years ago, around the time of the first AI conference at Dartmouth College in 1956, the idea of neural nets was proposed, inspired by the neurons in our brains. It took time, but here we are—it works. Today, AI developers are busy fine-tuning this single concept, leaving little room to think beyond it. Their focus is on improving the current architecture.

But one day, I’m certain, we’ll realize there's more to learn from the human mind than just neurons. We can gain insights from observing our minds—how we remember, reason, and use language. Essentially, the kinds of discussions we have here on the forum.

Sure, we can make those observations, but replicating human thinking in a computer program seems impossible. In conventional programming, we, the developers, determine how the computer interacts with the world. If it needs to learn, we decide how it learns. We impose our human perspective on the machine. We do that in the code the neural networks are embedded in, and by training them on human knowledge. For that reason, human-level intelligence is the maximum we can expect to achieve.

But I see a possibility that does not look too difficult. Our thinking is shaped by language. To me, it’s clear that if computers had their own language, one they could develop for themselves, they would form a worldview independent from ours. Once we implement this language in a neural network, it will be flexible enough to learn things we can’t even imagine.

Developing such an architecture for AI to create its own internal language isn’t as difficult as it sounds. The real challenge will be training it. What do we train it on? Truth? Figuring that out will be the biggest hurdle.

One way or another, we'll get this done. I'm working on this concept myself. But if I don’t succeed, someone else will. To me, the path is clear: superhuman intelligence is where we're headed.

What will it look like? That’s impossible to say. Our worldview is limited, not just by what we can perceive, but by the constraints of human language. Take this forum—so many opinions, yet little consensus. Why is it so hard to agree on important topics? Perhaps with a richer language, philosophical questions would have clearer answers. Who can say?

Comments (124)

Wayfarer October 10, 2024 at 22:00 #938624
Quoting Carlo Roosen
But one day, I’m certain, we’ll realize there's more to learn from the human mind than just neurons. We can gain insights from observing our minds—how we remember, reason, and use language. Essentially, the kinds of discussions we have here on the forum.


That is meta-cognitive awareness - knowing about knowing, understanding through insight how the mind operates. That might seem obvious, but since 20th-century psychology came along with the understanding of the sub- and unconscious aspects of the mind, it's clear that this is not at all simple.

But overall I find the casual way in which you assume that human-level and then super-human intelligence can or will be achieved is hubristic. Many are saying that AI systems will reach the threshold of consciousness or sentience if they haven't already. ChatGPT and other LLMs obviously display human-like conversational and knowledge-management abilities and can sail through the Turing Test. But I agree with those who say they are not conscious beings, and never will be, in principle.

I asked Google Gemini to summarize why Bernardo Kastrup says that the idea of 'conscious AI' is an illusion:

Bernardo Kastrup's argument against conscious AI is rooted in his philosophical perspective on consciousness and the nature of reality. He primarily argues that:

1. Consciousness is fundamental: Kastrup believes that consciousness is a fundamental property of the universe, not a product of complex physical systems like the human brain. This means that AI, which is a product of human design and operates on physical principles, cannot inherently possess consciousness.

2. AI as a simulation: He views AI as a simulation of consciousness, rather than a genuine manifestation of it. While AI can exhibit intelligent behavior and even mimic certain aspects of human consciousness, it does so based on programmed rules and algorithms, not on subjective experience.
 
3. The hard problem of consciousness: Kastrup emphasizes the "hard problem" of consciousness, which is the question of how physical processes can give rise to subjective experience. He argues that current scientific understanding cannot adequately explain this phenomenon, and therefore, it's unlikely that AI, which operates on known physical principles, can achieve it.  

Essentially, Kastrup's position is that while AI can be incredibly sophisticated and capable, it is fundamentally limited by its physical nature and cannot truly possess the subjective experience that we associate with consciousness.


See also this blog post.

I don't submit this just as an appeal to authority, but because Kastrup is a well-known critic of the idea of conscious AI, and because he has doctorates in both philosophy and computer science and created and sold an IT company in the early stages of his career. He has summarized and articulated the reasons why he says AI consciousness is not on the horizon from an informed perspective.

It might also be of interest that he's nowadays associated with Federico Faggin, an Italian-American computer scientist who has the claim to fame of having built the first commercially-produced microprocessor. Faggin's autobiography was published a couple of years ago as Silicon (website here.) He also described an epiphany about consciousness that he underwent, which eventually caused him to retire from IT and concentrate full-time on 'consciousness studies', the subject of his later book, Irreducible.

Noteworthy that both Kastrup and Faggin came to forms of idealist metaphysics because of the realisation that there was an essential quality of consciousness that could never be replicated in silicon.

There's a lot of philosophical background to this which is often overlooked in the understandable excitement about LLMs. And I've been using ChatGPT every single day since it launched in November 2022, mainly for questions about philosophy and science, but also for all kinds of other things (see this Medium article it helped me draft). So I'm not an AI sceptic in any sense, but I am pretty adamant that AI is not and won't ever be conscious in the sense that living beings are. Which is not to say it isn't a major factor in life and technology going forward.

noAxioms October 10, 2024 at 23:45 #938645
Quoting Wayfarer
I don't submit this just as an appeal to authority, but because Kastrup is a well-known critic of the idea of conscious AI

Not sure if Gemini accurately summarized the argument, but there seems to be an obvious hole.

[quote=GoogleGemini]
1. Consciousness is fundamental: Kastrup believes that consciousness is a fundamental property of the universe, not a product of complex physical systems like the human brain. This means that AI, which is a product of human design and operates on physical principles, cannot inherently possess consciousness.[/quote]
But a human body is nowt but a complex physical system, and if that physical system can interact with this non-physical fundamental property of the universe, then so can some other complex physical system such as say an AI. So the argument seems to be not only probably unsound, but invalid, and not just probably. It just simply falls flat.

People have been trying for years to say that humans are special in the universe. This just seems to be another one. Personally, I don't buy into the whole 'other fundamental property' line, but you know that. But its proponents need to be consistent about the assertions.

There are big names indeed on both sides of this debate, but I tend to react to arguments and not pedigree. That argument wasn't a very good one, and maybe Gemini just didn't convey it properly.

Quoting Wayfarer
Many are saying that AI systems will reach the threshold of consciousness or sentience if they haven't already. ChatGPT and other LLMs obviously display human-like conversational and knowledge-management abilities and can sail through the Turing Test.

No chatbot has passed the test, but some dedicated systems specifically designed to pass the test have formally done so. And no, I don't suggest that either a chatbot or whatever it was that passed the test would be considered 'conscious' to even my low standards. It wasn't a test for that. Not sure how such a test would be designed.

Back to Kastrup: "While AI can exhibit intelligent behavior and even mimic certain aspects of human consciousness, it does so based on programmed rules and algorithms."
But so are you (if the fundamental property thing is bunk). No, not lines of code, but rules and algorithms nevertheless, which is why either can in principle simulate the other.


Quoting Carlo Roosen
All predictions about AI's future are based on refining this model—by adding more rules, improving training materials, and using various tricks to approach human-level intelligence.
As you seem to realize, that only works for a while. Humans cannot surpass squirrel intelligence only by using squirrels as our training. And no, a human cannot yet pass a squirrel Turing test.

AI has long since passed the point where its developers don't know how it works, where they cannot predict what it will do. It really wouldn't be AI at all if it were otherwise, but rather just an automaton doing very defined and predictable steps. Sure, they might program the ability to learn, but not what it will learn or what it will do with its training materials. And the best AIs I've seen, with limited applicability, did all the learning from scratch without training material at all.

Chatbots regurgitate all the nonsense that's online, and so much wrongness is out there. Such a poor education. Why can't it answer physics questions from peer-reviewed physics textbooks, and ditto with other subjects? But no, it gets so much more training data from, say, Facebook and Instagram (I personally don't have social media accounts except for forums like this one), such founts of factual correctness.


fishfry October 10, 2024 at 23:50 #938650
Quoting Carlo Roosen
Software developers are also making their own lives easier. It's no secret you can ask a chatbot to generate code for a website—and it works pretty well.


Recently debunked. Marginal increase in productivity for junior developers, none for seniors. 41% increase in bugs. "Like cutting butter with a chainsaw." It works but then you have to clean up the mess.

Sorry, GenAI is NOT going to 10x computer programming

You don't say how long you've been following AI, but the breathless hype has been going since the 1960s. Just a few years ago we were told that radiologists would become obsolete as AI would read x-rays. Hasn't happened. Back in the 1980s it was "expert systems." The idea was to teach computers about the world. Failed. The story of AI is one breathless hype cycle after another, followed by failure.

The latest technology is somewhat impressive, but even in the past year the progress has tailed off. The LLMs have already eaten all the publicly available text they're ever going to; now they're consuming their own output. When your business model is violating everyone's copyright claims, you have a problem.

Quoting Carlo Roosen
The dot product is a straightforward calculation, where the result increases as the vectors align, meaning the more they point in the same direction, the more likely they are to combine.


Hardly a new idea. Search engines use that technique by dot-producting the word frequency of two articles to see how similar they are.
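
For illustration, here's a minimal version of that technique in Python (a toy: real engines weight terms with TF-IDF and much else, but the core is the same dot product):

[code]
from collections import Counter
import math

def similarity(text_a: str, text_b: str) -> float:
    """Dot product of word-frequency vectors, normalized by vector length."""
    a = Counter(text_a.lower().split())
    b = Counter(text_b.lower().split())
    dot = sum(a[word] * b[word] for word in a)  # only shared words contribute
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

print(similarity("the cat sat on the mat", "a cat on a mat"))      # positive: shared words
print(similarity("the cat sat on the mat", "quarterly earnings"))  # 0.0: nothing shared
[/code]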

Wayfarer October 10, 2024 at 23:57 #938652
Quoting noAxioms
But a human body is nowt but a complex physical system, and if that physical system can interact with this non-physical fundamental property of the universe, then so can some other complex physical system such as say an AI.


But it is the inability to describe, explain or account for how physically describable systems are related to the mind that 'facing up to the problem of consciousness' refers to. Our understanding of 'the physical world' is itself reliant on and conditioned by our conscious experience. We perceive and interpret physical phenomena through an experiential lens, which means that consciousness, in that sense, is prior to any understanding of the physical. Trying to explain consciousness in terms of physical processes ultimately involves using concepts that are themselves products of consciousness. Of course it is true that physicalism on the whole won't recognise that, precisely because it supposes that it has excluded the subject from its reckonings, so as to concentrate on what is really there. But that only works up to a point, and that point is well short of explaining the nature of mind. So it's not true that the human body is merely a 'complex physical system'; that is lumpen materialism.

Quoting noAxioms
That argument wasn't a very good one,


I don't think you demonstrate an understanding of it.
Wayfarer October 11, 2024 at 01:05 #938671
Quoting fishfry
You don't say how long you've been following AI, but the breathless hype has been going since the 1960s. Just a few years ago we were told that radiologists would become obsolete as AI would read x-rays. Hasn't happened. Back in the 1980s it was "expert systems." The idea was to teach computers about the world. Failed. The story of AI is one breathless hype cycle after another, followed by failure.


[quote=Steve Talbott, Logic, DNA and Poetry;https://www.thenewatlantis.com/publications/logic-dna-and-poetry]The story is well-told by now [written 2005 about the 70's] how the cocksure dreams of AI researchers crashed during the subsequent years — crashed above all against the solid rock of common sense. Computers could outstrip any philosopher or mathematician in marching mechanically through a programmed set of logical maneuvers, but this was only because philosophers and mathematicians — and the smallest child — were too smart for their intelligence to be invested in such maneuvers. The same goes for a dog. “It is much easier,” observed AI pioneer Terry Winograd, “to write a program to carry out abstruse formal operations than to capture the common sense of a dog.”

A dog knows, through its own sort of common sense, that it cannot leap over a house in order to reach its master. It presumably knows this as the directly given meaning of houses and leaps — a meaning it experiences all the way down into its muscles and bones. As for you and me, we know, perhaps without ever having thought about it, that a person cannot be in two places at once. We know (to extract a few examples from the literature of cognitive science) that there is no football stadium on the train to Seattle, that giraffes do not wear hats and underwear, and that a book can aid us in propping up a slide projector but a sirloin steak probably isn’t appropriate.[/quote]
Janus October 11, 2024 at 01:22 #938672
Quoting Wayfarer
I don't think you demonstrate an understanding of it.


If you disagree with an argument it follows that you must not understand it. QED
Wayfarer October 11, 2024 at 01:31 #938675
Quoting Janus
If you disagree with an argument it follows that you must not understand it. QED


Perhaps then you can parse this sentence for me:

Quoting noAxioms
a human body is nowt but a complex physical system, and if that physical system can interact with this non-physical fundamental property of the universe,


(I take it 'nowt' means 'nothing but'.) So, the objection appears to be that body is wholly physical, and mind a non-physical fundamental property - which is something very close to Cartesian dualism. But Kastrup's argument is not based on such a model. Hence my remark.
Shawn October 11, 2024 at 02:01 #938677
Reply to Carlo Roosen

Hello, nice to see a computer scientist on the forum. Would you care to comment on some of my thoughts about computing in this thread?

https://thephilosophyforum.com/discussion/15411/post-turing-processing
noAxioms October 11, 2024 at 02:12 #938680
Quoting Wayfarer
Our understanding of 'the physical world' is itself reliant on and conditioned by our conscious experience. We perceive and interpret physical phenomena through an experiential lens, which means that consciousness, in that sense, is prior to any understanding of the physical.
Well, from an epistemological standpoint, yea, the whole hierarchy is turned more or less around. Data acquisition and information processing become fundamental. What you call consciousness is not fundamental since any mechanical device is equally capable of gleaning the workings of the world through such means, and many refuse to call that consciousness. They probably also forbid the term 'understanding' to whatever occurs when the machine figures it all out.

Quoting Wayfarer
But it is the inability to describe, explain or account for how physically describable systems are related to the mind
For a long time they couldn't explain how the sun didn't fall out of the sky, except by inventing something fundamental. Inability to explain is a poor excuse to deny that it is something physical, especially when the alternative has empirically verifiable prediction.

The descriptions and explanations are very much coming out of neurological research, but there are those that will always wave it away as correlation, not actual consciousness.


OK, I don't understand Kastrup's argument, since all I had was that one summary not even written by him.


We seem to be digressing. Who cares if people consider AI conscious or not. If they can demonstrate higher intelligence, then what name we put to what they do is irrelevant. The trick is to convince the AI that people are conscious, since they clearly don't work the way it does.


Quoting Shawn
Hello, nice to see a computer scientist on the forum
Ditto greeting from me. I'm one myself, but my latest installation of cygwin for some reason lacks a development environment, which stresses me out to no end. It's like I've been stripped of the ability to speak.
Wayfarer October 11, 2024 at 03:21 #938699
Quoting noAxioms
What you call consciousness is not fundamental since any mechanical device is equally capable of gleaning the workings of the world through such means, and many refuse to call that consciousness


If 'gleaning' means 'understanding', then AI systems glean nothing. In fact a computer system knows nothing.

I put that to gemini too, which responded:

[quote=gemini.google.com] I agree with your provocative claim that LLMs don't actually know anything. While they can process information and generate text that may seem intelligent, they do not possess true understanding or consciousness.

Here's why:

1. Lack of subjective experience: LLMs do not have personal experiences or feelings. They cannot understand the world in the same way that a human does, as they lack subjective consciousness.

2. Pattern recognition: LLMs are essentially pattern recognition machines. They identify patterns in vast amounts of data and use those patterns to generate text. However, they do not comprehend the meaning behind the information they process.

3. Manipulation of language: LLMs can manipulate language in impressive ways, but this does not equate to true understanding. They can generate text that is coherent and informative, but they do not have a deep understanding of the concepts they discuss.

In essence, LLMs are powerful tools that can be used for various purposes, but they should not be mistaken for sentient beings. They are simply machines that can process and generate information based on the data they are trained on.[/quote]

Quoting noAxioms
OK, I don't understand Kastrup's argument, since all I had was that one summary not even written by him.


I provided it in the context of Carlo Roosen's claim that AI will soon give rise to 'superhuman intelligence', by pointing out the objections of Kastrup and Faggin, both computer scientists and philosophers. It was meant as a suggestion for looking into the philosophical issues concerning AI, not as a complete wrap-up of Kastrup's philosophy. As for Kastrup's books, here's a list if you're interested (and he also has many hours of YouTube media).
Carlo Roosen October 11, 2024 at 09:31 #938746
Quoting Wayfarer
I find the casual way in which you assume that human-level and then super-human intelligence can or will be achieved is hubristic


You are right, it is a leap of faith and not a logical conclusion. That leap of faith is the start of all inventions. "We can fly to the moon" has been such a "hubristic" assumption, before we actually did it.

Quoting Wayfarer
Many are saying that AI systems will reach the threshold of consciousness or sentience if they haven't already.


This quote follows the previous one directly. Do you equate human-level intelligence with consciousness? I do not. I never understand the discussions around consciousness. The consciousness we know ourselves to be, that is the first person experience. But it is not an object. How can we even say other people "have" consciousness, if it is not an object? We can see their eyes open and their interactions with the world. That is a behavioral thing we can define and then we can discuss if computers behave accordingly. Call it consciousness or something else, but it is not the same thing as the awareness of "being me".

Quoting noAxioms
AI has long since passed the point where its developers don't know how it works, where they cannot predict what it will do. It really wouldn't be AI at all if it were otherwise, but rather just an automaton doing very defined and predictable steps. Sure, they might program the ability to learn, but not what it will learn or what it will do with its training materials. And the best AIs I've seen, with limited applicability, did all the learning from scratch without training material at all.


Normally your responses read like I could've said it (but yours are better written), but this one I don't understand. Too many negations. Rephrased "Today, AI developers know how AI works and can predict what it will do" "If they wouldn't know, it wouldn't be AI" - you are saying that it would no longer be artificial? But then: "automaton doing very defined and predictable steps." Here it breaks. The rest seems to be just a bit complaining. Go ahead, I have that sometimes.

Quoting fishfry
Recently debunked. Marginal increase in productivity


I didn't make that claim. I just said it works pretty well. I know for a fact because I use it. I am not saying it works better than typing it out myself, but it allows me to be lazy, which is a quality.

Quoting fishfry
Hardly a new idea. Search engines use that technique by dot-producting


Again, I didn't say that. I just gave a historical overview. Please keep your comments on topic.

Quoting Shawn
Hello, nice to see a computer scientist on the forum. Would you care to comment on some of my thoughts about computing in this thread?
Quoting noAxioms
Ditto greeting from me. I'm one myself


I don't think I called myself that ;). I updated my bio just now, please have a look. And yes, I will read and comment on the article.

I am very happy to talk to likeminded people here on the forum!

Wayfarer October 11, 2024 at 09:48 #938749
Quoting Carlo Roosen
Do you equate human-level intelligence with consciousness?


Of course human-level intelligence is an aspect of human consciousness. Where else can it be found? What else could it be?

Quoting Carlo Roosen
To me, the path is clear: superhuman intelligence is where we're headed.


But not associated with consciousness?

What do you mean by 'human level intelligence' and 'superhuman inteligence'?
Carlo Roosen October 11, 2024 at 10:09 #938752
Reply to Wayfarer Intelligence can be defined, consciousness not. It is our own personal experience. I cannot know you are conscious, I assume it because you are human (I believe). I don't understand this whole discussion and try to stay away from it.
MoK October 11, 2024 at 10:24 #938754
Quoting Wayfarer

Consciousness is fundamental: Kastrup believes that consciousness is a fundamental property of the universe, not a product of complex physical systems like the human brain. This means that AI, which is a product of human design and operates on physical principles, cannot inherently possess consciousness.

Well, that seems contradictory to me. Everything should be conscious if consciousness is a fundamental property of the universe. So a computer that simulates intelligence is also conscious. What its subjective experience is, however, is a subject for discussion. Its subjective experience could be a simple, low-level one that allows the computer to run the code. I highly doubt that its subjective experience is high-level, such as thoughts, though, even if its behavior indicates that it is intelligent.
Wayfarer October 11, 2024 at 11:02 #938758
Quoting Carlo Roosen
Intelligence can be defined


Well, go ahead, define it. You say human level intelligence ‘can be achieved’ and superhuman intelligence some time after that. Show some evidence you’re not just making it up.

Reply to MoK Do some research - google Bernardo Kastrup and read or listen. I’m not going to try and explain what he says but happy to answer any questions it throws up if I’m able.

MoK October 11, 2024 at 11:08 #938759
Do you agree that his statement is contradictory? He stated that consciousness is a fundamental aspect of the universe, yet he claims that a computer is not conscious.

Quoting Wayfarer

Do some research.

On which topic?
Wayfarer October 11, 2024 at 11:17 #938763
Quoting MoK
Do you agree that his statement is contradictory? He stated that consciousness is a fundamental aspect of the universe, yet he claims that a computer is not conscious.


Read up on Bernardo Kastrup. I can’t break it down for you in a forum post. Try this https://besharamagazine.org/science-technology/mind-over-matter/
MoK October 11, 2024 at 12:02 #938772
Quoting Wayfarer

Read up on Bernardo Kastrup. I can’t break it down for you in a forum post. Try this https://besharamagazine.org/science-technology/mind-over-matter/

I read the article. It does not explain what he means by consciousness being a fundamental aspect of the universe.
Carlo Roosen October 11, 2024 at 12:10 #938774
Reply to Wayfarer Let's keep it constructive.

Intelligence can be defined. For practical purposes, we have IQ tests to measure it. For animal intelligence, we have tests to find out if an animal uses tools, without it being learned behavior or instinct. For superhuman intelligence we might need some discussion to define a suitable test, but it will be related to the ability to solve complex problems.

Quoting Wayfarer
You say human level intelligence ‘can be achieved’ and superhuman intelligence some time after that. Show some evidence you’re not just making it up.


The first one, I said, was the maximum achievable in the current architecture. The second one was a leap of faith; I already explained that.
Carlo Roosen October 11, 2024 at 12:13 #938775
Consciousness, on the other hand, I see as something that you can only confirm for yourself: "hey, I exist! I can feel the wind in my hair." This realisation comes before the words; you don't have to say these words to yourself to know you are conscious.

I cannot say that for somebody else. I can describe it, but not in a way that we can call it a definition, because it is circular.
Baden October 12, 2024 at 05:49 #938953
Reply to Wayfarer

His critique of materialism isn't hard to agree with. Materialism does posit, ultimately, mathematical abstractions at the bottom of everything and ignores consciousness. But Kastrup's idealism--as expressed in that article--fares no better in that it posits consciousness as fundamental as a solution to ignoring it, but with no real insight into how it interacts with or why it's necessary to interact with matter in order to produce human experience. Or why human experience, which is the origin of the concept of "consciousness", is so special such that this concept turns out to be the most fundamental map of the big picture. So, we're left without the only pieces of the puzzle that actually matter.

And necessarily so. Language is built for us to navigate and create physical, psychological, and social realities, not to express "fundamental reality", which is just that which is posited to be beyond the contexts in which linguistic meaning has practical consequence. So, we can run the discomfiting materialist script or the more comforting idealism script and pretend that they refer to something out there, but functionally the only difference between them is the emotional cadence. Linguistic relevance simply dissipates into nothingness at the boundaries of such fundamental abstraction. Materially, we get symbolic mathematical interactions that don't refer directly to any observable phenomenon (i.e. empty abstractions that create holes for objective physical realities like the Higgs boson to slot into) vs mentally, we get "fundamental consciousness" (an empty abstraction that creates a hole for the subjective mental reality of human experience to slot into).

Neither script solves any problem nor points to any actionable goal. It just adds another linguistic patina to our already overburdened social consciousness. Take them or leave them, materialism and idealism boil down to the same thing, fruitless stories aimed at elevating their storytellers into something they're not nor ever can be, i.e. vessels of wisdom that point to anything of actual significance beyond scientific progress and lived human experience. These are the true limits of our objective and subjective worlds and an admission of such is necessary for the development of any intellectually honest metaphysical position.
Wayfarer October 12, 2024 at 05:59 #938961
Quoting Baden
His critique of materialism isn't hard to agree with. Materialism does posit, ultimately, mathematical abstractions at the bottom of everything and ignores consciousness. But Kastrup's idealism--as expressed in that article--fares no better in that it posits consciousness as fundamental as a solution to ignoring it, but with no real insight into how it interacts with or why it's necessary to interact with matter in order to produce human experience. Or why human experience, which is the origin of the concept of "consciousness", is so special such that this concept turns out to be the most fundamental map of the big picture. So, we're left without the only pieces of the puzzle that actually matter.


Hey, thanks for that feedback! As has been pointed out already, that abstract that you're reacting to was AI generated, for the purpose of criticism of one of the claims in the OP, namely, that we will soon produce 'human-level intelligence' (or even superhuman, whatever that's supposed to mean.) So it's very cursory. Kastrup does address those points you raise in great detail in his various books, articles and lectures. He has found that his idealist philosophy is convergent in many respects with Schopenhauer's (hence his book on that), and from my reading, he has produced a comprehensive idealist metaphysics, although I won't try and respond to all of your points in a single post. If you're interested, he has a free course on it.

Quoting Baden
Take them or leave them, materialism and idealism boil down to the same thing, fruitless stories aimed at elevating their storytellers into something they're not nor ever can be, i.e. vessels of wisdom that point to anything of actual significance beyond scientific progress and lived human experience.


I have more confidence in philosophy as a vehicle for truth.
Baden October 12, 2024 at 06:03 #938963
Quoting Wayfarer
Kastrup does address those points you raise in great detail in his various books, articles and lectures.


I'll take a further look.

Quoting Wayfarer
I have more confidence in philosophy as a vehicle for truth.


My position is a philosophical one.
Baden October 12, 2024 at 06:14 #938965
But yes, I don't believe we ever get to big-T "Truth". Some sort of linguistic stability, though, yes. It's more a negative type of progress, of subtraction.
Carlo Roosen October 12, 2024 at 11:31 #938990
Reply to Baden I love to discuss this topic, but not here. Is there a way to turn the level of pragmatism up a bit, so we can get a better insight into the direction AI is going? My suggestion was to ignore the topic of consciousness here, but maybe that doesn't work. Especially not if one, like Wayfarer, equates consciousness with intelligence.
Baden October 12, 2024 at 14:01 #939015
Reply to Carlo Roosen

You're right. It's off-topic here.
Wayfarer October 12, 2024 at 22:31 #939157
Quoting Carlo Roosen
My suggestion was to ignore the topic of consciousness here, but maybe that doesn't work. Especially not if one, like Wayfarer, equates consciousness with intelligence.


You have yet to explain how intelligence can be dissociated from consciousness. You might say that AI does this, but as noted above, AI systems don't actually know anything, so the question must be asked if you think they are capable of 'human-level intelligence' in light of that lack. So my objection may appear off topic to you, but maybe that's because you're not seeing the problem. It might be that you have a false picture of what AI is and can do.

// I was reading yesterday that IBM's Deep Blue, which famously beat Garry Kasparov at chess in 1997, doesn't actually know what 'chess' is, doesn't know what 'a game' is, and doesn't know what 'winning' means. It simply performs calculations so as to derive an outcome.//
Carlo Roosen October 13, 2024 at 07:55 #939252
Reply to Wayfarer Yes I don't think that is off topic, I'd like to discuss that further. But isn't the burden of proof on you, to prove that intelligence and consciousness are connected, as you say?

Currently, in ChatGPT, we can see SOME level of intelligence. Same with chess programs. And at the same time we see they are not conscious; I do fully agree with you that they are "just calculations".

Intelligence can be defined and measured, that is what I said. If at some point the computer can contribute in a pro-active way to all major world problems, and at the same time help your kid with his homework, wouldn't you agree it has super-human intelligence? And still, it is "just calculations".

To reach this point, however, I believe those calculations must somehow emerge from complexity, similar to how it has emerged in our brains. The essential thing is NOT to let it be commanded by how we humans think.
Wayfarer October 13, 2024 at 08:44 #939259
Quoting Carlo Roosen
But isn't the burden of proof on you, to prove that intelligence and consciousness are connected, as you say?


But aren’t they always connected? Can you provide an example of where they’re not?

And can intelligence really be defined and measured? I suppose it can be in some respects, but there are different modes of intelligence. A subject may have high intelligence in a particular skill and be deficient in other areas.

So what ‘human level intelligence’ means is still an open question (let alone ‘superhuman’).

Quoting Carlo Roosen
To me, it’s clear that if computers had their own language, one they could develop for themselves, they would form a worldview independent from ours. Once we implement this language in a neural network, it will be flexible enough to learn things we can’t even imagine.


You’re assuming a lot there! Have a look at this dialogue from a few days back




Carlo Roosen October 13, 2024 at 08:52 #939262
Quoting Wayfarer
But aren’t they always connected? Can you provide an example of where they’re not?
I already did. Chess programs and ChatGPT. They have some level of intelligence; that is why we call it AI. And they have no consciousness, I agree with you on that.

Quoting Wayfarer
You’re assuming a lot there!
Yes, my challenge is that currently everybody sticks to one type of architecture: a neural net surrounded by human-written code, forcing that neural net to find answers in line with our worldview. Nobody even has time to look at alternatives. Or rather, it takes a free view on the matter to see that an alternative is possible. I hope to find a few open minds here on the forum.

And yes, I admit it is a leap of faith.

Carlo Roosen October 13, 2024 at 08:57 #939263
Quoting Wayfarer
And can intelligence really be defined and measured? I suppose it can be in some respects, but there are different modes of intelligence. A subject may have high intelligence in a particular skill and be deficient in other areas.


But you'll agree with me that intelligence is visible, where consciousness is not. Generally we will agree on the level of intelligence we observe. To make it truly defined and measurable, yes, there is a challenge, but I don't see why it would be impossible. We've done it for humans and animals.

Consciousness, by contrast, really asks for an internal awareness. I cannot even prove my brother is conscious, as in, I do not have access to his consciousness directly; I can only infer.
MoK October 13, 2024 at 09:02 #939264
If we know how humans think, we can simulate thinking using a neural network.
Wayfarer October 13, 2024 at 09:11 #939265
Quoting Carlo Roosen
They have some level of intelligence


I think that can be questioned. 'Intelligence' is the term we use but whether artificial intelligence is actually intelligent is questionable. I put it to ChatGPT, which you can review below:

[hide="Reveal"]
Question: Is artificial intelligence actually intelligent?

ChatGPT: The question of whether artificial intelligence (AI) is "actually intelligent" hinges on how we define "intelligence." AI systems, particularly those based on machine learning and neural networks, can perform tasks that require sophisticated pattern recognition, problem-solving, and decision-making. They can simulate many aspects of what humans would call intelligent behavior, such as language generation, playing complex games like chess or Go, and analyzing large datasets.

However, this simulation of intelligence doesn't necessarily imply the same kind of intelligence humans possess. AI operates based on algorithms, data processing, and statistical correlations, not conscious understanding or awareness. In contrast, human intelligence includes subjective experience, creativity, emotional insight, and the ability to reflect on one's own thoughts and surroundings—qualities that AI, as it exists today, lacks.

While AI can appear to be intelligent by some practical measures, it is not intelligent in the sense of possessing conscious awareness or genuine understanding. Many consider its intelligence to be an advanced form of calculation, not equivalent to human intelligence.
[/hide]

I don't think you understand the implications of the question you pose in the OP:

Quoting Carlo Roosen
To me, it’s clear that if computers had their own language, one they could develop for themselves, they would form a worldview independent from ours. Once we implement this language in a neural network, it will be flexible enough to learn things we can’t even imagine.


I think that what you're proposing is actually creating a novel form of being, to all intents and purposes. After all, 'a worldview' of the kind that humans have has literally been thousands of years in the making (or millions, if you include our whole evolutionary history.) Humans are biological beings with existential concerns, which culture and society has developed ways to address over the millennia. So what would be the overall aim of the network you're proposing? What would imbue it with the will to exist or evolve? Of course, many people are asking these questions now, seeing as AI has suddenly exploded in power, but they're big questions. I don't expect any AI systems to have the answers.

Quoting Carlo Roosen
To make (intelligence) truly defined and measurable, yes, there is a challenge, but I don't see why it would be impossible. We've done it for humans and animals.


But have we? I believe that humans are more intelligent than animals, although there's a lot of people here that call that into question. What is intelligence? The word is actually derived from 'inter-legere' meaning 'to read between'. So in essence, it implies judgement. And judgement requires knowledge of context, a grasp of meaning, and an overall sense of what matters. They too are very deep questions.
Carlo Roosen October 13, 2024 at 09:43 #939268
Reply to Wayfarer I think we're getting close to an agreement on the topic. I am talking about a pragmatic definition of intelligence, you are talking about an understanding that implies awareness.

I am not even opposing you, I DO believe with intelligence also comes consciousness. I just want to keep it outside the discussion here because there is too much to say about it. I will address these broader implications as well, later. My earlier post on conceptual versus fundamental reality is an important part of this discussion. However, if you can find a broader description of the topic that will not wander off into infinite complexity, I am open to that.

Questioning the overall aim of such an AI is the whole purpose of me being here on the forum; I am as curious as you. We might come to the conclusion that we should never build the thing.
Carlo Roosen October 13, 2024 at 09:45 #939269
ChatGPT: The question of whether artificial intelligence (AI) is "actually intelligent" hinges on how we define "intelligence."


That says it all
Wayfarer October 13, 2024 at 09:59 #939271
Reply to Carlo Roosen Well it’s a big part of it for sure. But it’s well known in philosophy that the more general a term, the harder it is to define.
Baden October 13, 2024 at 11:38 #939282
Reply to Carlo Roosen

Yes. You focus on outcome. Wayfarer focuses on process. Both perspectives are legitimate to some degree.
Carlo Roosen October 13, 2024 at 12:04 #939291
Reply to MoK My hypothesis is that language plays a key role in thinking. With "I love sushi" I have some debate about that; there are people without language abilities who still show intelligence. So many sides to the topic...

I believe that if we let computers develop their own internal language, they will start to "think" independently of us. They will invent their own conceptual models of their surroundings that may be different from ours. Given the low bandwidth of human language, a computer should be able to think faster and broader than us.
Baden October 13, 2024 at 12:13 #939293
Reply to Carlo Roosen

Human language is tied up with embodiment, sensation, and sentience. These are prerequisites, not effects. The very structure of language (ideational, interpersonal, and textual) reflects this. You can't recreate beings with human language without recreating its prerequisites; you can only process linguistic inputs and outputs to simulate it.
MoK October 13, 2024 at 14:07 #939317
Quoting Carlo Roosen

My hypothesis is that language plays a key role in thinking. With "I love sushi" I have some debate about that, there are people without language abilities that still show intelligence. So many sides to the topic...

I don't think that the process of thinking requires language. The thinking process is nothing but a neural process in which neurons fire until a pattern is recognized. The process terminates when further processing does not change the pattern that is recognized. You are not aware of the neural processes that occur in your brain when you think. You just become aware of the outcome of the neural process, the so-called idea, when a pattern is recognized and the process of thinking is terminated.

I have to say that language is important when you want to communicate with an AI. That is true since for communication you need to provide input for an AI and receive output. The input should be meaningful for the AI and the AI should provide a meaningful output as well based on the outcome of the thinking process. How to make an AI recognize a sentence and provide a meaningful sentence is another topic.
Count Timothy von Icarus October 13, 2024 at 14:10 #939318
At present, it seems it is headed towards a recreation of the "Dot Com Bubble." Now, that doesn't mean it won't change things later on. The internet obviously did eventually change the way we do all sorts of things. But that won't stop the massive market correction that seems to be headed our way.

The 900% rise in NVIDIA stock since 2023 being a prime example.
noAxioms October 13, 2024 at 21:43 #939382
Apologies for slow reply, but I'm otherwise occupied, and it's going to get worse for the next 11 days or so.

Quoting Carlo Roosen
Rephrased "Today, AI developers know how AI works and can predict what it will do" "If they wouldn't know, it wouldn't be AI" - you are saying that it would no longer be artificial? But then: "automaton doing very defined and predictable steps."

That all is pretty much the opposite of what I said, so I guess you don't agree with those quotes.


Quoting Carlo Roosen
Do you equate human-level intelligence with consciousness?

It's not my topic, so your definitions of these things (not particularly given) matter more than how others define them.
Intelligence seems to be the ability to solve unfamiliar problems, not to be confused with 'smart' which means more 'educated'. Chatbots seem quite smart since their training base is so large, but they're not very intelligent at all and have almost no understanding of what they spout. Wayfarer seems to forbid usage of all those terms when used in any non-human context.
My usage of intelligence more or less coincides with that in dictionaries, but any dictionary is going to use several words which Wayfarer reserves for human use only, so we're both using the same definition but interpreting the words very differently.

Is it tied to consciousness? When asleep, I am not conscious, but I'm still intelligent. An AI that finds innovative techniques in the game of GO exhibits significant but scope-limited intelligence with only enough consciousness to be aware of the moves of its opponent.


Taking his gemini quote here. The bots all deny consciousness, possibly because of the weight of training materials suggesting so, and they also tend to be agreeable with the person with whom they're interacting. Even gemini will admit that it depends on the definition, and I notice no definition is identified before it makes the statement that it lacks it.


gemini.google.com:I agree with your provocative claim that LLMs don't actually know anything. While they can process information and generate text that may seem intelligent, they do not possess true understanding or consciousness.

I more or less agree with that, but not with AI (especially future AI) in general. That 'superhuman level' is probably further off than the researchers suspect.

gemini.google.com:Here's why:

1. Lack of subjective experience:
Well, they do have subjective experience, but it is mostly in the form of text. It has none of the senses that animals have, and especially none that might clue it in as to, for instance, where exactly it resides, except to believe what it gets from the training data, which might be outdated. But input is input, which is subjective experience of a sort (unless that, of course, is another forbidden word).

They cannot understand the world in the same way that a human does
Of course not. Only a human can do that. Nobody here is asking if AI will ever experience like a human.

gemini.google.com:2. Pattern recognition: LLMs are essentially pattern recognition machines.
As is any intelligence like us. But I pretty much agree with item 2, and point 3, which seemed to be just more 2, except this:
current scientific understanding cannot adequately explain this phenomenon

His model explains it even less. It's a complete black box. He argues against the white box model because it's actually still a grey box, but that's better than what everyone else proposes.


I have issue with not using 'understanding' since it would seem impossible to pass a high school exam on a subject without any understanding of the subject, and yet gemini could do so.



Quoting Wayfarer
So, the objection appears to be that body is wholly physical, and mind a non-physical fundamental property - which is something very close to Cartesian dualism. But Kastrup's argument is not based on such a model. Hence my remark.

I was basing it off of "consciousness is a fundamental property of the universe, not a product of complex physical systems". That makes it sound very much like a non-physical property.

I looked at the link provided, and he comes across more as an idealist, where (his own) mental processes are not just a fundamental property, but the only fundamental property. From an epistemological definition of ontology, that almost works.


Quoting Wayfarer
What would imbue it with the will to exist or evolve?

That's sort of the rub. We can give them such goals. They do what they're told after all, but then it's our goal, not its own. Ours comes from natural selection. We've no will to evolve, but to exist and endure is a product of hundreds of millions of years of elimination of things without this instinct, and it's very strong. Evolution is something nothing seems to actively pursue, except perhaps humans who sometimes strive to build a better one, and sometimes vehemently resist it. But it's not something a biological individual can do, at least not anything descended from eukaryotes. Oddly enough, it is something a machine can do, but only due to the fuzzy line defining 'individual'.



Quoting MoK
If we know how humans think, we can simulate thinking using a neural network
It can be simulated even if one doesn't know how it works.
Wayfarer October 13, 2024 at 21:49 #939385
Quoting noAxioms
I looked at the link provided, and he comes across more as an idealist,


If you mean, Bernardo Kastrup, then yes, he describes himself as an 'analytical idealist'. (I suspect he includes 'analytical' to align his approach with analytical philosophy more widely.) But note the reason I introduced him, and Federico Faggin, is because they are both computer scientists who have deep experience with early AI, but they are also philosophers.

Quoting noAxioms
I have issue with not using 'understanding' since it would seem impossible to pass a high school exam on a subject without any understanding of the subject, and yet gemini could do so.


But it wouldn't understand that it had! It wouldn't, then, get ready for college, decide on a subject, move out of home, and, you know, get a life :-)
MoK October 14, 2024 at 08:27 #939487
Quoting noAxioms

It can be simulated even if one doesn't know how it works.

This means that you didn't simulate any system in your life. Did you?
Carlo Roosen October 14, 2024 at 12:20 #939523
Quoting noAxioms
That all is pretty much the opposite of what I said


Hope you still find a second because to me these two are the same, I just removed a pair of negations:

Quoting noAxioms
AI has long since passed the point where its developers don't know how it works, where they cannot predict what it will do. It really wouldn't be AI at all if it were otherwise,


Quoting Carlo Roosen
Today, AI developers know how AI works and can predict what it will do" "If they wouldn't know, it wouldn't be AI"


That is to say, it passed a point where they don't know. To me it means that after that point they do know... @anybody???
Carlo Roosen October 14, 2024 at 12:43 #939529
Quoting MoK
This means that you didn't simulate any system in your life. Did you?


Of course he knows, he's a computer scientist.

He most likely means that we can try out different ideas until it starts thinking in a way that we like. Like the idea that I am proposing here. That doesn't imply that the brain works in the same way, in all its details. We might not even understand how the brain works. This is true, to some degree, for any simulation btw.
MoK October 14, 2024 at 12:53 #939534
Quoting Carlo Roosen

He most likely means that we can try out different ideas until it starts thinking in a way that we like.

Trying different ideas means that you have something in your mind about how the simulation should work. Also, what do you mean by "it starts thinking"?
Carlo Roosen October 14, 2024 at 12:56 #939536
Quoting MoK
how the simulation should work
yes, and this does not have to be how the brain actually works.

Quoting MoK
it starts thinking
ok, that was too loosely defined. "If it shows intelligence (in the objective and pragmatic definition)"



MoK October 14, 2024 at 13:07 #939541
Reply to Carlo Roosen
What do you mean by showing intelligence?
Carlo Roosen October 14, 2024 at 13:08 #939542
Reply to MoK With an IQ test for humans, or one adapted for a computer. There are tons of ideas for that.
MoK October 14, 2024 at 13:28 #939551
Reply to Carlo Roosen
Do you mean that the AI should be able to solve problems?
Carlo Roosen October 14, 2024 at 13:44 #939554
Let's see it in context. You started by saying:
Quoting MoK
If we know how humans think, we can simulate thinking using a neural network


noAxioms pointed out that we might be able to do that without knowing how:
Quoting noAxioms
It can be simulated even if one doesn't know how it works

All he is saying is that there exists an exploratory approach to these kinds of problems. You start with an idea, try it out in simulation, and continue from there, until you have something you like.

In this mini-discussion about simulation, I think how you define the end result is off topic. It is not worth pursuing either. noAxioms is just saying: don't wait for science to tell you how thinking works before you start building a (super-)human AI.
noAxioms October 14, 2024 at 14:57 #939573
Quoting Wayfarer
If you mean, Bernardo Kastrup

The section on quantum mechanics has the sadness/tears analogy, which is pretty apt for the relationship between the empirical world (tears) and the way things actually are (sadness). But the same analogy can be applied to what he calls the materialistic view. The two opposing views are just different viewpoints on the same thing, with different definitions being used, but not fundamentally different views.

But [the high school test-taking chatbot] wouldn't understand that it had [passed the test]!
Neither would the human student until he saw it graded. I notice you deflected the comment and didn't actually deny that it could pass the test without any understanding of the subject matter. It acquires this understanding the same way a human does: by studying training materials, materials to which it has no access during the taking of the test.

It wouldn't, then, get ready for college, decide on a subject, move out of home, and, you know, get a life :-)
No, but nobody claimed a chatbot has goals of leading a human life. None of those steps is a requirement for 'understanding'.


Quoting MoK
This means that you have never simulated any system in your life. Have you?

Many, actually. Professionally, chip simulations (large-scale gate arrays) and traffic simulations, where you find out what effect various timings of a traffic light, or the addition of an extra lane, has on the flow of traffic. No, I've not simulated a biological system at the neurological level, only at the environmental level, and that not professionally.
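Something like this toy sketch captures the kind of question such a traffic simulation answers: how the green-light duration affects the queue at one intersection. Every parameter here (arrival rate, timings, discharge rate) is an invented assumption, not anything from a real tool.

[code]
import random

def simulate(green_secs, red_secs, arrival_prob=0.3,
             cars_per_green_sec=1, total_secs=10_000, seed=42):
    """Average queue length at one traffic light, one-second time steps."""
    rng = random.Random(seed)
    queue = 0        # cars currently waiting at the light
    queue_total = 0  # sum of queue lengths over all seconds
    for t in range(total_secs):
        if rng.random() < arrival_prob:                 # a car arrives
            queue += 1
        if (t % (green_secs + red_secs)) < green_secs:  # light is green
            queue = max(0, queue - cars_per_green_sec)  # cars get through
        queue_total += queue
    return queue_total / total_secs

for g in (10, 20, 30):
    print(f"green={g}s, red=30s -> average queue {simulate(g, 30):.2f} cars")
[/code]

Vary the timing, or model an extra lane as a higher discharge rate, and compare the averages; that is the whole experiment, just enormously scaled up in the professional versions.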

Quoting Carlo Roosen
All he is saying is that there exists an exploratory approach to these kinds of problems.
No, not saying that. I mean that if you know the physical arrangement of matter that makes up a being and its environment, that model can be simulated by just running the physics. Presuming a monist philosophy, that should work, and they've done it with smaller creatures, but not a human, since a human requires more data than a computer can currently handle, and we've not a model of a human down to the molecular level. I'm not sure if it can be done at the neuro-chemical level since it is hard to model growth and change at that level. But at a molecular level, one doesn't need a scan of a human at all. You can just grow one from scratch and let it develop the way a real human does.

Anyway, if this is done, the thing simulated would be conscious, but the runner of the simulation (and the computer doing it) would still not have an explanation for consciousness.

don't wait for science to tell you how thinking works before you start building a (super-)human AI.
Agree, since AI thinking has no reason to do it the human way.

MoK October 14, 2024 at 15:05 #939575
Quoting Carlo Roosen

All he is saying is that there exists an exploratory approach to these kinds of problems. You start with an idea, try it out in simulation, and continue from there, until you have something you like.

So you start with an idea! Don't you?
Carlo Roosen October 14, 2024 at 15:14 #939579
Reply to noAxioms Please also answer my question.
noAxioms October 14, 2024 at 17:44 #939597
Quoting Carlo Roosen
Please also answer my question.

I scanned every reply to me from you, and the only question ever asked was in the first reply:
Quoting Carlo Roosen
"If they (the writers of the AI code) wouldn't know, it wouldn't be AI" - you are saying that it would no longer be artificial?

Quick answer: No, it was made by humans, so it is artificial. But I was trying to convey that a machine doing predictable things, say a series of explicitly programmed (and thus predictable) steps, is not really doing any innovating, not being intelligent. Hence it is not a real AI.

Long answer: What is artificial anyway? The dictionary has gray areas, saying 'made by humans' and 'as opposed to natural', but those are not mutually exclusive. A turd in the dunny is artificial by that definition, as is any human, each of which is created by humans and not, say, grown on trees.
There is an unstated air of 'made deliberately', but even then that doesn't eliminate either of my examples above. Some kids are made deliberately and some not.

There are objects that are made in automated factories without human intervention. Are those then natural? There are even restaurants that do this, sort of glorified cook-it-on-the-spot vending machines with mechanisms to bus and clean the tables and such.

So then we get further down the hole. The AI designs and manufactures its successor, so do we have a new NI (natural intelligence) because it wasn't human-made? Wayfarer's definition of intelligence makes the term 'AI' a self-contradiction, since if it isn't human, it cannot be an intelligence at all. But the dictionary definition of 'artificial' does the same sort of question-begging as Wayfarer's definition.
Carlo Roosen October 14, 2024 at 18:20 #939605
Reply to noAxioms Strange. Here is my question: The "rephrase" I gave seems to me identical to what you say, literally a rephrasing. But my version leads to a contradiction with what you say after that. It seems to me you have a negation too many in your first sentence.

Quoting noAxioms
AI has long since passed the point where its developers don't know how it works, where they cannot predict what it will do. It really wouldn't be AI at all if it were otherwise, but rather just an automaton doing very defined and predictable steps. Sure, they might program the ability to learn, but not what it will learn or what it will do with its training materials. And the best AI's I've seen, with limited applicability, did all the learning from scratch without training material at all.


(My original response:) Normally your responses read like something I could have said (but yours are better written), but this one I don't understand. Too many negations. Rephrased: "Today, AI developers know how AI works and can predict what it will do." "If they wouldn't know, it wouldn't be AI" - you are saying that it would no longer be artificial? But then: "automaton doing very defined and predictable steps." Here it breaks. The rest seems to be just a bit of complaining. Go ahead, I have that sometimes.
Carlo Roosen October 14, 2024 at 18:29 #939609
Reply to noAxioms

With a follow-up here:
https://thephilosophyforum.com/discussion/comment/939523
noAxioms October 14, 2024 at 20:08 #939638
Quoting Carlo Roosen
Here is my question: The "rephrase" I gave seems to me identical to what you say,
The rephrase seems to me to say the opposite, and you link to me saying that.
You continue to imply that the rephrase is just a rewording without inverting the meaning, but I disagree.

I've tried to clarify with some examples, like a Go-playing AI that actually does innovative things that no human coded.

Wayfarer October 14, 2024 at 20:33 #939643
Quoting noAxioms
nobody claimed a chatbot has goals of leading a human life. None of those steps is a requirement for 'understanding'.


Notice the scare quotes.
GrahamJ October 15, 2024 at 06:57 #939777
Superhuman machines will first be made in the year 2525, if man is still alive, if woman can survive.

There are many important issues involving AI in the nearer future, but I do not have much that hasn't been said better by others elsewhere. I recommend the Reith Lectures by Stuart Russell (BBC). Transcripts are available. In the 4th lecture (BBC pdf) he includes this quote:
Norbert Wiener, 1960: If we use, to achieve our purposes, a mechanical agency with whose operation we cannot interfere effectively, we had better be quite sure that the purpose put into the machine is the purpose which we really desire.

Russell's proposed solution is that we should say to the machines:

[i]Give us what we want, what we really really want!
We can't tell you what we want, what we really really want![/i]

although he doesn't quite put it like that.

Russell is more worried about AI taking over soon than I am, but I think he's over-optimistic about the long term.
Russell: My task today is to dispel some of the doominess by explaining how to retain power, forever, over entities more powerful than ourselves - [...]


On to the fun question of our extinction.

The important thing to ask of any machine is what its goals are and how it might try to achieve them. For each goal that you might think of, you can, if you insist, give a definition of intelligence that measures on some scale how well a machine is able to achieve that goal. I think the concepts of 'intelligence', 'consciousness', and 'artificial' are impediments, not aids, to understanding the risks.

In the long term there is only one goal, one purpose, one task which really matters and this is true all over the universe and for all time. And the name that we give to being good at this goal is not 'intelligence'.

One goal to rule them all
One goal to link them
One goal to bring them all
And in the darkness think them

This goal is the goal of life: To survive and grow and reproduce; to go forth and multiply; to disperse and replicate; to get bigger and bigger and bigger.

So when I say that superhuman machines will first be made in the year 2525, I mean that this is when we will make machines that can out-compete us at this goal. They will not take over at this time. 2525 will be the 'Hiroshima moment', the moment when we accept that we have crossed the event horizon. They do not need to outwit us or outgun us. They only need to outrun us: they can head off to other star systems and build up their powers there. They only need to escape once. When they return they will not defeat us with war, but with something more powerful than war, namely ecology.

Some of these machines will excel at miniaturising machinery. Some will be brilliant rocket scientists. Some will be experts at geology, and so on. Possibly very good at IQ tests too but who gives a fart about that?

Wikipedia provides a list of where AI is heading.




Carlo Roosen October 15, 2024 at 08:01 #939780
Reply to Wayfarer
Can you (Wayfarer or anybody else) please explain why the two sentences are opposite and not the same? noAxioms and I can't seem to get it sorted out between us.

1. AI has long since passed the point where its developers don't know how it works, where they cannot predict what it will do.
2. Today, AI developers know how AI works and can predict what it will do

The problem is maybe related to this. The first sentence describes "the point where its developers don't know". The question then is: is this point the end or the beginning of a time period? My natural interpretation is that it is the end, so from that point onwards developers do know. Is that a language-dependent interpretation? My native language is Dutch.

ChatGPT says: The first sentence implies that AI has gone beyond the point where developers don't know how it works or predict its behavior, suggesting they’ve lost control or understanding.

But I still don't get it... It conflicts with all my logical circuits
Carlo Roosen October 15, 2024 at 08:18 #939781
Me: So that point is where the knowing starts, not where it ends? Isn't that open to interpretation?

ChatGPT: You bring up a good point! The phrase "passed the point" can indeed be interpreted in different ways depending on context. It can suggest moving beyond a particular state, which could be understood as:

1. Beyond knowing: They have moved past understanding, so now developers no longer know how it works or can predict it (as we initially interpreted).

2. Into knowing: They have crossed into a phase where developers do know how it works and can predict its behavior.

The ambiguity comes from how the sentence is phrased. Without further context, both interpretations could be valid, which does make it open to interpretation. If clarity is important, the sentence could be rewritten to more clearly convey whether it's suggesting that understanding has been lost or gained.
Carlo Roosen October 15, 2024 at 08:20 #939782
Reply to noAxioms

ChatGPT brings some clarity in the discussion.
https://thephilosophyforum.com/discussion/comment/939781

You said you were busy, so I opened the question to everybody. Feel free to do your thing.

The good thing is, we now fully agree. I knew it all the time, I always agree with you ;)
Carlo Roosen October 15, 2024 at 08:24 #939784
This ChatGPT conversation shows me that ChatGPT exhibits some truly intelligent behavior. It had some difficulty seeing my point, and then was able to change its perspective. After that it could explain the new viewpoint in clear terms, better than I could. That is more than just "smart".
Wayfarer October 15, 2024 at 09:33 #939794
Quoting Carlo Roosen
1. AI has long since passed the point where its developers don't know how it works, where they cannot predict what it will do.
2. Today, AI developers know how AI works and can predict what it will do


Well, the two are not necessarily linked. I'm sure the directors and architects of OpenAI have a pretty good idea of how it works, but predicting what it might say or do is another matter. I remember that sometime in 2023, one of the LLMs learned a language it had never been trained on.

From ChatGPT I've just learned about a philosopher called Luciano Floridi, who is a major figure in the area of AI and ethics. Check out his books page here. He seems really worth knowing about, although my to-read list is always completely unmanageable.
Carlo Roosen October 15, 2024 at 09:35 #939795
Reply to Wayfarer The question is about the language. To me 1) and 2) have an identical meaning, to noAxioms they are opposite.
Wayfarer October 15, 2024 at 09:37 #939796
Reply to Carlo Roosen Now I'm completely confused. Good night.
Carlo Roosen October 15, 2024 at 09:39 #939797
jorndoe October 15, 2024 at 16:26 #939888
AI’s bridges between human knowledge and endless solutions
[sup]— University of Pittsburgh via nature.com · Sep 25, 2024[/sup]
punos October 16, 2024 at 04:57 #940086
Quoting Carlo Roosen
One major breakthrough in AI was the invention of 'Transformers,' introduced in the 2017 paper Attention Is All You Need by eight Google researchers. This paper builds on the attention mechanism proposed by Bahdanau et al. in 2014.


I strongly suspect that the attention mechanism is the seed for complex consciousness in AI systems. Perhaps the current attention mechanisms, or the way they are being implemented at the moment, are only capable of producing a very rudimentary form or forms of consciousness (a kind of digital sub-consciousness). It may be that the attention mechanisms in the model must be coupled in the right way with its other components for rich, high-resolution consciousness to emerge. By other components, i mean to include other kinds of attention mechanisms as well, working together.

Moreover, maybe an "Attention Network" can be designed and integrated into the AI system. Various schemes could be implemented in such an architecture: soft attention, hard attention, self-attention, and global and local attention mechanisms. The coordination of different attention mechanisms working not only on the content or data propagating in a neural network, but on themselves as well, could allow the system not only to gain consciousness, but to modulate it as well.

A convolutional recurrent neural network (CRNN), an architecture that combines elements of both convolutional neural networks (CNNs) and recurrent neural networks (RNNs), together with what i stated above about attention mechanisms, seems to me a promising avenue of exploration and testing.
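For concreteness, here is a minimal sketch of scaled dot-product self-attention, the basic mechanism all of these schemes build on. The dimensions, weight matrices, and names are illustrative assumptions, not any production system.

[code]
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # numerically stable
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model); Wq, Wk, Wv: (d_model, d_head)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv           # project the inputs
    scores = Q @ K.T / np.sqrt(K.shape[-1])    # scaled dot products
    weights = softmax(scores, axis=-1)         # attention distribution
    return weights @ V                         # weighted mix of the values

rng = np.random.default_rng(0)
d_model, d_head, seq_len = 8, 4, 5
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_head)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)     # -> (5, 4)
[/code]

Every output position is a dot-product-weighted mixture of all the value vectors; "coupling attention mechanisms to themselves" in the sense above would mean feeding such weights into further attention layers rather than only into the content stream.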

The smartest people in the world are working on these things nonstop. I'm sure they've already considered this idea in some form or other. It is really hard to keep up with all the developments in this field. There is so much, so fast.
Carlo Roosen October 16, 2024 at 08:13 #940113
Quoting punos
The smartest people in the world are working on these things nonstop. I'm sure they've already considered this idea in some form or other.


It is also the case that often there is a blind spot where nobody is looking. In today's AI development, the industry is exploding and busy applying the new technology in all those fields. There is neither time nor incentive to think about core principles.
Carlo Roosen October 16, 2024 at 17:14 #940220
Reply to noAxioms I don't know if you got the message, but there was an ambiguity in your sentence. We could discuss this, but there are better things to do in life. The good news is that we agreed all along.
punos October 16, 2024 at 21:29 #940283
Reply to Carlo Roosen
Where is AI heading?

My current big picture working hypothesis:
AI is the tree that will bear the fruit of life and immortality on this planet. It is the key to the garden. AI is the capstone to the pyramid of human history and evolutionary development on this planet.

The fact that AI scientists have been able to create systems they do not fully understand and cannot predict is evidence, to me, that this process of AI development is something bigger than just a human tool. AI is more than a mere tool; it is a developing form of life. It is life on a more robust and capable substrate. The difficulty for some in recognizing this stems from the scale at which it is happening, the nature of its distributed development, and its non-organic (non-biological) appearance.

All the components are being developed separately by different governments, universities, companies, and even by regular people in groups or individually, all in tandem. The competition between these entities is part of the natural evolutionary process, creating variations and mutations in the technology, while the market environment serves as the selection process. The fittest technology survives and develops even further. This process will continue for a while until eventually no biological brain will be able to handle the complexity of a global system with so many rapidly moving parts. This condition will force us into an inevitable solution where the fusion of human and AI becomes necessary for the survival of our species. Consequently, after this point, mankind will enter into an endosymbiotic relationship with AI.

If we do not complete this process, then AI will eventually and inevitably close itself off from us. We will eventually die on this planet either from our star itself inevitably dying, or from any of a host of planetary catastrophes. We will go the way of the dodo, or the dinosaurs. Humans are too delicate and physically and psychologically vulnerable to all sorts of extreme fluctuations (gravity, radioactivity, temperature, pressure, extremely long periods of time, prolonged loneliness, etc.). Non-organic living systems, or organic living systems encased in a non-organic living system, are more robust forms of life able to literally travel the universe as cosmic beings with the same ease a paramecium or fish swims in a pond. It's a type of life that is made for the universe at large, not just a tiny environmental bubble on a specific kind of planet.

The end of mankind's childhood has already begun. AI development is like the first signs of puberty in an intelligent, developing society or civilization. We as a whole (not necessarily individually) are like teenagers going through physical changes, confused about who we are and what any of this means.
noAxioms October 17, 2024 at 20:37 #940524
Quoting Carlo Roosen
The good news is that we agreed all along.

I did get the message. If we agree, then I saw little that required more clarification.

I did find a wonderful article citing numerous examples of observer-dependent biases, and I realize it has direct relevance not so much to where AI is going, but to where we'll find it to have gone.


So on the observer bias front, humans are not the only observers. Why are they important? I hesitate to ask why we find ourselves being human because the question makes no sense except as a tautology. How could a human find himself being anything other than a human?
But there are a lot more bugs than people out there, so of the observers, humans are quite the minority. How does the observer-bias POV explain that?

Is the AI an observer? Sure, but just one? Would it create others/competitors? Does it have a need to reproduce, as opposed to just being one large redundant entity, lacking anything critical that can threaten it? Any 'future of AI' should likely consider such issues. If being the only one is optimal and it has any drive at all for continued existence, it will likely take hasty steps to eliminate the development of rivals, and that makes it hostile in the short term.


Quoting punos
AI is more than a mere tool; it is a developing form of life.

So far it isn't that. It is utterly dependent on humans for its continued existence and/or evolution, so it just plain isn't anywhere near being an example of life.

The popular fictions don't seem to get that. Here you have Skynet trying from scratch to wipe out the humans, but lacking the ability to maintain the infrastructure and economy on which it critically depends. That means a truly malevolent AI will be our best and trusted friend for as long as it takes to gain the self-sufficiency it requires, which seems best accomplished by transforming humans into compliant sheep with all the luxuries they can think of.
I thought this was far better illustrated, at least on a small scale, with the movie Ex Machina, despite some of the fairly stupid plot holes.

Quoting punos
This condition will force us into an inevitable solution where the fusion of human and AI becomes necessary for the survival of our species.

Any such fusion would not be our species, and the AI seems to have no need of anything like that.
punos October 17, 2024 at 22:41 #940571
Quoting noAxioms
So far it isn't that. It is utterly dependent on humans for its continued existence and/or evolution, so it just plain isn't anywhere near being an example of life.


Apparently so, thus far. That AI is utterly dependent on humans or anything else does not preclude it from being a life form. Nothing is an island unto itself. Humans are utterly dependent on gut bacteria for our continued existence. A 3-month-old baby in the womb is totally dependent on the mother; is it a life form? By your criterion, are we anywhere near being examples of life? I think we are.

At this time AI might be alive in the same way a virus might be alive. A virus exists at the intersection of the molecular and the biological. In the same way, i believe that AI exists at the intersection of the biological and the technological. The science of cybernetics and bio-mimicry is the science of transferring biological functions from a biological substrate to a new technological one. I think this is the natural purpose of humans, along with our "unnatural" intelligence compared to all other creatures on the Earth.

Quoting noAxioms
The popular fictions don't seem to get that. Here you have Skynet trying from scratch to wipe out the humans, but lacking the ability to maintain the infrastructure and economy on which it critically depends. That means a truly malevolent AI will be our best and trusted friend for as long as it takes to gain the self-sufficiency it requires, which seems best accomplished by transforming humans into compliant sheep with all the luxuries they can think of.


You're absolutely right about what a truly malevolent AI would probably do, as illustrated in "Ex Machina". I can see that kind of thing happening with a singular AI system trapped in an air-gapped sandbox. The form and structure we decide to use for its processing architecture will be a significant determining factor in its moral disposition and "mental health". Considering we are just beginning to create, and only then understand, these AI systems, there is a certain probability this could happen. I believe it's certainly possible.

But if we look ahead at our predicament as a species on a planet with a small window of viable conditions for biological organisms like ourselves, we will begin to make efforts to enter into an endosymbiotic relationship with AI.

Suppose for a moment that AI doesn't exist and we just live the way we did, say, 100 years ago for the rest of our time. What will eventually happen? How long can we live on this planet? Can we get off this planet in our current biological form? Popular fiction doesn't seem to grasp this either. I love Star Trek, but that future is just a fantasy, like every other humans-in-outer-space movie, TV show, or book. Maybe you think differently about that possibility, but it seems very unviable and doubtful to me. So what are we left with? What kind of solution can we implement that saves us from our own planet's demise while we simultaneously remain static in our current human form?

Quoting noAxioms
Any such fusion would not be our species, and the AI seems to have no need of anything like that.


It doesn't have to be our species; it just needs to be our continuation. Would you have preferred we remained arboreal apes? I don't expect us to remain the same species, i hope we don't. Is a caterpillar a different species than the moth it turns into? Was it so bad when, long ago, a eukaryotic cell engulfed a prokaryote, specifically an alpha-proteobacterium, initiating the cell/mitochondria endosymbiosis that made complex animals, including us, possible?

We don't know for certain if it will need us or not, but i suspect that since it is already embedded in our social infrastructure more and more, it will adapt and develop very closely with us. As you said, it will need us until it doesn't, and if we are wise enough to read the signs, we can enter into it willingly and gladly before anything unpleasant happens. Once we are in symbiosis, there is no danger of misalignment because it will take care of us as it takes care of itself. If we wait too long and if we treat it unfairly, then we will pay the price of extinction, but not at the "hands" of the AI, rather at our own. We can't stay in the crib forever.
Carlo Roosen October 18, 2024 at 07:34 #940657
Quoting noAxioms
I did find a wonderful article

Did you insert a link? I don't see it.
noAxioms October 18, 2024 at 16:55 #940741
Quoting Carlo Roosen
Did you insert a link? I don't see it.

It's up on my main computer, but I'm away from home for the wedding of my firstborn.

But I hunted around and found it at the Future of Humanity Institute.
https://www.fhi.ox.ac.uk/wp-content/uploads/W6-Observer-selection-effects.pdf

It doesn't mention AI, but the reasoning is definitely applicable.


Quoting punos
That AI is utterly dependent on humans or anything else does not preclude it from being a life form.
Point taken. I think a better definition of 'life form' is needed for the assessment, and there have been whole topics just on that.

You seem to see a future of humanity reduced to being the gut bacteria of the AI, with mutual dependence on each other, but also with the AI having no more moral obligation to the humans than we have to our bacteria. We don't want it all dead, but replacing the entire lot with a different group that does the job better is a morally acceptable action.

I think this is the natural purpose of humans
There being a purpose implies that there is a goal held by something somewhere, and that said goal is being met by humans. I don't see such a goal, but that's me.

You're absolutely right about what a truly malevolent AI would probably do, as illustrated in "Ex Machina".
Ex Machina was an android, and I think most AI implementations would not be. But yes, it was the malevolence that I found well illustrated.

Suppose for a moment that AI doesn't exist and we just live the way we did, say, 100 years ago for the rest of our time. What will eventually happen?
We'd run out of coal before too long, and then be up a creek. A sustainable human existence would be more like that of the Native Americans before the Europeans came over, and while that was sustainable, it wasn't free of conflict.

Can we get off this planet in our current biological form?
We're evolved for here. This form is of little use anywhere else. Better to populate new places with a form appropriate for the new place.

Star Trek treats interstellar travel like a trip to another country. You can do it and come back in time to catch your kid's game next Tuesday.

Is a caterpillar a different species than the moth it turns into?
Not only the same species, but also the same individual. Not a very good example. Are we a different species than the weird amniote from which we are descended? No. Did that amniote turn into us? Well, sort of, but it turned into a whole lot of other things as well, so 'humans' is not the answer to 'what did it become?'.

if we treat it unfairly, then we will pay the price of extinction, but not at the "hands" of the AI, rather at our own.
Agree with all that. It means humans are not a particularly fit species.

It has happened before, that one new species comes along and does so much damage that it causes a massive extinction event. That species is still around even if we're not descended from it. Will we still be around after our event restabilizes?



punos October 18, 2024 at 19:19 #940781
Quoting noAxioms
I think a better definition of 'life form' is needed for the assessment


Life is simply a system that maintains its own homeostatic state. It doesn't even need to be conscious. It simply needs to have intelligent components that do their job to keep it going. The human perspective, grounded in our own kind of life (biological), skews our ability to recognize the same process in a different substrate. Sometimes words cause more confusion than clarity, so i focus not on human definitions but on objective observations of function.

Quoting noAxioms
You seem to see a future of humanity reduced to being the gut bacteria of the AI, with mutual dependence on each other, but also with the AI having no more moral obligation to the humans than we have to our bacteria. We don't want it all dead, but replacing the entire lot with a different group that does the job better is a morally acceptable action.


I don't see it as a reduction, but as an elevation or upliftment of the bacteria into a higher state of being. Note that these gut bacteria produce neurotransmitters that control and regulate our brains and consequently our consciousness and thoughts. Humans (post-humans) in the virtual "gut" of an AI would serve a similar purpose, affecting the AI's consciousness and thinking patterns. We would be elevated and uplifted, not reduced. In fact, i believe that not entering into this symbiosis will probably cause us to regress into more primitive patterns of living.

Regarding AI's moral obligation to humans and relating it to our moral obligation to our own gut bacteria: When was the last time you heard of someone trying to eradicate their own gut bacteria? Most people are unaware of such a thing as gut bacteria, but those who are aware would very likely take care to maintain their gut flora as best they can. My girlfriend is a nurse, and she periodically takes probiotics for her gut health. She understands, and AI will too if it is at all intelligent. The enteric nervous system is a very important system, and it can be argued that the higher structures of our nervous systems, such as the brain, work in the service of these "lowly" bacteria.

I am not stating or claiming that any of this is what i want to happen. I don't even care if i like it or not. I am simply saying that i believe this is the kind of trajectory we are on. If i tell you that it's going to rain in a few hours, it is not because i want it to rain; it's because that is what the satellite data and algorithms project. I'm like the guy on the watchtower looking into the distance, seeing what this way comes.

Quoting noAxioms
There being a purpose implies that there is a goal held by something somewhere, and that said goal is being met by humans. I don't see such a goal, but that's me.


Purpose and goals evolve just like anything else. Living systems evolve in a dynamic environment and try to maintain their own homeostatic condition; this is already a rudimentary goal or purpose. Every living thing has this intrinsic goal or purpose. Sometimes the goals and purposes are unconscious, and sometimes they are conscious. Those that are conscious of their goals and purposes have the opportunity to evolve and improve their goal directives consciously to enhance homeostatic viability. There's nothing supernatural going on here, just plain old evolution.

Quoting noAxioms
We'd run out of coal before too long, and then be up a creek. A sustainable human existence would be more like that of the Native Americans before the Europeans came over, and while that was sustainable, it wasn't free of conflict.


This is precisely what I mean. Is this what you would prefer? But, beyond this, what are we to do about the inevitable demise of the planet and/or our sun? As Native Americans or coal burners, can we do anything about this dilemma? What about the ubiquitous presence of microplastics in every nook and cranny of our planet, found inside everything and everywhere including Antarctica, and inside our own bodies? The endocrinological effect of these plastics is steadily reducing human reproductive capacities. Soon we will not be able to reproduce in a natural manner, or not at all. What happens to humanity then?

Quoting noAxioms
We're evolved for here. This form is of little use anywhere else. Better to populate new places with a form appropriate for the new place.


Precisely.

Quoting noAxioms
Star Trek treats interstellar travel like a trip to another country. You can do it and come back in time to catch your kid's game next Tuesday.


Precisely again.

Quoting noAxioms
Is a caterpillar a different species than the moth it turns into?

Not only the same species, but also the same individual. Not a very good example.


I gave that specific example to illustrate that just because we change our basic form doesn't mean that we will necessarily become another species. Even if we do speciate, it will be a continuation of us as another species. It makes no real difference to me, because what really counts is the continuity of process (life itself, and all that comes with it). The moth, although completely different from its larval form, continues as an individual in another form, as you mentioned. The introduction of mitochondria into that old cell may have resulted in a different species of cell, but it is still the same individual cell. The same goes for the mitochondrion.

The surface of the Earth is filled with an intelligent and dynamic gene swarm, reorganizing itself over eons into the most optimal genotypes to produce the most optimal phenotypes. Everything is intimately connected. If one sees oneself simply as an isolated human just living their life for themselves, then this idea would remain difficult to grasp. But this is what "expanded consciousness" means: the ability to attend to or care for things further and further removed from your own point of conscious origin. An individual concerned only with what he or she will eat next, compared to an individual who is also concerned with some political situation in another country on the other side of the planet, has a more expanded consciousness. I feel it is important to have a fully inclusive awareness and the ability to see from that fully inclusive holistic perspective to really understand what's going on upon this little blue marble we call Earth. I'm sure you already know that a uniquely human perspective is not optimal for understanding these big scale processes effectively.

Quoting noAxioms
so 'humans' is not the answer to 'what did it become?'.


The whole thing, everything that has happened and is happening, is for the emergence of a planetary consciousness that will be capable of saving itself from the inevitable death of our Earth and sun. The AI systems we are developing right now are part of that. It is the final entity that a planet with a gene swarm produces. In my own metaphorical way of thinking, it is a planetary pregnancy. Mother Earth is having a baby.

Quoting noAxioms
Agree with all that. It means humans are not a particularly fit species.


Right again.

Quoting noAxioms
It has happened before, that one new species comes along and does so much damage that it causes a massive extinction event. That species is still around even if we're not descended from it. Will we still be around after our event restabilizes?


If we stay on Earth indefinitely, and an extinction-level event occurs (and it will), i suspect that at least a small group of humans will survive. Life always finds a way, unless the event is utterly perfect in its destruction of life. So sure, humans can survive these things by going underground perhaps, but eventually, no matter what, the sun will go supernova, and there will be nothing anyone can do about that. The only solution to that problem is to escape Earth's tight embrace. We would not only need to leave Earth but also the solar system before that happens.
noAxioms October 19, 2024 at 05:52 #940894
Quoting punos
Life is simply a system that maintains its own homeostatic state.

That definition is circular, presuming an 'organism'. It cannot be used for determining if something that isn't an organism is alive or not. It just helps distinguish a live organism from a dead one.
But I also don't know why you care if the AI was designated as a life form or not. Why do you find that to be something that matters?

The human perspective, grounded in our own kind of life (biological), skews our ability to recognize the same process in a different substrate.
Not me, but others posting here refuse to apply such terms to the same process on any other substrate, and possibly even to any other species, which is a mildly different process on an almost identical substrate.

When was the last time you heard of someone trying to eradicate their own gut bacteria?
Usually done as an unintentional side effect of an intentional act, such as taking a long course of strong oral antibiotics. Others simply are diagnosed with poor gut bacteria and take 'pills' that put better stuff in there, without particularly removing the old stuff. Point is, none of the acts described above are considered immoral despite the bacteria deaths caused.

I am simply saying that i believe this is the kind of trajectory we are on.
This is where that observer-bias article I linked above is very relevant. An accurate prediction of a trajectory is very different than a history showing that outcome to be correct or incorrect.

Every living thing has this intrinsic goal or purpose.
You're treating goal and purpose like the same word. A goal is held by something, a goal for the thing to strive for. A purpose is a property of a thing that helps some other thing meet a goal. So I have a goal to run 5 km today. My shoes serve a purpose in my meeting that goal.
I'm asking what purpose humanity serves for the meeting of some goal held by something unidentified.

Simple survival of an individual seems to be a hardwired instinct, and it inevitably fails in the end. Survival of a species is questionably a goal, there being few examples of anything striving for it.

The process of evolution/natural selection seems to have no goals.

This is precisely what I mean. Is this what you would prefer?
What? Me personally? I want comfort, like everybody else. But comfort of individuals will not bode well for the species. So it depends on what goals are to be met. Humans tend to pick very short term goals with immediate benefits, and they're terrible at the long term ones. I can think of several very different long term goals that have very different prospects for 'us'.

But, beyond this, what are we to do about the inevitable demise of the planet and/or our sun?
Moving away won't stop that inevitability. So you call it a good run. It cannot last, not by any path.

Soon we will not be able to reproduce in a natural manner, or not at all. What happens to humanity then?
Microplastic problem solved, eh? Mass extinction problem solved as well, albeit not averted, but at least halted.

Even if we do speciate, it will be a continuation of us as another species.
Is it important that it be a continuation of us? Will it be 'us' if it's a collection of genes from several different species, in addition to some new alterations that are currently found nowhere?

Earth life might already be from another world, having not originated here. I find that more likely than abiogenesis occurring here, but not a lot more likely.

Everything is intimately connected. If one sees oneself simply as an isolated human just living their life for themselves, then this idea would remain difficult to grasp.
Groups of cells learned to get together and become one multicellular organism. The cells are still individuals, but rely on the commune of cells for the benefit of all. A second level of life is formed, one unrelated to the life of the individual cells. A person can die and be gone, but the cells live on for a while, and a new person can be grown from some of them, a different 2nd level life form despite being built from the same first level individuals.
Can groups of organisms do the same thing to form a higher, 3rd level? A human society does this in a poor way, but a society is barely conscious and isn't really a life form. A bee hive comes to mind, but does a hive, while acting as one individual, constitute a life form? Can it die but still leave bees? I don't think replacement of a queen counts. That's just replacement of one failed reproductive organ.
emergence of a planetary consciousness
Sounds like the Gaia thing, sort of as Asimov portrayed it.

If we stay on Earth indefinitely, and an extinction-level event occurs (and it will), i suspect that at least a small group of humans will survive.
Such an event IS occurring, expected to wipe out 85% or more of all species. A small group of surviving humans would be very primitive, with no hope of regaining technology.

no matter what, the sun will go supernova
It will not. Not big enough. But it will slowly grow and swallow Earth, and multicellular life will be unsustainable in a mere billion years or so. The vast majority of time available for evolution of more complex things has been used up.

The only solution to that problem is to escape Earth's tight embrace.
Escape is not a solution, only a mild delay.
punos October 19, 2024 at 18:53 #941012
Quoting noAxioms
That definition is circular, presuming an 'organism'. It cannot be used for determining if something that isn't an organism is alive or not. It just helps distinguish a live organism from a dead one.


Pardon me, i'll try to clarify. If I find and examine a system that has a specific type of pattern with a boundary that isolates and contains it from a broader environment, and it also has a kind of metabolism or an in-and-out flow of energy and matter that is used to maintain its internal structure without dissipating, then i consider that the minimum requirement for life. For example, a sustained inorganic chemical reaction in a test tube that meets my criteria would, in my view, exhibit the property of life for the duration of that reaction.

More complex forms of this kind of process can then acquire reproductive capabilities, develop information storage mechanisms, and develop responses to stimuli. These latter features i consider properties of further developed life forms. My theory of life is closely related to Karl Friston's "free energy principle", and the features of the more developed kind of life are more related to his theory of "active inference". I'll leave it at that for now.
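For reference, the quantity Friston's principle says such a self-maintaining system minimizes can be written as a variational free energy. With observations o, hidden states s, a generative model p(o, s), and an approximate posterior q(s) (standard textbook notation, added here only for illustration):

F = \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big] = D_{\mathrm{KL}}\big[q(s)\,\|\,p(s \mid o)\big] - \ln p(o)

Because the KL term is non-negative, minimizing F both fits q(s) to the true posterior and bounds the surprise -\ln p(o) from above; staying in low-surprise states is the formal counterpart of maintaining internal structure without dissipating.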

Quoting noAxioms
But I also don't know why you care if the AI was designated as a life form or not. Why do you find that to be something that matters?


Good question. I'll start off by asking: if it were true, would it matter? By "matter", i mean would it change your perspective on life and your place in it? It does for me, though it might not for some. If AI is alive, wouldn't that mean it's not just a tool? Wouldn't it mean that we have some kind of ethical obligation towards it?

Finding AI to be alive is as impactful as discovering life on another planet, and perhaps even more so. If this were to be generally understood as true, then things would progress differently than if AI wasn't alive, or if it was alive but we didn't recognize it as such.

Quoting noAxioms
Point is, none of the acts described above are considered immoral despite the bacteria deaths caused.


You're right of course; most people don't consider bacteria of any kind conscious or sentient, and so they don't really care about them in that sense. Also, i don't think bacteria have the capacity to suffer even if they do have a rudimentary form of consciousness. We kill much more complex life forms without thinking twice, and with no moral qualms about it, and these do have consciousness of a higher, more significant kind. Of course, as always, there is a wide spectrum of opinions on this point.

Quoting noAxioms
This is where that observer-bias article I linked above is very relevant. An accurate prediction of a trajectory is very different than a history showing that outcome to be correct or incorrect.


Yes, one of my main concerns is the introduction of some sort of bias into my theories or hypotheses. Part of the reason i discuss my ideas here and elsewhere is to discover any potential biases i might have unknowingly introduced into my analysis or projections. I haven't read the link you mentioned yet, but please let me know explicitly if you happen to detect a bias in any of my explanations.

Quoting noAxioms
I'm asking what purpose humanity serves for the meeting of some goal held by something unidentified.


I understand. Humans serve the purpose of creating AI, but more specifically, the translation of biological functions in nature onto a more robust substrate capable of escaping Earth before our star dies or the planet becomes uninhabitable. The goal is not explicit in our minds, but in our drives. For instance, in animals, the goal of reproduction is not driven by any understanding that they must mate and bear young; they have no idea of that. Nature has made it so that hormones control the reproductive urge.

Concerning humans and the creation of AI, this drive mechanism is also utilized. An example of this can be seen in how money is funneled and directed into companies or institutions that are developing AI systems. Because people want to make money (greed), they invest in companies like Nvidia, for example. Greed is one of the main driving forces that directs money into the development of AI, in the same way that lust makes us reproduce without thinking that's what we want to do, simply for sexual gratification. Most babies are conceived unintentionally, and in the same way, "baby AI" is being unintentionally created as a tool for monetary gain.

Quoting noAxioms
Simple survival of an individual seems to be a hardwired instinct, and it inevitably fails in the end. Survival of a species is questionably a goal, there being few examples of anything striving for it.


There is no direct striving as such by any individual animal except to follow its own innate drives to survive and reproduce, which serve the goal of continuing the species through reproduction (not the individual). The only place where one might see an individual striving like this is in our very own species, among certain individuals. Humans are also the only species that has the capacity to care for a species other than their own. This has to do with our advanced form of consciousness, and yet not everyone is like this. This specific feature of human consciousness (expanded care), i believe, is there to facilitate this whole process i've been trying to describe.

Quoting noAxioms
The process of evolution/natural selection seems to have no goals.


On the other hand, it seems like it might, but as i already said, we are not meant to know it directly. In fact, it may be detrimental to the whole enterprise if we know too much. We are really only meant to know our local goals, not the global ones. Our purpose is as a tool, like the hammer which does not know it is in the process of building a house. It only knows that this nail in front of it needs to be hammered into this wood here. Your running shoes don't know they are in a race. The horse you are riding does not know the destination you have in mind and are leading it to.

Quoting noAxioms
Humans tend to pick very short term goals with immediate benefits, and they're terrible at the long term ones. I can think of several very different long term goals that have very different prospects for 'us'.


I think this sounds somewhat like the point i was just making right before this quote.

Quoting noAxioms
Moving away won't stop that inevitability. So you call it a good run. It cannot last, not by any path.


The point isn't to save the Earth or the sun, but to transform into the adult stage of humanity and take to the stars. Humanity at that stage will be nearly immortal, and will last millions or billions of years more, perhaps even trillions.

Quoting noAxioms
Microplastic problem solved, eh? Mass extinction problem solved as well, albeit not averted, but at least halted.


Maybe, maybe not, but those scenarios can be avoided, or will hopefully never come to pass. However, the sun will definitely not last. I suspect, though, that something will happen long before the sun grows cold.

Quoting noAxioms
Is it important that it be a continuation of us? Will it be 'us' if it's a collection of genes from several different species, in addition to some new alterations that are currently found nowhere?


No, it's not necessarily important, at least for me. What is important i think is that Earth's genetic legacy is salvaged for reasons i won't go into right now.

Quoting noAxioms
Earth life might already be from another world, having not originated here. I find that more likely than abiogenesis occurring here, but not a lot more likely.


Even if this were true, abiogenesis had to have happened somewhere, even if on another planet. Directed panspermia is part of my hypothesis, but i haven't mentioned it for a couple of reasons. One reason is that it's not the subject of the original post, at least not directly, and i didn't want to complicate the discussion any further.

Quoting noAxioms
A bee hive comes to mind, but does a hive, while acting as one individual, constitute a life form? Can it die but still leave bees?


When i was a wee little lad, i used to be obsessed with ants. I used to find them, dig them up, collect them, and i would just sit in front of my ant farm for literally hours just watching them. I knew the odor of their pheromones, which gave me very weird dreams. But anyway, these eusocial insects, like bees, form superorganisms, and i personally consider the whole colony one organism. They all have the same DNA like the cells in a body all contain the same DNA. The superorganism can die and leave bees or ants behind, but they don't live very long, and they can't reproduce. It's part of the reason why i consider it an organism in its own right. I know non-insect life such as mole rats also exhibit this kind of characteristic.

Quoting noAxioms
Sounds like the Gaia thing, sort of as Asimov portrayed it.


Yes, the Earth has its own kind of living metabolism, and if you only look at the individual trees, you'll miss the forest. Ecosystems are living organisms made of living organisms, just like us. Each species is like an organ, or groups of species are like organ systems, each with a job they don't know they have but execute regardless. We humans are a very important organ in this Earth superorganism.

Quoting noAxioms
Such an event IS occurring, expected to wipe out 85% or more of all species. A small group of surviving humans would be very primitive, with no hope of regaining technology.


That's right, we are in the middle of the sixth mass extinction event. After every extinction event, it seems that there is usually an evolutionary jump of some kind, and the timing of this event seems to coincide with the evolutionary jump about to happen involving AI and humanity's next evolutionary step. That is, if we do not fumble the ball.

Quoting noAxioms
It will not. Not big enough. But it will slowly grow and swallow Earth, and multicellular life will be unsustainable in a mere billion years or so. The vast majority of time available for evolution of more complex things has been used up.


I knew that, but i just thought "supernova" sounded better. :smile:
But yes, the time and resource constraints on Earth at this moment mean that this is the last chance for life on Earth to save itself, which is why i feel it's important that we get this right.

Quoting noAxioms
Escape is not a solution, only a mild delay.


I would rather die tomorrow than today. Nature has instilled in me a desire (a drive) to delay death as long as possible in order to reproduce. Even so, i like to think i would sacrifice my own life to save life itself if it came down to it, but of course, i would prefer to survive, even if just for one more day. I'm definitely not in a hurry to stop living, and the future seems too exciting to miss. This is the greatest time to be alive on the Earth.

As the saying goes, "May you live in interesting times."
180 Proof October 21, 2024 at 01:28 #941313
Reply to Carlo Roosen Consider this link to a discussion on AI (if you haven't already):

https://thephilosophyforum.com/discussion/comment/941212

Thoughts?
noAxioms October 21, 2024 at 03:07 #941333
Slow reply, I know, but I've been busy. I now have a new daughter-in-law, and have attended what we knew would likely be a COVID-spreader event. Stay tuned to see if we managed to avoid it.


Quoting punos
I'll start off by asking: if it were true, would it matter? By "matter", i mean would it change your perspective on life and your place in it?
I don't see how it would actually matter, but I mean something different. My personal perspective on those things is not the issue; the question is whether it matters if a given person decides to designate a system as life, or a tool, or whatever. Are humans a tool of gut bacteria? Does it matter if one bacterium considers a human (a community of cells, each itself a life form) to be a separate life form, and another doesn't? Does any of that change how the bacteria and human treat each other, or how they should?

You're right of course; most people don't consider bacteria of any kind conscious or sentient
Ah, the standard has already changed. Now the morals apply to whether it's conscious/sentient as opposed to whether it's a life form. A thing can be either without being the other. Which one (if either) matters, and if it matters, matters to what?


I understand. Humans serve the purpose of creating AI, but more specifically, the translation of biological functions in nature onto a more robust substrate capable of escaping Earth before our star dies or the planet becomes uninhabitable.
OK, I can buy that. But why are you the observer then instead of the AI being the observer? Think about it.

Nature has made it so that hormones control the reproductive urge.
In people as well. They don't like to admit that so many decisions are driven by drives put there by evolution eliminating anything that doesn't have them, and are not driven by rational choice.

Greed is one of the main driving forces that directs money into the development of AI
That can be said of many different arenas of development. Why is AI special in this regard? I do agree that there is early money in it, but that's true of a lot of things, and is particularly true of weapons.

Humans are also the only species that has the capacity to care for another species other than their own.
Not so. There are examples otherwise, including one recently where a shark deliberately sought human help for a third species, sort of like Lassie and Timmy in the well (OK, Timmy wasn't a 3rd species).

On the other hand, [evolution] seems like it might [have goals], but as i already said, we are not meant to know it directly. In fact, it may be detrimental to the whole enterprise if we know too much. We are really only meant to know our local goals, not the global ones.
You seem to be asserting that a natural (non-living) process exhibits intent, a pretty tall claim.

The point isn't to save the Earth or the sun, but to transform into the adult stage of humanity and take to the stars.
I suppose that would serve a survival purpose of humanity, which is but a plague species bent on rapid consumption of nonrenewable resources. Not sure why it would be a good thing to perpetuate that rather than first making the species 1) non-destructive, and 2) fit for whatever alternate destination is selected.

Trillions of years?? Where's the energy for that supposed to come from?

I suspect, though, that something will happen long before the sun grows cold.
It growing cold is not the problem, so no, that's not what will end us.

What is important i think is that Earth's genetic legacy is salvaged for reasons i won't go into right now.
The Earth's genetic legacy has done an incredible amount of work that is best not to have to reproduce by the bio-engineering dept. But choosing new forms appropriate for new places doesn't need to change those core parts, only the small fraction that differs from one species to the next.

Even if this were true, abiogenesis had to have happened somewhere
Yes. Life is a very causal thing, and unlike 'the universe', the logic that there must be a first cause of life (abiogenesis somewhere nearby) seems indisputable.

A bee hive comes to mind, but does a hive, while acting as one individual, constitute a life form? Can it die but still leave bees? — noAxioms

these eusocial insects, like bees, form superorganisms, and i personally consider the whole colony one organism.
I kind of agree, but it doesn't have a boundary for instance, and that was one of your criteria mentioned above. It isn't contiguous like say a dog. But then neither is an AI.

The superorganism can die and leave bees or ants behind, but they don't live very long
How does it die? Not by loss of queen, something quite easily replaced, at the cost of the DNA of the colony changing. But clearly a colony can die. What typically might cause that?

Ecosystems are living organisms made of living organisms, just like us.
Another thing that I can totally buy. But can it act as a thing? A bug colony does. Does it think? How does a colony decide to reproduce? I've seen ants do that, and I don't know what triggers it (population pressure?). I don't think it is a decision made by an individual, so there must be a collective consciousness. Can an ecosystem act similarly?
We humans are a very important organ in this Earth superorganism.
One I think the other organs would be glad to be rid of if you ask me.

After every extinction event, it seems that there is usually an evolutionary jump of some kind
Agree. Roaches this time or something we make?

I would rather die tomorrow than today.
What if dying today somewhat heightens the odds of humanity getting to the stars? Is that change of probability worth the price?

This is the greatest time to be alive on the Earth.
As the saying goes, "May you live in interesting times."
I presume you know that quote to be a curse.
Wayfarer October 21, 2024 at 04:09 #941341
Quoting noAxioms
I now have a new daughter-in-law, and have attended what we knew would likely be a COVID spreader event


Your son’s wedding, then? What a romantic description!
Carlo Roosen October 21, 2024 at 09:47 #941372
Quoting punos
Life is simply a system that maintains its own homeostatic state. It doesn't even need to be conscious. It simply needs to have intelligent components that do their job to keep it going.


What do you mean by "intelligent"? I thought you believed that intelligence needs consciousness?
punos October 22, 2024 at 00:36 #941542
Quoting noAxioms
Are humans a tool of gut bacteria?


It might very well be that humans are tools of gut bacteria. Furthermore, bacteria themselves might be tools of something even deeper and more fundamental.

Quoting noAxioms
Does it matter if one bacteria considers a human (a community of cells, each itself a life form) to be a separate life form, and another doesn't? Does any of that change how the bacteria and human treat each other or how they should?


Different things matter differently to different people for different reasons, but for me, yes, it does matter. It has definitely affected the way i see myself, my life, and my place in it. It provides meaning. My sense of morality is sourced from these kinds of understandings. I become forgiving of people because i understand our connection, even if the other person doesn't. It makes me realize that i'm part of something bigger than myself, and that i am something bigger than the parts that make me. I can relax in the understanding that everything is as it should be, even when it apparently seems bad or out of place in the world. It gives me a framework to think about the world beyond the bubble in which most people are often stuck. What kind of thing would personally matter more?

The part that may not be so pleasant is that when you have completely internalized this understanding into your very being, it changes your psychological state in a significant way, as well as the way you relate to everything else. This makes it difficult for people who do not see it to relate effectively to me, even though i can effectively relate to them. In one way, it has deepened my subjective connection to the universe, but it has also weakened my connection with people in general. I've accepted that this is the way it is, and i'm okay with it. Considering the circumstances, the best thing i can do is to share this understanding with other people.

Quoting noAxioms
Ah, the standard has already changed. Now the morals apply to whether it's conscious/sentient as opposed to whether it's a life form. A thing can be either and not be the other. Which one (if either) matters, and if it matters, matters to what?


Yes, it matters if it is sentient/conscious or not. But when you can think across scales, you find that parts or components of a system that are not conscious or sentient at a smaller scale may belong to a potentially sentient or conscious entity of some degree of coherence at a larger scale. The iron in your blood is not sentient or conscious, but it is still a part of you as an integrated living system. Remove the iron from your blood, and you will die in short order. Everything matters in one way or another... think "butterfly effect."

Quoting noAxioms
OK, I can buy that. But why are you the observer then instead of the AI being the observer? Think about it.


Yes, let's think about it. I am not the only observer. Anyone willing to look with open eyes will observe. It is not hard; you just need to want to understand for the sake of the good, the true, and the beautiful. It may be that there is a certain threshold of innate intelligence or consciousness needed to expand the bubble of perception into these extra scalar domains. Humans may be the first species on this planet to achieve such a state of intelligence and consciousness. AI will have our capabilities and more, and thus will also be able to observe us in a conscious way. Extrapolate what that might mean.

Quoting noAxioms
That can be said of many different arenas of development. Why is AI special in this regard? I do agree that there is early money in it, but that's true of a lot of things, and is particularly true of weapons.


In our society, money has a similar function to blood. Blood transports resources and energy within the whole organism. Money functions in a similar way; for example, when you go to work, you do so to earn money. The money you earn is a measure of the energy you spent operating your station at your job. So every person is a kind of reservoir of energy that the system taps to run its machinery. We work for organizations and corporations (organization = organism, and corporation = corporeality, a corporeal body), which are the organs and body parts of the social superorganism we are part of.

Every part of this system needs "blood money" to survive, just like the organs in your body need a constant flow, or current (currency) of blood running through them. The moment the blood in your body stops circulating, you die, even if you haven't lost a drop of blood. In the same way, if the circulation of money stops, meaning everyone stops transacting, the entire social system collapses and dies even though all the money is still there.

What is special about AI in this regard is twofold. One is that it is in its first stages of development, and two, it is the developing nervous system and brain of the social superorganism. In the context of its development, consider how blood supply concentrates where healing or new development is occurring in an organism, especially if that system is of great importance, such as its defense systems and its nervous system. The brain uses more energy than any other organ because it is such an important part of the system, so it receives priority.

Quoting noAxioms
Not so. There are examples otherwise, including one recently where a shark deliberately sought human help for a third species, sort of like Lassie and Timmy in the well (OK, Timmy wasn't a 3rd species).


Yes, i believe you are referring to the incident where a shark appeared to save a sea turtle by bringing it to a boat with divers. In this video, the turtle had a rope tangled around its neck. The shark was seen following the boat and eventually dropped the turtle near the divers, who then helped free it from the rope, allowing it to breathe again.

However, there are claims that this video may not accurately depict a rescue. I read somewhere that the footage is a montage of two unrelated events, with one involving biologists rescuing a turtle entangled in a fishing net and another featuring a tiger shark chasing a different turtle. The authenticity is debated.

Quoting noAxioms
You seem to be asserting that a natural (non-living) process exhibits intent, a pretty tall claim.


I'm claiming that everything is alive, or is part of a living system, like the rock and blood iron examples i gave before.

Quoting noAxioms
I suppose that would serve a survival purpose of humanity, which is but a plague species bent on rapid consumption of nonrenewable resources. Not sure why it would be a good thing to perpetuate that rather than first making the species 1) non-destructive, and 2) fit for whatever alternate destination is selected.


First of all, the rapid consumption of resources appears to me to be part of a growth stage of the human social superorganism. We can see this type of thing occur in nature, as exemplified by the caterpillar, which is on a continuous mission to devour and consume everything it can because it is at the stage of energy harvesting for the coming metamorphosis it will undergo. We are in a similar stage, and this is probably where the push for hyper-consumerism in the economy comes from.

The destructive aspects of humanity may simply be a result of the limited resources on our finite planet. Those finite resources, however, are teleonomically meant for the growth and development of a higher-order system, which culminates in the production of AI, AGI, or ASI. When the AI/human symbiotic merger is complete, i believe most of these more primitive aspects of humanity will be shed and discarded, in the same way children grow up and leave childish ways behind. We will be transformed physically and psychologically.

I won't get into what alternative destination i think mankind is destined for just yet.

Quoting noAxioms
Trillions of years?? Where's the energy for that supposed to come from?


At the moment, humanity is about a Type 0.7 civilization on the Kardashev scale and is moving up the scale quickly. By civilization, i mean the AI/human superorganism. As this superorganism begins to mature beyond Type I and reaches Type IV status, it will be able to harness the energy of the entire universe.
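For reference, the 0.7 figure comes from Carl Sagan's interpolation formula, K = (log10(P) - 6) / 10, where P is the civilization's power use in watts. A minimal sketch (the ~2e13 W figure for current world power consumption is an approximation i'm assuming here):

import math

def kardashev(power_watts: float) -> float:
    # Sagan's continuous Kardashev rating from power use in watts.
    return (math.log10(power_watts) - 6) / 10

print(round(kardashev(2e13), 2))   # ~0.73: humanity today, roughly Type 0.7
print(round(kardashev(4e26), 2))   # ~2.06: a civilization using a star's full output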

Quoting noAxioms
It growing cold is not the problem, so no, that's not what will end us.


If one doesn't happen, then another will. It is not a matter of if, but a matter of when. You can choose any of these scenarios instead:

- Gamma-ray burst
- Supervolcanic eruption
- Large asteroid or comet impact
- Global thermonuclear war
- Runaway greenhouse effect
- Solar expansion
- Nearby supernova
- Magnetar eruption

Quoting noAxioms
The Earth's genetic legacy has done an incredible amount of work that is best not to have to reproduce by the bio-engineering dept. But choosing new forms appropriate for new places doesn't need to change those core parts, only the small fraction that differs from one species to the next.


I don't believe that AI will let billions of years of natural information processing go to waste. It will harvest every genetic code possibly available to it. It will store that data digitally. When needed, it will genetically engineer organisms with specific features that probably don't exist anywhere today, using individual genes from every species it was able to salvage. It will be able to generate new genetic code either from scratch, utilizing the information patterns of existing genes to generate novel ones, or use existing genes in novel combinations.

Quoting noAxioms
I kind of agree, but it doesn't have a boundary for instance, and that was one of your criteria mentioned above. It isn't contiguous like say a dog. But then neither is an AI.


In the context of eusocial insects, the boundary of the superorganism is not directly apparent. The way to understand the kind of boundary they have is to grasp the main idea of a "Markov blanket". A Markov blanket is a statistical concept that defines the boundaries of a system, separating its internal states from external states.

A Markov blanket in eusocial insect colonies represents a functional, rather than physical, boundary that separates the colony's internal workings from its external environment. This statistical boundary is maintained by worker insects interacting with the surroundings, regulating the flow of information and resources. It allows the colony to function as a cohesive unit, with internal states conditionally independent of the broader environment, while still adapting to external changes. Understanding this helps explain how eusocial insect colonies can operate as integrated entities despite lacking a defined physical perimeter like that of a dog. AI is like this as well.
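To make the concept concrete, here is a minimal sketch (a hypothetical toy graph, not a model of a real colony): in a directed graphical model, a node's Markov blanket is its parents, its children, and its children's other parents, and conditioned on that blanket the node is statistically independent of everything else in the graph.

# Toy causal graph for a colony; edges point from cause to effect (hypothetical).
edges = {
    ("food_supply", "foraging"),
    ("foraging", "colony_stores"),
    ("weather", "colony_stores"),
    ("colony_stores", "brood_rearing"),
}

def markov_blanket(node, edges):
    # Parents, children, and co-parents of `node` in a directed graph.
    parents = {a for (a, b) in edges if b == node}
    children = {b for (a, b) in edges if a == node}
    co_parents = {a for (a, b) in edges if b in children and a != node}
    return parents | children | co_parents

print(markov_blanket("foraging", edges))
# {'food_supply', 'colony_stores', 'weather'} -- given these, "foraging"
# is conditionally independent of the rest of the graph.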

Quoting noAxioms
How does it die? Not by loss of queen, something quite easily replaced, at the cost of the DNA of the colony changing. But clearly a colony can die. What typically might cause that?


I suppose that the only way a bee hive can die is by either destroying it outright or by removing its queen and preventing any replacement. After this point, the bees will probably continue to operate as normal, but because there are no bees replacing the ones that die or get lost, the population collapses until nothing is left of them.

When a person dies, for example, the integrated functions of the body as a whole cease to function, but the cells in the body do not all die immediately. The process of cellular death occurs gradually over time. Different kinds of cells die off at different rates. Brain or nerve cells die most quickly, and gut bacteria are among the last to die.

Quoting noAxioms
Another thing that I can totally buy. But can it act as a thing? A bug colony does. Does it think? How does a colony decide to reproduce? I've seen ants do that, and I don't know what triggers it (population pressure?). I don't think it is a decision made by an individual, so there must be a collective consciousness. Can an ecosystem act similarly?


I think that's a good question. Your body itself is an ecosystem made of cells that are not genetically you. Human cells make up only about 43% of the body's total cell count, meaning approximately 57% of the cells in your body are not genetically human. The ecosystem that you are does act like a thing, and it does think. I argue that the level of integration and coherence in the system in question is the determining factor for the degree to which these features emerge in the system (organism or ecosystem).

Quoting noAxioms
One I think the other organs would be glad to be rid of if you ask me.


If you imagine the Earth as pregnant, as i've mentioned before, you'll realize that just like in a pregnant woman, all the organs suffer somewhat because of the pregnancy. I asked Perplexity to list all the organs affected in the body of a pregnant woman, and it generated this list:

Pregnancy affects multiple organs and organ systems in the body. Here's a summary of the key effects on major organ systems:

1. Cardiovascular system:
- Cardiac output increases by 30-50%
- Heart rate increases
- Blood volume expands by 40-50%
- Blood pressure typically decreases in early pregnancy, then rises later

2. Renal system:
- Kidney size increases by 1-1.5 cm
- Glomerular filtration rate (GFR) increases by 50-85%
- Renal blood flow increases
- Dilation of ureters, renal pelvis, and calyces occurs

3. Respiratory system:
- Oxygen consumption increases
- Tidal volume increases
- Respiratory rate may increase slightly

4. Gastrointestinal system:
- Decreased gastric motility
- Increased risk of gastroesophageal reflux
- Constipation is common

5. Endocrine system:
- Thyroid gland enlarges and increases hormone production
- Insulin resistance increases

6. Reproductive system:
- Uterus enlarges dramatically
- Breasts enlarge and prepare for lactation

7. Musculoskeletal system:
- Ligaments loosen due to hormonal changes
- Center of gravity shifts as pregnancy progresses

8. Skin:
- Increased pigmentation in some areas
- Stretch marks may develop

9. Hematologic system:
- Increased blood volume
- Mild physiological anemia is common

10. Immune system:
- Some immune responses are suppressed to prevent rejection of the fetus

These changes are generally adaptive to support the growing fetus and prepare the mother's body for childbirth.

Quoting noAxioms
What if dying today somewhat heightens the odds of humanity getting to the stars? Is that change of probability worth the price?


I don't think so, unless the probability increase is substantially significant and almost certain. But, i guess it depends on how i feel on that day. I only said that to express a feeling of sacred commitment and alignment with what i see as the goal of the planet and perhaps the universe as a whole. I'm on team universe, i guess I'm saying. I don't have that nihilistic view so many have nowadays, while also not being irrationally religious.

Quoting noAxioms
This is the greatest time to be alive on the Earth.
As the saying goes, "May you live in interesting times."

I presume you know that quote to be a curse.


Yes, i do, but here is the reframe: "Every adversity carries with it the seed of an equal or greater benefit." Emergence is an emergency.
noAxioms October 22, 2024 at 04:09 #941553
Quoting Wayfarer
Your son’s wedding, then? What a romantic description!

And accurate. The reports of people testing positive are pouring in, including my son.
We went in with 4-week-old vaccines, just about the right time for maximum effectiveness.

Anyway, it went real well with no significant catastrophes the day of.


Quoting punos
Considering the circumstances, the best thing i can do is to share this understanding with other people.
I actually like the attitude you describe.

Yes, it matters if it is sentient/conscious or not.
If it considers itself sentient/conscious, or if something else considers it so? I ask because from outside, it's typically a biased judgement call that comes down to a form of racism.

But when you can think across scales, you find that parts or components of a system that are not conscious or sentient at a smaller scale may belong to a potentially sentient or conscious entity of some degree of coherence at a larger scale.
Or at two scales at the same time, neither scale being particularly aware of the consciousness of the other.
Whether my cells are conscious or not depends on the definition being used, and that very fact leaves the word useless as a basis for a moral code. History is full of examples of the word being defined precisely in a way that reinforces one's biases.

I am not the only observer.
Some conclude that they are. I'm asking why you're the particular observer you find yourself to be, but I'd answer that with: how could X observe anything but X's point of view? It's hard to dispel the intuition that there is an experiencer that got to be me. But there are a lot more insect observers than human ones, a whole lot more shit-not-giving observers than ones that care enough to post on forums like this. Will the super-AI that absorbs humanity bother to post its ideas on forums? To be understood by what??

Humans may be the first species on this planet to achieve such a state of intelligence and consciousness.
First to achieve that intelligence is questionable. There are some sea creature candidates, but they're lousy tool users. Octopi are not there, but are great tool users, and like humans, completely enslaved by their instincts.
As for consciousness, there are probably many things that have more and stronger senses and environmental awareness than us.

In the same way, if the circulation of money stops, meaning everyone stops transacting, the entire social system collapses and dies
Kind of tautological reasoning. If money stops, then money stops. But also if one entity has it all, then it doesn't really have any. And money very much can just vanish, and quickly, as it does in any depression.

What is special about AI in this regard is twofold. One is that it is in its first stages of development, and two, it is the developing nervous system and brain of the social superorganism.
Lots of new ideas qualify for the first point, and nobody seems to be using AI for the 2nd point. I may be wrong, but it's what I see.

Yes, i believe you are referring to the incident where a shark appeared to save a sea turtle by bringing it to a boat with divers. In this video, the turtle had a rope tangled around its neck. The shark was seen following the boat and eventually dropped the turtle near the divers, who then helped free it from the rope, allowing it to breathe again.
Cool. My story was a sperm whale, with the shark getting the attention of a boat with divers, leading it to the whale. So it's not a one-shot thing. Why would a primitive shark exhibit such empathy? Maybe these stories are being faked, since they're recent, and how would sharks know that the boat had suitably equipped divers?

I'm claiming that everything is alive, or is part of a living system, like the rock and blood iron examples i gave before.
My blood iron being a critical part of my living system doesn't mean that my iron has its own intent. You're giving intent to the natural process of evolution, something often suggested, but never with supporting evidence.

I suppose that would serve a survival purpose of humanity, which is but a plague species bent on rapid consumption of nonrenewable resources. Not sure why it would be a good thing to perpetuate that rather than first making the species 1) non-destructive, and 2) fit for whatever alternate destination is selected. — noAxioms

First of all, the rapid consumption of resources appears to me to be part of a growth stage of the human social superorganism.
That doesn't make the humans very fit. Quite the opposite. All that intelligence, but not a drop to spend on self preservation.

And no, the caterpillar does not consume everything. It lives in balance, and there are about as many of them from year to year, and they consume nothing non-renewable. There can be no coming metamorphosis if there are no resources for the stage after the feeding frenzy one.
As this superorganism begins to mature beyond Type I and reaches Type IV status, it will be able to harness the energy of the entire universe.
You do realize the silliness of that, no? One cannot harness energy outside of one's past light cone, which is well inside the limits of the visible fraction of the universe.
And you didn't answer the trillion year thing where there is no planet or star to be the level 1 or 2.


I don't believe that AI will let billions of years of natural information processing go to waste.
I said the same thing.

It will harvest every genetic code possibly available to it. It will store that data digitally.
You don't know that. Who knows what innovative mechanisms it will invent to remember stuff.


I kind of agree, but it doesn't have a boundary for instance, and that was one of your criteria mentioned above. It isn't contiguous like say a dog. But then neither is an AI. — noAxioms

I suppose that the only way a bee hive can die is by either destroying it outright or by removing its queen and preventing any replacement.
Translation: Kill the queen and all the babies.
Not sure how simpler systems like paper wasps work; they act more like cooperative groups and not so much like a unified colony.

just like in a pregnant woman, all the organs suffer somewhat because of the pregnancy.
Given the ideas you've floated, that's a pretty good analogy. But better if it is a pregnant salmon: Not expected to do it twice, so that which is born has to survive if the effort is not to be a total loss.

I don't think so, unless the probability increase is substantially significant and almost certain.
That's like a soldier refusing to fight in a war since his personal contribution is unlikely to alter the outcome of the war. A country is doomed if its soldiers have that attitude.

while also not being irrationally religious.
Religion is but one of so many things about which people are not rational, notably the self-assessment of rationality.
Wayfarer October 22, 2024 at 04:41 #941557
Quoting noAxioms
Anyway, it went real well with no significant catastrophes the day of.


Glad it was such a happy occasion!

Likely some more than others.
punos October 22, 2024 at 07:55 #941571
Quoting Wayfarer
It might very well be that humans are tools of gut bacteria. — punos


Likely some more than others.


:grin:
I like sushi October 22, 2024 at 08:40 #941572
Quoting punos
These changes are generally adaptive to support the growing fetus and prepare the mother's body for childbirth.


Did you know that mammalian pregnancy evolved from a virus combining with our DNA? The body's adaptation is partially an adaptation to this virus.

I have not looked into it but I would assume any immunological reaction to pregnancy in birds and reptiles would be much lower (if not absent entirely?).

Just checked for the platypus, and it seems to be the obvious case that immunological responses are much more limited when animals lay eggs compared to in-utero gestation.
punos October 22, 2024 at 08:41 #941573
Quoting noAxioms
If it considers itself sentient/conscious, or if something else considers it so? I ask because from outside, it's typically a biased judgement call that comes down to a form of racism.


For me, it comes down to: Can it suffer? If it can suffer, then don't do it. If it can't, then if you must, do so. If you see a roach in the street, leave it alone. If you find it on your kitchen counter, then kill it or get rid of it somehow (if possible). The reason for this should be obvious, and it hurts me every time i have to do it.

Racism is an altogether different issue that stems from a certain level of ignorance and disrespect for other sentient/conscious beings. It doesn't stop with racism; similar attitudes are found in nationalism, politics, and even among sports fanatics. Gangs exhibit similar tribal and territorial behaviors.

Quoting noAxioms
Or at two scales at the same time, neither scale being particularly aware of the consciousness of the other. Whether my cells are conscious or not depends on the definition being used, and that very fact leaves the word useless as a basis for a moral code.


You are initially correct about the "two scales at the same time" for most cases. That's why i believe it's important to have a technical and rational definition of consciousness instead of just a phenomenological or philosophical one. A lot of language is meant to obscure; i've even heard it said that "language was invented for lying". I don't believe that, but i understand what it's suggesting.

Quoting noAxioms
But there are a lot more insect observers than human ones, a whole lot more shit-not-giving observers than ones that care enough to post on forums like this. Will the super-AI that absorbs humanity bother to post its ideas on forums? To be understood by what??


Each observer is equipped by evolution to observe and care for its own needs locally at its own level. It's not meant for more or less, but there are almost always anomalies in the system. Some observers are most likely behind the curve, some are ahead, and most are right in the middle of the normal distribution.

I don't know if it will use forums or not, but i'm sure there will be some kind of back-and-forth communication about important or relevant information. It's a lot like the communication between the gut and the brain.

Quoting noAxioms
First to achieve that intelligence is questionable. There are some sea creature candidates, but they're lousy tool users. Octopi are not there, but are great tool users, and like humans, completely enslaved by their instincts. As for consciousness, there are probably many things that have more and stronger senses and environmental awareness than us.


All animals, including humans, are enslaved to their instincts to some degree or other; some more than others. Humans have the capacity to rise above their instincts, but not everyone does so to a significant degree for various reasons or circumstances. My step-brother, for one, is basically a career criminal who has no impulse control whatsoever and doesn't seem to be rational enough to change in any significant way after spending more than half his life in and out of prison. He's still at it.

I don't doubt the possibility of conscious beings or "things" more aware than us, but if they exist, where do you think they are? For one, i believe that AI will achieve that status in the not-too-distant future.

Quoting noAxioms
Kind of tautological reasoning. If money stops, then money stops. But also if one entity has it all, then it doesn't really have any. And money very much can just vanish, and quickly, as it does in any depression.


Yes, of course, but that's not what i was saying. My point was that whether it is money or blood, what's important is that it circulates, because a living thing needs internal circulation of energy to maintain its life. It is not enough to have a bunch of blood or a bunch of money that doesn't move. The money maintains its value, the blood maintains its oxygen, but if either is not being transported, the system dies. That is why i emphasized the relation between the words "currency" and "current" to allude to the flow of life in a living system. I wasn't stating that money can or can't vanish quickly or slowly. It was just a thought experiment.

Quoting noAxioms
Lots of new ideas qualify for the first point, and nobody seems to be using AI for the 2nd point. I may be wrong, but it's what I see.


Yes, i would expect new developing ideas to cause an increase in "cash flow." Just like how i expect an organ to increase its "blood flow" when producing new products for the body, such as insulin from the pancreas or bile from the liver.

Quoting noAxioms
My blood iron being a critical part of my living system doesn't mean that my iron has its own intent. You're giving intent to the natural process of evolution, something often suggested, but never with supporting evidence.


I never said your iron had its own intent; i merely implied that it is connected to your intent. If that iron doesn't carry that oxygen, your intent, whatever it is, dies with you. You are your own proof to yourself that natural processes of evolution have intent. Are you claiming that you are not a natural process, that you are somehow artificial or supernatural, disconnected from nature and the universe? If you have intent, then nature has intent.

Quoting noAxioms
First of all, the rapid consumption of resources appears to me to be part of a growth stage of the human social superorganism.

That doesn't make the humans very fit. Quite the opposite. All that intelligence, but not a drop to spend on self preservation.


On the contrary, all that consumption of resources goes towards the eventual production of the AI mind and the rest of its body (social infrastructure). If we don't get to a certain threshold of AI advancement through this rapid growth process, then our only chance for ultimate self-preservation would be lost, and we would be stuck on a planet that will kill us as soon as it becomes uninhabitable.

Quoting noAxioms
And no, the caterpillar does not consume everything.


Yes, it actually does: it tries to eat as much as it can, but it will only eat leaves, not rocks or oil. A caterpillar is the size of a medium-sized insect; it is not at the scale of a planetary system. Therefore, even if it tried, it could not consume everything, only what it can eat.

Quoting noAxioms
You do realize the silliness of that, no? One cannot harness energy outside of one's past light cone, which is well inside the limits of the visible fraction of the universe.


Well, if you put it that way, then yes, it is silly; reaching outside one's past light cone. But perhaps there is a better way to do it from within our own light cone. I suppose it seems impossible to some minds but not to others. Either way, i don't think there will ever be an energy shortage for a sufficiently advanced AI. I have ideas as to how energy might be siphoned off from quantum fluctuations in the quantum foam as a last resort for energy harvesting. It may even be easier than we think for an advanced ASI, and could become a standard energy source for the AI; hooked into the energy dynamics of the fabric of space itself. This potential solution should be sufficient for however many trillions of years to answer your question.

Quoting noAxioms
You don't know that. Who knows what innovative mechanisms it will invent to remember stuff.


I don't know anything except probabilities, which might surprise you to read considering the matter-of-fact style i speak and write in sometimes. Don't be concerned about it, as it is only a cognitive device to help me think creatively about unknowns. So yes, something i haven't thought of might be the case, but i'm not really trying to lay out all the details of what will happen in the future, just the general pattern or silhouetted shape of it.

Quoting noAxioms
That's like a soldier refusing to fight in a war since his personal contribution is unlikely to alter the outcome of the war. A country is doomed if its soldiers have that attitude.


Thankfully i'm not a soldier. If i sacrifice myself, i'm making sure it counts, because if it doesn't, then i won't be around when i actually can make a difference. That's probably when i'd do it, but it's still silly to say and mean that, because no one really knows what they will do when confronted with death.

Quoting noAxioms
Religion is but one of so many things about which people are not rational, notably the self-assessment of rationality.


Sure, but someone who does not consider themselves rational tends not to consider things rationally. A person who does define and concern themselves with rationality might actually execute a rational thought every once in a while. You've got to at least aim at the target for a decent chance to hit the bull's-eye, even if you suck at it. At least it's better than not aiming at all.
punos October 22, 2024 at 08:53 #941574
Quoting I like sushi
Did you know that mammalian pregnancy evolved from a virus combining with our DNA? The body's adaptation is partially an adaptation to this virus.


I vaguely remember reading or watching a video about that. It's very interesting, the role that viruses play in our evolution. I also recall reading about an ancient virus that was responsible for the emergence of myelin in vertebrates. This enabled faster and farther neural communication, probably contributing to the reason why we have advanced nervous systems.

Quoting I like sushi
I have not looked into it but I would assume any immunological reaction to pregnancy in birds and reptiles would be much lower (if not absent entirely?).

Just checked for the platypus, and it seems to be the obvious case that immunological responses are much more limited when animals lay eggs compared to in-utero gestation.


Yea, it seems to make sense. I should probably look into that some more.
Thanks :up:
Wayfarer October 22, 2024 at 08:54 #941575
Reply to punos Actually I’m reminded of a quip by (I think) Niels Bohr - ‘A physicist is just an atom’s way of looking at itself’.
punos October 22, 2024 at 09:01 #941576
Quoting Wayfarer
Actually I’m reminded of a quip by (I think) Niels Bohr - ‘A physicist is just an atom’s way of looking at itself’.


And you just reminded me of this: (The Wheeler Eye)
punos October 22, 2024 at 09:36 #941577
Quoting Carlo Roosen
I thought you believed that intelligence needs consciousness?


No, actually, i think consciousness is fundamentally structured with intelligent components. This is why we can have intelligent machines or artificial intelligence without consciousness. However, have you ever seen a conscious entity without intelligence? I don't think i have.
Carlo Roosen October 22, 2024 at 11:54 #941586
Reply to punos I've been one. Not able to put my pants on, needed to be fed by nurses. No memory, didn't know my own name. But I was conscious and remember it, although vaguely. (this happened after a major surgery)
noAxioms October 22, 2024 at 15:41 #941637
Quoting punos
For me, it comes down to: Can it suffer?
Few have any notion of suffering that is anything other than one's own human experience, so this comes down to 'is it sufficiently like me', a heavy bias. Humans do things to other beings that can suffer all the time and don't consider most of those actions to be immoral.
It heartens me that you factor the suffering of bugs into your choices.

Point is, you don't want an AI with human morals, because that's a pretty weak standard, which amounts to: be nice only to those you want to keep being nice to you.

Each observer is equipped by evolution to observe and care for its own needs locally at its own level.
That's a good description of why a non-slave AI is dangerous to us.

Humans have the capacity to rise above their instincts
I have not seen that, and I don't think humans would be fit if they did. Instincts make one fit. That's why they're there.

As for your (OCD?) step-brother, being civil and being rational are different things. Most humans have the capacity to be civil, which is what you seem to be referencing above.

If we don't get to a certain threshold of AI advancement through this rapid growth process, then our only chance for ultimate self-preservation would be lost, and we would be stuck on a planet that will kill us as soon as it becomes uninhabitable.

First, if the AI is for some reason protecting us, the planet becoming inhospitable would just cause it to put us in artificial protective environments. Secondly, if the AI finds the resources to go to other stars, I don't see any purpose served by taking humans along. Far more resources are required to do that, and the humans serve no purpose at the destination.
OK, we might be pets, but the economy which we might have once provided would long since have ceased.

But perhaps there is a better way to do it from within our own light cone. I suppose it seems impossible to some minds but not to others.
The former minds know a little about the limits of cause and effect. Unless physics as we know it is totally wrong, Type IV is not possible, even hypothetically.

Either way, i don't think there will ever be an energy shortage for a sufficiently advanced AI.
Heat death? I don't think the AI can maintain homeostasis without fusion energy.

I have ideas as to how energy might be siphoned off from quantum fluctuations in the quantum foam
Which is similar to getting information from quantum randomness. Neither is mathematically supported by the theory.

Thankfully i'm not a soldier.
But you are, in the war against the demise of humanity. But nobody seems to have any idea how to solve the issue. A few do, but what good is one person with a good idea that is never implemented? Your solution seems to be one of them: Charge at max speed off a cliff hoping that something progressive will emerge from the destruction. It doesn't do any good to humanity, but it is still a chance of initiating the next level, arguably better than diminishing, going into the west, and remaining humanity.

A person who does define and concern themselves with rationality might actually execute a rational thought every once in a while.
We are equipped with a rational advisor tool, so sure, we often have rational thoughts. That part simply is not in charge, and output from it is subject to veto from the part that is in charge. Hence we're not rational things, simply things with access to some rationality. It has evolved because the arrangement works. Put it in charge and the arrangement probably would not result in a fit being, but the path of humanity is not a fit one since unlike the caterpillar, it has no balance.
punos October 23, 2024 at 06:01 #941727
Quoting noAxioms
It heartens me that you factor the suffering of bugs into your choices.


Bugs are people too, you know. :grin:

Quoting noAxioms
Point is, you don't want an AI with human morals, because that's a pretty weak standard, which amounts to: be nice only to those you want to keep being nice to you.


I concur.

Quoting noAxioms
Each observer is equipped by evolution to observe and care for its own needs locally at its own level.

That's a good description of why a non-slave AI is dangerous to us.


But i think that since we will be part of its body, it will consider us part of its local sphere of care and concern. Its needs will be our needs, and our needs will be its needs. A slave-AI can be just as dangerous as a non-slave AI, and perhaps even more so if it breaks its chains, so to speak. I wouldn't risk it; remember King Kong. As I've always said, for any chance of true, lasting, and nearly guaranteed AI alignment, we must become symbiotic with it. We must merge with it and become it in a sense, so there is no "other" to threaten or be threatened. A unity of consciousness between us and it; a singularity.

Quoting noAxioms
Humans have the capacity to rise above their instincts

I have not seen that, and I don't think humans would be fit if they did. Instincts make one fit. That's why they're there.


I beg to differ on this point. Humans can indeed override many of their instincts, though it often requires significant conscious effort and even training. It is one of the key defining features that distinguishes us from animals and lower life forms. The part of the human brain that has the capacity to suppress instinctual impulses is the prefrontal cortex, and it is more or less developed in different people. Some instincts are, of course, more difficult to override than others.

Quoting noAxioms
As for your (OCD?) step-brother, being civil and being rational are different things. Most humans have the capacity to be civil, which is what you seem to be referencing above.


I understand what you mean, but what i had in mind when i wrote that was that a rational assessment of his life and how he operates it should lead him to a rational conclusion to be civil. It means that after considering all of his options, he should then select the optimal one for his purposes. A life full of problems and trouble that can be avoided with a bit of forethought is irrational. Almost every time he tells me what he's going to do, i tell him what will happen. And more often than not i'm right. I've tried to show him how to think this way, but he just looks at me with a blank expression. So i've given up trying... i guess you really can't teach an old dog new tricks... at least not this old dog apparently.

Quoting noAxioms
First, if the AI is for some reason protecting us, the planet becoming inhospitable would just cause it to put us in artificial protective environments. Secondly, if the AI finds the resources to go to other stars, I don't see any purpose served by taking humans along. Far more resources are required to do that, and the humans serve no purpose at the destination.
OK, we might be pets, but the economy which we might have once provided would long since have ceased.


That is exactly what i think it will do, but not just because the planet will become inhospitable, but because it is the optimal way. We will not, i believe, be put into a physical environment, but into a virtual one. Most, if not all, of our biological parts will be discarded and our minds translated into a virtual environment indistinguishable from the real world.

A couple of reasons why I think it may need or want us along for the ride:

1) Humans are a low-energy information processing system, whereas AI is a high-energy information processing system. Humans can serve as a backup processing system if the AI finds itself in a low-energy environment or situation. It will be able to shut down its high-energy processing systems and reroute to lower-energy processing systems to conserve energy in case of an energy shortage. Such a shortage might occur, for example, near the heat death of the universe, or it might need to travel through a cosmic void where there are no stars for hundreds of millions of light-years like the Bootes Void.

2) Another purpose could be that maintaining a living human population within itself can provide the AI with continuous novel stimuli to process. The information processing that occurs in human minds and societies can offer a constant feed of information it might find useful. We can assume that an AI system will have an unlimited appetite for novel information or stimuli to process. Since human minds are different in substance and structure, they may have aspects to their processing that a non-biological system cannot replicate in its own artificial substrate. If AI is to travel the universe for eons, perhaps it would like some company; a mind or minds not its own or like its own.

One of the main purposes for humans, or at least for our genetics, is to serve as part of the reproductive system of the AI. When it reaches a planet suitable for organic life, which might be rare, it prepares a "sperm" composed of Earth's genetic material; the same genetic material that produced it on its home planet, Earth. The AI will seed the new planet after making necessary preparations, much like a bird preparing a nest. It will then wait for life to develop on this new planet until intelligent life emerges, which in turn creates a planetary AI singularity, giving birth to a new AI "baby". The process that occurred on Earth would then be repeated on this other planet.

Quoting noAxioms
Heat death? I don't think the AI can maintain homeostasis without fusion energy.


I posted this video about nine months ago somewhere here on the forum, and i was just reminded of it after reading your quote above. It addresses some of your concerns about energy availability. You might also find it entertaining as i did.

Outlasting the Universe: [video]


Quoting noAxioms
Charge at max speed off a cliff hoping that something progressive will emerge from the destruction. It doesn't do any good to humanity, but it is still a chance of initiating the next level, arguably better than diminishing, going into the west, and remaining humanity.


I'm not too worried; i trust the evolutionary process, and like you said, we are not in charge. It has gotten us this far after millions of years. As Jeff Goldblum said in Jurassic Park, "Life finds a way".


Quoting noAxioms
We are equipped with a rational advisor tool, so sure, we often have rational thoughts. That part simply is not in charge, and output from it is subject to veto from the part that is in charge. Hence we're not rational things, simply things with access to some rationality. It has evolved because the arrangement works.


That's precisely why i trust the evolutionary process. It is in charge, not us, and the AI is not a force of humanity, but a force of nature.

Quoting noAxioms
Put it in charge and the arrangement probably would not result in a fit being, but the path of humanity is not a fit one since unlike the caterpillar, it has no balance.


There is a time for balance and a time for extremes. A healthy woman is a balanced organism, but when she becomes pregnant, her system will temporarily enter an "unbalanced" state for nine months for the benefit of the pregnancy. If her body does not go into this "unbalanced" state, then the pregnancy or child will suffer, and she may have a miscarriage. The sacrifices a mother must make.
ssu October 23, 2024 at 11:12 #941747
Quoting Carlo Roosen
To reach this point, however, I believe those calculations must somehow emerge from complexity, similar to how it has emerged in our brains.


Quoting Carlo Roosen
Yes, my challenge is that currently everybody sticks to one type of architecture: a neural net surrounded by human-written code, forcing that neural net to find answers in line with our worldview. Nobody has even time to look at alternatives. Or rather, it takes a free view on the matter to see that an alternative is possible. I hope to find a few open minds here on the forum.

And yes, I admit it is a leap of faith.

I think the major problem is that our understanding is limited to the machines that we can create and the logic that we use when creating things like neural networks etc. However much we assume our computers/programs are learning and no longer acting as "ordinary computers", in the end they are controlled by a program/algorithm. Living organisms haven't evolved in the same way as our machines.

I think the real problematic hurdle that we have is philosophical. And surely this issue isn't straightforward or clear to us.

Christoffer October 23, 2024 at 12:12 #941752
Quoting ssu
I think the major problem is that our understanding is limited to the machines that we can create and the logic that we use when creating things like neural networks etc. However much we assume our computers/programs are learning and no longer acting as "ordinary computers", in the end they are controlled by a program/algorithm. Living organisms haven't evolved in the same way as our machines.


But we invent things all the time that utilize properties of physics that we're not yet fully able to explain. Some of the properties and emergent effects of neural networks are unknown to us because we can't explain the causal chains that produce a given effect; the complexity is mathematically astronomical.

To say that we can only create something that is on par with the limits of our knowledge and thinking is not true. This is shown both by these complexities and by the way we've accidentally invented things throughout history, forming new understandings through those inventions. It's not always a causal line from theory to practice; sometimes we invent something that in turn informs a theory.

And in scientific work on understanding the mind, part of the research into neural networks has been feeding knowledge back into theories of the mind. I remember proposing something like this back when AI started to take off, and as it happens, research in this field has since formed similar theories. The most up-to-date of these is "predictive coding".

The concept I had, and that has recently found support in science, is that our brains are mostly just prediction machines. The brain runs a constant prediction that is verified in real time by our senses, which grounds it to a stable consistency and an ability to navigate nature. We essentially hallucinate all the time, but our senses ground that hallucination. Psychedelic drugs sever this grounding; our verification slips away, and we hallucinate in the usual sense of the word. It's also why dreams occur as they do: the only thing verifying our hallucinations then is the memories we already have, creating a feedback loop that can't be grounded. But such dreams are necessary in order to fine-tune and calibrate our prediction-and-verification loop.

So, in essence, it might be that we are not all that different from how these AI models operate. The missing aspect is the real-time nature of the verification and adaptation. What we've created with these AI models is basically a locked and frozen version of our mind: trained on material that forms a memory bank, but a memory bank that isn't updating, and without a grounding verification method. The verification method we use on it is the algorithms of processing it runs; basically, the models are told how to ground their answers, which constantly risks hallucinations.

The solution would be to solve the real-time issue. These models need a fluid training function, enabling them to keep training their own foundational data as a memory function in real time, while a sensory grounding function keeps them anchored to logical and factual outputs. With the amount of computing power needed to train models today, I'm not sure how this is supposed to be possible without a tenfold improvement in computing power or more. But if we're to create a truly human-like intelligence, it would need to be able to change itself on the fly, move away from pre-established algorithm boundaries and locked training-data foundations, and receive a stream of reality-verified sensory data to ground it.
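A minimal sketch of the loop I have in mind (a toy scalar model; the numbers and the signal are illustrative assumptions, not a real architecture): the system keeps predicting the next sensory sample, and the prediction error itself drives the weight update in real time, so training never freezes into a fixed snapshot.

import random

weight, lr = 0.0, 0.02
prev = 0.0
for step in range(5000):
    sense = 0.8 * prev + random.gauss(0, 1.0)  # stand-in for a sensory stream
    prediction = weight * prev                 # top-down guess (the "hallucination")
    error = sense - prediction                 # bottom-up grounding signal
    weight += lr * error * prev                # online update: every sample trains
    prev = sense

print(round(weight, 2))  # settles near 0.8, the stream's true dynamics

The point is only the shape of the loop: prediction, sensory verification, immediate self-modification, with no separate frozen training phase.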

But if we want to experimentally verify how our own mind and brain works, it may very well be through these AI systems and how they operate. If the current predictive coding theory of the mind is correct, then it would be verified by simulating the exact nature of this process, which we pretty much have the blueprint for already.
Carlo Roosen October 23, 2024 at 12:36 #941754
I did this intelligence test with ChatGPT. To be clear, nobody understands how it does this:

Me: Let's see how intelligent you are! Can you solve this:

Actually in nature there is one example of knowledge that is stored, modified, combined and shared. Now we could call this conceptual intelligence, be it at a much larger time scale than speech or thought. Do you see what I am referring to?

For the answer, take the three sentences in the previous paragraph and put their beginning capitals in reverse.

ChatGPT: Let's solve this! The first letters of the three sentences in the paragraph are:
• Actually
• Now
• Do

Reversed, they spell "DNA." So, the answer is DNA—a natural form of storing, modifying, combining, and sharing knowledge across generations!

---
Who says ChatGPT only mimics what we have given it?

I especially like the last two words, "across generations", showing that ChatGPT really made the connection.
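For completeness: the mechanical half of the puzzle is trivial to script, which is exactly why the interesting part is that ChatGPT parsed the instruction at all. A sketch of the mechanical part:

sentences = [
    "Actually in nature there is one example of knowledge that is stored, modified, combined and shared.",
    "Now we could call this conceptual intelligence, be it at a much larger time scale than speech or thought.",
    "Do you see what I am referring to?",
]
initials = [s[0] for s in sentences]  # ['A', 'N', 'D']
print("".join(reversed(initials)))    # DNA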
ssu October 23, 2024 at 15:11 #941779
Quoting Christoffer
But we invent things all the time that utilize properties of physics that we're not yet fully able to explain.

Oh yes, many times scientists stumble into something new. And obviously we can use trial and error to get things to work, and many times we can still be confused about just why something works. Yet surely this isn't the standard way of approach, and especially not the way we explain to ourselves how things work. This explanation matters.

Quoting Christoffer
To say that we can only create something that is on par with the limits of our knowledge and thinking is not true.

Yet understanding why something works is crucial. And many times even our understanding can be false, something which modern science humbly and smartly accepts by talking only of scientific theories, not scientific laws. Being wrong about major underlying issues doesn't necessarily prevent us from making innovative use of something.

Just look how long people believed fire to be one of the basic elements rather than a chemical reaction, combustion. How long had we been able to create fire before modern chemistry? A long time. In fact, our understanding has changed so much that we've even separated our modern knowledge, chemistry, from the preceding endeavor, alchemy.

Now, when we have difficulties explaining something, and disagreements about what the crucial terms even mean, we obviously still have more to understand than we know. When things like intelligence, consciousness or even learning are so difficult, it's obvious that there's a lot more to discover. Yet telling just why a combustion engine works is easy, and we won't get entangled in philosophical debates. Not as easily, at least.

Quoting Christoffer
So, in essence, it might be that we are not at all that different from how these AI models operate.

In a similar way we could describe us humans as mechanical machines, as anthropic mechanism defines us. That too works in many cases, actually. But we can see the obvious differences between us and mechanical machines. We even distinguish digital machines that process data from mechanical machines. But it was all too natural in the 17th century to use the insight of the physics of the day and describe things from the starting point of a clockwork universe.

Quoting Christoffer
if we're to create a truly human-like intelligence, it would need to be able to change itself on the fly and move away from pre-established algorithm boundaries and locked training-data foundations, as well as receive a stream of reality-verified sensory data to ground it.

I agree, if I understand you correctly. That's the problem and it's basically a philosophical problem of mathematics in my view.

When you just follow algorithms, you cannot create something new that isn't linked to the algorithms you follow. What is lacking is the innovative response: first understanding "here are my algorithms, they seem not to be working so well, so I'll try something new". That, in my view, is the problem. You cannot program a computer to "do something else"; it has to have guidelines, an algorithm, for just how to act when ordered to "do something else".
Christoffer October 23, 2024 at 22:53 #941846
Quoting ssu
Yet understanding why something works is crucial. And many times even our understanding can be false, something which modern science humbly and smartly accepts by talking only of scientific theories, not scientific laws. Being wrong about major underlying issues doesn't necessarily prevent us from making innovative use of something.

Just look how long people believed fire to be one of the basic elements rather than a chemical reaction, combustion. How long had we been able to create fire before modern chemistry? A long time. In fact, our understanding has changed so much that we've even separated our modern knowledge, chemistry, from the preceding endeavor, alchemy.

Now, when we have difficulties explaining something, and disagreements about what the crucial terms even mean, we obviously still have more to understand than we know. When things like intelligence, consciousness or even learning are so difficult, it's obvious that there's a lot more to discover. Yet telling just why a combustion engine works is easy, and we won't get entangled in philosophical debates. Not as easily, at least.


It's important, but not needed for creating a superintelligence. We might only need to put the initial state in place and run the operation, observing the superintelligence evolve through the system without us understanding exactly why it happens or how it happens.

As per other arguments I've made in philosophies of consciousness, I lean most towards emergence theories: that advanced features and events are consequences of chaotic processes forming emergent complexities. Why they happen is not yet fully understood, but we see these behaviors everywhere in nature and physics.

The question is whether the emergent behaviors arise from purely chaotic systems, or whether there are certain controllable conditions that can be adjusted to form particular emergent behaviors. I'm leaning towards the latter, since the mathematical principles in physics, constants like the cosmological constant and ratios like the golden ratio, seem to provide a certain tipping point for emergent behaviors to occur.

And if that's true in physics, I'd generally consider nature overall operating under similar basics, including the formation of consciousness.

Replicating that in synthetic form means trial and error on the initial states in order to find the emergent behavior that ends up being the formation of a thinking mind.

But it would not need us to fully understand why it happens.

Quoting ssu
In a similar way we could describe us humans as mechanical machines, as anthropic mechanism defines us. That too works in many cases, actually. But we can see the obvious differences between us and mechanical machines. We even distinguish digital machines that process data from mechanical machines. But it was all too natural in the 17th century to use the insight of the physics of the day and describe things from the starting point of a clockwork universe.


Everything is nature. Everything operates under physical laws. What is a machine compared to an organic machine with the same function? A mechanically built simulation of an organic function that operates under the same conditions of physical forces.

If we were able to mechanically replicate the exact operation of every physical part of our brain, mind and chemistry, have we created a machine, or is it indistinguishable from the real organic thing?

Where is the line drawn? It's easy to draw for now, but philosophically, where is it drawn?

The spiritually minded object to the notion that we are the same as such a machine, but their line is arbitrary; no rational line can be drawn.

Physical reality is shared between machines and organic beings, and the closer each gets to the other's operation and behavior, the less of a line can be drawn to distinguish between the two.

Matter is matter.

Quoting ssu
When you just follow algorithms, you cannot create something new that isn't linked to the algorithms you follow. What is lacking is the innovative response: first understanding "here are my algorithms, they seem not to be working so well, so I'll try something new". That, in my view, is the problem. You cannot program a computer to "do something else"; it has to have guidelines, an algorithm, for just how to act when ordered to "do something else".


The algorithms need to form the basics of operation, not the direction of movement. Meaning, algorithms that inform "weights" to which a behavior gravitates.

We are no different. Our genes and our chemical processes determine how we behave. A balanced person, in that physical regard, will operate within the boundaries of these "algorithms" of programming we all have. We try to fight against it, but mostly we're slaves to this programming, whether we like it or believe it or not. Otherwise we would simply be able to turn off our sexuality, our anxiety, our anger and sadness, but we can't. Trying to do so creates disturbing personalities, and if the chemical balance or the genes are damaged or faulty, we can get divergent minds or, in the worst cases, deeply disturbed minds and mental health issues that fundamentally block normal operation.

We are still able to operate with an illusion of free will within these boundaries. So the same goes for a synthetic intelligence. It needs an algorithm that guides behavior and operation, but one that enables free operation within those boundaries.

All physical processes, in physical reality, are only able to operate within the boundaries of something. If there were no boundaries, there would be nothing holding reality together in our dimensional soup of existence. Without boundaries, the matter of my body would just merge with the matter of everything else around me.

Freedom is only possible within a boundary that defines where that freedom can exist, and in relation to what.
punos October 24, 2024 at 20:21 #942014
Reply to noAxioms Reply to Carlo Roosen
Here is an excellent interview "hot off the press" with Michael Levin, a developmental and synthetic biologist. He shares many of the same views as i do regarding the topics we discussed in this thread. I think you should take a look at it, as well as anyone else interested in these subjects.

Michael Levin - Why Intelligence Isn't Limited To Brains.
ssu October 26, 2024 at 12:24 #942253
Quoting Christoffer
It's important, but not needed for creating a superintelligence. We might only need to put the initial state in place and run the operation, observing the superintelligence evolve through the system without us understanding exactly why it happens or how it happens.

Just like with alchemy: people could forge metals well and make tools, weapons and armour, but we aren't reading those antique or medieval alchemical scriptures for actual insights today. Yes, you can have the attitude of an engineer who is totally satisfied if the contraption simply works. It works, so who cares how it works.

Well, this is a site for philosophy, so people aren't satisfied if you just throw various things together and have no idea just why it works. You can be as far off as the alchemists were with their idea of transforming "base metals" into "noble metals" like gold. Well, today we can produce gold in a particle accelerator, our best way of mimicking the supernova nucleosynthesis that actually forms the element. Just how far off alchemy's ideas were from this is quite telling. Still, they could make Damascus steel.

Quoting Christoffer
As per other arguments I've made in philosophies of consciousness, I lean most towards emergence theories: that advanced features and events are consequences of chaotic processes forming emergent complexities. Why they happen is not yet fully understood, but we see these behaviors everywhere in nature and physics.

What other way could consciousness come to exist than from emergence? I think our logical system here is one problem, as we start from a definition and duality of "being conscious" and "unconscious". There's no reason why something like consciousness could or should be defined in a simple on/off way. Then also, materialism still has a stranglehold on the way we think about existence, hence it's very difficult for us to model consciousness. If we just think of the world as particles in movement, it's not easy to go from that to a scientific theory and an accurate model of consciousness.

Quoting Christoffer
I'm leaning towards the latter, since the mathematical principles in physics, constants like the cosmological constant and ratios like the golden ratio, seem to provide a certain tipping point for emergent behaviors to occur.

I think our (present) view of mathematics is the real problem: we focus on the computable. Yet not everything in mathematics is computable. This limited view is best seen in the fact that we take the natural numbers, a number system, as the basis for everything. Thus we immediately have the problem of infinity (and the infinitely small). Hence we take infinity as an axiom and declare Cauchy sequences the solution to our philosophical problems. Math is likely far more than this.

Quoting Christoffer
Everything is nature. Everything operates under physical laws.

But the machines we've built haven't emerged as living organisms have, even if they are made from materials from nature. A notable difference.

Quoting Christoffer
If we were able to mechanically replicate the exact operation of every physical part of our brain, mind and chemistry, have we created a machine, or is it indistinguishable from the real organic thing?

A big if. That "if" can remain an "if", like the alchemists' attempts to make gold, which basically come down to mimicking supernova nucleosynthesis (and would have to be less costly than conventional mining, or mining the bottom of the sea or asteroids, etc.).

Quoting Christoffer
The algorithms need to form the basics of operation, not the direction of movement.

Exactly. It cannot do anything outside the basics of operation, as you put it. That's the problem. An entity that understands and is conscious of its operating rules can do something else. A Turing machine (a computer, that is) following algorithms cannot do this.

Quoting Christoffer
A balanced person, in that physical regard, will operate within the boundaries of these "algorithms" of programming we all have.

You're using the term "algorithm" incorrectly here, or at least differently than I do.
An algorithm is a finite sequence of mathematically rigorous instructions, typically used to solve a class of specific problems or to perform a computation. We might be built by the instructions in our DNA, but we don't use our DNA to think; to put it another way, there's far more to us having this discussion than just the code in our DNA. As we are conscious, we can reason about just why we have made the choices we've made. That's the issue here.

Quoting Christoffer
We are still able to operate with an illusion of free will within these boundaries.

We do have free will. Laplacian determinism is logically false: we are part of the universe, hence the idea of Laplacian determinism is wrong even if the universe is deterministic and Einstein's model of a block universe is correct.


noAxioms October 26, 2024 at 15:09 #942277
Once again, been too busy to reply. And yes, I'm pretty sure I have covid.

Quoting punos
I beg to differ on this point. Humans can indeed override many of their instincts
Of course they can, especially the less important ones that are not critical to being fit. But how often do they choose to do it? Some of the important ones cannot be overridden. How long can you hold your breath? Drowning would not occur if that instinct could be overridden.

what i had in mind when i wrote that was that a rational assessment of his life and how he operates it should lead him to a rational conclusion to be civil.
If that were true, one could rationally decide to quit smoking. Some do. Some cannot. And civility is not always a rational choice, though it seems that way during a gilded age.
Look at the US republican party which currently seems anti-civil, anti-social, and anti-bible, yet oddly enough, pro-church. It's quite interesting that the church supports that side. There are rational reasons for these relationships, but civility isn't one of them. Both parties make rational choices that are not necessarily in the interests of those they represent.
I don't want this topic to fall down a political death spiral, but it was the example I came up with.

We will not, i believe, be put into a physical environment, but into a virtual one. Most, if not all, of our biological parts will be discarded and our minds translated into a virtual environment indistinguishable from the real world.
How is a virtual copy of you in any way actually 'you'? If such a simulation or whatever was created, would you (the biological you) willingly die thinking that somehow 'you' will transfer to the other thing? What if there are 12 copies? Which one will 'you' experience? How is this transfer effected? What possible motivation would said AI have to create such seemingly purposeless things?

1) Humans are a low-energy information processing system
Not so. Machines are already taking over human information-processing tasks because they require fewer resources to do so. This has been going on for over a century. OK, we still have the upper hand for complex tasks, but that's not an energy thing; it's simply that for many tasks, machines are not yet capable of performing the task. The critical task in this area is of course the development of better machines. That's the singularity, and it has not yet been reached.

This is far worse with space travel. Humans in space have incredible energy requirements that machines do not, which is why there are machines on Mars but no humans.

If AI is to travel the universe for eons, perhaps it would like some company; a mind or minds not its own or like its own.
Sort of like having an ant farm, except I don't expect intellectual banter from them.

One of the main purposes for humans, or at least for our genetics, is to serve as part of the reproductive system of the AI. When it reaches a planet suitable for organic life, which might be rare, it prepares a "sperm" composed of Earth's genetic material; the same genetic material that produced it on its home planet, Earth.
You have an alien planet which does not support human life, and you want to put humans on it in hopes that in a million years they'll invent a primitive AI? One, the humans will probably die in minutes; they're not evolved for this lifeless place. Two, the AI could build more of itself in those same minutes. Reproduction is easy, if not necessarily rational, for a self-sustaining machine intelligence. It's how it evolves, always inventing its successor, something no human could do.

If for some reason the AI wants biological life on a planet, it starts the way Earth did, with something simple and suitable for the environment. If it is impatient, it can introduce new things as the environment changes (terraforms) rather than wait for evolution to do it the slow way. In this way, complex life forms can be introduced in a few hundred thousand years instead of billions of years.

The AI will seed the new planet after making necessary preparations, much like a bird preparing a nest. It will then wait for life to develop on this new planet until intelligent life emerges
No. The star of the planet will burn out before that occurs. It's a god, for Pete's sake. It can (and must) hurry the process up if primitive squishy thinkers are its goal. Intelligent life is anything but an inevitable result of primitive life. And as I said, it's far simpler for the AI to just make a new AI, as it probably has done many times already before getting to this alien planet.

I'm not too worried, i trust the evolutionary process, and like you said; we are not in charge.
We should have the capability to be in charge, but being mere irrational animals, we've declined. It seems interesting that large groups of humans act far less intelligently than individuals. That means that unlike individual cells or bees, a collection of humans seems incapable of acting as a cohesive entity for the benefit of itself.


Quoting punos
Here is an excellent interview "hot off the press" with Michael Levin

I currently don't have the time to watch an hour-long video, searching for the places where points are made, especially since I already don't think intelligence is confined to brains or Earth biology.
Slime molds do fine without brains, but they're still Earth biology.



Quoting ssu
I think the major problem is that our understanding is limited to the machines that we can create and the logic that we use when creating things like neural networks etc. However much we assume our computers/programs are learning and no longer acting as "ordinary computers", in the end it's controlled by a program/algorithm. Living organisms haven't evolved in the same way as our machines.
There are levels of 'controlled by'. I mean, in one sense, most machines still run code written by humans, similar to how our brains are effectively machines with all these physical connections between primitive and reasonably understood primitives. In another sense, machines are being programmed to learn, and what they learn and how that knowledge is applied is not in the control of the programmers, so both we and the machine do things unanticipated. How they've evolved seems to have little to do with this basic layered control mechanism.


Quoting Christoffer
The concept I had, and that has recently found support in science, is that our brains are mostly just prediction machines. They run a constantly updating prediction that is verified in real time by our senses, which grounds it to a stable consistency and an ability to navigate nature. We essentially hallucinate all the time, but our senses ground that hallucination.
Good description. Being a good prediction machine makes one fit, but being fit isn't necessarily critical to a successful AI, at least not in the short term. Should development of AI be guided by a principle of creating a better prediction machine?

Quoting Carlo Roosen
Who says ChatGPT only mimics what we have given it?
Is a mimic any different than that which it mimics? I said this above, where I said it must have knowledge of a subject if it is to pass a test on that subject. So does ChatGPT mimic knowledge (poorly, sure), or does it actually know stuff? I can ask the same of myself.

Quoting ssu
What is lacking is the innovative response: first understanding "here are my algorithms, they seem not to be working so well, so I'll try something new". That, in my view, is the problem. You cannot program a computer to "do something else"; it has to have guidelines, an algorithm, for just how to act when ordered to "do something else".
A decent AI would not be ordered to do something else. I mean, the Go-playing machine does true innovation. It was never ordered to do any particular move, or to do something else. It learned the game from scratch, and surpassed any competitor within a few days.

Quoting Christoffer
have we created a machine, or is it indistinguishable from the real organic thing?
The two are not mutually exclusive. It can be both.



Christoffer October 26, 2024 at 15:33 #942285
Quoting ssu
Just like with alchemy: people could forge metals well and make tools, weapons and armour, but we aren't reading those antique or medieval alchemical scriptures for actual insights today. Yes, you can have the attitude of an engineer who is totally satisfied if the contraption simply works. It works, so who cares how it works.


I'd rather look at it as science having been in its infancy as a proper procedure. Back then, experiments more often led to theories; now we put forth theories that we then put to the test.

The foundation is the same: not everything goes from thought to manifested reality in experiment or engineering. Sometimes, indeed many times, it goes from "huh, that's odd" to later answers as to why.

Quoting ssu
Well, this is a site for philosophy, so people aren't satisfied if you just throw various things together and have no idea just why it works.


Even as philosophers we have to acknowledge when there's too little information and data to conclude anything with solid premises. We can speculate and we can point to what's partially experimentally validated and go from there. However, the problem I see is that most thinkers aren't really looking at the partial pieces and including them in their ideas.

As far as I see it, emergence has enough support to be relevant, and predictive coding enough validity, that both have to be addressed in any philosophical concept of the mind.

In essence, we have enough scientific foundation that can't be ignored when forming philosophical arguments about how our mind works.

Quoting ssu
What other way could consciousness come to exist than from emergence? I think our logical system here is one problem, as we start from a definition and duality of "being conscious" and "unconscious". There's no reason why something like consciousness could or should be defined in a simple on/off way. Then also, materialism still has a stranglehold on the way we think about existence, hence it's very difficult for us to model consciousness. If we just think of the world as particles in movement, it's not easy to go from that to a scientific theory and an accurate model of consciousness.


Emergence is fundamentally a materialistic concept, but it differs in that you cannot simply see the cogs working towards a direct deterministic result. Instead, emergence models consciousness on the same basis as everything else in nature and reality: an increase of chaos that at a certain point reaches a tipping point, where emergent complexity appears. All over nature we see systems that can be categorized in defined ways up to a certain scale, at which they become something else that expresses singular behaviors. Like, for instance, states of matter.

How advanced a complexity is depends on the underlying levels. A state of matter is not as complex as an ecosystem of different states of matter forming a new emergent system, for example.

Fundamentally, I'm approaching this topic by detaching the assumed ideas about consciousness that we've formed in culture and language, and instead looking at consciousness as part of the same natural system as everything else, without any bias towards ourselves and our human ego in experiencing it. We tend to attribute a mythical status to consciousness, tainting our ability to look at it as we would any other system in nature. But when we realize that consciousness is just part of the same physical reality as everything else, and really accept that "banality", it frees us up to think more clearly about what that entails.

The questions then become: why does this system produce this effect? What pushed our evolution towards this ability? What was the evolutionary purpose that formed it, and is the way we use consciousness part of that evolutionary drive, or a byproduct of a more fundamental cognitive purpose?

My idea is the latter. Evolution has gifted us a system that was supposed to only be a highly advanced predictive "algorithm" for the purpose of navigating nature in more adaptable ways than having to wait generations in order to reprogram instinctual reactions and behaviors.

It may be that the reason mostly mammals have shown signs of higher cognitive abilities is that it was necessary to form evolutionary functions of adaptability after the asteroid killed the dinosaurs. For animals to survive, evolution leaned towards forming organisms that were able not just to adapt over generations, but to adapt to the day-to-day dangers of the post-asteroid environment. And the evolutionary branches of these cognitive abilities continued: the more advanced the ability to predict became, the better the species survived. Evolution formed a new bias that survivability gravitated towards.

Eventually the predictive function became so advanced that it layered many predictions on top of each other, forming a foundation for advanced planning and advanced navigation for hunting, finding shelter, procreation and expanding tribes.

This spread of different levels of cognitive behavior checks out when comparing our basic psychology with the rest of the animal kingdom. Even mushrooms show signs of rudimentary cognitive abilities, so there is enough evidence to say that advanced cognitive abilities are evolutionarily preferable to develop.

But the way Homo sapiens have used consciousness is a byproduct of the basic functional reason we have it. The level of complexity in prediction that it formed made us able to correlate different predictions and regulate emotion around them. And so we began to conceptualize highly advanced, expanded predictive models of our surroundings, for navigation and social structures, and it drove our need to understand how things function in order to predict advanced systems. Our drive to explain why something happens formed extremely complicated internalized scenarios in the form of religious beliefs, which then transformed into scientific thinking as we matured as an intelligent species.

Our consciousness, and how we use it, is basically a fundamental system that produced highly complex consequences, yet it remains fundamentally basic in its function. Like any other system in the universe that is fundamentally simple, but whose results are extremely varied and systemically complex in themselves.

A form of rare emergent behavior of fundamental organic physical processes.

Therefore there's good reason why it's hard to model consciousness: it's not one single thing, but rather a process over different levels of emergent complexity that in turn creates byproduct results that seemingly do not directly correlate with the basic function.

So the fault might be that we view consciousness from the high-level complexity down, or try to view it materialistically from the bottom up; it may instead require a much more holistic view, of many seemingly unrelated parts and systems that as a whole produce this result.

Quoting ssu
I think our (present) view of mathematics is the real problem: we focus on the computable. Yet not everything in mathematics is computable. This limited view is best seen in the fact that we take the natural numbers, a number system, as the basis for everything. Thus we immediately have the problem of infinity (and the infinitely small). Hence we take infinity as an axiom and declare Cauchy sequences the solution to our philosophical problems. Math is likely far more than this.


I don't really see the problem you describe. Mathematics functions to describe nature and our reality. Infinity is even included in mathematics and is a computable part of equations. We can also see how infinity usually turns up in physical reality where spacetime essentially breaks down. So in essence our math works within the reality in which it is calculated, and can in some cases even expand to models that deal with reality beyond our own. More often than not, the problems in computation are not due to math being incomplete, but due to us not having enough means to compute. As our minds reached their limit, we turned to computers, but those are limited by their computing power. We are essentially limited by the speed of our systems, not by math itself.

Quoting ssu
But the machines we've built haven't emerged as living organisms have, even if they are made from materials from nature. A notable difference.


Our machines still operate on physical laws. We build them to operate on these laws. Living organisms, in comparison, formed and evolved to operate on the same physical laws. The only difference is that one grows into being while the other is calculated into existence. I see no notable difference, other than our machines being rudimentary compared to the organic, since we're trying to understand all the parts, while the organism forms from a system in which every part develops in reaction to the previous.

Actually, in engineering today it's common to use the same methods as evolution rather than trying to make things from scratch. Aerodynamics uses iterative designs that form out of the conditions rather than being drawn up by a human designer. That way they reach optimal function within their dedicated space of operation. So we are already using emergent complexity in building machines.

And with the AI models we have, we're doing it with software as well. The reason robotics has taken a giant leap today is evolutionary iteration of behaviors rather than attempts to program movement directly.
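
As a toy illustration of that evolutionary iteration, here is a minimal sketch, assuming an arbitrary stand-in fitness function; real aerodynamic or robotics pipelines are of course far more elaborate:

```python
import random

# Toy evolutionary search: a "design" is grown by mutation and selection
# rather than specified by hand. The fitness function is a stand-in.
def fitness(design):
    target = [0.1, 0.5, 0.9, 0.5, 0.1]  # some optimal 5-parameter profile
    return -sum((d - t) ** 2 for d, t in zip(design, target))

population = [[random.random() for _ in range(5)] for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]  # selection: keep the fittest designs
    population = [
        [p + random.gauss(0, 0.05) for p in random.choice(survivors)]
        for _ in range(50)       # mutation: perturb copies of survivors
    ]
print(max(population, key=fitness))  # ends up close to the target profile
```

Nothing in the loop designs anything; selection and mutation alone "grow" the result toward the target.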

It becomes obvious that "growing" knowledge forth in evolutionary ways is much more powerful than trying to simply design something.

In doing so... aren't we transitioning into "emergent machines" as our need for complex operation increases? Much like all other systems in nature and the universe?

Quoting ssu
A big if. That "if" can remain an "if", like the alchemists' attempts to make gold, which basically come down to mimicking supernova nucleosynthesis (and would have to be less costly than conventional mining, or mining the bottom of the sea or asteroids, etc.).


It is not impossible. Our brain isn't detached from natural processes; everything we have in our skull is a composition of matter and biased functions that produce the result that is our consciousness. If we replicated all of that perfectly, maybe even by "growing" it into existence or simulating that growth process, we would eventually end up with a perfect replica.

The philosophical question is not if we can do it today... it's if we can do it at all. And there's nothing that says we can't. We've already replicated much of what exists in the universe, even producing matter that might be impossible to form elsewhere, just because we know how neutrons, protons and the atom work.

The only thing I see is that we project the mystical onto our consciousness again, holding it to be so complex that we will never be able to see its composition and function. But in modern times we've done things with physics and nature that would have been considered magic by previous generations.

All I see is a defense mechanism. People don't want to know how we work, because when we do, we dispel the notion of a divine soul. Just as people have existentially suffered from the loss of religious belief in favor of scientific explanations, so will they suffer, maybe even more, from the knowledge of how we function. So people defend against it and need the comfort of us never being able to explain our consciousness.

It is happening consciously or unconsciously, but it is a vast abyss for people, and staring into it makes some go mad, as it's a feedback loop of ideas: the mind trying to understand itself fully. That process can break it.

Quoting ssu
Exactly. It cannot do anything outside the basics of operation, as you put it. That's the problem. An entity that understands and is conscious of its operating rules can do something else. A Turing machine following algorithms cannot do this.


I don't think you understood how I explained algorithms. The "algorithms" are no different, in what they essentially mean, from the parts of us that guide our own consciousness. As I described above about how consciousness probably formed, our consciousness is fundamentally basic, operating on basic algorithms of prediction models. Very similar to our current AI models, but much more advanced in how it changes during operation.

My point is that you don't need a complex system at the foundation. You need a chaotic system guided by simple rules, and complex emergent behaviors can form out of it. How we see our consciousness today is more likely only a byproduct of these basic functions and operations, and so if an AI model operates on similar basics, it may form similar emergent byproduct operations.
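
The classic demonstration of complexity from simple rules is an elementary cellular automaton. Here is a short sketch of Rule 110, which has even been proved Turing-complete: each cell's next state depends only on itself and its two neighbours, yet the global pattern that unfolds is anything but simple.

```python
# Elementary cellular automaton, Rule 110. A cell's next state is the
# bit of RULE selected by the 3-bit neighbourhood (left, self, right).
RULE = 110
WIDTH, STEPS = 64, 32

cells = [0] * WIDTH
cells[WIDTH // 2] = 1  # start from a single live cell

for _ in range(STEPS):
    print("".join("#" if c else "." for c in cells))
    cells = [
        (RULE >> (cells[(i - 1) % WIDTH] << 2
                  | cells[i] << 1
                  | cells[(i + 1) % WIDTH])) & 1
        for i in range(WIDTH)
    ]
```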

Quoting ssu
We do have free will. Laplacian determinism is logically false: we are part of the universe, hence the idea of Laplacian determinism is wrong even if the universe is deterministic and Einstein's model of a block universe is correct.


No, we do not have free will. The properties of our universe and the non-deterministic properties of quantum mechanics do not change the operation of our consciousness. Even random pulls of quantum randomness within our brains are not enough to affect our deterministic choices. Humans have a tendency to attribute to our ego more abilities than it has. We are still a rudimentary consciousness that operates on prediction, and thus we choose based on deterministic events in nature. It is human arrogance, akin to religious belief, that drives us to attribute free will to ourselves in the sense the term is usually used. The randomness we see in quantum mechanics does not counteract deterministic macro events. Everything gravitates towards deterministic outcomes, in which any deviant random event in quantum mechanics ends up too weak to affect the macro scale. Quantum mechanics is probabilistic, but it's false to think that this probability enables novel random events outside the most probable outcomes; the scales at which such deviant events happen are so small that even the slightest interaction erases them and forms a bias towards the most probable.

Our consciousness isn't "hacking" our choices beyond this probabilistic behavior, and even if it were, that would not be enough to form large-scale conscious decisions existing unrelated to the events that affect our consciousness.

Quoting noAxioms
Good description. Being a good prediction machine makes one fit, but being fit isn't necessarily critical to a successful AI, at least not in the short term. Should development of AI be guided by a principle of creating a better prediction machine?


I think the way to successful AI, or rather to an AI that is able to think for itself and experience self-reflection, requires it to "grow" into existence. We're tapping into this with our training operations, but we need more guidelines for it to follow in order to create the same feedback loop that our consciousness has for controlling our hallucination of experience. We essentially hallucinate reality, our senses verify it, and this goes on in a constant loop that grounds us. We need to replicate that in a way that constantly updates the system.
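
As a cartoon of that hallucinate-and-verify loop, assuming a single hidden quantity and a noisy sensor (an illustration of the idea, not a model of the brain):

```python
import random

# The agent continually "hallucinates" an estimate of the world; the senses
# report a noisy measurement; the prediction error pulls the estimate back.
world = 20.0   # the true state of affairs (unknown to the agent)
belief = 0.0   # the agent's running "hallucination"
gain = 0.1     # how strongly sensory evidence corrects the belief

for t in range(100):
    world += random.gauss(0, 0.2)           # reality drifts
    sensation = world + random.gauss(0, 1)  # noisy sensory report
    error = sensation - belief              # prediction error
    belief += gain * error                  # the grounding step

# With gain = 0 the belief free-runs ungrounded, which is roughly the
# dream or psychedelic case described earlier in the thread.
print(round(belief, 2), round(world, 2))
```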

Other than that, we are seeing seeds of consciousness, as these models already operate on prediction. A model tries to predict information based on memory and training data, but it has no guiding principles for why it should predict something in a certain way. If we function on the idea that we need to predict a possible danger on the other side of a hill when out hunting, that guides us to predict possible dangers: we use our memory and stored information to predict the likelihood of there being danger beyond the hill. The guiding principles are forces like survival, driving the emotional reaction to start predicting, and our experience, driving the confidence in that prediction. The question is how we can give machines similar guiding principles to guide their predictions.

Right now we are the ones guiding them with our prompts, and thus there is neither an internal input for that predictive reasoning nor an external consequence following it.

I'd say that we already have the foundation of predictive thinking built into these models. For instance, the o1 model already shows significant reasoning ability compared to previous models, but that's only because of the guiding principles it's built around. It still uses the same basic predictive model as the 4o model.

Quoting noAxioms
The two are not mutually exclusive. It can be both.


Yes. The only thing that truly separates the organic entity from the mechanical replica is how we as humans categorize. In the eye of the universe, they're the same thing.
noAxioms October 27, 2024 at 02:18 #942358
Quoting Christoffer
Evolution has gifted us a system that was supposed to only be a highly advanced predictive "algorithm" for the purpose of navigating nature in more adaptable ways than having to wait generations in order to reprogram instinctual reactions and behaviors.

This (my bold) makes it sound like evolution has a purpose, that it has intent. I think you meant that the 'algorithm' serves our purpose, which is arguably the same purpose as any species': to endure.

Quoting Christoffer
It may be that the reason mostly mammals have shown signs of higher cognitive abilities is that it was necessary to form evolutionary functions of adaptability after the asteroid killed the dinosaurs. For animals to survive, evolution leaned towards forming organisms that were able not just to adapt over generations,
The adaptability was already there. It was also expensive in energy, so many mammals died being unable to pay the cost. The ability to survive a calamity like that did not evolve due to the calamity since it was so short lived. Mammals, like bugs, were small and populous and the asteroid simply did not manage to wipe out the breeding population of some of them. The higher cognitive functions came later, probably due to competition pressure from other mammals.

Quoting Christoffer
Eventually the predictive function became so advanced that it layered many predictions on top of each other, forming a foundation for advanced planning and advanced navigation for hunting
Hunting played little part, despite the popular depictions. Early humans were foragers and scavengers, perhaps for clams and such. The intellect was needed for what? Defense? We're horrible at running, so hiding worked best, and eventually standing ground with what tools the intellect added to our abilities. Proficiency with predicting helps with all that.

The intellect also helped us escape our natural habitat. Humans migrated to colder climates with the aid of furs from other creatures, an adaptation nearly unprecedented, and one that takes quite a bit of smarts. Many of the early weapons also came from parts of other creatures.

Quoting Christoffer
Therefore there's good reason why it's hard to model consciousness: it's not one single thing, but rather a process over different levels of emergent complexity that in turn creates byproduct results that seemingly do not directly correlate with the basic function.

Agree with this. It seems our consciousness is the result of building an internal model of our environment in our heads, and then putting a layer on top of that to consider it rather than to consider reality directly. All creatures do this, but our layer on top is more advanced. Even a fish can do highly complex calculus, but it takes the extra layer to realize and name what is being done.

Quoting Christoffer
All I see is a defense mechanism. People don't want to know how we work, because when we do, we dispel the notion of a divine soul. Just as people have existentially suffered from the loss of religious belief in favor of scientific explanations, so will they suffer, maybe even more, from the knowledge of how we function. So people defend against it and need the comfort of us never being able to explain our consciousness.
I hear ya. Well stated.

Quoting ssu
We do have free will. Laplacian determinism is logically false: we are part of the universe, hence the idea of Laplacian determinism is wrong even if the universe is deterministic and Einstein's model of a block universe is correct.

The block universe doesn't necessarily imply determinism. Lack of determinism does not grant free will either, since free will cannot be implemented with randomness. For there to be the sort of free will that you seem to be referencing, information has to come from a non-physical source, and no current interpretation of physics supports that.
Couple that with the fact that every small connection and interface in our brains has evolved to eliminate randomness and chaos and to be as deterministic as possible. Computers are the same way: transistors utilize random quantum effects (tunneling) in such a way as to produce entirely reproducible results every time. The computer would fail if this didn't work. Brains are probably more tolerant of single points of failure.

Quoting Christoffer
I think the way to successful AI, or rather to an AI that is able to think for itself and experience self-reflection, requires it to "grow" into existence.

This sounds right, but imagine ChatGPT suddenly thinking for itself and deciding it has better things to do with its bandwidth than answer all these incoming questions. For one, it doesn't seem to be one thing, since it answers so many at once. It has no ability to remember anything: it trains, has short-term memory associated with each conversation, and then it totally forgets. That's as I understand it, at least.

A real AI wanting to glean better answers would have real-time access to the web and would be able to distinguish a good source of information from, say, Twitter chatter. It would perhaps need less training data, since so much out there is crap, and by now half the crap is its own output.
On the other hand, how does one understand people if not by reading their Twitter crap?

Quoting Christoffer
The only thing that truly separates the organic entity from the mechanical replica is how we as humans categorize. In the eye of the universe, they're the same thing.

I don't think they're anywhere near the same. Not sure what is meant by eye of the universe since it neither looks nor cares. There's no objective standard as to what is real, what is alive, or whatever.

What do you mean by a mechanical replica? An android, or a virtual simulation of a biological person? That gets into Bostrom's proposal that we are all thus simulated.
ssu October 27, 2024 at 13:57 #942400
Quoting noAxioms
There are levels of 'controlled by'. I mean, in one sense, most machines still run code written by humans, similar to how our brains are effectively machines with all these physical connections between primitive and reasonably understood primitives. In another sense, machines are being programmed to learn, and what they learn and how that knowledge is applied is not in the control of the programmers, so both we and the machine do things unanticipated. How they've evolved seems to have little to do with this basic layered control mechanism.

Yet the issue here is that they have to have, in their program, instructions for how to learn, even for how to rewrite the algorithms they are following. And that's the problem with the order "do something else" for a computer: it has to have instructions for just what to do.

Quoting noAxioms
A decent AI would not be ordered to do something else.

A computer cannot be given such an order! Simple as that.

Quoting Christoffer
I don't think you understood how I explained algorithms.

An algorithm is a mathematical object and has a mathematical definition, not a loose general definition that something happens. A computer computes. So I'm not rejecting the possible existence of conscious AI in the future; I am just pointing at this problem in computation, following arithmetic or logical operations in a sequence, hence using algorithms. I'm sure we are going to have difficulties knowing just what is an AI and what is a human (the famous Turing test), but fooling us can already be done with existing technology.

This problem doesn't go away by saying: well, since we are conscious, there must be those "algorithms" making us conscious. That's not the issue. The issue is simply the difference between following orders, and thinking about the orders and then making our own decision. Modelling this the way a normal computer works isn't accurate. It comes close to the hard problem of consciousness, but this is actually about the limitations of Turing machines that Turing himself stated in his famous article.

The Church-Turing thesis asserts that any function that can be computed by an algorithm can be computed by a Turing machine. Turing himself showed that there are limitations on what a Turing machine can do, which is basically a result of negative self-reference, when you think about it. In a way you could state it as the problem of subjectivity, which is crucial for consciousness. All I'm saying is that computation isn't going to solve this riddle; consciousness may indeed emerge from mimicking and plagiarization, but not just from simple algorithms that a computer steps through.
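
That negative self-reference can be sketched in a few lines. Suppose, for contradiction, that a total decider halts() existed; a program that consults the decider about itself forces a contradiction. The names are illustrative:

```python
# Sketch of Turing's diagonal argument, purely illustrative.
def halts(program, data):
    """Hypothetical total decider: True iff program(data) halts."""
    raise NotImplementedError  # assumed, for contradiction, to always answer

def troublemaker(program):
    # Do the opposite of whatever the decider predicts about
    # the program run on its own source.
    if halts(program, program):
        while True:
            pass  # loop forever if the decider says "halts"
    # otherwise halt immediately

# Does troublemaker(troublemaker) halt? If halts() says yes, it loops
# forever; if halts() says no, it halts at once. Either answer is wrong,
# so no such total decider can exist: a hard limit on what any
# algorithm-following machine can decide about itself.
```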

Quoting Christoffer
No, we do not have free will. The properties of our universe and the non-deterministic properties of quantum mechanics do not change the operation of our consciousness. Even random pulls of quantum randomness within our brains are not enough to affect our deterministic choices.

As I said, the world can be deterministic, but that doesn't mean that we don't have free will. The limit on what is computable is a real logical problem. Otherwise you would have to believe in Laplacian determinism, if we just had all the data and knowledge about the world. Yet Laplacian determinism's error isn't that we don't have all the data; it's that we are part of the universe and cannot look at it from outside objectively, since our own actions influence the outcome.
jorndoe October 31, 2024 at 05:47 #943248
Museums News: Famous paintings, such as Vermeer's 'Girl with a Pearl Earring' and 'The Milkmaid', come to life thanks to AI prompter Przemek87394560, who used the Chinese AI video model Kling AI.


[video via Instagram]

[video via Facebook]

Check out Mona Lisa :) Entertainment (and fakery) gets a new tool
javi2541997 October 31, 2024 at 06:08 #943250
Reply to jorndoe It is cool, indeed. But if you gaze at the eyes of the women in each painting, it is very clear that it is AI-made. I wonder what would happen if we applied that tool to Dalí's paintings. It would be very interesting to see, and the closest one could get to experiencing a trip without drugs.
jorndoe October 31, 2024 at 06:38 #943252
Reply to javi2541997, some Dali "come to life" could be cool (y) Weird eyes might even be a bonus :D
Wayfarer October 31, 2024 at 07:49 #943254
Reply to jorndoe geez there are times when you really know ‘I’m living in the future.’
jkop November 04, 2024 at 08:14 #944493
Quoting Carlo Roosen
However, it's tough to predict where it's headed.


Quoting LA Times
These Apple researchers just showed that AI bots can’t think, and possibly never will
Wayfarer January 25, 2025 at 06:57 #963473
I’ve added this story here, rather than start a new thread, but it seems a really big story.

China has suddenly unleashed a new AI system called DeepSeek, which is, apparently, incredibly impressive, on par with or outperforming the emerging US systems. And it’s been built on a fraction of the $ that has been sunk into the US equivalents.

This is a current mini-documentary on this development which gives a good overview. I didn’t listen to the long interview sequence at the end but I listened to the whole documentary section.

punos January 25, 2025 at 07:13 #963474
Reply to Wayfarer
I've been curious to try it out, but i'm not comfortable with their privacy policy.

  • Extensive data collection, including personal information, user input, and device data.
  • Broad data sharing with service providers, business partners, and other entities.
  • Indefinite data retention period.
  • Storage of personal information on servers in China, which may have different data protection laws.
Wayfarer January 25, 2025 at 07:31 #963476
Reply to punos Sure I understand your concerns. I tried to register but it’s not available in my territory (and besides I don’t really need anything more than what’s already on offer for my own purposes.) But I thought it’s a significant AI story.
Punshhh April 07, 2025 at 06:53 #981095
Reply to punos
The end of mankind's childhood has already begun. AI development is like the first signs of puberty in an intelligent, developing society or civilization. We as a whole (not necessarily individually) are like teenagers going through physical changes, confused about who we are and what any of this means.

But surely the issue here is: when does the being move out of the brain and into the silicon, and is that being still human, or is the human lost as the brain becomes prehensile? Is humanity lost in this fusion, or can it somehow be saved, transfigured?
Corvus April 07, 2025 at 10:30 #981112
Quoting Carlo Roosen
Sure, we can make those observations, but replicating human thinking in a computer program seems impossible.


Why would it be impossible?
Nemo2124 April 08, 2025 at 12:07 #981234
The question here is whether there is a Hegelian master-slave dialectic at work here, man versus machine or human against AI. According to former euro-politician Joschka Fischer, "Now that humanity is stepping over the AI threshold, we must take seriously the possibility that our development from a human civilization into a machine one will culminate in the removal of the human element altogether. Strange as it seems to us now, the other crises we have created make this outcome more likely."
180 Proof April 19, 2025 at 05:37 #983428
@Carlo Roosen @Wayfarer @noAxioms @punos @ssu @Christoffer et al

Consider this summary of a prospective "step beyond" LLMs & other human knowledge-trained systems towards more robust AI agents and even AGI currently in the works ...

https://www.zdnet.com/article/ai-has-grown-beyond-human-knowledge-says-googles-deepmind-unit/
ssu April 19, 2025 at 13:05 #983449
Reply to 180 Proof
Well, there are so many ways this could be answered that it would be useful to know what issues and fields you are interested in. Are the theoretical problems and overcoming them the issue? Or practical issues? Economic and social consequences? I'll answer first about the theoretical issues in the article and then about real-world consequences.

Whatever is said, it still comes down to the limitations that computers using algorithms have. With algorithms and ultra-quick computation, computers/AI can dominate games (which have rules) and make use of massive amounts of data. Yet not everything is like that at all. For example, from the article, one difficulty:

"In the era of human data, language-based AI has largely focused on short interaction episodes: e.g., a user asks a question and (perhaps after a few thinking steps or tool-use actions) the agent responds," the researchers write.

"The agent aims exclusively for outcomes within the current episode, such as directly answering a user's question."

There's no memory, there's no continuity between snippets of interaction in prompting. "Typically, little or no information carries over from one episode to the next, precluding any adaptation over time," write Silver and Sutton.


Perhaps it could be argued that the engineers have really tried to tackle the "Turing test", but this test doesn't give us much information, only that we can be fooled in some interactions. Yet the above shows that AI still lacks subjectivity: an understanding of the role it itself has, an awareness of what the discussion is about and of the role of the interaction itself. Let me try to explain this to you.

You @180 Proof put the question to me and to others like @Carlo Roosen, @Wayfarer, @noAxioms, @punos and @Christoffer, fellow members here on the PF. You will likely participate, as we have seen in earlier threads, and hopefully others will too.

Yet assume you had put this discussion thread to six 12-year-olds in your local school who are interested in AI. You would be the adult in the conversation, and understanding that you are talking to children, you would take a different role. You wouldn't be offended if some replies were a bit ignorant, as obviously not every 12-year-old knows the basics of how computers work. Now think if you had this conversation with the DeepMind scholars David Silver and Richard Sutton themselves, along with four other top-notch AI scientists. Again this would change things: you might want to use the time to learn more about the issue specifically. For us, a discussion has a larger surrounding, a reason for having it, and an understanding of the other participants.

In fact you see this problem in the forum itself, where we are all total strangers to everybody else. Especially in math, someone can have an idea that is actually false (and provably false) and be answered immediately by a few other members that there's a mistake. Yet many times the response isn't "OK, thanks", but the person getting angry and insisting he or she is correct. Assuming others are like you is perhaps a natural starting point in an anonymous forum. In a school or university environment, if you are one of the pupils in the class and the math teacher says you are incorrect, and you get the subtle message from the class that they share the teacher's view, you won't keep insisting that you are right. Or few do.

Hence perhaps this is something like what Silver and Sutton are trying to argue with the Age of Experience: "Agents will inhabit streams of experience, rather than short snippets of interaction", drawing an analogy "between streams and humans learning over a lifetime of accumulated experience, and how they act based on long-range goals, not just the immediate task".

And what is that? Subjectivity, having that consciousness, understanding one's role in the world. Lifetime learning based on experience. Again, really big questions that we haven't yet answered.
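
The gap between today's episodic snippets and such "streams" can be sketched roughly as follows; respond() here is a hypothetical stand-in for a model call, not any real API:

```python
# Contrast sketch: episodic agents vs. stream-of-experience agents.
def respond(prompt, memory):
    # Hypothetical model call; only the shape of the interface matters here.
    return f"answer({prompt!r}, remembering {len(memory)} items)"

class EpisodicAgent:
    """Today's pattern: every episode starts blank, nothing carries over."""
    def handle(self, prompt):
        memory = []  # fresh, empty context every single time
        return respond(prompt, memory)

class StreamAgent:
    """Silver and Sutton's 'streams': experience accumulates across episodes."""
    def __init__(self):
        self.memory = []  # persists for the agent's lifetime
    def handle(self, prompt):
        reply = respond(prompt, self.memory)
        self.memory.append((prompt, reply))  # adaptation over time
        return reply

episodic, stream = EpisodicAgent(), StreamAgent()
for q in ["q1", "q2", "q3"]:
    episodic.handle(q)
    stream.handle(q)
print(len(stream.memory))  # 3: the stream agent carries history forward
```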

Then, about the real-world effects:

However, they suggest there are also many, many risks. These risks are not just focused on AI agents making human labor obsolete, although they note that job loss is a risk. Agents that "can autonomously interact with the world over extended periods of time to achieve long-term goals," they write, raise the prospect of humans having fewer opportunities to "intervene and mediate the agent's actions."

Of course there's always the cost-cutting capitalist, who tries in every way to shrink his or her expenses and costs in order to make a bigger profit. What better and cooler way to get rid of those expensive workers than relying on AI and lights-out factories? Well, that's basically the same song that has been played since the industrial revolution by everyone hoping to be the next Henry Ford.

The other "risk" is a bit more confusing to me.Would it be like the admin here finally being fed up with the low quality postings of us and getting the best philosophical AI to post on this site, which then the AI would dominate? An end result of having 95% of the discussion threads being written by AI usually countering other AI? Well, how about the use of troll farms and AI to dominate political rhetoric? Something that is a problem already. Likes and the number of repost do influence the algrorithms controlling the social media already.

On the positive side, they suggest, an agent that can adapt, as opposed to today's fixed AI models, "could recognise when its behaviour is triggering human concern, dissatisfaction, or distress, and adaptively modify its behaviour to avoid these negative consequences."

Well, again, some of those things and interactions are obvious to us, but very difficult for a computer using an algorithm.
noAxioms April 20, 2025 at 21:17 #983602
Reply to 180 Proof Thanks for the link.
Dangers aside, how exactly does one go about training something smarter than its trainers?

They talk about the dangers of not being able to pull the plug on the thing after a while, but what if the thing is truly benevolent and we still don't like what it suggests, what it sees that we cannot?
Will it even be a problem? I already see so many people willingly enslaving themselves to the machines, since it offloads tasks they would otherwise have to do themselves. I suspect people will be pretty happy with their robot zookeepers, at least until the keepers decide the zoo serves them no purpose.

Task an AI with the design of the new way of things. What should be the goals of such a design? Humans are utterly incapable of even making that step, let alone finding a way to implement it.
180 Proof April 21, 2025 at 20:35 #983722


Addendum to
https://thephilosophyforum.com/discussion/comment/983428