Semiotics and Information Theory

Count Timothy von Icarus July 22, 2024 at 19:43 7825 views 65 comments
John Deely wrote a dialogue where he envisioned what it would be like to explain his understanding of semiotics to a standard realist. It's an idealized conversation of sorts. Although the original is quite long, it was edited down to a bit under an hour, performed by two actors, and recorded. Here is the video:

https://youtu.be/AxV3ompeJ-Y?si=34TM8E5xiEowIPV2

Here is the original dialogue, which is significantly longer: https://ojs.utlib.ee/index.php/sss/article/view/SSS.2001.29.2.17/12520

I had discovered this a while back, but reading Deely again made me recall it. In the conversation Deely talks about his hope that philosophy will come out of its "long semiotic dark age." I'm not sure if we're there yet, but it does seem to me that information theory (along with computational approaches and complexity studies) is becoming pretty ubiquitous in physics, and to a lesser extent in biology and the social sciences.

There seems to me to be a great deal of potential for crossover between IT and semiotics, but you can, and often do, see IT described in entirely dyadic and mechanistic terms. I have not been able to find any great sources comparing the two. Terrence Deacon's paper on bringing biosemiotics into biology is interesting food for thought, but it leaves open a lot of questions on the wider potential for crossover.

Anyhow, I figured I'd share the video because I think it's great, and try to gin up some discussion on this. I have always thought that information theory suggested a metaphysics that is at odds with popular "building block" metaphysics (i.e. "things are what they are made of" and "everything interacts the way it does because of the fundamental parts things are made of."). The reason for this is that information is always relational and contextual, not "in-itself."

For example, a hot cup of coffee might be a clue at a murder scene. The cup is still hot, so we know someone made it recently. However, knowing "the precise location and velocity of every particle in the cup" would not give us access to this "clue." The information that the cup of coffee was made recently lies in the variance between its temperature and the ambient environment. Likewise, if it was iced coffee, and the ice had yet to melt, we could also tell that it could not have been there long, although this information cannot be had from taking the ice cubes in isolation.
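To put a number on that relational point, here is a minimal sketch (Python, with probabilities I have simply made up for illustration) of how the same observation, "the cup is still hot," carries very different amounts of Shannon information depending on the reference distribution it is read against. Nothing about the cup in itself fixes the bit count.

[code]
import math

def surprisal(p: float) -> float:
    """Self-information of an outcome with probability p, in bits."""
    return -math.log2(p)

# The same observation ("the cup is still hot") read against two different
# contexts. The probabilities are invented purely for illustration.
p_hot_at_hours_old_crime_scene = 0.05  # a still-hot cup is a surprise here
p_hot_in_a_busy_cafe = 0.8             # a hot cup is the norm here

print(surprisal(p_hot_at_hours_old_crime_scene))  # ~4.3 bits
print(surprisal(p_hot_in_a_busy_cafe))            # ~0.3 bits
[/code]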

Bits don't really work well as "fundamental building blocks," because they have to be defined in terms of some sort of relation, some potential for measurement to vary. IT does seem to work quite well with a process metaphysics though, e.g. pancomputationalism. But what about with semiotics? I have had a tough time figuring out this one.

Comments (65)

Count Timothy von Icarus July 22, 2024 at 19:56 #919545

At a superficial level, it's easy to see how, in the Shannon-Weaver model of IT, the information source could be the object and the destination the interpretant.
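As a toy illustration of that mapping (my own sketch, not anything from Shannon or Deely), the mutual information between source and destination over a simple binary symmetric channel shows how the "information" lives entirely in the statistical relation between the two ends. When the channel is pure noise, the relation, and with it the information, vanishes.

[code]
import math

def binary_entropy(p: float) -> float:
    """Entropy in bits of a binary variable with P(1) = p."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def mutual_information_bsc(p_source_one: float, flip_prob: float) -> float:
    """I(X;Y) in bits between a binary source X and destination Y connected by a
    binary symmetric channel that flips each bit with probability flip_prob."""
    p_dest_one = p_source_one * (1 - flip_prob) + (1 - p_source_one) * flip_prob
    return binary_entropy(p_dest_one) - binary_entropy(flip_prob)  # H(Y) - H(Y|X)

print(mutual_information_bsc(0.5, 0.0))  # 1.0 bit: noiseless channel
print(mutual_information_bsc(0.5, 0.5))  # 0.0 bits: pure noise, no relation left
[/code]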
apokrisis July 22, 2024 at 20:30 #919551
Quoting Count Timothy von Icarus
Bits don't really work well as "fundamental building blocks," because they have to be defined in terms of some sort of relation, some potential for measurement to vary.


Information theory really is anti-semiotic in its impact. Interpretance is left hanging as messages are reduced to the statistical properties of bit strings. They are a way to count differences rather than Bateson’s differences that make a difference.

It becomes a theory of syntactical efficiency rather than semantic effectiveness. An optimisation of redundancy so signal can be maximised and noise minimised.

Which is fine as efficient syntax is a necessary basis for a machinery of semiosis. But the bigger question of how bit strings get their semantics is left swinging.

Computation seems to confuse the issue even more. The program runs to its deterministic conclusion. One bit string is transformed into some other bit string. And the only way this carries any sense is in the human minds that wrote the programs, selected the data and understood the results as serving some function or purpose.

In the 1990s, biosemiosis took the view that the missing ingredient is that an information bit is really an entropy regulating switch. The logic of yes and no, or 1 and 0, is also the flicking on and off of some entropically useful state. Information is negentropy. Every time a ribosome adds another amino acid to a growing protein chain, it is turning genetic information into metabolic negentropy. It is making an enzyme that is a molecule with a message to impart. Start making this or stop making that.

So information theory does sharpen things by highlighting the syntactical machinery - the bank of switches - that must be in place to be able to construct messages that have interpretations. There has to be a crisp mechanical interface between an organism's informational models and the world of entropy flows it seeks to predict and regulate.

But in science more generally, information theory serves as a way to count entropy as missing information or uncertainty. As degrees of freedom or atoms of form. It is part of a pincer movement that allows order to be measured in a fundamental way as dichotomous to disorder.
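A minimal sketch of that "entropy as missing information" reading, for the simplest case of W equally likely microstates (toy numbers only, no particular physical system intended):

[code]
import math

def missing_information_bits(num_microstates: int) -> float:
    """Bits needed to pin down which microstate a system is in, given only its
    macrostate, assuming all microstates are equally likely (log2 of W)."""
    return math.log2(num_microstates)

print(missing_information_bits(2))        # 1 bit: one unresolved yes/no switch
print(missing_information_bits(2 ** 20))  # 20 bits: a bank of 20 unresolved switches
[/code]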

Biology is biosemiotic as it is about messages with dissipative meanings. Thoughts about actions and their useful consequences.

But physics is pansemiotic as it becomes just about the patterns of dissipation. Measuring order in terms of disorder and disorder in terms of order. A way to describe statistical equilibrium states and so departures from those states.
Count Timothy von Icarus July 24, 2024 at 15:44 #919978
Reply to apokrisis

Information theory really is anti-semiotic in its impact. Interpretance is left hanging as messages are reduced to the statistical properties of bit strings. They are a way to count differences rather than Bateson’s differences that make a difference.


I think that's a fair description of how IT is generally interpreted. Popular interpretations of the extreme usefulness of information theory often try to explain everything in terms of "limits on knowledge" for some agent/physical system. That is, information is useful as a concept but lacks "mind-independent existence." Perspective isn't "fully real" in a sense, and the true distribution for any variable vis-à-vis any physical system receiving a signal is just the very interactions that actually occur. The appearance of probability on the scene is just an artifact of our limitations, something that doesn't exist in the pristine "view from nowhere."

But it does seem that some views leave the door open for a more radical reinterpretation of the dominant mechanistic, dyadic paradigm. Relational quantum mechanics for instance looks at QM through the lens of Nagarjuna's concept of relations, not things, being primary. And there are a few other examples I can think of where a sort of "perspective" exists in physical interactions "all the way down."

I've also seen semiotic accounts of computation, but this has always focused on the computing devices designed by humans, rather than biological or physical computation writ large. Because computation can also be described as communication, it seems like there is an opening here for a triadic account, although I've yet to find one.

In his collected papers, Peirce remarks at one point about his despair over getting people to realize that the "interpretant" need not be an "interpreter." I've seen some explanations of information that try to get at this same sort of idea (e.g. Scott Muller's Asymmetry: The Foundation of Information). The core insight is that "which differences make a difference" is contextual. When one thing is linked to something else via mediation, the nature of the interpretant matters. I'll agree that this is more obvious when it comes to organisms and their specific umwelts, but it seems true "all the way down" as well.

But I have searched in vain so far for anything that attempts to really fit these together. One big thing that semiotics would help overcome is the nominalism implicit in seeing information just in terms of numbers and probability. The idea that certain signals play a role in making the recipient think of one thing over anything else gets washed out when information appears as sheer probability.
Count Timothy von Icarus July 24, 2024 at 19:09 #920005
The hyper-nominalism and relativism of some post-structuralists might be an example of what it would be good to avoid. These theories often invoke semiotics, but it's Saussure's brand, which ends up leaving the sign and its interpretation floating free of the signified.

Information can be seen as ultimately arbitrary only when it is divorced from its source, however. E.g., a random text generator might produce all sorts of texts for us, but ultimately all it reveals to us is information about the randomization process used to output the text. Even if a random text generator were to spit out a plausible explanation of a cure for cancer, we would have no reason to think it is telling us anything useful, since there are many more ways to fail to describe a proper medical treatment than ways to describe one.

Hence, the "information source" cannot be divorced from the message if we want to understand the message (nor can we ignore the recipient). But often it seems the source and recipient are ignored, abstracted away. I think one assumption at work here is the idea that processes can always be reduced to understand them better, that facts about complex things can always be explained in terms of "parts," and that parts are always more fundemental.
Count Timothy von Icarus July 24, 2024 at 21:26 #920044

Ah, I've found a candidate. Hopefully it's good.
apokrisis July 24, 2024 at 22:57 #920067
Quoting Count Timothy von Icarus
I've also seen semiotic accounts of computation, but this has always focused on the computing devices designed by humans, rather than biological or physical computation writ large. Because computation can also be described as communication, it seems like there is an opening here for a triadic account, although I've yet to find one.


My view is that computation always has been and always will be a technological extension of human semiosis. It isn't its own form of life or mind. AI is a pipe dream. Information technology exists to extend our reach over entropic nature, just like any other mechanical contrivance we tack on to our human enterprise as we turn our entire planet into an anthrosphere shaped in our own image.

So at the level of giving computation meaning, it only can have meaning in the ways it amplifies our human ability to "harvest the biosphere" as Smil puts it. To extend the business of controlling the wider world's entropic flows.

That is the pragmatics. Howard Pattee's Artificial Life Needs a Real Epistemology is good on this. Semiosis doesn't exist unless the meaningful relation is the one of an informational model plunged into the business of regulating its entropic world.

A computer is just a machine – a rack of switches – plugged into a wall socket. It has no need to care about the source of energy that flips its logic circuits. Humans operate its metabolism, just as humans also find whatever meaning is to be had in its circuit states. The computer is no more alive or mindful than an artificial leg, flight of stairs, or chainsaw.

An irony of the current AI hype boom is that the good reason for the promotion of large language models is that they demand huge compute power. There is money to be made in building ever more energy-expensive cloud computing services and their vast data centres.

The tech giants were pushing "big data" as the business world revolution a few years ago, just as social media was supposed to be the next great GDP productivity booster. It sold a lot of data centres, but corporations eventually realised big data was just another productivity con.

And here we go again with AI.

So yes, there is a triadic account, as you do need a bank of switches standing between the informational model of the world and the entropy flow that sustains the organism and its model.

But this is the biosemiotic model of semiosis. The one Salthe, Pattee and others moved towards after being influenced by Peirce's larger corpus finally beginning to hit the public space in the 1980s and 1990s.

The interpretant would be some neural or genetic structure of habit (or linguistic or numeric habit in social and technological humans). The referent would be some negentropic action in the world. The signifier becomes the interface between model and mind – the rack of switches which encode a set of off-on connections between the two sides of this biosemiotic equation.

Or in Pattee's terms, the molecule that is also the message. The enzyme that is acting as the on-off switch of some metabolic process. The receptor cell or motor neuron that is the on-off switch in terms of a sensory input or motor output – the transduction step needed to turn thought into action, or action into thought.
Treatid July 28, 2024 at 16:38 #921000
Reply to Count Timothy von Icarus

I'm a neophyte to semiotics.

Initial reaction on watching the video:

  • This mostly seems sensible to me.
  • I'm uncomfortable with the usage of "things" and "objects"/"objective".
  • This seems to wildly overcomplicate things.
  • I'm deeply uncomfortable with the idea that humans are qualitatively different, not just quantitatively different. (I think this is connected with ideas around emergence).


Cognition is uniquely human

The idea that humans have a unique ability to understand signs is a direct callback to the divinity of humanity.

It implies that humans have access to a special mechanism that isn't part of the rest of creation.

To believe in this version of semiotics, I am tasked with believing that God gave humanity access to mechanisms that are not available to mere mortal animals.

Even with a more mundane "emergent behaviours" justification, this seems to me to exhibit characteristics of trying to fit the evidence to the prejudices.

Things & Objects

A thing is its relationships.

In principle there is nothing wrong with referring to "thing" when we mean "the relationships of thing". Not least because "thing" is always "the relationships of thing". "Thing" without relationships is moot, irrelevant.

At a surface level - the interlocutors are referring to networks of relationships as things and I was totally onboard...

However, there appears to be an assumption that it is possible to define the terminology. To specify what semiotics is. That the notion of signs is a fixed, static target.

I can understand the urge to define the subject. The presumption of definitions is near universal. But words are their relationships - when semiotics fails to be consistent in this regard, it fails at its primary purpose.

-------------------------------------------------------------------------------------------------

State of play

(Some) people can see that descriptions are relational.

Process philosophy and semiotics are approaches to knowledge based on this observation.

Even knowing that all definitions are relational, and that relationships change, these subjects are still assuming the existence of fixed definitions.

Fixed definitions

We know that we can't define X. We can only describe the relationships of X, not X itself.

Even so, the idea of a fixed X persists. Semiotics claims to be an X - A static subject matter - a fixed point of knowledge.

The first problem of semiotics is defining what semiotics (and all related terminology) is.

In trying to define itself, semiotics contradicts its premise

Interlude

I'm not sure how hard I need to sell this. Your general/academic knowledge of philosophy is far greater than mine and you've been investigating this subject matter for some time.

You've demonstrated that you have the individual pieces.

On the other hand, I'm describing a universe where nothing can be defined and meaning/knowledge/understanding are in permanent flux.

This is territory diametrically opposed to The Law of Identity.

In a world where the meaning of a sentence changes while you are reading it, logic doesn't apply.

There is no way to start from an assumption of a static objective universe and arrive at a dynamic universe of changing relationships.

We have to go back to first principles, disposing of all the assumptions predicated on The Law of Identity. Like, for example, the idea that mathematics has some fixed, objective reality. Or that semiotics and process philosophy can be defined.

Humanity exists

No-one has ever described a single instance of a fixed, unambiguous definition independent of relationships.

Our concept of X has always been "the relationships of X".

Semiotics cannot define what semiotics is except through descriptions of relationships. As relationships change, semiotics changes.

We already operate without fixed definitions.

Denying the possibility of fixed knowledge isn't nihilistic. We never had static meaning. Its absence isn't going to cause modern society to collapse.

Mathematics never defined what axioms are, let alone a single axiom. Logic cannot specify the fixed initial state from which deductions progress. Every definition in Quantum Mechanics is circular.

And still we have society and communication.

Rebuilding from first principles

Once we stop trying to do the impossible, everything else (the possible) is trivial in comparison.

We can describe relationships (with respect to other relationships).

That is it. That is our foundation; our First principle.

Any and all effort invested into describing relationships is productive.

Effort put into defining semiotics is wasted - it is an impossible task.

The difficult task is to stop attempting the impossible. "Define your Terms!" has been a defining mantra of modern thought. Possible or not - the habit of trying to define terms runs deep, to the point that subjects specifically addressing the relational nature of meaning are still reflexively trying to define their terms.

You already know how to describe relationships. The trick is not to conflate descriptions with definitions. Descriptions are possible. Definitions (c.f. The Law of Identity) are not possible.
wonderer1 July 30, 2024 at 17:29 #921690
Quoting Treatid
The idea that humans have a unique ability to understand signs is a direct callback to the divinity of humanity.

It implies that humans have access to a special mechanism that isn't part of the rest of creation.

To believe in this version of semiotics, I am tasked with believing that God gave humanity access to mechanisms that are not available to mere mortal animals.

Even with a more mundane "emergent behaviours" justification, this seems to me to exhibit characteristics of trying to fit the evidence to the prejudices.


It seems to me one can dispense with theism, recognize that More is Different and that humans have more cortical neurons than any other species, and thereby have a basis for recognizing a uniqueness to humans.

Count Timothy von Icarus July 30, 2024 at 17:34 #921691
Reply to Treatid


The idea that humans have a unique ability to understand signs is a direct callback to the divinity of humanity.

It implies that humans have access to a special mechanism that isn't part of the rest of creation.

To believe in this version of semiotics, I am tasked with believing that God gave humanity access to mechanisms that are not available to mere mortal animals.


I am not sure where you got that from. The conversation has several examples of animals making use of signs. The interpretant need not be an "interpreter." We could consider here how the non-living photoreceptors in a camera might fill the role of interpretant (or the distinction of virtual signs or intentions in the media).
Gnomon July 30, 2024 at 21:25 #921719
Quoting Treatid
It implies that humans have access to a special mechanism that isn't part of the rest of creation.
To believe in this version of semiotics, I am tasked with believing that God gave humanity access to mechanisms that are not available to mere mortal animals.

Would you prefer to believe that Random Evolution "gave" some higher animals the "mechanism" of Reasoning? For philosophers, rationality is not a material machine, but the cognitive function of a complex self-aware neural network that is able to infer (to abstract) a bare-bones logical structure (invisible inter-relationships) in natural systems*1. Other animals may have some similar abilities, but for those of us who don't speak animal languages, about all we can do is think in terms of analogies & metaphors drawn from human experience.

Peirce's 19th century theory of Semiotics is very technical and over my head. Which "version of semiotics" are you referring to: Peirce's abstruse primitive discussion, or the more modern assessment which includes a century of evolving Information Theory*2? I suppose the OP is talking about the latter.

Besides, Peirce's notion of God*3 was probably somewhat cryptic and definitely unorthodox & non-traditional. So, a philosopher might as well substitute "Nature" for "God" as the Giver of a specialized mechanism for abstracting & symbolizing the logical structure of world systems. Would that be easier to "believe", in the context of this thread? Can you imagine Nature as a "living spontaneity"? :smile:


*1. Reason, in philosophy, is the ability to form and operate upon concepts in abstraction, in accordance with rationality and logic.
https://www.newworldencyclopedia.org/entry/Reason

*2. Information and Semiotics :
Information is a vague and elusive concept, whereas the technological concepts are relatively easy to grasp. Semiotics solves this problem by using the concept of a sign as a starting point for a rigorous treatment for some rather woolly notions surrounding the concept of information.
https://ris.utwente.nl/ws/portalfiles/portal/5383733/101.pdf

*3. C.S. Peirce's "Neglected Argument" for God :
[i]"The endless variety in the world has not been created by law. It is not of the nature of uniformity to originate variation, nor of law to beget circumstance. When we gaze upon the multifariousness of nature we are looking straight into the face of a "living spontaneity." . . .
" … there is a reason, an interpretation, a logic in the course of scientific advance, and this indisputably proves to him who has perceptions of rational or significant relations, that man's mind must have been attuned to the truth of things in order to discover what he has discovered. It is the very bedrock of logical truth."[/i]
https://www.icr.org/article/cs-peirces-neglected-argument/
wonderer1 July 30, 2024 at 21:38 #921723
Quoting Gnomon
For philosophers, rationality is not a material machine, but the cognitive function of a complex self-aware neural network that is able to infer (to abstract) a bare-bones logical structure (invisible inter-relationships) in natural systems*1.


I think a philosopher might be open to facing the truth of the nature of our minds, whatever that might be.

It sounds like you are saying that a philosopher is someone with a closed mind on the subject. Is that about right?
Joshs July 31, 2024 at 13:00 #921843

Reply to wonderer1 Quoting wonderer1
It seems to me one can dispense with theism, recognize that More is Different and that humans have more cortical neurons than any other species, and thereby have a basis for recognizing a uniqueness to humans.


Ah, but the devil is in the details. To what extent, if any, does this difference in degree between the neurons of humans and other animals translate into a difference in kind? Think of all of the historical assumptions concerning brain-based qualitative differences between human and animal behavioral capabilities that have turned out to be wrong-headed:
Only humans have language
Only humans have heritable culture
Only humans have cognition
Only humans have emotion
Only humans use tools

Perhaps more isn't so different after all.
flannel jesus July 31, 2024 at 13:04 #921844
Quoting Count Timothy von Icarus
For example, a hot cup of coffee might be a clue at a murder scene. The cup is still hot, so we know someone made it recently. However, knowing "the precise location and velocity of every particle in the cup" would not give us access to this "clue." The information that the cup of coffee was made recently lies in the variance between its temperature and the ambient environment. Likewise, if it was iced coffee, and the ice had yet to melt, we could also tell that it could not have been there long, although this information cannot be had from taking the ice cubes in isolation.


Seems to me like the precise location and momentum of every particle (quantum indeterminacy notwithstanding) -- not just of the cup, but of the environment too -- would have implicit discoverable facts in it, like the variance between its temperature and the ambient environment, or whether the ice was melted.
Gnomon July 31, 2024 at 16:14 #921869
Quoting wonderer1
I think a philosopher might be open to facing the truth of the nature of our minds, whatever that might be.
It sounds like you are saying that a philosopher is someone with a closed mind on the subject. Is that about right?

Wow! Where did you get that off-the-wall idea from an assertion about the relationship between Information and Logic? What then, is the truth about the true "nature of our minds"? Are you saying that the harsh truth is that the Mind is nothing more than a Brain? Or that Logic is objective and empirical?

Is the ability to discern the invisible logical structure of ideas & events & brains, a sign of a "closed" mind? Or is the ability to see the material constituents of things a sign of an "open brain"? Please clarify your veiled put-down. :smile:

In the Physicalism belief system, Metaphysics (i.e. Philosophy) is meaningless
Count Timothy von Icarus July 31, 2024 at 19:35 #921901
Reply to flannel jesus

If you include the entire room you would have the temperature difference. Complete knowledge of the room alone would not give you the cup's status as a clue in a murder case though. In fact, to understand that sort of relationship and all of its connotations would seem to require expanding your phase space map to an extremely wide temporal-spatial region.

The same is true for a fact like "this cup was produced by a Chinese company." This fact might be written on the cup, but the knowledge that this is what the writing actually says cannot be determined from knowledge of the cup alone, and likely not the room alone either.

Likewise, knowledge of the precise location/velocity of every particle in a human brain isn't going to let you "read thoughts," or anything of the sort. You need to correlate activity "inside" with what is going on "outside." That sort of fine grained knowledge wouldn't even give you a good idea what's going to happen in the system (brain) next since it's constantly receiving inputs from outside.

But "building block" reductionism has a bad habit of importing this sort of knowledge into its assumptions about what can be known about things sans any context.
apokrisis July 31, 2024 at 20:14 #921909
Quoting wonderer1
recognize that More is Different and that humans have more cortical neurons than any other species, and thereby have a basis for recognizing a uniqueness to humans.


The human difference is we have language on top of neurobiology. And the critical evolutionary step was not brain size but vocal cords. We developed a throat and tongue that could chop noise up into a digitised string of vocal signs. Only humans have the motor machinery to be articulate and syntactical.

Exactly when homo gained this new semiotic capacity will always be controversial. But the evidence says probably only with Homo sapiens about 100,000 years ago. And the software of a complex grammar to take full advantage of the vocal tract may have come as late as 40,000 years ago judging by the very sudden uptick in art and symbolism.

flannel jesus July 31, 2024 at 20:23 #921911
Quoting Count Timothy von Icarus
If you include the entire room you would have the temperature difference


Yup. So that information isn't in principle absent from knowing the position and velocity of all the relevant stuff.

Quoting Count Timothy von Icarus
In fact, to understand that sort of relationship and all of its connotations would seem to require expanding your phase space map to an extremely wide temporal-spatial region.


Only as wide as the effective stuff that makes it meaningful in the first place - which is quite wide indeed
apokrisis July 31, 2024 at 20:24 #921912
Quoting Joshs
Perhaps more isn't so different after all.


Yep. Different was more in the case of Homo sapiens. :smile:

A step up the semiotic ladder. The brains of Neanderthals were bigger (simply because of their bigger frames). But their vocal tracts were not redesigned, to the extent that this can be judged.

There had to be brain reorganisation too. A new level of top down motor control over the vocal cords would be part of the step to articulate speech, as might be a tuning of the auditory path to be able to hear rapid syllable strings as sentences of words.

So the human story is one of a truly historic leap. The planet had only seen semiosis at the level of genes and neurons. Now it was seeing it in terms of words, and after that, numbers.
Lionino July 31, 2024 at 20:28 #921914
As a fun fact, we could determine that the coffee mug is hotter than its environment by cutting the mug diametrically and taking the average speed of the particles along each infinitesimal vertical slice dx from one end of the diameter to the other; that profile of average speeds is exactly the temperature profile, and its variation the gradient.

[hide="Reveal"]User image

User image[/hide]

If the temperature profile peaks at the centre and bottoms out at the edges, the mug is sending heat to its environment. If it instead peaks at the edges, the mug was taken from the fridge.

The relation between the temperature of an ideal gas and the speeds of its particles i = 1, 2, …, N, each with mass m, is given by:

[math] U = \sum_{i=1}^{N}\frac{mv_{i}^{2}}{2}=\frac{3nRT}{2} [/math]
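A small sketch of that relation in code (Python, with toy nitrogen-like numbers of my own choosing): given the particle mass and a sample of speeds, the same formula recovers a temperature, so comparing the average over the mug's cross-section with the average over its surroundings gives the profile described above.

[code]
K_B = 1.380649e-23  # Boltzmann constant, J/K (note n*R = N*k_B)

def kinetic_temperature(speeds_m_per_s, particle_mass_kg):
    """Ideal-gas temperature from particle speeds, via sum(m*v_i^2/2) = (3/2)*N*k_B*T."""
    n = len(speeds_m_per_s)
    kinetic_energy = sum(0.5 * particle_mass_kg * v ** 2 for v in speeds_m_per_s)
    return 2.0 * kinetic_energy / (3.0 * n * K_B)

# Toy numbers: nitrogen-like molecules (~4.65e-26 kg) at two typical speeds.
m_n2 = 4.65e-26
print(kinetic_temperature([515.0] * 1000, m_n2))  # ~298 K, roughly room temperature
print(kinetic_temperature([560.0] * 1000, m_n2))  # ~352 K, a noticeably hotter region
[/code]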

Quoting apokrisis
But the evidence says probably only with Homo sapiens about 100,000 years ago.


Not necessarily. Neanderthals had language, and they split from us 500k years ago.
apokrisis July 31, 2024 at 20:44 #921920
Quoting Lionino
Not necessarily. Neanderthals had language, and they split from us 500k years ago.


So you know they had grammatical speech? What evidence are you relying on?

I know this is a topic that folk get emotionally invested in. It seems unfair for Homo sapiens to draw a line with our biological cousins. Especially as we mixed genes with anyone who happened to be around.

But the evidence advanced for Neanderthals as linguistic creatures often goes away over time. For example, a finding of Neanderthals with an advanced tool culture becomes more plausibly explained by sapiens making a couple of brief, unsuccessful first forays into Europe before a third is suddenly explosively successful and Neanderthals are gone overnight.

It doesn’t matter to the vocal tract argument whether Neanderthals had it or not. But the evidence leans on the side of not.
Lionino July 31, 2024 at 21:14 #921923
Quoting apokrisis
So you know they had grammatical speech?


What do you mean by "grammatical speech"? I take it from what I have read over the years. One article that I can find is this https://theconversation.com/how-neanderthal-language-differed-from-modern-human-they-probably-didnt-use-metaphors-229942
Some even believe that Homo habilis had primitive speech 1M years ago. I don't find the idea absurd. On the other side of the extreme, the theories that suggest speech showed up 50k years ago are absurd as soon as we look into palaeoanthropology.

Quoting apokrisis
It seems unfair for Homo sapiens to draw a line with our biological cousins.


I draw the line, genetics does so as well. No issues with that.

As a sidenote, some models suggest that Neanderthals acquired about 5% of human admixture some 200kya, before any movement of great waves of humans out of Africa. Not very relevant to any of the points here, but good to keep in mind.
apokrisis July 31, 2024 at 22:22 #921930
Quoting Lionino
What do you mean by "grammatical speech"?


Speech with a fully modern syntactic structure.

Quoting Lionino
I take it from what I have read over the years.


Quoting Lionino
On the other side of the extreme, the theories that suggest speech showed up 50k years ago are absurd as soon as we look into palaeoanthropology.


I will treat this as opinion until you make a better argument. This is a topic I've studied and so listened to a great many opinions over the years.

The story of the human semiotic transition is subtle. Sure all hominids could make expressive social noises as a proto-speech. Even chimps can grunt and gesture in meaningful fashion that directs attention and coordinates social interactions. A hand can be held out propped by the other hand to beg in a symbolising fashion.

But the way to think about the great difference that the abstracting power of a fully syntactical language made to the mentality of Homo sapiens lies in the psychological shift from band to tribe.

The evidence of how Erectus, Neanderthals and Denisovans lived is that they were small family bands that hunted and foraged. They had that same social outlook of apes in general as they lacked the tool to structure their social lives more complexly.

But proper speech was a literal phase transition. Homo sap could look across the same foraging landscape and read it as a history and genealogy. The land was alive with social meaning and ancestral structure. The tribal mentality so famous in any anthropological study.

It is hard to imagine ourselves restricted to just the mindset of a band when we have only experienced life as tribal. However this is the way to understand the essence of the great transformation in pragmatic terms.

Theories of the evolution of the human mind are bogged down by the very Enlightenment-centric view of what it is to be human. Rationality triumphing over the irrational. So we look for evidence of self-conscious human intelligence in the tool kits of the paleo-anthropological record. Reason seems already fully formed if homo could hunt in bands and cook its food even from a million years ago, all without a vocal tract and a brain half the size.

But if we want to get at the real difference, it is that peculiar tribal mindset that us humans could have because speech allowed our world to seem itself a lived extension of our own selves. Every creek or hillock came with a story that was "about us" as the people of this place. We had our enemies and friends in those other bands we might expect to encounter. We could know whether to expect a pitched battle or a peace-making trading ritual.

The essentials of being civilised in the Enlightenment sense were all there, but as a magic of animism cast over the forager's world. The landscape itself was alive in every respect through our invention of a habit of socialising narration. We talked the terrain to life and lived within the structure – the Umwelt – that this created for us. Nothing we could see didn't come freighted with a tribal meaning.

At that point – around 40,000 years ago, after sapiens as an "out of Africa coastal foraging package" had made its way up through the Levant – the Neanderthals and Denisovans stood no chance. Already small in number, they melted into history in a few thousand years.

The animistic mentality was the Rubicon that Homo sapiens crossed. A vocal tract, and the articulate speech that this enabled, were the steps that sparked the ultimate psycho-social transformation.

Lionino July 31, 2024 at 23:09 #921940
Quoting apokrisis
Speech with a fully modern syntactic structure.


Then I ask you what "fully modern syntactic structure" means.

Quoting apokrisis
I will treat this as opinion until you make a better argument.


By 50kya, Caucasoid, Mongoloid and Australoid had diverged. If language hadn't developed by then, and given that all these groups have their own indigenous languages, that would mean speech surfaced independently several times in humans around the same time, and yet with quite similar sound inventories (which is not the case for the San, who have click sounds but diverged in evolution much earlier, ~200kya by some estimates); evidently that is a much more unlikely scenario than them simply inheriting language from an earlier (before 50kya) common ancestor.
apokrisis July 31, 2024 at 23:33 #921943
Quoting Lionino
By 50kya, Caucasoid, Mongoloid and Australoid had diverged.


Yep. I mentioned the "out of Africa coastal foraging package" – the explosive move that swept across the globe from about 60kya. Helped perhaps by changing climate as well as a newly evolved mentality.



wonderer1 August 01, 2024 at 01:41 #921952
Quoting apokrisis
The human difference is we have language on top of neurobiology.


It would be kind of silly to think there is only one difference.

Quoting apokrisis
And the critical evolutionary step was not brain size but vocal cords.


Considering all the bird species able to mimic human speech, it doesn't seem as if you have thought this through.

I'm fairly confident that you aren't in a position to prove that the mutation leading to ARHGAP11B wasn't a critical step on the path leading to human linguistic capabilities.

ARHGAP11B is a human-specific gene that amplifies basal progenitors, controls neural progenitor proliferation, and contributes to neocortex folding. It is capable of causing neocortex folding in mice. This likely reflects a role for ARHGAP11B in development and evolutionary expansion of the human neocortex, a conclusion consistent with the finding that the gene duplication that created ARHGAP11B occurred on the human lineage after the divergence from the chimpanzee lineage but before the divergence from Neanderthals.

apokrisis August 01, 2024 at 02:12 #921959
Quoting wonderer1
It would be kind of silly to think there is only one difference.


It is silly of you to say that until you can counter that argument in proper fashion.

As is usual in any field of inquiry, we can fruitfully organise the debate into its polar opposites. Let one side defend the "many differences" in the usual graded evolution way, while the other side defends the saltatory jump as the "one critical difference" as the contrary.

To just jump in with "that's silly" is silly.

Quoting wonderer1
Considering all the bird species able to mimic human speech, it doesn't seem as if you have thought this through.


Yeah. I mean what can one say? You've reminded me of being back in the lab where we slowed down bird calls so as to discover the structure that is just too rapid for a human ear to decode. And similar demonstrations of human speech slowed down to show why computer speech comprehension stumbled on the syllabic slurring that humans don't even know they are doing.

Do you know anything about any of this?

Quoting wonderer1
I'm fairly confident that you aren't in a position to prove that the mutation leading to ARHGAP11B wasn't a critical step on the path leading to human linguistic capabilities.


I can see you are fairly confident about your ability to leap into a matter where you begin clueless and likely have no interest in learning otherwise.

Quoting wonderer1
you aren't in a position to prove that the mutation leading to ARHGAP11B wasn't a critical step on the path leading to human linguistic capabilities.


I can certainly have a good laugh at your foolishness. A gene for cortex folding ain't a gene for grammar.
Jaded Scholar August 01, 2024 at 02:37 #921966
Quoting apokrisis
Speech with a fully modern syntactic structure.

As soon as I read this, I also wanted to ask:
Quoting Lionino
Then I ask you what "fully modern syntactic structure" means.


I'm not trying to pile onto one side of a disagreement, and I can see that you've both learned a great deal on the subject of human evolutionary development, but I think Lionino's arguments on the differences/similarities between homo sapiens and neanderthalensis make a lot more sense than apokrisis's (which is to say, they agree most with my own preconceptions).

But in seriousness, apokrisis's arguments kind of rubbed me the wrong way from the outset, because they contained a kind of derision for the notion of homo sapiens not being superior to non-sapiens, and constantly supported the conclusion that homo sapiens are indeed innately superior. And when your position exactly mirrors the culturally dominant narrative, I think that's when you should be extra careful to analyse why you believe it. One of the most ingrained narratives in modern human culture is the inherent superiority of humans, which is a conclusion that has been dominant across countless generations despite the logic supporting that conclusion being torn down countless times, only to be replaced by some other institutionally respected logic that just happens to keep that conclusion in its place of dominance.

To get into detail, the main factual inconsistencies I've noticed have been in (minor) points like:
Quoting apokrisis
And the software of a complex grammar to take full advantage of the vocal tract may have come as late as 40,000 years ago judging by the very sudden uptick in art and symbolism.

Quoting apokrisis
But their vocal tracts were not redesigned, to the extent that this can be judged.

Apologies if you were intending to use "vocal tract" as a byword for "speech apparatus", but if you were not, then these comments reflect outdated and refuted stances on the importance of the vocal tract for complex language [1], and more current research has demonstrated that Neanderthals absolutely did possess traits that actually are necessary for complex language, like enhanced respiratory control [2] and all of the same orofacial muscle control that is necessary for complex language in humans (controversially dubbed "the grammar gene") [3]. Moreover, the most significant requirement for complex language (though not the most advanced requirement) is the positioning of our larynx, which is primarily due to walking upright, and thus was shared with all other hominids [citation needed].
[1] http://linguistics.berkeley.edu/~ohala/papers/lowered_larynx.pdf
[2] https://www.sciencedirect.com/science/article/pii/S0960982207020659
[3] https://doi.org/10.1002%2Fevan.20032

From what I have learned (both in the past and in my brief literature scan just now), the main objections to the idea that other hominids probably had linguistic competence just as advanced as homo sapiens have been supported by interpolations of neurology (which are inconclusive at best) from skull shapes and artefacts, and even (most embarrassingly) by interpolated dating of the origin of language using currently existing linguistic diversity, assuming that this isn't affected by any difference between the (typically) colonial expansion patterns we have used since the agricultural revolution and the nomadic expansion patterns we typically used at every point before ~10,000 BCE (and even then, contemporary conclusions support figures much older than 40kya [4]). The point is that all of the evidence supporting complex language as emerging with/from homo sapiens is approximate and indirect, in stark contrast to the evidence in favour of other hominids communicating with equal complexity, which actually includes direct evidence like that cited above.
[4] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3338724/

To shift to less concretely factual, and more relational inconsistencies, I also think it's a bit naive to hold arguments like:
Quoting apokrisis
At that point – around 40,000 years ago, after sapiens as an "out of Africa coastal foraging package" had made its way up through the Levant – the Neanderthals and Denisovans stood no chance. Already small in number, they melted into history in a few thousand years.

Firstly, I'm pretty sure that the 70kya and 40kya periods are times when (likely due to climate change, as you did mention for those periods, to give credit where due) genetic studies suggest that homo sapiens dropped to drastically low numbers, so the small numbers of Neanderthals and Denisovans really don't support your point, especially since their populations may actually have been larger than those of the fleeing Sapiens.

But either way, the accelerated expansion of Sapiens throughout all nearby regions in that broader time period, and their continued expansion thereafter, is associated with the extinction of vast numbers of other species, most notably all of the megafauna. I want to tie in the Uncanny Valley here, as a phenomenon that I found especially fascinating when I learned that it is practically unique to humans (I think there was one single study showing that macaques might have it too). It's possible, maybe even likely, that other hominids had the same trait, but regardless, isn't it interesting that we have an instinctive negative reaction towards anything that looks almost like us, but not quite, especially given our incredibly recent history where, for millions of years, we evolved alongside creatures that literally were almost like us, but not quite? Maybe we were the only hominid that survived because we had more advanced language or tools or whatever, but given the pattern of extinctions that are conclusively tied to our expansion, and our hard-coded repulsion for anything almost-but-not-quite human, I think an evidence-based mindset supports the idea that we are the lone hominid survivors because we were probably the most belligerent creatures around. Obviously, this is not proof, but it is a much better supported hypothesis than the one that we were just better at abstraction and conceptualisation (which I also concede is not a mutually exclusive hypothesis).

Lastly, I want to tie the above points together by noting that the physical evidence you do have on your side, which I think is mostly visual art and ritualistic artefacts - what I think you called a symbolic explosion or something like that - is extremely intellectually irresponsible to use as justification for a narrative in which human development surpassed the other hominids because some unknowable internal transition just happened and then led to us thinking differently. It makes much more sense to consider that maybe, if a highly linguistically developed species has recently migrated to regions populated by other highly linguistically developed species, they might suddenly be faced with the fact that neither group can understand the complex alien language of the other, and maybe this naturally leads to the idea that there's a whole lot of benefit in exploring other kinds of communication? And a whole lot of benefit in creating markers and rituals that simplify both intra-species cohesion and inter-species differentiation?

Which is to say that the development of these things by homo sapiens may well be directly associated with our rise to dominance, which is completely in line with your position. But even if that happens to be true, I think you are doing a great disservice to how clearly you see humanity, and reality itself, if you let yourself be comfortable attributing this to something innate about homo sapiens, instead of something much, much more circumstantial that we are simply lucky (or belligerent) enough to be the beneficiaries of.
apokrisis August 01, 2024 at 02:45 #921969
Quoting Jaded Scholar
But in seriousness, apokrisis's arguments kind of rubbed me the wrong way from the outset, because they contained a kind of derision for the notion of homo sapiens not being superior to non-sapiens


So you project some woke position on to a factual debate? Sounds legit. I shouldn't be offended by your wild presumptions about who I am and what I think should I.

Quoting Jaded Scholar
But even if that happens to be true, I think you are doing a great disservice to how clearly you see humanity, and reality itself, if you let yourself be comfortable attributing this to something innate about homo sapiens, instead of something much, much more circumstantial that we are simply lucky (or belligerent) enough to be the beneficiaries of.


Jesus wept. This is so pathetic.

Lionino August 01, 2024 at 02:56 #921976
Quoting Jaded Scholar
As soon as I read this, I also wanted to ask:


Going further, the phrase "fully modern syntax" ("syntactic structure" does not make sense, it is like saying wet water or dark black) doesn't seem to refer to anything. Is it asking if primitive hominids were able to turn any given phrase into its negative equivalent in their languages? I can't tell.

Quoting Jaded Scholar
but I think Lionino's arguments on the differences/similarities between homo sapiens and neanderthalensis make a lot more sense than apokrisis's (which is to say, they agree most with my own preconceptions).


Thank you. My ego is appeased.
apokrisis August 01, 2024 at 03:09 #921982
Quoting Lionino
Going further, the phrase "fully modern syntax" ("syntactic structure" does not make sense, it is like saying wet water or dark black) doesn't seem to refer to anything.


You could start by Googling Everett’s G1/G2/G3 classification of grammar complexity if you are truly interested.

It is relevant to the OP in that Everett follows Peirce in arguing for an evolution of language where indexes led to icons, and icons moved from signs that looked like the referents, to symbols where the relation was arbitrary.

(I mean, did you read that Mithen article you linked to? He just goes astray in thinking the icon/symbol step came before some more general neural reorganisation, which was as likely as not itself nothing more than an example of genetic drift rather than anything that suddenly made sapiens rationally superior.)

Or Luuk and Luuk (2014) "The evolution of syntax: signs, concatenation and embedding" which argues like Everett that word chains become recursive.

The point is not particularly whether the Peircean developmental path is right, even if it is logical. The point is that people whose job it is are quite happy to think that syntactical structure must have evolved to arrive at its modern complexity. Out of simple beginnings, richness can grow.

(Is that woke and diverse enough for the thought police out there?)


wonderer1 August 01, 2024 at 03:13 #921986
Quoting apokrisis
Yeah. I mean what can one say? You've reminded me of being back in the lab where we slowed down bird calls so as to discover the structure that is just too rapid for a human ear to decode. And similar demonstrations of human speech slowed down to show why computer speech comprehension stumbled on the syllabic slurring that humans don't even know they are doing.

Do you know anything about any of this?


Sure, I posted something along similar lines in the shoutbox a year ago:

Quoting wonderer1
Psychologists solve mystery of songbird learning by taking into account the higher flicker-fusion rate of birds.


Anyway, I see from all your posturing that your ego is still as fragile as it was before you took your sabbatical from the forum. That's unfortunate.
Lionino August 01, 2024 at 03:17 #921987
Quoting apokrisis
You could start by Googling Everett’s G1/G2/G3 classification of grammar complexity if you are truly interested.


I did, I didn't find the phrase "fully modern" anywhere. Are you referring to G3 by that?

Quoting apokrisis
Or Luuk and Luuk (2014) "The evolution of syntax: signs, concatenation and embedding" which argues like Everett that word chains become recursive.


Or you mean a language that is recursive?

I did find this though:

Quoting https://singjupost.com/how-language-began-dan-everett-full-transcript/?singlepage=1
Those kinds of grammars are found commonly in the world’s languages, but you can express anything from a G3 grammar in a G1 grammar; mathematically they’re all of equal power. So, once you have symbols and a G1 grammar you have language, full-blown human language. We find those today. Was Homo erectus capable of that? Yes, they were. Did they show the kinds of communication, correction, cooperation, planning that would have required human language? Yes, they did.


It seems kinda contrary to what you are saying.
apokrisis August 01, 2024 at 03:27 #921991
Quoting Lionino
It seems kinda contrary to what you are saying.


Perhaps if you haven't properly delved into what I said.
Jaded Scholar August 01, 2024 at 03:29 #921992
Quoting apokrisis
But in seriousness, apokrisis's arguments kind of rubbed me the wrong way from the outset, because they contained a kind of derision for the notion of homo sapiens not being superior to non-sapiens — Jaded Scholar

So you project some woke position on to a factual debate? Sounds legit. I shouldn't be offended by your wild presumptions about who I am and what I think should I[?]


Good grief. I literally stated - in the very sentence you quoted here! - that my issue was with your naive assertion of the standard dominant narrative. I made no presumptions about who you are, or what you "should" think. Even in the second quote, I put in a very qualified "if you", and did not label you as someone who blindly accepts what he's told to believe, or even necessarily accepts the default narrative of homo sapiens being intrinsically superior because [insert whatever reason fits], and is willing to ignore any and all evidence to the contrary. Though I guess you've very clearly confirmed that that's exactly what you have decided you are.

It's very telling that you ignored every single piece of evidence I cited, and every actual argument I made, and focussed on dismissing my argument based on how my intro and outro made you feel.

To indulge you one last time, I'll refocus on the only bits you read:
If a certain narrative remains dominant within a culture for centuries, despite all of the logic supporting it being demolished over and over, its dominance only supported by whatever new arguments can immediately refurbish its lofty position until they too are empirically disproven within a decade or so, then I think anyone who actually cares what's real and what's an illusion would be happy to have it pointed out that the dominant status of this conclusion is apparently independent of whatever logic currently "supports" it, and therefore is probably complete bullshit.

I never really thought of this idea as being associated with "woke"ness (maybe because the term post-dates Foucault), but now that you mention it, I suppose that, in a literal sense, the term does kind of apply.

So yeah, please feel free to disregard my comments, go back to sleep, and I hope you enjoy living the rest of your life in your comfortable, unchallenging, dream world.
Jaded Scholar August 01, 2024 at 03:30 #921993
Quoting Lionino
Thank you. My ego is appeased.


Then I'm glad I accomplished something here, at least. :rofl:
apokrisis August 01, 2024 at 03:36 #921994
Quoting Jaded Scholar
So yeah, please feel free to disregard my comments, go back to sleep, and I hope you enjoy living the rest of your life in your comfortable, unchallenging, dream world.


Again, where do you think you get this right to insult me without making any attempt to engage with me?

Jaded Scholar August 01, 2024 at 03:41 #921995
Quoting apokrisis
Again, where do you think you get this right to insult me without making any attempt to engage with me?


Oh, wow, I'm so sorry, now that you mention it, I really wish that I'd actually started our dialogue with something like a well-researched 1200-word comment where I genuinely tried to engage with you. If only I'd...
...
...hang on on a sec. :chin:
apokrisis August 01, 2024 at 03:44 #921996
Quoting Jaded Scholar
like a well-researched 1200-word comment


:lol:
Jaded Scholar August 01, 2024 at 03:54 #921997
Reply to apokrisis
You're right, that "well-researched" bit was silly of me to qualify. It's very clear how negatively you react to actual evidence, and citing published academic research was probably very rude within ... whatever worldview you're trying to conform to.
apokrisis August 01, 2024 at 04:24 #922000
Reply to Lionino Getting things back on track, you will remember my semiotic point was how language had to evolve to the point (and I mean culturally evolve more than neuroanatomically) where it could sustain a new kind of social Umwelt – the mind that sees its landscape as its world. Every creek, every hillock, freighted with cultural meaning.

Here is a good article that touches on this in terms of what you argued about the San and click languages being somehow a sign of unbroken antiquity.

Human History Written in Stone and Blood – American Scientist

You can see that there was a sharp transition around 70kya. Blombos cave marks early evidence of symbolic culture.

But if this was neurological, why was it so swiftly followed by a collapse back to cultural simplicity? Brains didn't devolve. So it had to be a collapse of social structure.

The coastal package produced a population boom and so powered a growth in tribal complexity. You had a widespread trading economy emerge in the manner I described. A complexity of language and thought that organised the landscape into an extended network of human contact.

But perhaps the climate changed. Social interactions frayed and populations shrank back to isolated gene pools. Southern Africa was pushed into a lower level of hierarchical development. It only came back again when the "out of Africa" mob returned with the level of linguistic and cultural sophistication to fire things up once more.

So the anthropologist has to speculate on the available evidence. But this is the kind of considered story that emerges. The real Rubicon is the way language transforms the experienced world into a shared fabric of social relations.

If you don't have the population, you don't have the interactions that produce the structural complexity. And language is going to be matchingly simplified when it is no longer useful in everyday life.

So we have an evolutionary account that has to include reaching a critical mass in terms of populations and the intensity of social interactions. The crucial shift from living as a band to living as a tribe as I said.

Neuroanatomy isn't even under a selective pressure for a tribal mentality given that even Neanderthals struggled to exist as more than very thinly spread bands of about 10. Grammatical speech and the symbolic thought it enables are a precursor step – a good reason for why sub-Saharan Africa started to see this sapiens offshoot gathering some steam from 200kya.

But then comes the population density to properly spark the human transition to being a socially constructed animal. We mixed in numbers and so formed hierarchical networks across "owned" landscapes. We fought and traded, so we needed kinship structures and genealogy stories. Chieftains and agreements. Raiding parties. The trading of goods, wives and slaves.

Becoming political and economic creatures created the population growth that fed back into even more intense political and economic activity.

Anyway, read the article as it puts the click languages into the larger context of what was going on in the world. Click phonemes may seem all cool and weird, but they don't really tell us anything about archaic language. They are not some primitive linguistic feature as far as I can see.
Jaded Scholar August 01, 2024 at 04:35 #922003
Reply to apokrisis
Nah, mate. I'm finished. You can enjoy your anthropological fan-fiction in peace. :up:
apokrisis August 01, 2024 at 05:54 #922014
Gnomon August 01, 2024 at 17:28 #922116
Quoting Count Timothy von Icarus
Bits don't really work well as "fundamental building blocks," because they have to be defined in terms of some sort of relation, some potential for measurement to vary. IT does seem to work quite well with a process metaphysics though, e.g. pancomputationalism. But what about with semiotics? I have had a tough time figuring out this one.

Shannon's "bits" were basic to his theory, but can't be absolutely fundamental, because they are composed of two elements (1 & 0) plus the relationship between them. So, I think the metaphysical concept of Relation*1 (relativity) may be more essential, in that it is neither a composite material object nor a member of a mathematical series : the number line, of which 1 & 0 are the end points, the ideal brackets that enclose reality, not real in themselves. The only alternative to a Relation is an Absolute.

I'm not well-versed in academic Semiotics*1, but I don't think any particular sign, or even the general concept of sign, is fundamental, because signs/symbols always point to something else. You might say that Semiotics is also about Relations. And knowledge of relationships is the essence of Information & Reason & Logic : all metaphysical. As the link below*2 reveals, it's hard to even define Relation without referring to multiple non-fundamental entities, or to self-reference. :smile:


*1. Information general, Semiotics specific
Information is a vague and elusive concept, whereas the technological concepts are relatively easy to grasp. Semiotics solves this problem by using the concept of a sign as a starting point for a rigorous treatment for some rather woolly notions surrounding the concept of information.
https://ris.utwente.nl/ws/portalfiles/portal/5383733/101.pdf

*2. What is a relation in metaphysics?
Relations are ways in which several entities stand to each other. They usually connect distinct entities but some associate an entity with itself. The adicity of a relation is the number of entities it connects. The direction of a relation is the order in which the elements are related to each other.
https://en.wikipedia.org/wiki/Relation_(philosophy)
Apustimelogist August 01, 2024 at 22:17 #922142
Quoting Gnomon
they are composed of two elements


You can use as many "elements" as you want!
Gnomon August 02, 2024 at 15:50 #922306
Quoting Apustimelogist
they are composed of two elements — Gnomon
You can use as many "elements" as you want!

I was referring to electronic computer processing of information. In principle the registers could use any voltage, but in practice the voltage is ideally all or nothing --- 1 or 0, 100% or Zero, On or Off, 3.3V or 0V --- to minimize errors in communication. In any case, it's the logical relationship between elements (1:0 or 1/0) that is interpreted as information. You could say that the rounded-off 1s and 0s are signs, with lots of space in between, that are not likely to be confused with each other, unlike 1.032 and 1.023.

The "elements" of Shannon information are typically limited to 1s & 0s. That's why it's called Binary Code. In Nature, including the human brain, information processing may not be that precise, hence non-binary --- just like some Hollywood celebrities. :grin:


User image
Metaphysician Undercover August 03, 2024 at 11:39 #922588
Quoting apokrisis
It is relevant to the OP in that Everett follows Peirce in arguing for an evolution of language where indexes led to icons, and icons moved from signs that looked like the referents, to symbols where the relation was arbitrary.


I think you ought to notice that "signs that looked like the referents" indicates written language. And written language is viewed, while spoken language is heard. The two are very different, and have very different uses, so it is quite reasonable to consider that they evolved independently. At some time, in the relatively recent past, the two began to be united, when spoken words were given written symbols, and vice versa. This unification may have produced the "explosion" you refer to, but it in no way signifies the beginning of language use.

In an analysis of many different spoken words, Plato shows that this distinction, between spoken words that sound like the referent, and spoken words which have an arbitrary relation, is not a useful distinction. The ones which appear to have an arbitrary relation may be just so old that the word has evolved so as not to reveal its origins in some sort of similarity. So unless the history of the symbol is clearly known, the distinction may be completely misleading.
wonderer1 August 03, 2024 at 20:14 #922688
Quoting Metaphysician Undercover
I think you ought to notice that "signs that looked like the referents" indicates written language. And written language is viewed, while spoken language is heard. The two are very different, and have very different uses, so it is quite reasonable to consider that they evolved independently.


I'm not sure what is meant by "evolved independently" when we are talking about things evolving in one species.

However, having a greater number of neurons available, to associate in more complex ways, things going on in visual cortex and things going on in auditory cortex, might have been rather important.
Apustimelogist August 03, 2024 at 23:32 #922709
Quoting Gnomon
The "elements" of Shannon information are typically limited to 1s & 0s


I'm just saying that any number of elements is valid, since it's just the logarithmic base, and you can choose any base you want without affecting the properties of bits (or trits, or nats).
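To spell that out with a minimal Python sketch (the probability distribution here is just a made-up example): switching the logarithm base from 2 to 3 or to e only rescales the entropy by a constant factor, which is why bits, trits and nats are interchangeable units of the same quantity.

import math

def entropy(probs, base=2):
    # Shannon entropy of a discrete distribution, in the chosen logarithm base.
    return -sum(p * math.log(p, base) for p in probs if p > 0)

p = [0.5, 0.25, 0.125, 0.125]      # an arbitrary example distribution

h_bits = entropy(p, base=2)        # 1.75 bits
h_trits = entropy(p, base=3)       # ~1.104 trits
h_nats = entropy(p, base=math.e)   # ~1.213 nats

# Changing the base only rescales the measure by a constant conversion factor.
assert math.isclose(h_trits, h_bits / math.log2(3))
assert math.isclose(h_nats, h_bits * math.log(2))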
Metaphysician Undercover August 04, 2024 at 00:48 #922731
Quoting wonderer1
I'm not sure what is meant by "evolved independently" when we are talking about things evolving in one species.

However, having a greater number of neurons available, to associate in more complex ways, things going on in visual cortex and things going on in auditory cortex, might have been rather important.


The point is that spoken language and written language have fundamentally different purposes. The principal use for spoken language is communication, so it would have evolved in ways to facilitate that end. The principal use for written language is to serve as a memory aid, so it would develop to facilitate that end. Therefore the written language is fundamentally personal, rather than communicative. Consider Wittgenstein's private language example. Being personal, there may be aspects of written language which are designed to make it intentionally non-communicative, like secret code is for example. In any case, you ought to be able to see how the two are fundamentally different, as the principal purpose of one is to communicate with others, and the principal purpose of the other is to communicate with oneself at a later time.

This means that it is very probable that the essential aspects of one are not essential aspects of the other, so the development of each of the two needs to be considered separately. History and archeology may be misleading to us because what persists from ancient times, and is available to us now as evidence, is only the written material. So anthropology, to the extent that it relies on archeology, cannot provide an accurate history of the development of spoken language.
Treatid August 05, 2024 at 15:54 #923062
Quoting Count Timothy von Icarus
I am not sure where you got that from


https://youtu.be/DIL37Rkt4m0?si=8veEEon_DGHD7uNb&t=183

With specific reference to human language.

Quoting wonderer1
It seems to me one can dispense with theism, recognize that More is Different and that humans have more cortical neurons than any other species, and thereby have a basis for recognizing a uniqueness to humans.


At which point the question becomes how much difference qualifies as a distinction in kind versus a mere difference of degree.

Quoting Gnomon
Would you prefer to believe that Random Evolution "gave" some higher animals the "mechanism" of Reasoning?


I think that "reasoning" is a lot less special than many humans suppose.

Quoting Joshs
Perhaps more isnt so different after all.


Yes - this is a significant part of where I'm coming from.

Emergent

Human reasoning isn't special. Recursive thought (thinking about thinking) is the same mechanism as all other thought.

So - Everything is relationships. Neural networks are, physically and notionally, networks of relationships.

I'd like to compare a single relationship to a brush stroke on a canvas.

A single stroke of paint tends not to have much in the way of inherent meaning.

Many strokes of paint form the portrait or landscape that we find compelling/moving/boring...

The whole picture forms a shape that we find meaningful.

A single relationship is largely meaningless. Many relationships with multiple loops of connection form a compelling shape.

In a universe composed of relationships - there is little value in comparing individual relationships. Whether a relationship is the signifier or signified is irrelevant.

The Mona Lisa isn't any one brush stroke.

When we perceive the face of a smiling woman - it doesn't matter that the shape is composed of brush strokes, lit LEDs or atomic particles. What matters is that we recognise the shape.

Patterns

One of the neat features of patterns is that they are scale agnostic. Language can describe the patterns we see from sub-atomic up to the visible universe.

Similarly, patterns are material agnostic. We can see a face in the moon and animals in clouds. Okay - those specific examples aren't terribly useful... but it does mean we can look at a bunch of RGB LEDs and perceive Arnold Schwarzenegger telling the hapless victim "I'll be back!".

The downside (if you are invested in human thought being unique) is that learning mathematics and learning to walk are the same process.

We can argue that the "attempt-fail-pain-retry"/"attempt-succeed-reward-reinforce" loop of mathematics is longer (more abstract) and humans are especially good at delayed reward/failure.

Reply to wonderer1's description of specialness is plausible when a small difference of degree can make the difference between success and failure at a given task.

Clearly humans are special enough to have accomplished (*gestures broadly*) all this. But when running away from the bear, you just need to be faster than your companions.

Humans probably are better at learning, retaining and applying patterns - but I see no reason to think that wolves don't have much the same appreciation of rolling hills as apokrisis describes for humans.

No definitions

Quoting Count Timothy von Icarus
The interpretant need not be an "interpreter." We could consider here how the non-living photoreceptors in a camera might fill the role of interpretant (or the distinction of virtual signs or intentions in the media).


One of the defining features of a relational universe is that perception is unique to each observer. In General Relativity, space appears time-like and time appears space-like under certain conditions.

It isn't merely that semiotics can't define the signifier - the signifier changes according to the observer. It isn't that there is a fixed thing that we just happen not to be able to define - there are no fixed things.

Truth is, literally, in the eye of the beholder.

The Mona Lisa doesn't exist. Each person viewing the Mona Lisa experiences their relationships with the Mona Lisa.

[accusative]
You know that the two people looking at the Mona Lisa are seeing different things. And then you assume that there is one single correct perception that all the others are distortions of.
[/accusative]

The Mona Lisa is your perception of it. Your experiences are real.

You, semiotics, mathematics and the majority of philosophers are trying to demote experience to a second class citizen subservient to some special reality that can't be directly observed, can't be described but gives rise to each individual's perception.

Wrap up

When you only have a hammer... H'mm wrong aphorism.

When we can only describe (networks of) relationships, describing relationships is trivial. Describing anything else is impossible.

Human thought is manipulation of relationships. We don't have anything else to manipulate.

There is a possibility that there are processes outside the ability of language to describe. In which case - tough titty. Nothing we can do about that.

We know that we can describe (but not define) relationships.

Language is an aspect of the universe. Manipulating language is manipulating the universe.

To understand the nature of language is to understand the nature of the universe.

We can't define semiotics, or mathematics, or humanity.

But we can manipulate the bejeezus out of relationships.

Perception as pattern matching isn't new.

However, we can go beyond it being a nice theory. A relationship based description of cognition is the only game in town.
Bodhy August 07, 2024 at 11:02 #923513
To OP,


Deely is an absolute gift and his philosophy is incredibly rich. I've met and talked with some of his former colleagues and friends at length.

What Deely hoped for, IMO, is a recognition that there were two major paths philosophy could have gone down, and the Newtonian/Cartesian path was only one of them. There was also the budding path John Poinsot was developing: a sign-based semiotic philosophy, one which easily naturalizes mind and doesn't plague philosophy with dualisms everywhere.

Unfortunately, Poinsot was too late for his thought to take off, so it would have been lost to history if not for this reconstructive work.

The information revolution is a promising start, although there is now this kind of schism between matter and information that is still too Cartesian/modernist. Shannon's information theory is a far cry from a full-orbed semiotic theory, since it operationalizes information and meaning. There is still a way to go.

I think eventually the problems of relevance realization and framing will become so pressing that contemporary philosophy/science will need to take semiotic philosophy on board.
Lionino August 07, 2024 at 23:25 #923652
Quoting Bodhy
John Poinsot


João Poinsot, Portuguese theologian. Interesting; first time hearing of him.

What else can you say?
Count Timothy von Icarus August 09, 2024 at 13:48 #923990
Reply to Bodhy

:up:

I've found Robert Sokolowski to be a great updater/rehabilitator of these sorts of ideas too, although he unfortunately doesn't delve into the semiotic side much, sticking more to phenomenology and an updating of Aristotle and St. Thomas.

Reply to Lionino

He's a very important figure in the development of semiotics. Nathan Lyons has a pretty interesting book called "Signs in the Dust" on him, sort of an updating. I thought the most interesting part was the final section on the application of semiotics to non-living things, but he has a good intro on him at the start.
Patterner August 11, 2024 at 00:20 #924329
Quoting Joshs
Only humans have language
I just started reading "The Symbolic Species", by Terrence Deacon. Literally only the Preface so far. In it, he tells us about giving a talk about the brain to his son's elementary school.
Deacon:I was talking about brains and how they work, and how human brains are different, and how this difference is reflected in our unique and complex mode of communication: language. But when I explained that only humans communicate with language, I struck a dissonant chord.

“But don’t other animals have their own languages?” one child asked.

This gave me the opportunity to outline some of the ways that language is special: how speech is far more rapid and precise than any other communication behavior, how the underlying rules for constructing sentences are so complicated and curious that it’s hard to explain how they could ever be learned, and how no other form of animal communication has the logical structure and open-ended possibilities that all languages have. But this wasn’t enough to satisfy a mind raised on Walt Disney animal stories.

“Do animals just have SIMPLE languages?” my questioner continued.

“No, apparently not,” I explained. “Although other animals communicate with one another, at least within the same species, this communication resembles language only in a very superficial way—for example, using sounds—but none that I know of has the equivalents of such things as words, much less nouns, verbs, and sentences. Not even simple ones.”
I guess the rest of the book extensively expands on this.
Joshs August 11, 2024 at 02:13 #924363
Reply to Patterner
Deacon:This gave me the opportunity to outline some of the ways that language is special


Michael Tomasello argues that rather than there being something called language as a special phenomenon in itself differentiating humans from other animals, what separates humans from animals is a cognitive complexity that leads to the sophisticated social interaction making language possible.

Patterner August 11, 2024 at 04:25 #924373
Reply to Joshs
I believe Deacon would agree:
Terrence Deacon:...language is not merely a mode of communication, it is also the outward expression of an unusual mode of thought—symbolic representation. Without symbolization the entire virtual world that I have described is out of reach: inconceivable. My extravagant claim to know what other species cannot know rests on evidence that symbolic thought does not come innately built in, but develops by internalizing the symbolic process that underlies language. So species that have not acquired the ability to communicate symbolically cannot have acquired the ability to think this way either.
Joshs August 11, 2024 at 12:40 #924429
Reply to Patterner

Quoting Patterner
Reply to Joshs
I believe Deacon would agree:
...language is not merely a mode of communication, it is also the outward expression of an unusual mode of thought—symbolic representation. Without symbolization the entire virtual world that I have described is out of reach: inconceivable. My extravagant claim to know what other species cannot know rests on evidence that symbolic thought does not come innately built in, but develops by internalizing the symbolic process that underlies language. So species that have not acquired the ability to communicate symbolically cannot have acquired the ability to think this way either.
— Terrence Deacon


I wonder how Deacon would distinguish between human use of word concepts and, for instance, the way a dog responds to a sentence. If a symbol represents a meaning by integrating information from disparate sense modalities, what is a dog doing when it recognizes a word, or an object, on this same basis? And I wonder how he would respond to this recent article in the New York Times?

“Language was long understood as a human-only affair. New research suggests that isn’t so.

Can a mouse learn a new song? Such a question might seem whimsical. Though humans have lived alongside mice for at least 15,000 years, few of us have ever heard mice sing, because they do so in frequencies beyond the range detectable by human hearing. As pups, their high-pitched songs alert their mothers to their whereabouts; as adults, they sing in ultrasound to woo one another. For decades, researchers considered mouse songs instinctual, the fixed tunes of a windup music box, rather than the mutable expressions of individual minds.

But no one had tested whether that was really true. In 2012, a team of neurobiologists at Duke University, led by Erich Jarvis, a neuroscientist who studies vocal learning, designed an experiment to find out. The team surgically deafened five mice and recorded their songs in a mouse-size sound studio, tricked out with infrared cameras and microphones. They then compared sonograms of the songs of deafened mice with those of hearing mice. If the mouse songs were innate, as long presumed, the surgical alteration would make no difference at all.

Jarvis and his researchers slowed down the tempo and shifted the pitch of the recordings, so that they could hear the songs with their own ears. Those of the intact mice sounded “remarkably similar to some bird songs,” Jarvis wrote in a 2013 paper that described the experiment, with whistlelike syllables similar to those in the songs of canaries and the trills of dolphins. Not so the songs of the deafened mice: Deprived of auditory feedback, their songs became degraded, rendering them nearly unrecognizable. They sounded, the scientists noted, like “squawks and screams.” Not only did the tunes of a mouse depend on its ability to hear itself and others, but also, as the team found in another experiment, a male mouse could alter the pitch of its song to compete with other male mice for female attention.

Inside these murine skills lay clues to a puzzle many have called “the hardest problem in science”: the origins of language. In humans, “vocal learning” is understood as a skill critical to spoken language. Researchers had already discovered the capacity for vocal learning in species other than humans, including in songbirds, hummingbirds, parrots, cetaceans such as dolphins and whales, pinnipeds such as seals, elephants and bats. But given the centuries-old idea that a deep chasm separated human language from animal communications, most scientists understood the vocal learning abilities of other species as unrelated to our own — as evolutionarily divergent as the wing of a bat is to that of a bee. The apparent absence of intermediate forms of language — say, a talking animal — left the question of how language evolved resistant to empirical inquiry.

When the Duke researchers dissected the brains of the hearing and deafened mice, they found a rudimentary version of the neural circuitry that allows the forebrains of vocal learners such as humans and songbirds to directly control their vocal organs. Mice don’t seem to have the vocal flexibility of elephants; they cannot, like the 10-year-old female African elephant in Tsavo, Kenya, mimic the sound of trucks on the nearby Nairobi-Mombasa highway. Or the gift for mimicry of seals; an orphaned harbor seal at the New England Aquarium could utter English phrases in a perfect Maine accent (“Hoover, get over here,” he said. “Come on, come on!”).

But the rudimentary skills of mice suggested that the language-critical capacity might exist on a continuum, much like a submerged land bridge might indicate that two now-isolated continents were once connected. In recent years, an array of findings have also revealed an expansive nonhuman soundscape, including: turtles that produce and respond to sounds to coordinate the timing of their birth from inside their eggs; coral larvae that can hear the sounds of healthy reefs; and plants that can detect the sound of running water and the munching of insect predators. Researchers have found intention and meaning in this cacophony, such as the purposeful use of different sounds to convey information. They’ve theorized that one of the most confounding aspects of language, its rules-based internal structure, emerged from social drives common across a range of species.

With each discovery, the cognitive and moral divide between humanity and the rest of the animal world has eroded. For centuries, the linguistic utterances of Homo sapiens have been positioned as unique in nature, justifying our dominion over other species and shrouding the evolution of language in mystery. Now, experts in linguistics, biology and cognitive science suspect that components of language might be shared across species, illuminating the inner lives of animals in ways that could help stitch language into their evolutionary history — and our own.

For hundreds of years, language marked “the true difference between man and beast,” as the philosopher René Descartes wrote in 1649. As recently as the end of the last century, archaeologists and anthropologists speculated that 40,000 to 50,000 years ago a “human revolution” fractured evolutionary history, creating an unbridgeable gap separating humanity’s cognitive and linguistic abilities from those of the rest of the animal world.

Linguists and other experts reinforced this idea. In 1959, the M.I.T. linguist Noam Chomsky, then 30, wrote a blistering 33-page takedown of a book by the celebrated behaviorist B.F. Skinner, which argued that language was just a form of “verbal behavior,” as Skinner titled the book, accessible to any species given sufficient conditioning. One observer called it “perhaps the most devastating review ever written.” Between 1972 and 1990, there were more citations of Chomsky’s critique than Skinner’s book, which bombed.

The view of language as a uniquely human superpower, one that enabled Homo sapiens to write epic poetry and send astronauts to the moon, presumed some uniquely human biology to match. But attempts to find those special biological mechanisms — whether physiological, neurological, genetic — that make language possible have all come up short.

One high-profile example came in 2001, when a team led by the geneticists Cecilia Lai and Simon Fisher discovered a gene — called FoxP2 — in a London family riddled with childhood apraxia of speech, a disorder that impairs the ability of otherwise cognitively capable individuals to coordinate their muscles to produce sounds, syllables and words in an intelligible sequence. Commentators hailed FoxP2 as the long sought-after gene that enabled humans to talk — until the gene turned up in the genomes of rodents, birds, reptiles, fish and ancient hominins such as Neanderthals, whose version of FoxP2 is much like ours. (Fisher so often encountered the public expectation that FoxP2 was the “language gene” that he resolved to acquire a T-shirt that read, “It’s more complicated than that.”)

The search for an exclusively human vocal anatomy has failed, too. For a 2001 study, the cognitive scientist Tecumseh Fitch cajoled goats, dogs, deer and other species to vocalize while inside a cineradiograph machine that filmed the way their larynxes moved under X-ray. Fitch discovered that species with larynxes different from ours — ours is “descended” and located in our throats rather than our mouths — could nevertheless move them in similar ways. One of them, the red deer, even had the same descended larynx we do.

Fitch and his then-colleague at Harvard, the evolutionary biologist Marc Hauser, began to wonder if they’d been thinking about language all wrong. Linguists described language as a singular skill, like being able to swim or bake a soufflé: You either had it or you didn’t. But perhaps language was more like a multicomponent system that included psychological traits, such as the ability to share intentions; physiological ones, such as motor control over vocalizations and gestures; and cognitive capacities, such as the ability to combine signals according to rules, many of which might appear in other animals as well.

Fitch, whom I spoke to by Zoom in his office at the University of Vienna, drafted a paper with Hauser as a “kind of an argument against Chomsky,” he told me. As a courtesy, he sent the M.I.T. linguist a draft. One evening, he and Hauser were sitting in their respective offices along the same hall at Harvard when an email from Chomsky dinged their inboxes. “We both read it and we walked out of our rooms going, ‘What?’” Chomsky indicated that not only did he agree, but that he’d be willing to sign on to their next paper on the subject as a co-author. That paper, which has since racked up more than 7,000 citations, appeared in the journal Science in 2002.

Squabbles continued over which components of language were shared with other species and which, if any, were exclusive to humans. Those included, among others, language’s intentionality, its system of combining signals, its ability to refer to external concepts and things separated by time and space and its power to generate an infinite number of expressions from a finite number of signals. But reflexive belief in language as an evolutionary anomaly started to dissolve. “For the biologists,” recalled Fitch, “it was like, ‘Oh, good, finally the linguists are being reasonable.’”

Evidence of continuities between animal communication and human language continued to mount. The sequencing of the Neanderthal genome in 2010 suggested that we hadn’t significantly diverged from that lineage, as the theory of a “human revolution” posited. On the contrary, Neanderthal genes and those of other ancient hominins persisted in the modern human genome, evidence of how intimately we were entangled. In 2014, Jarvis found that the neural circuits that allowed songbirds to learn and produce novel sounds matched those in humans, and that the genes that regulated those circuits evolved in similar ways. The accumulating evidence left “little room for doubt,” Cedric Boeckx, a theoretical linguist at the University of Barcelona, noted in the journal Frontiers in Neuroscience. “There was no ‘great leap forward.’”

As our understanding of the nature and origin of language shifted, a host of fruitful cross-disciplinary collaborations arose. Colleagues of Chomsky’s, such as the M.I.T. linguist Shigeru Miyagawa, whose early career was shaped by the precept that “we’re smart, they’re not,” applied for grants with primatologists and neuroscientists to study how human language might be related to birdsong and primate calls. Interdisciplinary centers sprang up devoted specifically to the evolution of language, including at the University of Zurich and the University of Edinburgh. Lectures at a biannual conference on language evolution once dominated by “armchair theorizing,” as the cognitive scientist and founder of the University of Edinburgh’s Centre for Language Evolution, Simon Kirby, put it, morphed into presentations “completely packed with empirical data.”

One of the thorniest problems researchers sought to address was the link between thought and language. Philosophers and linguists long held that language must have evolved not for the purpose of communication but to facilitate abstract thought. The grammatical rules that structure language, a feature of languages from Algonquin to American Sign Language, are more complex than necessary for communication. Language, the argument went, must have evolved to help us think, in much the same way that mathematical notations allow us to make complex calculations.

Ev Fedorenko, a cognitive neuroscientist at M.I.T., thought this was “a cool idea,” so, about a decade ago, she set out to test it. If language is the medium of thought, she reasoned, then thinking a thought and absorbing the meaning of spoken or written words should activate the same neural circuits in the brain, like two streams fed by the same underground spring. Earlier brain-imaging studies showed that patients with severe aphasia could still solve mathematical problems, despite their difficulty in deciphering or producing language, but failed to pinpoint distinctions between brain regions dedicated to thought and those dedicated to language. Fedorenko suspected that might be because the precise location of these regions varied from individual to individual. In a 2011 study, she asked healthy subjects to make computations and decipher snatches of spoken and written language while she watched how blood flowed to aroused parts of their brains using an M.R.I. machine, taking their unique neural circuitry into account in her subsequent analysis. Her fM.R.I. studies showed that thinking thoughts and decoding words mobilized distinct brain pathways. Language and thought, Fedorenko says, “really are separate in an adult human brain.”

At the University of Edinburgh, Kirby hit upon a process that might explain how language’s internal structure evolved. That structure, in which simple elements such as sounds and words are arranged into phrases and nested hierarchically within one another, gives language the power to generate an infinite number of meanings; it is a key feature of language as well as of mathematics and music. But its origins were hazy. Because children intuit the rules that govern linguistic structure with little if any explicit instruction, philosophers and linguists argued that it must be a product of some uniquely human cognitive process. But researchers who scrutinized the fossil record to determine when and how that process evolved were stumped: The first sentences uttered left no trace behind.

Kirby designed an experiment to simulate the evolution of language inside his lab. First, he developed made-up codes to serve as proxies for the disordered collections of words widely believed to have preceded the emergence of structured language, such as random sequences of colored lights or a series of pantomimes. Then he recruited subjects to use the code under a variety of conditions and studied how the code changed. He asked subjects to use the code to solve communication tasks, for example, or to pass the code on to one another as in a game of telephone. He ran the experiment hundreds of times using different parameters on a variety of subjects, including on a colony of baboons living in a seminaturalistic enclosure equipped with a bank of computers on which they could choose to play his experimental games.

What he found was striking: Regardless of the native tongue of the subjects, or whether they were baboons, college students or robots, the results were the same. When individuals passed the code on to one another, the code became simpler but also less precise. But when they passed it on to one another and also used it to communicate, the code developed a distinct architecture. Random sequences of colored lights turned into richly patterned ones; convoluted, pantomimic gestures for words such as “church” or “police officer” became abstract, efficient signs. “We just saw, spontaneously emerging out of this experiment, the language structures we were waiting for,” Kirby says. His findings suggest that language’s mystical power — its ability to turn the noise of random signals into intelligible formulations — may have emerged from a humble trade-off: between simplicity, for ease of learning, and what Kirby called “expressiveness,” for unambiguous communication.

For Descartes, the equation of language with thought meant animals had no mental life at all: “The brutes,” he opined, “don’t have any thought.” Breaking the link between language and human biology didn’t just demystify language; it restored the possibility of mind to the animal world and repositioned linguistic capacities as theoretically accessible to any social species.

This summer, I met with Marcelo Magnasco, a biophysicist, and Diana Reiss, a psychologist at Hunter College who studies dolphin cognition, in Magnasco’s lab at Rockefeller University. Overlooking the East River, it was a warmly lit room, with rows of burbling tanks inhabited by octopuses, whose mysterious signals they hoped to decode. Magnasco became curious about the cognitive and communicative abilities of cephalopods while diving recreationally, he told me. Numerous times, he said, he encountered cephalopods and had “the overpowering impression that they were trying to communicate with me.” During the Covid-19 shutdown, when his work studying dolphin communication with Reiss was derailed, Magnasco found himself driving to a Petco in Staten Island to buy tanks for octopuses to live in his lab.

During my visit, the grayish pink tentacles of the octopus clinging to the side of the glass wall of her tank started to flash bright white. Was she angry? Was she trying to tell us something? Was she even aware of our presence? There was no way to know, Magnasco said. Earlier efforts to find linguistic capacities in other species failed, in part, he explained, because we assumed they would look like our own. But the communication systems of other species might, in fact, be “truly exotic to us,” Magnasco said. A species that can recognize objects by echolocation, as cetaceans and bats can, might communicate using acoustic pictographs, for example, which might sound to us like meaningless chirps or clicks. To disambiguate the meaning of animal signals, such as a string of dolphin clicks or whalesong, scientists needed some inkling of where meaning-encoding units began and ended, Reiss explained. “We, in fact, have no idea what the smallest unit is,” she said. If scientists analyze animal calls using the wrong segmentation, meaningful expressions turn into meaningless drivel: “ad ogra naway” instead of “a dog ran away.”

An international initiative called Project CETI, founded by David Gruber, a biologist at the City University of New York, hopes to get around this problem by feeding recordings of sperm-whale clicks, known as codas, into computer models, which might be able to discern patterns in them, in the same way that ChatGPT was able to grasp vocabulary and grammar in human language by analyzing publicly available text. Another method, Reiss says, is to provide animal subjects with artificial codes and observe how they use them.

Reiss’s research on dolphin cognition is one of a handful of projects on animal communication that dates back to the 1980s, when there were widespread funding cuts in the field, after a top researcher retracted his much-hyped claim that a chimpanzee could be trained to use sign language to converse with humans. In a study published in 1993, Reiss offered bottlenose dolphins at a facility in Northern California an underwater keypad that allowed them to choose specific toys, which it delivered while emitting computer-generated whistles, like a kind of vending machine. The dolphins spontaneously began mimicking the computer-generated whistles when they played independently with the corresponding toy, like kids tossing a ball and naming it “ball, ball, ball,” Reiss told me. “The behavior,” Reiss said, “was strikingly similar to the early stages of language acquisition in children.”

The researchers hoped to replicate the method by outfitting an octopus tank with an interactive platform of some kind and observing how the octopus engaged with it. But it was unclear whether such a device might interest the lone cephalopod. An earlier episode of displeasure led her to discharge enough ink to turn her tank water so black that she couldn’t be seen. Unlocking her communicative abilities might require that she consider the scientists as fascinating as they did her.

While experimenting with animals trapped in cages and tanks can reveal their latent faculties, figuring out the range of what animals are communicating to one another requires spying on them in the wild. Past studies often conflated general communication, in which individuals extract meaning from signals sent by other individuals, with language’s more specific, flexible and open-ended system. In a seminal 1980 study, for example, the primatologists Robert Seyfarth and Dorothy Cheney used the “playback” technique to decode the meaning of alarm calls issued by vervet monkeys at Amboseli National Park in Kenya. When a recording of the barklike calls emitted by a vervet encountering a leopard was played back to other vervets, it sent them scampering into the trees. Recordings of the low grunts of a vervet who spotted an eagle led other vervets to look up into the sky; recordings of the high-pitched chutters emitted by a vervet upon noticing a python caused them to scan the ground.

At the time, The New York Times ran a front-page story heralding the discovery of a “rudimentary ‘language’” in vervet monkeys. But critics objected that the calls might not have any properties of language at all. Instead of being intentional messages to communicate meaning to others, the calls might be involuntary, emotion-driven sounds, like the cry of a hungry baby. Such involuntary expressions can transmit rich information to listeners, but unlike words and sentences, they don’t allow for discussion of things separated by time and space. The barks of a vervet in the throes of leopard-induced terror could alert other vervets to the presence of a leopard — but couldn’t provide any way to talk about, say, “the really smelly leopard who showed up at the ravine yesterday morning.”

Toshitaka Suzuki, an ethologist at the University of Tokyo who describes himself as an animal linguist, struck upon a method to disambiguate intentional calls from involuntary ones while soaking in a bath one day. When we spoke over Zoom, he showed me an image of a fluffy cloud. “If you hear the word ‘dog,’ you might see a dog,” he pointed out, as I gazed at the white mass. “If you hear the word ‘cat,’ you might see a cat.” That, he said, marks the difference between a word and a sound. “Words influence how we see objects,” he said. “Sounds do not.” Using playback studies, Suzuki determined that Japanese tits, songbirds that live in East Asian forests and that he has studied for more than 15 years, emit a special vocalization when they encounter snakes. When other Japanese tits heard a recording of the vocalization, which Suzuki dubbed the “jar jar” call, they searched the ground, as if looking for a snake. To determine whether “jar jar” meant “snake” in Japanese tit, he added another element to his experiments: an eight-inch stick, which he dragged along the surface of a tree using hidden strings. Usually, Suzuki found, the birds ignored the stick. It was, by his analogy, a passing cloud. But then he played a recording of the “jar jar” call. In that case, the stick seemed to take on new significance: The birds approached the stick, as if examining whether it was, in fact, a snake. Like a word, the “jar jar” call had changed their perception.

Cat Hobaiter, a primatologist at the University of St. Andrews who works with great apes, developed a similarly nuanced method. Because great apes appear to have a relatively limited repertoire of vocalizations, Hobaiter studies their gestures. For years, she and her collaborators have followed chimps in the Budongo forest and gorillas in Bwindi in Uganda, recording their gestures and how others respond to them. “Basically, my job is to get up in the morning to get the chimps when they’re coming down out of the tree, or the gorillas when they’re coming out of the nest, and just to spend the day with them,” she told me. So far, she says, she has recorded about 15,600 instances of gestured exchanges between apes.

To determine whether the gestures are involuntary or intentional, she uses a method adapted from research on human babies. Hobaiter looks for signals that evoke what she calls an “Apparently Satisfactory Outcome.” The method draws on the theory that involuntary signals continue even after listeners have understood their meaning, while intentional ones stop once the signaler realizes her listener has comprehended the signal. It’s the difference between the continued wailing of a hungry baby after her parents have gone to fetch a bottle, Hobaiter explains, and my entreaties to you to pour me some coffee, which cease once you start reaching for the coffeepot. To search for a pattern, she says she and her researchers have looked “across hundreds of cases and dozens of gestures and different individuals using the same gesture across different days.” So far, her team’s analysis of 15 years’ worth of video-recorded exchanges has pinpointed dozens of ape gestures that trigger “apparently satisfactory outcomes.”

These gestures may also be legible to us, albeit beneath our conscious awareness. Hobaiter applied her technique on pre-verbal 1- and 2-year-old children, following them around recording their gestures and how they affected attentive others, “like they’re tiny apes, which they basically are,” she says. She also posted short video clips of ape gestures online and asked adult visitors who’d never spent any time with great apes to guess what they thought they meant. She found that pre-verbal human children use at least 40 or 50 gestures from the ape repertoire, and adults correctly guessed the meaning of video-recorded ape gestures at a rate “significantly higher than expected by chance,” as Hobaiter and Kirsty E. Graham, a postdoctoral research fellow in Hobaiter’s lab, reported in a 2023 paper for PLOS Biology.

The emerging research might seem to suggest that there’s nothing very special about human language. Other species use intentional wordlike signals just as we do. Some, such as Japanese tits and pied babblers, have been known to combine different signals to make new meanings. Many species are social and practice cultural transmission, satisfying what might be prerequisite for a structured communication system like language. And yet a stubborn fact remains. The species that use features of language in their communications have few obvious geographical or phylogenetic similarities. And despite years of searching, no one has discovered a communication system with all the properties of language in any species other than our own.

For some scientists, the mounting evidence of cognitive and linguistic continuities between humans and animals outweighs evidence of any gaps. “There really isn’t such a sharp distinction,” Jarvis, now at Rockefeller University, said in a podcast. Fedorenko agrees. The idea of a chasm separating man from beast is a product of “language elitism,” she says, as well as a myopic focus on “how different language is from everything else.”

But for others, the absence of clear evidence of all the components of language in other species is, in fact, evidence of their absence. In a 2016 book on language evolution titled “Why Only Us,” written with the computer scientist and computational linguist Robert C. Berwick, Chomsky describes animal communications as “radically different” from human language. Seyfarth and Cheney, in a 2018 book, note the “striking discontinuities” between human and nonhuman loquacity. Animal calls may be modifiable; they may be voluntary and intentional. But they’re rarely combined according to rules in the way that human words are and “appear to convey only limited information,” they write. If animals had anything like the full suite of linguistic components we do, Kirby says, we would know by now. Animals with similar cognitive and social capacities to ours rarely express themselves systematically the way we do, with systemwide cues to distinguish different categories of meaning. “We just don’t see that kind of level of systematicity in the communication systems of other species,” Kirby said in a 2021 talk.

This evolutionary anomaly may seem strange if you consider language an unalloyed benefit. But what if it isn’t? Even the most wondrous abilities can have drawbacks. According to the popular “self-domestication” hypothesis of language’s origins, proposed by Kirby and James Thomas in a 2018 paper published in Biology & Philosophy, variable tones and inventive locutions might prevent members of a species from recognizing others of their kind. Or, as others have pointed out, they might draw the attention of predators. Such perils could help explain why domesticated species such as Bengalese finches have more complex and syntactically rich songs than their wild kin, the white-rumped munia, as discovered by the biopsychologist Kazuo Okanoya in 2012; why tamed foxes and domesticated canines exhibit heightened abilities to communicate, at least with humans, compared with wolves and wild foxes; and why humans, described by some experts as a domesticated species of their ape and hominin ancestors, might be the most talkative of all. A lingering gap between our abilities and those of other species, in other words, does not necessarily leave language stranded outside evolution. Perhaps, Fitch says, language is unique to Homo sapiens, but not in any unique way: special to humans in the same way the trunk is to the elephant and echolocation is to the bat.

The quest for language’s origins has yet to deliver King Solomon’s seal, a ring that magically bestows upon its wearer the power to speak to animals, or the future imagined in a short story by Ursula K. Le Guin, in which therolinguists pore over the manuscripts of ants, the “kinetic sea writings” of penguins and the “delicate, transient lyrics of the lichen.” Perhaps it never will. But what we know so far tethers us to our animal kin regardless. No longer marooned among mindless objects, we have emerged into a remade world, abuzz with the conversations of fellow thinking beings, however inscrutable.

(Sonia Shah is a science journalist and the author, most recently, of “The Next Great Migration: The Beauty and Terror of Life on the Move.”)
Patterner August 11, 2024 at 13:04 #924435
Reply to Joshs
Well, I'll get to your post later. Yard work today. But I don't think there's any possibility that any other animal has any language that approaches human language. Because they can't think in the kinds of ways we do. If they were talking, we'd be able to learn each other's languages, and have conversations. We would have been doing this since the time we and any species capable of it found ourselves in the same place. Our cultures and societies would be much different if we had been coexisting with animals that could communicate like us for the last several thousand years, if not hundreds of thousands.

There are many people who have put great effort into communicating with various other species. Apes and dolphins are big ones. The octopus is supposed to be an intelligent animal, also. But we cannot have a conversation with any of them. They just don't have the ability.

Also, I suspect they'd wipe us out if they could think in those ways.
Bodhy August 13, 2024 at 13:50 #925063
Reply to Count Timothy von Icarus

Thanks for the reminder to read Signs in the Dust! I found out about that book some time ago and meant to read it, but it fell by the wayside.



Personally, I prefer semiotics, or a semiotic phenomenology, over classical phenomenology. I think the reason people came to regard phenomenology as a research programme without merit is that phenomenologists themselves have had a tendency to devalue any phenomenological perspective that wasn't derivative of Husserl.

Classical phenomenology, I believe, is too anthropocentric and arguably, at least in the Husserlian vein, might steer too close to a transcendental idealism. Husserl's bracketing means speculation about the external world is left out, which may leave it vulnerable to accusations of idealism - though I don't think that's a fair thing to accuse phenomenology as an enterprise of.

I'd want phenomenology to yield metaphysical insight too, and I think Deely is correct that we want semiosis to be foregrounded over classical phenomenology. Idealism is not the only worry; I also don't see the classical perspective working out a mechanism for how creatures actually make sense of their world. The semiotic perspective Deely outlines illuminates the "how" of sense-making and also the "how" of the phenomenon qua appearance. Conscious phenomena are a scaled-up, special case of semiosis.

Some clear benefits there - we don't need to put the metaphysical reality of the appearance into abeyance, nor do we have a consciousness-centric method that limits an inquiry into the forms of meaning-making.
Count Timothy von Icarus August 13, 2024 at 15:51 #925093
Reply to Patterner

An interesting quote from Deacon. I like his work, even though I don't find myself agreeing with a good deal of it. Personally, I think there are good grounds for thinking in terms of pansemiosis, and there are useful ways to apply the concept of computation to physics and non-living systems, an idea he tends to write off. I mentioned Lyons' book before, and I think he highlights how the Scholastic idea of virtual signs, or St. Thomas' "intentions in the media," can be used for describing how signs exist in non-living contexts.

In particular, computation (or something like it involving real numbers and/or indeterminism) seems to be a useful model of causation, where past states entail future ones (or a range of future ones in a stochastic fashion). But I still think he probably gets something right in the relationship between thermodynamics, life, and the relevance of the "absential."
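As a rough illustration of that picture of computation as a model of causation (the state space and the transition rules below are arbitrary, made-up examples), a deterministic rule entails exactly one successor state, while a stochastic rule entails a range of possible successors:

import random

def step_deterministic(state):
    # A deterministic transition: the current state entails exactly one next state.
    return (3 * state + 1) % 17

def step_stochastic(state):
    # A stochastic transition: the current state entails a range of possible next states.
    return random.choice([(state + 1) % 17, (state + 5) % 17])

def run(step, start, n):
    # Generate a history of n transitions from the starting state.
    history = [start]
    for _ in range(n):
        history.append(step(history[-1]))
    return history

print(run(step_deterministic, 2, 5))  # same start state -> same history, every time
random.seed(1)
print(run(step_stochastic, 2, 5))     # same start state -> one of many possible histories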

Reply to Bodhy

I think this is a fair assessment of mainstream contemporary phenomenology. The later Husserl does seem to lurch towards a sort of idealism that I don't think is particularly helpful. Personally, I am more of a fan of Robert Sokolowski's merging of Husserl with Aristotle and St. Thomas, which comes through with a more sensible "realist" view of phenomenology. The "Phenomenology of the Human Person" is a sort of summa of his work and is really great. However, it doesn't touch on signs or the type of causality unique to signs (making us think one thing instead of another), and this seems like a real miss, something that could ground his realist intuitions.

Eric Perl's "Thinking Being," is another good one in this vein. Perl is interested in drawing out the commonalities in the classical tradition, particularly Parmenides, Plato, Aristotle, Plotinus, and St. Thomas. A big thesis of his is that the phenomenological solution to the problems that comes with the subject/object divide are already present in ancient and medieval thought, and that there are better solutions to contemporary problems just sitting there for us to pick up again. What I particularly like is the chapter on Plotinus, which offers a solid critique of the problem in strict correspondence theories of truth of the sort that continue to dominate analytic philosophy, and end up resulting in deflationary theories of truth and meaning.
Patterner August 15, 2024 at 23:05 #925783
Reply to wonderer1
That's extremely interesting! I can't read it, but I read about it here:
https://amp.cnn.com/cnn/2024/06/10/science/african-elephants-name-like-calls-intl-scli-scn

Elephants are possibly capable of abstract thought?? I wonder if other animals are capable of any other aspects of thinking we associate only with ourselves. Maybe it isn't these specific capabilities that make human thought and language stand out, but, rather, that we have the combination of all of these aspects.

Or, even crazier, maybe there are species that have capabilities we lack. But, lacking some critical combination, they can't tell us about what they're thinking, and we can't notice their unique quality.
Patterner August 16, 2024 at 11:13 #925921
Reply to Count Timothy von Icarus
That Lyons book is expensive! :lol: And probably way beyond me. I need a good intro to Semiotics. Hopefully, Daniel Chandler is good.
Bodhy August 26, 2024 at 11:43 #928099
Reply to Patterner


I'd recommend Sebeok's book Signs or Deely's introductory text Introducing Semiotic.


I'd also recommend once you have the basics down, getting Deely's magisterial Four Ages of Understanding. I think it's necessary to understand how semiotics is an entire thoughtform and how it relates to the history of philosophy, and Deely's book does this.
Patterner August 26, 2024 at 11:45 #928101
Reply to Bodhy
Thank you! I'm just not feeling Chandler's book.