Some questions about Naming and Necessity
This is a branch from the "what is real" thread, and I ended up with questions about Naming and Necessity, such as how far a rigid designator can be stripped of properties and still be valuable. We also pondered to what extent N&N is merely straightening out some snags in logic versus saying something about what is necessary and contingent as we look out at the world. Lastly, we ended up disagreeing about whether one person can become another: for instance, could I become Obama?
To flesh out the first question, let's look at the point in the first lecture where Kripke gives an example of the problem N&N was aimed at addressing:
Quoting Naming and Necessity p.24,25
The first topic in the pair of topics is naming. By a name here I will mean a proper name, i.e., the name of a person, a city, a country, etc. It is well known that modern logicians also are very interested in definite descriptions: phrases of the form 'the x such that φx', such as 'the man who corrupted Hadleyburg'. Now, if one and only one man ever corrupted Hadleyburg, then that man is the referent, in the logician's sense, of that description. We will use the term 'name' so that it does not include definite descriptions of that sort, but only those things which in ordinary language would be called 'proper names'. If we want a common term to cover names and descriptions, we may use the term 'designator'.
It is a point, made by Donnellan, that under certain circumstances a particular speaker may use a definite description to refer, not to the proper referent, in the sense that I've just defined it, of that description, but to something else which he wants to single out and which he thinks is the proper referent of the description, but which in fact isn't. So you may say, 'The man over there with the champagne in his glass is happy', though he actually only has water in his glass. Now, even though there is no champagne in his glass, and there may be another man in the room who does have champagne in his glass, the speaker intended to refer, or maybe, in some sense of 'refer', did refer, to the man he thought had the champagne in his glass.
So Kripke is pointing out that reference can't be based entirely on names that stand for descriptions. In fact, we can point to people, places, and specific things without knowing much about them. We can even be wrong about them and still pick them out for the purposes of communication.
The question J and I are wondering about is: how far does this go? When does speech about a proper name become nonsense because a contradiction has arisen between an assertion and something essential about the object of the assertion? How did Kripke handle this question?
More to come:
Comments (132)
Quoting frank
The reference is entirely subjective.
A human is saying your words, and it will obviously fall to that person's view. In brief terms it's just a lie, and I don't understand why you are putting extra emphasis on this.
I generally do not understand, so if you are willing to explain, please do.
That's what Kripke is describing here in lecture 2:
Quoting Naming and Necessity, Lecture 2 p.71
Quoting ibid p.79
@J Would you agree that #6 of the theses explains how an object obtains necessary properties? It's a matter of the speaker's intentions. That's at least one way.
Quoting Red Sky
It's just because the issue of reference became a hot topic in analytical philosophy, starting with Russell and Frege, who advocated that names are symbols for a collection of descriptions, through Quine, who claimed reference can't really be fixed. It's just part of this ongoing philosophical debate about how speech works.
Pretty clearly, you can't cite the fact that you refer to something as X as the criterion or determination for why you do so!
Quoting frank
Yes, one way, and on one understanding of necessity (a priori). And notice how we're forced to phrase it: the object obtains the properties. Is this magic? :smile: Can this be what Kripke literally means?
BTW, do you take "in the idiolect of the speaker" to be Kripke just being careful (like "in language L"), or is he making some additional point?
This isn't about necessity in general. It's that when I pick an object, like the pillow with the red button, I'm only looking at possible worlds where that object exists. There are possible worlds where the pillow doesn't have a red button, but I don't care about those. For the purposes of my communication, the red button is necessary because it's in all the possible worlds I'm paying attention to. I magically made the red button necessary by fiat.
Quoting J
He's saying that when I rigidly designate an object, like the pillow with the red button, you're supposed to pick up on what I mean by it. It's all about me and my intentions as a speaker. I think we recently had a thread where we were talking about Quine's inscrutability of reference and someone kept saying, "but doesn't intention pick out the thing?" Naming and Necessity is an effort to flesh out that idea, among other things. The distinction between rigid and non-rigid designators becomes valuable when he starts talking about the mind-body problem. It cuts through some fog.
Yes. In fact, maybe we should say, not that the object now has the properties, but rather that those properties (which were always there) are now made necessary. That way we can work our stipulative magic without the illusion of adding or subtracting properties to what we're talking about.
Quoting frank
I think this is a good explanation, thanks. It broadens "idiolect" to include not merely the language that happens to be spoken, but the particular idio-syncratic intentions of the speaker.
Cool, so I'll give an explanation for why Kripke might be ok with the proposition that I could have been Obama. There are a couple of options for how we interpret the word "I" in that statement.
1. I am defined as the frank who was born at a certain place and time of certain parents.
2. I am defined as a mind that can occupy any living body. Here think of Charlie Kaufman's Being John Malkovich, in which the image of the mind as a puppet master recurs, and the main character travels through a tunnel that leads to John Malkovich.
So if the reference of my proposition is 1, then the proposition is false, because we would have a contradiction. If the reference of my proposition is 2, does that work? The problem is that I'm defining myself as my mind, but I'm defining Obama as his body. This is the conundrum of Kaufman's movie.
But since the references of the objects in the proposition are set by intention (whose intention is another cool question), we could have it that the proposition does refer to me as my mind, and Obama as his body, so it works. I think my argument might be a little flimsy. What do you think?
Anyway . . . does your situation play by Kripkean rules? Let me paraphrase what I think you're saying: You, frank, can fix references as you see fit, and as long as someone -- J, let's say -- accepts what you're doing, the two of us know what we're talking about. And in this case, you have the breathtaking audacity to fix the reference for "frank" one way and the reference for "Obama" another! "frank" will refer to a mind, "Obama" to a body. Using that interpretation (borrowing from @Banno here), either one of us can indeed say "frank could have been Obama," because we know what we've agreed that would mean, and there's no contradiction involved.
So far, so good. I think Kripke would be on board too, in the very limited sense that you and I, as a mini-community, can agree to speak as we see fit. But the picture thus described faces challenges.
The first is that the "argument" can only work for you and me. It can't be used to persuade anyone who won't use our idiolect. Which leads to the second: Naturally we'll be asked why the references of "frank" and "Obama" should be fixed in such radically different ways. This would amount to asking, "Why should I join you two in this particular reference-fixing?"
I see a non-serious and a serious answer to this. The non-serious answer is, "Well, it's an ad hoc way of allowing us to speak about the possibility that frank could have been Obama." A reason, admittedly, but not a very good one, since nothing of philosophical interest follows from such ad-hocness.
The serious answer is, "We all know how to use phrases like 'if I were you' and 'if I'd been Obama'. Our idiolect explains, in simple terms, what those phrases mean, and why they're so handy. When we use them, we're automatically adopting an interpretation of each term that allows one to 'be in' the other. Then, when no longer needed, we drop that interpretation and go back to our usual usage. All this is so common as to be literally unremarkable. You could call it a type of equivocation, but it's useful, not confusing." (This isn't my own preferred analysis of how 'if I were you' works, BTW.)
So, two questions: First, is this allowed? And second, At what point do we need to step in and protest that such reference-fixing is ludicrously out of step with how the world is?
I think it is allowed. Again, two people can agree to talk any way they want, as long as they don't expect agreement from others. But now your OP question arises:
Quoting frank
This example isn't so much a matter of being stripped of properties as it is of being saddled with absurd ones. In our mini-community, we wish to maintain that some subset of persons (which includes frank) are minds, and another subset (which includes Obama) are bodies. I don't know how we'd get that off the ground, as we "look out at the world," to use your phrase. Just for starters, how do you tell the difference? Well, radical solipsism, maybe.
So let me stop, before I confuse myself, and say that the difficult question lies right here: What does Kripke's view about reference commit us to, concerning metaphysics? Because "the world" is metaphysics. You can't jump from how we use language and logic to how such use relates to the world without bringing along your basket of metaphysical assumptions -- about physics, about causality, about realism, about how we know stuff about the world, and much more. When we ask whether our "frank/Obama" idiolect could represent a picture of our world, not just a possible world, we're asking a metaphysical question, IMO.
It wasn't ad hoc. It's what I was thinking about from the beginning of our discussion. I really don't know how the world works. I normally think about it as a tree of possibilities.
Quoting J
It's just straight Descartes. That we can't say the mind is necessarily identical to the body was mentioned by Kripke in N&N. I'm guessing his restraint about metaphysical pronouncements is due in part to Wittgenstein's influence.
Sorry, didn't mean to say that you were bringing it up in an ad-hoc way here. Rather, if someone were to ask you, "Why opt for the bizarre reference-fixing schema with 'frank' as a mind and 'Obama' as a body?" and you were to reply, "Oh, I did that so I can say that it's possible for me to have been Obama," that would be ad hoc, or makeshift, or at any rate not the sort of answer the questioner was presumably looking for. Because it only defers the real question, "Yes, of course, but why do you want to say that?"
Quoting frank
The absurdity I was referring to isn't Cartesian dualism -- nothing absurd about that, though I don't subscribe.
No, I was pointing to the idea that identity could in one person's case be mental and in another physical -- or maybe "arbitrary" is a better word than "absurd." I took you to be raising that in order to see if it was consistent with Kripke's views on reference -- which I think it is, for the "mini-community" that uses that idiolect. Were you additionally suggesting it as a real possibility? Not a Cartesian one, at any rate!
Why do you say that's the real question? When Kripke says Nixon could have lost the election, would you say we need to know why he would say that?
Quoting J
What do you mean by "real" possibility?
Yes, I think so, if we're wondering whether to speak the same idiolect. And K can give the answer, "Because I want to bring out the character of rigid designators, and show what 'N could have lost the election' means." Compare to your being asked, "Why opt for 'frank' as a mind and 'Obama' as a body?" If you reply, "Because that lets me talk about the possibility that I could have been Obama," we then have to decide if "that lets me talk about X" is a good reason. I'm suggesting that it's a genuine, if trivial, reason, but defers the interesting question of why you'd want to talk that way. Whereas the Nixon example is not about "this lets me talk about N losing the election," but about, more directly, "this lets me explain rigid designators." But perhaps there is a comparable reason one could give for your example -- I admit I haven't thought this through in depth.
Quoting frank
I should be fined for using "real", against my own strictures. :smile: Let me rephrase: Were you suggesting the "frank=mind / Obama=body" structure as something that might reflect how things stand in our world? I was assuming you were not, but only using the example to probe Kripke.
Science fiction is fraught with disembodied minds being transferred around, like an uploaded version of a person subsequently downloaded to a clone and whatnot. In fact one of the first books I read as a child was by Jack Vance and had the plot of a guy who wakes up in a body and can't remember what happened in his last iteration. A murder mystery ensues. If you recall, in the Matrix movies an AI manages to download himself into a human. I'm just used to that kind of thing.
Quoting J
I guess it could be. I don't know. Heidegger would say no, I think Adorno would say yes, the issue being whether the self and the environment it evolved in are inextricable. What do you think?
The default assumption is that what goes for one, goes for all, if the property in question is putatively essential (as "identity" would be). If I am a mind, why would any other person be anything else? If tiger A is a mammal, why would tiger B be a bird? etc. I'm calling this an assumption, because there's nothing that immediately shows it must be true, but it would take some powerful reasons to unseat it, I think. Remember, we're talking about our world, not just a possible, "idiolecty" world. In our world, we don't declare one person to be a mind, another a body, except maybe in some unusual cases of brain death or similar perplexities. At any rate, we don't do it when there is no other difference between the two.
Ah, but perhaps we should, if the key difference is between "I" and everyone else. That would be the solipsistic possibility I referred to earlier. Maybe there aren't any other minds! But I don't think that's in the spirit of what you're examining.
What does Adorno say about this? And can you say more about how we might understand persons, if they can be categorized as either minds or bodies, depending?
I see what you're saying. There are definitely situations where concise, unambiguous language is required, like in the repair instructions for a spaceship. Otherwise, language is pretty metaphoric, poetic, unconsciously Shakespearean. I guess it depends on the situation.
Quoting J
For thousands of years there have been people who believed the universe is one giant mind. Some physicists think it's actually a black hole inside a bigger universe, which might also be a black hole. Craziness all around.
Quoting J
Negative Dialectics circles around the idea of the unification of subject and object, sometimes known as mind and body. A dialectical approach says they have to be mutually dependent, and this leads to the idea that the separation is an illusion: if one truly had their shit together, one would see that the two are one. Heidegger could be interpreted as seeing it that way. Adorno disagreed with the rush to some sort of mystical marriage of the opposites because one is apt to become blind and numb that way. Leave the mind and body, soul and Christ, self and world, however you put it, alone. They're separate in our consciousness for a reason. There are necessary dramas playing out, much of which really hurts. Stop trying to bypass it and let that pain transform us.
I actually agree with you. It's pretty strained to say that I could be Obama. It probably just means I'm giving advice, "if I were you..." :grin:
Quoting frank
Yes, either "Here's what you should do . . ." or "Here's what I would do if I found myself in your situation . . ." Interesting that the first is about "you", the second about "me".
This might be a good moment to go back to one of your original questions:
Quoting frank
"Elizabeth Windsor was born of different parents" -- would that be an example?
I'd like to hear your thoughts. And are there some target passages from N&N you think we should look at?
I'm not sure I would agree with that, assuming I understand what you mean.
What is a matter of the speaker's intentions, according to Kripke, isn't what properties the object they mean to be referring to has by necessity (i.e. in all possible worlds) but rather what properties it is that they are relying on for picking it out (by description) in the actual world. This initial part of the reference-fixing process is, we might say, idiolectical; but that's because we defer to the speaker, in those cases, for determining what object it is (in the actual world) that they mean to be referring to. The second part of Kripke's account, which pertains to the object's necessary properties, is where rigidity comes into play, and is dependent on our general conception of such objects (e.g. the persistence, individuation and identity criteria of objects that fall under their specific sortal concept, such as a human being, a statue or a lump of clay). This last part (determining which properties of the object are contingent and which are essential/necessary) isn't idiolectical, since people may disagree, but may also be wrong, regarding what the essential properties of various (kinds of) objects are.
Regarding the essentialness of filiation (e.g. Obama having the parents that he actually has by necessity), it may be a matter of metaphysical debate, or of convention (though I agree with Kripke in this case), but it is orthogonal to his more general point about naming and rigidity. Once the metaphysical debate has been resolved regarding Obama's essential properties, the apparatus of reference fixing (which may rely on general descriptions, and then on rigid designation) can still work very much in the way Kripke intimated.
The logic of naming and necessity is essentially that of the type system of C++. Hence rigid designation per se doesn't imply metaphysical realism, nor does it make assumptions about the world. Such speculative conjectures rather belong to the causal theory of reference.
In C++, Kripke's example becomes (with the includes and main needed to compile):

#include <iostream>
#include <string>
using std::cout; using std::string;

int main() {
    /* initialize a constant pointer (i.e. rigid designator) called
       'that_man' to the address of a string variable */
    string* const that_man = new string("has champagne");

    /* print the address that is rigidly designated by 'that_man' */
    cout << that_man << '\n'; // that_man = 0x2958300 (say)

    /* print the value of the variable at that address */
    cout << *that_man << '\n'; // *that_man = "has champagne"

    /* change the incorrect value of the string variable at that address
       to the correct description */
    *that_man = "has water";

    /* try to use 'that_man' non-rigidly to refer to another man */
    string* const another_man = nullptr;
    // that_man = another_man; // error: assignment of read-only variable 'that_man'
    (void)another_man; // silence the unused-variable warning

    delete that_man;
    return 0;
}
Right. Once I've picked out an object from the actual world, though many of its properties might be contingent, for my purposes they're essential to the object I'm talking about. Right?
Quoting Pierre-Normand
Why couldn't rigidity come into play regarding a contingent feature of an object?
Say X is a pillow with a red button. Broadly speaking, the button is a contingent feature. But any pillow that doesn't have the button is not the pillow I'm talking about. The button is essential to X.
Quoting Pierre-Normand
True.
I think so, yes.
Saying that they're essential "to the object [you're] talking about" is ambiguous. It admits of two readings. You may mean to say (de re) of the object you are talking about that it necessarily has those properties. That isn't the case, ordinarily. The pillow you are talking about could possibly (counterfactually) have had its button ripped off. In that case, of course, you could have referred to it differently. But it's the same pillow you would have referred to.
On its de dicto reading, your sentence is correct. But then the essentialness that you are talking about belongs to your speech act, not to the object talked about. Say, you want to talk about the first pillow that you bought that had a red button, and you mean to refer to it by such a definite description. Then, necessarily, whatever object you are referring to by a speech act of that kind, has a red button. But this essentialness doesn't transfer to the object itself. In other words, in all possible worlds where your speech act (of that kind) picks a referent, this referent is a pillow that has a red button. But there also are possible worlds where the red-buttoned pillow that you have picked up in the actual world doesn't have a red button. Those are possible worlds where you don't refer to it with the same speech act. In yet other words (and more simply), you can say that if the particular red-buttoned pillow you are talking about (by description) hadn't had a red button, then, in that case, it would not have been the pillow that you meant to refer to.
Good question. I'm a bit busy. I'll come back to it!
With some wild metaphysical shenanigans we might be able to work it out that Obama is the next stage of my existence, parenthood isn't what we think it is, etc. That wouldn't be excluded by Kripke, because he wasn't weighing in on the nature of the universe. But that's the only point that's made by insisting that I could become Obama, that the universe could work differently than the way we think it does. Do you agree with that?
I agree, but in that case we're talking about epistemic possibilities, or epistemic humility.
:up:
Quoting Pierre-Normand
I'm not sure what to say about this. Is "our general conception of such objects" good enough to be going on with? I agree that Kripke seems to think it is. Or maybe that's a bit unfair -- he does try to analyze these conceptions, but it seems to me that he's doing so in terms of how we talk about them, so there's one foot back in de dicto contingency and necessity.
I hope this conversation will continue to work on this, because I'd like to come out with a better understanding of whether Kripke is really an "essentialist" in some semi-Aristotelian sense.
Quoting frank
OK, still working on this.
I can say that Obama might be a robot, but I can't say that Obama could have been a robot. Doesn't that show that the properties of the rigid designator are set by the speaker? Or maybe not, maybe it's just that the exact object is picked out by the speaker. The properties follow from there.
Before we get to nonsense and contradiction, I want to understand a little better what Kripke is saying about reference. Here's the passage you quoted:
Quoting Naming and Necessity p.24,25
What happens if we change the designation to "The man over there who I think has champagne in his glass is happy"? That's where Kripke himself winds up: "The speaker intended to refer . . . to the man he thought had the champagne in his glass." Has the speaker still made a mistake in reference? I think we have to say no. The reference is now based on something the speaker thought, not something that is the case about Mr. Champagne. The speaker can point out Mr. Champagne to me, explain that the man is being designated according to a belief the speaker has about him, and we can both usefully talk about that man and no other. Whether or not Mr. Champagne really has champagne couldn't be relevant.
So how should we describe this difference? Is it a version of de dicto / de re? Sort of. We could rewrite "The man over there who I think has champagne in his glass" as follows: "The man over there about whom I say, 'He has champagne in his glass'." Certainly the fact that the speaker says this about Mr. Champagne is not a necessary de re property. But it is necessary that he say this in order for the designation to refer.
Thinking out loud, really. Does this make sense?
I would only add that "the one holding a glass of champagne" is said for the audience's benefit. I can look at someone and silently think "asshole" and I know who I mean, no hoops jumped through. If I gesture, for you, at someone and say "asshole", you might need clarification about which of the people over there I disapprove of. Hence "the one holding ..." or even "the one holding - what is that? Is that champagne?"
Point being it's not exactly a matter (de dicto) of what the speaker thinks, but really of what the speaker thinks the audience will think. "The guy holding what you would probably think is champagne, but I saw the bottles and ..."
I can't remember if Kripke gets there, and I'm not looking at the book, sorry. But reference is a matter of triangulation, not just what pertains to the speaker or pertains to what she speaks of.
:grin: Yes, I think that's what @Pierre-Normand was pointing out about my pillow example:
Quoting Pierre-Normand
Reference is set by the speaker.
I don't think it's that simple.
In cases where the speaker is mistaken, memory being what it is, it is possible for them to learn what they are trying to refer to.
(Example:
"When Maddux was pitching the last game of the World Series --"
"Maddux didn't pitch the last game; Glavine did."
"Okay then Glavine. No, wait, I know I was thinking of Maddux, so maybe it wasn't the last game I was thinking of ..."
And this can go on. It might turn out the speaker was remembering yet another pitcher he had mixed up with Maddux. It might or might not have been a World Series game.)
The other problem is that even if we say the reference is whatever the speaker intended, besides the problems already suggested above, intention not always being perfectly determined, we have the additional problem that words don't just mean whatever you want them to. The speaker has no choice but to engage in the grubby business of negotiating with the audience to achieve successful reference.
Grice noted the complexity of our intentions when we speak to each other, even in the absence of confounding factors: not only do I intend you to understand that I mean X by saying Y, I also intend you to recognize that I so intend, and I also intend you to recognize that I intend you to recognize that I intend you to understand I mean X by saying Y, and on and on.
So, no, I can't agree that it's just a matter of the speaker "setting" the reference, as if the audience were superfluous.
Set, maybe. There's more.
The example is set at a party, presumably with many men and various drinks. The speaker says "The man over there with champagne in his glass..."; it's water, not champagne, but enough for the hearer to understand that the speaker does not mean any of the other blokes with a beer.
Pretty obviously, the reference is a success if the hearer and the speaker are in agreement as to who is being talked about.
Champagne or water, we have enough to move the conversation on.
And we can conclude that the reference was a success, despite the description being wrong.
I see you made the same point.
That's why I was suggesting that maybe a better way to understand this is "The man over there who I think has a glass of champagne in his hand." That way, the description is not wrong -- he's being identified as being the object of a thought of the speaker.
EDIT: Or no, better to say, "He's being identified using a thought of the speaker." He's the guy I think has a glass of champagne.
All sorts of problems with meaning as speaker intent. The most significant one is that we do not have access to what you intend, only to what you say. So we can't use your intent to fix the referent.
So if no one understands what's being referenced, the reference failed? That doesn't make much sense to me. Referring is something done by fiat.
Quoting Banno
We do it all the time.
If you're using "private" the way Wittgenstein did, the answer depends on the extent to which meaning arises from rule following. If it's mostly rule following, then you couldn't establish rules by yourself.
If you're just asking if you can keep some information to yourself, yes.
@Pierre-Normand Do you agree with that?
Well, yes.
Quoting frank
No. You use what is said or shown. We do not have access to intent. We might infer it, but...
I disagree. The act of referencing does not succeed or fail. It's just done by fiat. Communication can succeed or fail.
Quoting Banno
You do have access to intent by observation. If you have any questions about it you can ask.
Unless it is. This is such a great example because the reference of the word "champagne" is regularly disputed. Are you using the word "champagne" "correctly"? Are you sure? Is there definitely a correct way?
Tell us what you mean by that, and why you think so.
But couldn't we get around that in the way I suggested earlier?:
Quoting J
This way, it's a behavior, not a mental intention, and the speaker still can't be "wrong about the reference", because it doesn't depend on whether the man really has champagne, only on whether the speaker says he does. The man is being identified as the subject of a statement, not as a person with a drink in his glass.
But then there's Srap's problem:
Quoting Srap Tasmaner
In this case, I don't think the ambiguity of "champagne" matters. We know what the speaker is saying: "The man over there about whom I say, 'He has champagne in his glass' is . . . " and then presumably he fills it out with whatever he wants to claim about the guy ("is happy," "is an asshole" etc.). Whether the speaker knows what champagne is, and is conforming to an ostensibly correct usage, is surely beside the point of using the statement about the guy to pick him out.
Compare "The man over there about whom I say, 'He likes glunk' is . . . " We don't need to know a single thing about glunk in order to use the speaker's statement to successfully and incorrigibly fix the reference. All that matters is what he says. Turns out he's wrong about glunk? Turns out there's no such thing as glunk? The guy is still the same guy about whom the speaker made his statement.
I'm a little unconvinced by the "about whom I say..." locution, precisely because we're lacking a guarantee that the sentence the speaker utters means what he thinks it means (or "what he intends it to mean" or "what he means by it").
I know the tendency of this analysis is to brush off mistakes, but suppose you point out to the speaker -- for easy examples, imagine the speaker isn't quite fluent in the language he's using -- that the words used mean the person is a prostitute: you might end up with a speaker insisting that they wouldn't say that! You'll probably want to cover by changing your description to something like "about whom I mistakenly said ..." but that's no help. What, so you *thought* the person was a prostitute and now realize they aren't?! Doubt the speaker will agree to that. Keep trying. (See "A Plea for Excuses".)
And in the meantime, the speaker has still failed to refer, because once words are in the mix, you're stuck with them; either you trust them to faithfully carry your meaning, as your ambassadors, so to speak, or you allow that there must be negotiation between you and the audience.
"You know what I mean?" isn't always a rhetorical question, even when intended to be.
It's as if, what we need to say is that when you attempt to refer, you "hope" the words you utter will do the trick -- you could also hope you're using the "right" words but I think that's secondary. Now what is the audience to do with your hope? How does that help them know what you mean?
((There's a reason sitcoms are full of this sort of stuff.))
What I mean by what?
Quoting frank
I can't tell if you mean the whole thing, or the individual parts. How can I know?
Well, seems to me that referring to something can fail in a few different ways, and that it might be worth paying them some attention. I treat them as speech acts, and so bring on board the sort of analysis found in Austin and Searle.
The intent can only ever be inferred.
I think I know what you're saying, but I can't be certain. It would be better to just stop trying to communicate.
So you don't get my intent?
That's fine, we could keep chatting and see if we can reach some agreement, or at least some point from which we might move on. That strikes me as more important than sorting out the Gavagai.
I was using "glunk" to try to de-fang the meaning question entirely, but perhaps I didn't go far enough. How about this: "The man over there who I make this noise [hideous shriek] when I see is . . . " Now are we outside of possible mistakes and ambiguities of meaning? All that matters is that the shriek fixes the reference.
Well, I'm not even sure what we're talking about now, but it looks like you are trying to create one of Wittgenstein's private languages. You want to have in hand an association between an object and something, a name, a referring expression, or a bit of behavior, and for that association to be something you can't be wrong about.
I think there are a couple layers to this. One is the apparent incorrigibility of attention: when I think of something, perceived or recalled or imagined, even if I am making important mistakes about the properties of that thing, even if I misidentify it, I cannot be wrong about it being the object of my thought (or intention). In my pitching example, the guy is remembering something someone did, even if it wasn't who he thinks it was or in the circumstances he thinks it was. There is, we want to say, a pure, original, and unimpeachable phenomenal experience underlying the stories we tell about it, even if those stories are all wrong. Even if it turns out the thing you're thinking about, that you think you remember, never happened, it's still what you are thinking about.
It's a compelling vision, but I suspect it is fundamentally mistaken.
When we come to language, the act of referring seems somehow to share in the unimpeachability of attention. The additional problem here is that "refer" is one of Ryle's "success words", so when we attempt to describe reference we describe successful reference. The downsides here are that (a) what is genuinely interesting, impressive, or mysterious is the element of "success" rather than something specific to referring; (b) our vocabulary blocks a proper comparison of successful and unsuccessful attempts at reference; (c) by being defined as successful, reference seems to take on the color of incorrigibility we associate (I think mistakenly) with attention.
Quoting Srap Tasmaner
Yes. Both of these are what I'm aiming at.
Moreover, I want to take this out of "private reference," which would apply only to me, and make it what you're calling a successful reference -- one that I can use with others.
At this point I'm fairly sure I'm not grasping what you see as problematic here. Probably something simple I'm rushing past. Would you mind explaining a bit more? Perhaps using the shriek example? If I teach others that my shriek refers to Mr. Champagne, in what way could this reference fail for others, or be mistaken on my part? What could go wrong in a statement like "The man who I shriek when I see is a really nice guy"? (other than doubts about my sanity) The identifier is my behavior, not anything about him, and I can scarcely be wrong about whether I'm shrieking.
This question is a non-starter. You're presuming the entire system of conceptualization and language usage is at your disposal, and then all you're doing is in effect introducing a word by stipulation. It is interesting that we can do this, but it doesn't get anywhere near addressing the questions you're interested in.
We're actually covering similar territory to the memory discussion. My position is that rather than the pure phenomenal experience we overlay with narrative, which we can then strip away, all we've got is narrative. The process you imagine of "stripping away" is real, but creative, it's making a new thought out of the thoughts that came to you not just enmeshed in context, but constituted by our systems of understanding and communicating. I don't think you really have the option to just set those aside and recover some original underlying experience -- you never had access to any such experience.
The idea, as I conceive it, is similar to Sellars's argument in "Empiricism and the Philosophy of Mind": he allows that there must be some sort of raw inputs to our thinking processes, but denies that they have any cognitive status whatsoever. In particular, they cannot serve the Janus-faced role thrust upon them, linking on the one side to purely causal processes of sensation, and on the other side to our conceptual apparatus of knowledge and reason. Nothing can fill that role.
As it is vain to seek the primordial unconceptualized experience, it is vain to seek the originary act of referring within a mind that knows no other minds.
Well, yes, but isn't that what Kripke is interested in too? He wants to know how we fix the reference of a new term -- a proper name, say. The baby's name is a stipulation, if anything is. And with a proper name, no less than with a shriek, we find ourselves in the middle of a "conceptualization and language system." Don't we have to presume that?
Again, I feel I must be missing something very obvious. Why is the question about teaching others the meaning of my shriek a non-starter? Words of one syllable, please, I'm floundering here! :wink:
I'm sure it's my fault. Of course it should be possible to provide an account of what makes names names, what makes them special, what their role in language is, what makes them different, and this is the sort of thing Kripke is up to. Sure.
But we were also talking about reference as such, and it's clear to me that an account of names in terms of baptism, or words in terms of stipulation, can't also serve as an account of reference but presumes it. If you want to teach someone "blork" means that thing, you have to already be able to successfully refer to that thing. (I think Wittgenstein raises similar objections to theories of demonstrative teaching, as if pointing "just worked".)
So talk about stipulation and teaching all you like, but it doesn't get you to that level of originary reference you're chasing, the intentionality you cannot be mistaken about. It relies on that; it doesn't explain it or even describe it.
Consider "Let's agree that this thing is Blork". Who teaches who here? Isn't the choice to use "blork" an agreement, if not a commitment?
There's an indexical built in: "this". Indeed, can we have a language without such an ability? It would be a mere syntax, a string of letters or sounds.
Reference goes all the way down.
I think the ability to pick out a part of the world is there in potential in an infant. That potential is realized through interaction with others. As Kripke points out, none of us has access to the baptisms of common words. Humans have probably been speaking for at least 100,000 years. That's a long causal chain.
I don't think what happens between people in a moment of communication is about a new ceremonious confirmation of that chain, as in "Yes, you successfully referred to the tree because I agree that that is called a tree." None of that is necessary because a whole section of the brain has been configured to handle a particular language by the time a child is 2 years old. A child can literally talk to herself at that age. She doesn't need anyone else, and the Private Language argument isn't suggesting otherwise. Do you agree with that?
This is good stuff. A couple of points.
The type of stipulation used would be a status function, a "counts as" statement. There is a mutuality in the stipulation - we think of it as the adult teaching the child, but it's more like joining in to a conversation - consider how we each learn a name. Sometimes a definite description is available, sometimes - often - not. Always, it involves a community.
And there need be no "intentionality you cannot be mistaken about". Back to the derangement of epitaphs. We can set rules up, as needed, but then they will fall, or be pushed.
None of which detracts from what you said.
Quoting frank
On the other hand, if we do not have some such agreement, we might not be able to continue. There's an adequacy that lies somewhere between certainty and incomprehension.
True. But I still referred to the tree. I don't need your buy-in for that.
This is helpful, and I think you're right, except I wasn't really looking for such a level of reference. My chain of thought was mainly an attempt to do better than "That man over there with champagne in his glass", which has all the problems of mistaken reference that you and Kripke and many others have pointed out. And I think there's a valid distinction to be made between a property that we use to designate something rigidly, and a statement we use to do so. Or perhaps I should back up and ask whether the statement-type designation -- "He is the person about whom I say . . ." -- is rigid.
Quoting Banno
Well, yes, but it's fair to say that, in many cases, there is an originator, a teacher, and one who learns. If I wish us to refer to a certain tree outside my window, I have to do the pointing. Then, of course, we can agree.
Quoting frank
This is getting to some crucial questions about the "game" of reference. (OK, sometimes "game" is the right word! :smile: ) Like you, I'm holding out for reference as a potentially private game. Talking, so often, is talking to ourselves, and we need all the apparatus of talking-with-others to do it. Now it may be that a criterion for successful private reference would be that, if challenged, the person could introduce others to the game. Arguably, if you can't make it clear to someone else, you aren't clear about it yourself. But that's a different point. If there's a pile of papers on my floor and I say to myself, "Right, that's the pile I need to file tomorrow," I have performed a very common and useful act of reference. I can now think of the pile that way, compare it with other piles, etc. We could, I suppose, deny that this is an act of reference, and argue for using "reference" in a different way, one that must involve others, but what would be the warrant for that?
I wonder if people assess the situation according to their own experience of thinking and speaking. I think Srap Tasmaner is basically saying he doesn't think at all when he's not engaging another person. I think he's saying he's not even conscious of the world around him until he discusses it, at which point a sort of negotiated narrative comes into being. I can't connect with that at all. I have no idea how a person would even become conscious that this was happening.
My experience is more that speech has a metaphoric connection with things my nervous system is doing automatically.
In this case, even to the degree that I am engaging with another person, I am speechless.
It just seems obviously not to be.
1. It has an indexical in it. I think that rules it out from the jump.
2. As phrased, it names a class of actual performance, without even a ceteris paribus clause. The obvious way to strengthen it is to shift to talk of dispositions. But c.p. clauses and dispositions have known issues.
What you seem to want is really an in-between category of "rigid-for-you".
No, Kripke and Kaplan say indexicals can be rigid designators:
Unless you think the "demonstrative / indexical" distinction is important here? I think my example uses a genuine demonstrative. And in any case, I'm pretty sure indexicals are generally accepted as rigid. @Banno?
Quoting Srap Tasmaner
I don't think so, because I don't yet see how my designation differs from the standard model. In what way would it not be rigid for anyone, once accepted?
You'll probably need an MRI at some point. :grin: :up:
Right right. It's been years since I read this. I've got nothing to contribute on "what Kripke would say" so I'll mosey along.
I'll add one little note, relevant to the issues raised in the OP about essential properties.
In the collected papers of Ruth Barcan Marcus, there is a transcript of a discussion between Marcus, Quine, and Kripke, who was (iirc) at the time maybe not yet 20, and I forget who else. Anyway, I remember a specific exchange where Quine said that Kripke's approach would require bringing back the distinction between essential and accidental properties, and Kripke agreed, but didn't consider that the fatal flaw Quine did.
I think there was some bad blood later, Marcus or people on her behalf claiming that the causal theory of names was stolen from her.
Anyway it's interesting to see Quine's star student (and then later Lewis) already plunging into waters he was deeply apprehensive about.
Some people like wrestling with the Gavagai. :razz:
Quoting Srap Tasmaner
Srap referred me to a sentence I had uttered and I told him I wasn't sure if he meant the whole thing or the parts. You got the reference to Quine, but Srap didn't. Does that mean the reference was successful and unsuccessful at the same time?
Does this sentence strike anyone but frank as plausible?
Sometimes @frank I just don't see the point in responding. I'm sure you understand.
So you did get it? Fair enough.
But wait... did you get it privately? Do you allow such a thing?
Eh, I guess it doesn't matter.
I agree with the comments @Banno and @Srap Tasmaner made on the issue of intent regarding the way for one's own expressions (or thoughts) to refer, at least until this point in the thread (where you flagged me). The very content of this intent is something that ought to be negotiated within a broader embodied life/social context, including with oneself, and, because of that, it isn't a private act in Wittgenstein's sense. It can, and often must, be brought out in the public sphere. That doesn't make the speaker's intentions unauthoritative. But it makes them fallible. The stipulated "rules" for using a term, and hence securing its reference, aim at effective triangulation, as Srap suggested.
Another issue is relevant. Kripke's semantic externalism (which he disclaimed as a causal "theory"), like Putnam's, is often portrayed as an alternative to a descriptive theory that is itself construed as a gloss on Frege's conception of sense. But modern interpreters of Frege, following Gareth Evans, insist on the notion of singular senses, which aren't descriptive but rather are grounded in the subject's acquaintance with the referred object and can be expressed with a demonstrative expression. Kripke's so-called "causal theory" adumbrates the idea that, in the champagne case, for instance, while the speaker makes a presupposition in referring to the intended individual whom they see holding a glass, their act of reference is also perceptually grounded (or meant to be so) and expresses a singular sense rather than a descriptive one. When there is an unintended mismatch between the referent of this singular sense and that of the descriptive sense the speaker expresses, the presupposition of identity is mistaken. Which of the two the speaker truly intended to have priority (i.e. the demonstrative singular sense or the descriptive one) for the purpose of fixing the true referent of their speech act (or of the thought that this speech act is meant to express) can be a matter of negotiation or further inquiry.
There's actually a funny issue with non-response I've been thinking about, since the entreaty that I stick around. It's one of the things Lewis talks about in Scorekeeping, if I'm remembering correctly.
Suppose you ask me who that guy is holding the glass of champagne, and I realize you mean Jim, but I happen to know Jim is holding a glass of sparkling cider. I could silently correct you and just answer "That's Jim," but in doing so I will have implicitly endorsed your claim that Jim is drinking champagne.
We are again in the territory of farce.
Going back over this, it seems to me that the reference is now fixed by the indexical, "the man over there", and not by the description "He has champagne in his glass".
By answering both and seeing to which @Srap Tasmaner responds? Answering one, and seeing if the response fits that answer?
Generally, by moving the conversation on, and seeing what the result is, and then making an inference about Srap's intent.
I wouldn't quite accept your thinking, or even talking to yourself, about the tree as a bona fide reference. I am more inclined to think the prime examples of reference involve a public, shared speech act, and that such self-talk is secondary. This is because one does not usually need to reassure oneself that the reference being made is correct, or question what it is you are thinking about. Not that we can rule that out, but it would seem to be unusual.
We need to understand reference in the first instance by looking at fairly standard cases, then considering oddities.
seems to me to be mistaken, because we do not usually need any "apparatus" in order to check who it is we are thinking about. Indeed, the idea is odd.
Consider "Did I know it was a picture of him?"
I agree that speech is pervasively conditioned by the wider context of human life, but I don't see how we could maintain, as Srap and Banno have been doing, that a speaker has to have the buy-in of the audience in order to "successfully" refer. The triangulation they're talking about, as far as I understand them, is not about the social context, it's about the comprehension of the audience. I think you can triangulate with what you've learned about language use. Why do you need the audience's acceptance?
Quoting Pierre-Normand
So you're saying the champagne issue is an example of a failed reference? Is that how you take Kripke's meaning?
BTW, I know you're busy, but if you have a second and would want to tell me what you think about Kripke's Wittgenstein on Rules and Private Language, I would so appreciate it.
I thought you objected to making inferences about intent?
Kripke does re-introduce the idea of essence, but in a form quite different to the classical approach, in being extensional rather than intensional.
Yes!
To that end we ought acknowledge the limits of finding a set of conventions or rules for fixing a reference, as set out by Davidson in "A Nice Derangement of Epitaphs".
Of course, if reference is a product of triangulation, and I think it is, then it is not private.
I was noting that such inferences cannot result in certainty.
But it's important to note that this doesn't matter.
We don't need to fix the referent of "gavagai" with absolute certainty in order to get the stew, or go hunting rabbits.
So much of the conversation about fixing referents is unnecessary.
It's on the first page:
Complete text available at the David Lewis papers.
Perhaps it is, rather, but I'm certainly familiar with it. As in my "pile of papers" example (which the bolded addition is meant to capture), I find I often have to come up with a system of reference in order to keep straight what I'm trying to think about. Also, more simply, I do in fact talk to myself, both out loud and "with words in my head." Maybe "apparatus" isn't the right word, but I don't find much difference between how I do this, and how I converse with others -- including, as I say, sometimes reference-fixing.
That was kind of my point to Srap. As @Pierre-Normand was saying, the speaker's intentions are authoritative, but fallible.
Not to put you on the spot, but are you saying that reference in fact requires triangulation, or only that we should reserve the term "reference" for that particular type of reference-fixing, and call my private version something else? I assume you're not denying that I do have the private experiences I'm claiming.
I think broadly you'd expect, and can find exemplars of, two ways to go on this, as usual:
(T) Language is, first, a system for organizing your thoughts; secondarily we developed ways of verbalizing our linguistically structured thoughts to each other, for obvious reasons.
(C) Language is, first, a system of communication, an elaboration of the sort of signaling systems many other species employ; secondarily we developed the ability to "internalize" an interlocutor (perhaps imaginary) and to use language to organize our thoughts.
A whole lot flows from this fundamental difference of approach. I'm not sure there's a reasonable means for choosing between them, but I tend to think what evidence there is favors (C).
I'll dig it out. I think I know what box it's in.
Cf.:
Quoting Tony Roark, Conceptual Closure in Anselm's Proof - link to related thread
Yeah that's quite interesting, and I think both (yours and mine) represent types of triangulation.
A further curiosity is that parasitic reference has to be self-consciously contrastive, so it's the sort of thing a parent can engage in; on the other hand, children are said to be learning when they manage this sort of "playing along," "calling things what you call them," but they lack the distinction between the two ways of doing this.
AI tells me it's in Ruth Barcan Marcus's Modalities: Philosophical Essays (Oxford, 1993), especially in the early papers and appended discussions. I can only see a limited preview.
I think it's yet a third thing: The reference is fixed by the description "the man about whom I say . . ."
I'm jumping in here without a clear understanding of how "triangulation" is being used here (a relation between the scribble, the writer and the reader, or between the scribble, what the scribble references and the scribbler). I don't see a triangulation unless we leave out the reader/listener, as we have three things in the scribbler, the scribble and what the scribble refers to. So it appears there may be some trying to fit a square peg into a triangle-shaped hole.
Say you were the only person in the world. Why would you even consider drawing scribbles to refer to other things that are not scribbles? Well, maybe you might want to keep track of time, like how many days passed since the last rain, or when the deer migrate, etc. Maybe you want to remember the past, so you might make scribbles that remind you of something in the past. I have spreadsheets containing information that I've never shown to anyone. They are for my eyes only, and only I would understand the relationship between the information within the cells. So it seems to me that a reader/listener is not necessary for reference, just some rule where some scribble refers to something else, and the rule-maker. Agreement with others about the rules of reference comes later.
I just don't think that follows from anything.
Every time someone argues that blah is born out of social practices which continue to support and inform it, someone will say, "So if I privately blah, in my mind, you're saying it's not really blah?!"
No, of course not. It's why I tried to make clear in that post that both views of language at least attempt to end up with both social and private uses.
Here, consider reading. Famously, reading used to only be done aloud. To this day, children are overwhelmingly taught to read aloud: your teacher tells you, out loud, what sounds the letters make; the student demonstrates their ability by making those sounds out loud. It is how this knowledge is transferred to the next generation. It makes clear the relation between our use of oral and written language.
Would anyone then conclude that reading silently is not "really" reading? No.
That's certainly how it seems to me, and for the same reasons you cite, but I want to understand why @Banno might think otherwise.
Quoting Harry Hindu
Same point. Surely Robinson Crusoe did some private referring! But again, let's hear more about the "no private reference" case. I do think it's crucially different from "no private language," which may be where Banno is coming from.
I know you're kidding, but that's clearly the wrong test case. He was taught to refer to things using first oral and then written language. Even gesturing at things is learned behavior.
I don't think anyone has made the claim that reference is ever done using a private language. The claim you made is that someone has to comprehend the speaker's reference in order for there to be any reference. As @J pointed out, that's an odd usage of the term "reference." I don't think it's what Kripke is talking about.
What I'm saying is that we only have something we call "reference", the thing that we do with referring expressions like names and descriptions, so that we can talk about things with other people. More than that, our individual cognitive capacities are shaped by our interactions with other people, so the sorts of things we want to talk about are already the objects or potential objects of shared cognition.
And I think our referring practices are shaped by the goal of achieving shared cognition. In conversation, both speaker and audience contribute: the speaker says what they believe will be enough to direct the audience's attention, expecting the audience to draw on whatever they can to "fill in the blanks" (context, shared history, reason).
Why does any of this matter? Because words are a "just enough" technology that evolved for cooperative use; a word, even a name, is not something that carries its full meaning like a payload. Words are more like hints and nudges and suggestions. They are incomplete by nature.
And so it is with using them to refer. We should expect that to be a partial, incomplete business.
I think it's tempting here to think of this on the analogy of regular human finitude: in our minds we pick out objects to talk about and we do so perfectly, completely, but words are imperfect and ambiguous and are kind of a lousy tool for communicating our pure intentionality.
I doubt that story, but about all I have in the way of argument is that our cognitive habits and capacities are shaped by just this sort of good enough exchange. My suspicion is that we largely think this way as well. And this makes a little more sense if you think of your cognition as overwhelmingly shared, not as the work of an isolated mind that occasionally ventures out to express itself.
Quoting Harry Hindu
I think this highlights the question we're discussing. I'm just thinking this through myself, but there has to be a difference between "private language" and "private reference," doesn't there? As frank says, we don't need a private language to refer privately. We can use the community language we all know. That's not what's private about private reference -- rather, I'm arguing that it's the independence from "triangulation" or the need to have a listener comprehend the speaker's reference. I read Srap as talking about language, not reference, and if that's so, then what Srap says is clearly true: Robinson Crusoe needs to have inherited and practiced a non-private language before he can make up any designations for the flotsam that washes up on his beach. But once he does that, why would we deny that he's referring to said flotsam when he thinks about it, or perhaps makes a list of tasks?
Our posts just crossed! So, as to this: That's what I'm questioning. Why couldn't it be true that we need reference equally to talk to ourselves? I'm not even sure that your version would be true as a genetic account -- who knows which came first, private naming or public discourse, or whether they were simultaneous?
That's right, and therefore I think the interesting question asks how parasitic or triangulated reference fits into reference in general. There is certainly a sense in which parasitic or triangulated reference is secondary, and this is seen in the way that the child does not begin with it. I think it is also true that lying requires parasitic or triangulated reference, and lying too is a secondary form of reference.
More generally, people are doing slightly different things when they refer, but it would seem that all acts of reference have commonalities.
That's fairly persuasive as a theory of the origin of speech, but I don't think it necessarily indicates that we can't speak meaningfully while alone. The part of the motor cortex that orchestrates speech is separated from the portion that handles comprehension. It's not clear that the unity of consciousness we enjoy today is the way humans have always been. It may be that talking to ourselves has been around as long as talking to each other has.
It's important to remember that skills don't necessarily arise for a need, but having arisen, they find a need (can't remember who said that, Democritus?) It may be that speech just randomly emerged as a continuous stream accompanying experience. In time, it became valuable for group dynamics. We really don't know.
Quoting Srap Tasmaner
Absolutely.
Quoting Srap Tasmaner
Sure. I think there's convincing evidence that speech capability is innate, but interaction is necessary for development.
And I'm suggesting that this "independence" is to some degree illusory, in two senses: the sorts of things you think are the sorts of things you could express, whether you do or not; and secondly, they are that way because you learned how to think from other people.
Roughly, I want to convince you to feel, behind every thought you have and every word you utter, millions of years of evolution and hundreds of thousands of years (at least) of culture. The thoughts and words of countless ancestors echo through your thoughts and words. Every time you choose "What am I doing all by myself?" as the starting point for analysis, that's a mistake. It's the tail wagging the dog.
I agree.
I hope no one will take the forcefulness with which I'm expressing my view to indicate dogmatism. I could be entirely wrong.
Honestly I think I'm inclined to push this sort of inside-out approach just because so much of our tradition presumes the opposite. I'm curious to see if other approaches might be enlightening.
And that is by and large a good idea, which I appreciate. We don't want to be taking words like "private" or "mental" to imply some lonely kingdom we inhabit and populate by ourselves, as if it (and we) were something new on the Earth. That never happens.
It should be clear from other posts that I agree we do not know, and may not be able to know.
But I am still a partisan of the communication first view, or, rather, shared intentionality and cognition first. A lot of that I get from Tomasello. I was playing with my granddaughter last year after watching one of his talks and it's shocking how obvious this is once you look for it: I roll the ball toward her and she glances up at me then back at the ball until she traps it in her pudgy little hands and immediately her face pops up to look at me. (Did I do it right? Is this how we do it?) Then she focuses on the ball so she can roll it toward me and as soon as she lets go, her face pops up again to see, again, if she's doing it right. It's constant. We start as early as possible learning to see the world through the eyes of our caretakers. I think talking builds on and elaborates this fundamental orientation of ours toward communal cognition.
Well, there is a theory that reading in the ancient classical world was always reading out loud. Reading to oneself in silence developed later. Sadly, I have lost my note of where I got this story. However, one can see this process at work by watching small children as they learn to read. Even if it is not true, it seems to me to be a plausible myth of the origin of talking to oneself.
Quoting J
Is there any reason why we can't distinguish two phases of reference? The speech act and the hearer's response, which acts as feedback to bring into line any misunderstandings.
Where speaker and hearer are one and the same person, we have, so to speak, a limiting case. One of the limitations is the tendency, over time, of language to wander from its original starting-point. A solitary speaker has, and requires, no feedback.
The involvement of other people puts a brake on this wandering, which a solitary individual lacks. Of course wandering still occurs, but it occurs as the result of many individuals communicating with one another, so the changes are controlled by consensus.
Quoting Banno
This quote from @Banno is from the other thread, explaining to me how formal logical systems are constructed. This process seems to me to assume that assigning properties to individuals presupposes the assignment of names to their referents. But perhaps I have misunderstood.
Of course, that's not a problem if we are simply using natural language as opposed to constructing one. But it would be nice to be able to say that referring and describing are interdependent activities. They really need each other.
Incidentally, ostensive definition is the traditional way of escaping from the endless circle of descriptions (I believe). Wittgenstein's point about this is, as I understand it, that there is no guarantee of success. But if we can sort out misunderstandings, why do we need a guarantee of success?
Saint Anselm? I'll have to google now.
Ambrose!
I think this is why people like @J talk to themselves aloud. It creates a quasi-externalization and a quasi-triangulation.
Quoting Srap Tasmaner
Yes, but we see the world through their eyes. Your granddaughter asks implicitly, "Am I doing it right?" I don't think she is implicitly asking, "Am I doing it your way?" That's why I think, at least in one prominent rational aspect, the parasitic move is secondary. If I tell my nephew what a spider is called, or how many legs a spider has, he is immediately thinking in objective terms. He thinks, "A spider has eight legs," not, "My uncle says/thinks a spider has eight legs." The shift to the latter is actually quite complex and difficult. In fact the concept and basis of error eludes most of the TPFers on a regular basis.
Quoting Ludwig V
St. Augustine found St. Ambrose's silent reading strange and abnormal, which is one piece of evidence we have for that thesis.
Thanks very much. Perhaps I should have paused before posting.
Quoting Leontiskos
Thanks. It's good to know I was not wrong.
I muffed that a bit. It was actually St. Augustine writing about St. Ambrose, who practiced silent reading. Augustine found it strange.
I will try to get to the big can of worms you opened later tonight.
I am familiar with this passage. It's in one of the earlier books of Confessions.
That people used to often read aloud might also be a reason for the heavy preference of verse up until the modern era, the epic poem being for them what the novel is for us, and even scientific and political topics were covered with poetry. Another common hypothesis is that it is easier to remember text that is in rhyming meter and books were so expensive that memory was essential. Also, you can often do more emotionally and thematically with clever verse using less text, and when you have to kill a bunch of oxen to make a single book, economy is key.
Is it? I suppose in some sense it must be, because it requires [I]some[/I] stimulus input, but it also seems about as innate as almost any behavior can be. Honey bee dances would seem to refer, but bees are not "taught," although they do have a "critical period" in development where lack of exposure to other dancers will hamper (but not remove) their dancing abilities. For humans, pointing seems similar, being fairly universal and innate (as much as anything can be).
Unrelated comment for the group:
Animal "reference" is an interesting example. Can dogs or dolphins refer? It seems they can in at least some basic sense. Dolphin pods send out scouts and use signals to direct the pod during attempts to catch fish, etc.
I'd question though whether it makes sense to talk about "bees' dances" or "dolphin clicks and whistles" doing the referring. The bees and dolphins refer; the dances and whistles are their means of doing this. Certainly, we can look at the means of communication in abstraction, but zoologists are pretty good about always trying to understand these in terms of the animals' particular biology. With philosophy of language, this focus is sometimes lost. Language itself does the referring in some views, rather than acting as a means. That's a crucial difference.
From an information theoretic perspective though, you can never ignore that data source. If a random text generator just happens to spit out a coherent English sentence, all it has really given you information about is the pseudo-random process underpinning the generation of the text. If it says: "Rome is burning," nothing about this ties back to Rome or fires, except wholly extrinsically.
Just a rambling thought about analysis. There is conventional meaning and intended meaning I guess.
There is a similar, maybe more influential issue where we tend to think that the unconscious processes undergirding empathy and language must work something like conscious inductive inference. Indeed, formal versions of induction are often called on to explain language, and "Bayesian Brain" theories certainly imply this sort of thing. But is it so? I don't see how any purely computational, formal inductive process can ever entail [I]feeling[/I] or [I]understanding[/I] (the old Hard Problem). Empathy seems to involve more than just induction, yet empathy also appears to be key to language acquisition and successful interspecies interactions.
There is no intentionality, emotion, or first-person experience in formal models of induction. I am not sure how there ever could be. Hence, using this as the starting point for philosophical analysis might be condemning our project to behaviorism (perhaps against our wishes). If the idea is that "what is empirically observable (and measurable)" is what fits in such models, then we might end up being forced to reevaluate this epistemic criterion.
Thanks.
Yes, and I think the recovery of reading aloud would be a good thing. Simpson talks about the development of poetry as part of the natural progression of culture's orientation towards a comprehensive end:
- Sounds good.
Building on what was said, this essentially contravenes your earlier observation, 'I shall use the term "Glunk" to refer to the man that I call "Glunk".' Note that part of the problem here is that 'Glunk' is not a term that is serviceable for triangulation, because it has no common meaning between the two speakers. This is the same problem with your, "The man who I think..." Mere thoughts are not common between speakers, and therefore are not serviceable for triangulation. This is why your revision does not add anything - because your interlocutor cannot read your thoughts. The reference still succeeds or doesn't succeed on the basis of the claim that the man is holding a glass of champagne.
Quoting J
It is now based on something thought to be the case. There is no escaping the assertion of what is the case (and this is equally true on TPF, where it is unavoidable to give opinions). The revision is only useful insofar as it introduces an explicit margin of error. It is only useful insofar as it says, "The man over there who is holding a glass which appears to contain champagne." The idea here is that the interlocutor will be helped if the possibility of an erroneous appearance is pointed up. But apart from that, revising to thoughts is no help at all, given that thoughts are private. "That man over there of whom I am thinking," is not going to suffice for a common reference. We say, "The one I am looking at," or, "The one my right foot is pointing at," but not, "The one I am thinking of." So, "I am thinking of the one I am thinking of," is an infallible statement, but it won't be helpful when it comes to public reference.
This plays well on my dithering between Davidson, Austin and Wittgenstein.
Referring as a speech act is public and communal, as are all speech acts. If we are to make sense of thinking about something to oneself, we might well do so as a back construction, a re-application of the public act to the equivocal "private" world.
One argument for this is that the "private" discussion might be made public - you can tell someone what you are thinking. The act of thinking about something to oneself is not inherently private in the radical sense; it can be translated back into the public domain. That's what distinguishes it from the kind of "private" experience Wittgenstein critiques in the private language argument. So it's not private in the way that the sensation "S" is for Wittgenstein.
In answer to your question, I don't see that we must deny that referring to something unvoiced is a reference proper, because it well might be made public.
The private language argument shows the incoherence of a language that in principle cannot be shared. It remains that a reference may in fact be unshared yet not unsharable.
So I do not think I am caught in the dilemma of having to choose to rule some references as not references.
It's not at all clear to me what sort of act a "parasitic reference" might be; I can think of a few possibilities: repeating a name without knowing the referent; ironic, fictive, or pretend speech; quoting someone else's use.
What do you have in mind?
Not sure what you are asking.
Giving an interpretation to a formal language involves assigning individuals to the individual constants (names, in a natural language) involved: a to "a", b to "b" in the exemplary case.
Properties, or more properly predicates, are not something apart from those individuals, but sets of individuals: f = {a, b, c}, or whatever.
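This picture of an interpretation can be sketched in a few lines of code. The domain, the constant symbols, and the predicate extension below are invented toy values, just to make the structure concrete:

```python
# A toy interpretation of a tiny formal language.
# The domain is the set of individuals the language talks about.
domain = {"alice", "bob", "carol"}

# Assignment of individual constants to individuals: a to "a", b to "b", etc.
constants = {"a": "alice", "b": "bob", "c": "carol"}

# A predicate is just a set of individuals drawn from the domain,
# as in f = {a, b, c} above.
predicates = {"f": {"alice", "bob"}}

def holds(pred, const):
    """Is the atomic sentence pred(const) true in this interpretation?"""
    return constants[const] in predicates[pred]

print(holds("f", "a"))  # True: the individual named "a" is in the extension of "f"
print(holds("f", "c"))  # False: the individual named "c" is not
```

Note that the assignment of constants and the listing of extensions are two separate dictionaries; nothing in the code forces one to be written down before the other, which bears on the order-of-assignment question discussed below.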
Quoting Ludwig V
Not sure you can separate these. For example, Wittgenstein points out that ostension is already a part of the language. One has to understand the activity of pointing to follow a pointer.
Good, and your experience with your granddaughter illustrates it beautifully.
Quoting Banno
That's where I come out too -- "private language" is a bizarre if useful thought experiment, whereas a reference may be private or not, depending. As you say, it's the difference between something that in principle would have to be unsharable, and something that just happens not to be shared.
What are we doing when we use a word in the community language differently (i.e. slang, etc.)? It takes time for that use to propagate throughout the community. When does it go from being a private use to community use? Does this mean that language is rooted in private reference that has simply been agreed upon by the community? Who invented each language? How did each language become a language? Was it the local shaman that found scribbles useful for keeping track of natural events and then taught the use of the scribbles to the community?
Yes. But that assignment happens before the assignation of individuals to predicates. So, presumably, predicates can play no part in assigning individuals to individual variables. Hence only rigid designators can be used here.
Quoting Banno
I didn't think I was questioning that.
Quoting Banno
Yes, he does. But ostensive definition was thought at one time to be the way that language reaches out from the circle of words (as in definitions) to attach to the (non-linguistic) world. Has that changed?
Quoting Ludwig V
Do you have in mind something like this, from the first page of the Investigations?
Wittgenstein continues:
It's a part of the story, not the whole of it. In particular, that juxtaposition of a linguistic and a non-linguistic world needs some critique. The individual a and the individual constant "a" could not inhabit separate worlds if we are going to do things with the one by using the other.
So it's not quite that "predicates can play no part in assigning individuals to individual constants". We might assign "a" and "b" to a and b because we already assigned "a" and "b" to "f"; we might assign "sports car" and "sunset" to the sports car and to the sunset because those words were already predicated of "red". It makes no difference whether we first assign names, then predicates, or first assign predicates and then names.
Nor are we restricted to only rigid designators. We also have at hand the individual variables x, y, and z: the indefinite noun phrases of a natural language, which work with quantification. So we have "Something is red" and "Nothing is red" and so on.
But your general point carries here, in that the separation between syntax and semantics in a formal logic is deceptively simple, and so somewhat unlike the semantics of a natural language.
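The point about variables and quantification can be sketched the same way: quantified sentences are evaluated against the domain without naming any particular individual. The domain and the extension of "red" here are invented for illustration:

```python
# "Something is red" and "Nothing is red" as quantification over a domain.
# No individual constant is needed; the variable ranges over the whole domain.
domain = {"sports_car", "sunset", "ocean"}
red = {"sports_car", "sunset"}  # the extension of the predicate "red"

# Existential quantification: there is some x in the domain with x in red.
something_is_red = any(x in red for x in domain)

# "Nothing is red" is the negation of the existential claim.
nothing_is_red = not any(x in red for x in domain)

print(something_is_red)  # True
print(nothing_is_red)    # False
```

The sketch also shows what the syntax/semantics split looks like at its simplest: the sentence's form is fixed, and its truth value is settled entirely by which sets we happen to have written down.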
Yep. Cool.
This might be the most common error made by folk attempting to critique the private language argument - "But I do talk to myself privately!" - and by mistaken defenders of private language arguments who suppose that we cannot do something we indeed do.
What the argument shows is that the meaning of "red" cannot be our private sensation of red, and that rather than looking for a meaning here as the thing that "red" refers to, we should look at how we use it to reach agreement on which apples we will purchase.
The puzzle is why the extension of "red" includes these apples and not those ones.
I don't need to agree with anyone to know when an apple is ripe and when it isn't. To know when an apple is ripe or not (and to know that red means ripe and black means rotten), I interact with the apple, not people.
Female peacocks don't need language or to agree with anyone that one male's plumage is more attractive than another's and means they would be a better mate.
Once an infant obtains the sense of object permanence, they understand that their mind is not the world and their experience of their mother does not exhaust what it means to be their mother, but is representative of what it means to be their mother, and that their representation can be maintained and talked about in the absence of their real mother.
The scribble, "red" is no different than the red apple in that we all privately interpret what the red of an apple means and what a scribble means and how to use it (if the apple is red eat it, if it is black throw it in the trash, if the scribble is a word then you can use it to represent things in the world, if not then you can interpret the scribble as the outcome of purposeless natural processes, or art). A scribble is a thing like everything else that we privately interpret using our own senses and our own brain. You seem to be positing the existence of an omni-mind where the meaning of some scribble is contained.
Yes, of course that's right. I was lazily using what I thought was a standard formulation. Let me try to put the point another way. A dictionary defines words in terms of other words. It is surely obvious that, if that is all there is to it, there will be a massive problem in actually using language for many of its standard purposes, such as shopping lists. Of course, Wittgenstein was right to say that ostensive definition requires an understanding of "where the word is stationed in the language", but he didn't suggest that ostensive definition didn't work, did he?
Quoting Banno
I get that. But my, possibly naive, point is that whichever we assign first, we must be assigning without the use of whichever we assign second. If we have assigned names to constants, we have something we can assign to predicates. Obviously, we cannot at the same time use predicates to assign names to constants. The same applies, mutatis mutandis, the other way round.
I had thought that the point of the concept of a rigid designator was that the assignation of names to constants was, from the logical point of view, arbitrary, so the problem didn't arise. (The causal account of naming could safely be seen as beyond the scope of formal logic.) I seem to have got that wrong.
I feel forced to a view that names and predicates require each other and so must be interdefined by a process that defines both at the same time - unless in some way the point is the structure and not the process of construction.
Quoting Banno
H'm. I don't know enough logic to comment. But I would be surprised if there were no difference between formal logic and natural language in that respect. The concept of syntax (grammar) was invented long after natural languages developed - and I find it hard to believe that the latter was developed in a systematic way.
Quoting Banno
I agree entirely with both your points. But I don't see what the puzzle is. That could only be puzzling to someone who couldn't perceive the difference.