Why should we talk about the history of ideas?
Quoting Wayfarer
Peirce was active in the so-called 'golden age of American philosophy', roughly contemporaneous with Josiah Royce, William James and Borden Parker Bowne, all of whom were broadly idealist, in keeping with the zeitgeist. That was all to be rejected by the ordinary language philosophers of the 20th century and the ascendancy of scientific naturalism as the 'arbiter of reality'.
Genuine question here.
You very often write this way, in what I've come to think of as the "argument from the history of ideas". The general form is: "Lots of people used to believe X, but then in modern times (glossed as appropriate, usually the Enlightenment or the 20th century) people mostly started believing Y instead, and that's the current orthodoxy, but X has started making a comeback because look! A, B and C are contemporaries who believe X and they say Y is on the way out!"
I used to always ignore these paragraphs (unless they were weirdly factually wrong like the one above) because I was largely in the "Just say 'No' to the history of ideas" camp. It would sometimes occur to me that an approach more derived from Nietzsche (or Heidegger or Foucault or Derrida, FWN's various progeny) might have some use for such an approach, so maybe I ought not be so dismissive. But I have some sense of what those folks were up to, and I don't have much sense of how you think about these paragraphs.
So here's the question: what sort of point are you making when you post something like this? Is it only sociological? There'd be value in that, in helping wayward young philosophers know they're not alone, there are others who believe in whatever, and it's okay to be philosophically divergent. But is that what you're up to? Or do you have in mind some sort of genealogical critique in the style of Nietzsche and his descendants?
There are other options, of course, but I'll leave you to address the question in your own way, if you are so inclined. Others will do as they like.
Comments (220)
Some simple, some complex. Some popular, some obscure. Some that have endured, some not.
And some that died out, and for reasons unknown have returned to become undead zombie ideas like the flat earth theory lol.
Do you buy the idea that how the conditions of life in any given time and place relate to changes in thinking (arts, sciences, philosophy, politics, etc.) is "history"?
If you do, what's your problem with Wayfarer's paragraph? If you don't buy it, what is your definition of history?
Sure, I mean, the history of ideas is really interesting. Love that stuff.
My question was not whether it's worthwhile in general, but how does talk about the history of ideas contribute to philosophical discussion? I mentioned a couple of the answers I'm somewhat familiar with. I could also have mentioned its central role in exegesis, evident in the reading threads we've had dealing with texts by Plato, Hume, Descartes -- texts far enough removed from us that you have to restore some context to understand them well. I've always been inclined to call that sort of thing "history of philosophy" rather than philosophy "proper" -- but I'm not wedded to that view.
It's an open question to me what the place of the history of ideas, and of the history of philosophy, should be in our discussions, and I expect people to give very different answers. My starting point was wondering what @Wayfarer's point was in telling @apokrisis what he did, as quoted above. What effect did he expect that paragraph to have on apo's views?
I think you gave at least part of the answer in your OP.
Quoting Srap Tasmaner
Studying the history of ideas helps you understand that things that were once seen as true but now aren't may be true again. More simply - they might have been true all along, or at least had perspectives that were helpful and useful. I've read in more than one place, I can't remember where, that ideas in science and philosophy have fashions. Being in fashion gets you professorships and funding. Being out of fashion doesn't.
And here's the main reason I responded. It gives me the chance to quote one of my favorite sections from my favorite poem by my favorite writer.
Quoting Robert Frost - The Black Cottage
Quoting Srap Tasmaner
In authors you mentioned like Derrida, Foucault and Heidegger, a distinction is made between history and historicism. Philosophy is always historical in the sense that the past is changed by how it functions in the present. This is as true of historical analysis as it is of fresh thinking. Historicism, by contrast, treats history as a static objective grid that one can traverse without altering its sense. Historicism fails to recognize that history is nothing past and gone but is immediately present and operative in the now that it co-determines. Both American Pragmatism and scientific naturalism can be treated that way, as a past that is still operative now.
But what do you study when you do a philosophy degree but the history of ideas? You hope to learn critical thinking and even eventually join up with some current research project. Yet you get sat down for your first years and walked through the history of philosophy.
Perhaps you can skip ancient Greek metaphysics and start off with Enlightenment epistemology, but it makes sense to understand the context of what has gone before so as to ground what seem the concerns now.
I would say that in fact a problem is that folk skimp on their history and don't realise how much is simply being rehashed with each generation. And also, close study shows how much the telling of the past is a sloppy caricature of what was actually said.
Anaximander is a prime example for me. Philosophy is a social game and its winners write the history. So the complaint might not be that the history is rather irrelevant - which for poor historians, it would be - but that it is a lot of bloody work to be historically accurate.
Quoting Srap Tasmaner
This has its own history. At least five times now I've had to respond to the jibe that Peirce was ultimately an idealist because he used the words 'objective idealism'. I have to point to the context in which Peirce philosophised - the very churchy world of late-1800s Harvard and Massachusetts academia, which placed Peirce under considerable social pressure to conform. And also that Peirce was arguing for semiosis as a general ontology before there was a clear notion of there being genetic and neural codes to match the linguistic and logical codes of human thought.
And if we are talking about the actual zeitgeist that Peirce was responding to, it was his response to Kant that most closely echoed Schelling and Duns Scotus.
But anyway, the history of ideas is important as it is the only way of understanding why folk tend to believe the things that they do. And while science and maths are juggernauts when it comes to the production of new things to know, maybe philosophy only has its history, or its creative writing wing.
In this sense I am not a historicist, then. I mostly think about Popper when I think about the accusation of historicist, and that's the name I don't mind taking on. I just don't think there's as much poverty in history as he intimates.
But, this notion you say about history being treated as a static object -- that I do not do. It would go against everything I understand about the writing of history. History is a living subject, not a static Way Things Were, and it's nearly always addressing something present while talking about the past. My more mundane explanation for this is that history is narrative, which itself isn't a list of facts but a story which follows a trajectory of significance. (hence why I highlighted that in my post)
Same here.
Quoting Moliere
This is applicable in several fields. The history of science isn't science; the history of the arts [literature, painting, music...] isn't "art"; but a decent history requires the historian to be sensitive to, delight in, be familiar with performance, etc, else one gets a ham-fisted treatment. The history of philosophy requires an engagement with the relevant philosophers.
Books about philosophy are sometimes the best approach to a given philosopher, because the 'about' can provide context, background, explanation of terms, and so forth that a general reader might not (probably doesn't) have. That's certainly the case for me.
Quoting apokrisis
That's two votes for better understanding through history, which it's hard to argue with. I've often wished math and science were taught with more of an eye to history.
Quoting apokrisis
This is true, but I would put it this way: philosophy curricula more closely resemble literature curricula than they do the sciences or mathematics, and that's slightly odd. (I'm not the only one to have noticed this.) You could say that's a result of our above average scrupulousness, since secondary sources tend to get things wrong or slant them in some way -- but that assumes that what matters most is what Kant or Aristotle or Schmendrick actually meant, and that's a matter of biography, isn't it? We're supposed to be in the ideas business. Kant only matters to us because his ideas are interesting; his ideas aren't interesting because he's the one who had them.
Quoting T Clark
Now that's a specific lesson, kind of a warning really. You've folded in both swings of the fashion pendulum here: what you hold true, even obviously true, may in the not so distant future be considered obviously misguided in any number of unflattering ways; and then the naysayers may themselves be naysayed in turn. There's a whiff of vanitas about the whole proceeding, looked at this way, and one might be tempted to chuck the whole thing. Or you could embrace the ephemeral nature of philosophical struggles and shortlived victories and take giddy pleasure in it -- after all, you needn't worry about having any lasting influence!
Quoting apokrisis
Indeed. This is even stranger than the phenomenon of fashion in philosophy. But -- coming back a bit to my original question -- it does little good in discussion to point this out, because no victory in philosophy is ever complete, and probably not even lasting. So if you point out to someone that they're taking the same view Schmendrick did in the 30s, they might just add him to the list of people to quote in favor of their position! It's most unlikely that Schmendrick's position was ever definitively refuted, only discredited in some way, or passed by at great speed by fashion. You might even accidentally cause a Schmendrick revival...
Quoting apokrisis
Now here I think we're closer to a sort of Nietzschean genealogy, and I'll only remark that the implication is that why people think what they do is not what they think -- it's not the arguments and the evidence but the currents of thought they've swum through and been buffeted by. I don't disagree, but it tends not to go over well in conversation, since it amounts to a kind of cultural psychoanalysis, and that's rude.
Maybe I spoke too soon. I like reading history. I'll just stop there.
Seconded. I feel like scientific pedagogy tries to highlight history but isn't focused on it, so it's kind of bad history; it's definitely something you have to take up on your own if you're interested. The pedagogy is designed to make people employable rather than give a deeper insight.
Quoting BC
:rofl:
Thanks for picking up on that. First of all, why is that paragraph 'weirdly factually wrong'? What exactly is wrong with it?
The history of ideas is a recognised academic sub-discipline, often associated with comparative religion. As I think I might have mentioned, it is associated with Arthur Lovejoy's 1936 book The Great Chain of Being: A Study in the History of an Idea. The title conveys the gist well enough: it concerns the idea, of ancient provenance, of the hierarchical nature of being, the 'scala naturae', extending from minerals at the bottom, up through vegetative, animal, human, and angelic levels, with the One/God at the top of the hierarchy, cascading or emanating the lower levels. (It is actually rather a turgid read, by the way.) I am of the view that the loss of the sense of there being a vertical, qualitative dimension of being is a real loss, one which has given rise to the 'flatland' or 'one-dimensionality' of modernity (note this paragraph about how even up to the 17th century philosophy accepted that there are degrees of reality).
I learned about history of ideas as a discipline through Comparative Religion which was the subject of my undergraduate degree; one of our lecturers was adept in that approach, and we also read quite a bit of Mircea Eliade, the influential scholar of religion, who has a lot to say on it. It is also a theme in anthropology, which I studied alongside comparative religion (indeed anthropology and sociology of religion, Max Weber and Peter Berger, is another contributing sub-discipline). Then there were scholars such as Joseph Campbell, Huston Smith, Ninian Smart, and others, who cover world religions and also touch on the important themes of the philosophia perennis (a term coined by Leibniz denoting the idea that there is a current of perennial wisdom which surfaces in diverse world wisdom traditions.) I will also mention that I started my undergrad degree with a reading of Russell's HWP, which of course is very much an extended essay in history of ideas.
Another scholar I was introduced to was Edward Conze, a 20th century Buddhologist and likewise historian of ideas. He has this to say in an essay on Buddhist philosophy and its European parallels:
This last being represented in the figure of 'the sage' in philosophy. (Incidentally, Kant is sometimes referred to as 'the sage of Königsberg'.)
I suppose I will also mention Alan Watts, whose books triggered my interest in this subject, chiefly The Supreme Identity and The Book: On the Taboo Against Knowing Who You Are. I won't try to recap his ideas in this post other than to say his overall thrust is that modern materialist culture kind of institutionalises or normalises what the Eastern sages would characterise as avidyā, or ignorance. This was of course a counter-cultural attitude.
I'll leave it at that for now, I have family commitments today which will keep me away for the rest of the day but will come back later. Thanks for the question.
100% what's missing. As if this wasn't the result of human efforts but some sterile equations and information. Ridiculous how education essentializes and splits up technical topics, cleaving them of any human element.
Holistic understanding of how thought developed over time is definitely not what most institutions try to market to people. There are some exceptions. St. John's, I believe, tries to teach students through primary sources, the Great Books tradition, and a more historical approach, but it's few and far between.
https://en.wikipedia.org/wiki/St._John%27s_College_(Annapolis/Santa_Fe)
That seems a better question then. How could one restructure the pedagogy to reflect a different approach? Oddly, I can remember exactly how my introductory philosophy classes started, but not the science ones. But the air of certainty and present tense is definitely what was intended to be conveyed.
And I don't think you would want philosophy to exude that kind of authority where the right views are already there to be learnt?
So does the historical approach not go very well with the mature practice of philosophy?
Quoting Srap Tasmaner
I certainly agree with that. And you can only make an academic career by having ideas of your own that your peers find interesting. Even if it is just offering a revisionist telling of philosophical history.
But what you learn from close reading of the big names is as much the way they thought as what they thought. You can get into their thinking and attitudes and so discover the habits that work in the philosophical game.
Yet how would you set up Philosophy 101? It would have to be some kind of map of the field that oriented students the right way. It would have to use some surprises to get the kids gripped and thinking.
The most philosophical moments for me came in fact from my first psychophysics class, where the professor used the example of the Mach bands that give sharp outlines to all boundaries in our visual field. I walked outside and for the first time noticed the contrast edging around the dark buildings against the bright sky. Or at least understood the neural mechanism responsible for what had seemed a defect of clear vision, and the processing logic behind it.
The things I would have in my introductory class would be the epistemology of modelling and dialectical ontology of metaphysics. So Kant and Anaximander would be useful historical starting points. But a quick sketch of the context in their times would be enough.
Then one could add a review of comparative philosophy to pay the necessary lip service to theology and PoMo as cultural traditions, along with science, Eastern traditions and the general theory of socially-constructed belief systems. :grin:
I think it's likely a problem of complexity. Maths and science have cutting edges in ways philosophy really doesn't. Philosophy on the whole is a much simpler suite of ideas, even being, as I remember reading in Hegel somewhere "the same old stew, reheated" (although I cannot find the quote anywhere, maybe I dreamed it).
Math and Science are certainly not the same old stew reheated, they unarguably progress.
And the arts in general, for that matter.
What's mysterious is that philosophy progresses, but it doesn't progress in the same ways we usually measure progress -- power, fairness, abundance, stability, pleasure, tribe.
Math I can't say. But science I feel progresses in a different mode, more or less. Progress is relative to some value, at least, so I'd suggest that the difference is in values between practitioners of science and practitioners of philosophy, rather than a difference in unarguable progress.
There are narratives that tell us how much better we were in some distant past. There are others that tell of a movement away from the shadows of our ancestors. The 'history of ideas' is a record of previous opinions. It can also reflect the mutability of human life.
One of the odd features of the 'golden age' thinking is that it undercuts the universal as a continuity between all the many articulations of being a particular kind of being. Many models have emerged to imagine how we have changed or not changed. How will we compare these models to each other?
Or are we in the land of Protagoras where there is nothing to learn outside of one's own theater?
Any improvement (according to some) or worsening (according to others) of philosophical understanding has arguably come on the back of scientific progress.
Because you've never been clear on the difference between analytic philosophy and ordinary language philosophy. We don't need to go into it here.
But now I get to ask the same question again, because this was another post just like the one I was asking about. All very interesting I'm sure, but what effect was the history of the history of ideas supposed to have on me? (Maybe you're just "catching me up," in your view, so we can have a proper conversation, but as it happens I already know what the history of ideas is and I've read some of the stuff you mentioned.)
Quoting Moliere
I don't think that's quite it -- obviously for some things, sure, but science and mathematics are just different, and I don't think it's for the reason you suggest. ---- But it looks like this thread has already changed topic! Now it's a thread about how we teach science and how we teach philosophy, and that's fine.
Quoting schopenhauer1
I'm not sure that's true either, if you recognize that there are skills needed and technical background needed to do this sort of work, and the curriculum is designed to get you up and running, able to do mathematics, to do scientific research -- and those are great human endeavors! They don't have to focus on the human element because you are the human element and if everything goes right, you'll be thrilled to head to campus or to the lab or to the site everyday because you get to do science all day! This system largely works, and you can see just by peeking into any lab at the nearest research university, grad students listening to some tunes and doing their work -- a perfect life if there were more money.
Quoting schopenhauer1
True. I've known some Johnnies very well (and married one, a long time ago). They learn geometry from Euclid and physics from Newton.
Quoting apokrisis
Absolutely. But why? Because we don't have any certainty to convey... With the sciences -- geez, with medicine especially, it seems -- it's becoming commonplace for half of what you learned in school to be falsified by the time you retire if not much sooner. (There are some numbers on this in The Half-Life of Facts, but I forget what they are.)
Quoting apokrisis
Certainly. When I was young, I read philosophy in a believing frame of mind, acquiring ideas I could endorse or not. Got older and for a long time have read philosophy with little interest in the 'doctrine' at stake. I enjoy Wittgenstein primarily because we have such an extraordinary record of an interesting mind at work. I just like watching him go, and I think I've learned from how he thinks. I've enjoyed watching Dummett at work because his command of logic is formidable and he sees things I have to work through slowly. Sellars also has an unusual mind. I even like the tortuous way he writes. He's every bit as intricate as Derrida, but not for the same reasons at all. ---- Anyway, big yes, and I think this is an excellent specific reason for reading original texts, but then that only throws into sharper relief my original question: what does the history of ideas contribute to such an experience?
Quoting apokrisis
It used to be my ambition to teach Philosophy 101 using Calvin and Hobbes as the text. (Wittgenstein somewhere said you could teach a class in philosophy using only jokes for your text.) There is one textbook I admire, Contemporary Epistemology by Jonathan Dancy, later known mainly for writings on ethics. Begins with two problems, Gettier and skepticism, and then goes through historical accounts of knowledge noting how they handle these two key problems or fail to. I liked the problem-oriented structure. It's a bit ahistorical in one sense, a very mainstream Anglo-analytic sort of thing, but it really engages with a lot of stuff and brings it to life. Nice book. Probably not quite what I'd be after now, but a solid example of how good a philosophy textbook can be, in my view.
Quoting Paine
If I can abuse your post a bit, several people have suggested that knowing the history of a philosopher's ideas helps them understand those ideas, but there is another way to go here, which is to suggest that it's narrative that matters. (Hey @Isaac.) That is, that we don't naturally deal with 'naked' ideas, but with ideas as they occur within narrative -- that's what our thinking is organized around and pretending to discuss an idea 'in isolation' means you're probably just embedding it in some other narrative without acknowledging that transfer. And that probably means distorting its original meaning -- but that might not be our primary interest anyway, as I've noted. --- At any rate, if we can only deal with ideas as elements of some narrative, we might as well face up to that up front, even if there's no decisively privileged way to do that.
That's up to you. In the context that started this dialogue, I claimed that C S Peirce was part of the generally idealist attitude of the philosophy of his day. Per the SEP entry:
But that this aspect of Peirce is routinely deprecated as incompatible with naturalism. I think it is at least germane to the OP.
(BTW a rather good Medium essay on current idealist philosophy came up in my feed just now. Don't think it's paywalled, although I am a subscriber.)
One way (hardly the only way) to look at philosophy historically is as a zoo of intense personalities who react to those who came before and influence those who come after.
Simple exposure to many opposed personalities, all of them able to make a strong case for their own interpretation of reality, should not IMO be underrated. As others have put it, we are mythological animals, and in philosophy we found a second-order tradition where our orienting myth, which we can't live without, is subject to constant modification -- like Neurath's boat.
The technical issues can be fascinating to me, but I suppose the existential issues remain primary. What is a person's fundamental metaphor for reality? What pose do they take in relation to it? In other words, what kind of costume does the hero wear, and what's the scenery like? (Dramaturgical ontology?) This would include things like whether philosophy is understood more as mathematics than theatre in the first place.
I lean toward us as the narrative (mythological) animal. We can also just look and see that humans mostly talk about ideas in practical contexts -- trying to persuade someone, prove one is educated, make a woman laugh, etc.
What is this house of personality? I think of mirrors reflected in mirrors, or Indra's net. We aspire perhaps to Shakespearean completeness, or perhaps instead to some thin dry purity, or...? Someone may invent a new way to win tomorrow, though it'll probably be a blend of what came before, same old sawdust.
https://en.wikipedia.org/wiki/Indra%27s_net
Because of the clarity of the jewels, they are all reflected in and enter into each other, ad infinitum. Within each jewel, simultaneously, is reflected the whole net.
To be fair to science, how much free thinking does society really want or need? Especially at the introductory level, you want to impress a certain useful rigidity of thought on all those who are only going to need to apply learnt algorithms on the grown-up world.
And those who do philosophy at university only need to come out with a useful facility for critical thought and perhaps show some comfort with uncertainty in the grown-up world. The history of philosophy is not relevant in the job market.
Medicine became engineering applied to biology and then got corrupted by Big Pharma and Big Food. So no surprise it is what it is.
Again, how should philosophy be taught? That winds up being answered by whether it serves some useful purpose as far as society goes.
Quoting Srap Tasmaner
You're right. If you aren't looking for a totalising world view, then browsing other smart minds is at least a pleasant diversion. I like reading crackpots for the same reason. It is useful to be able to tell the one from the other via intimate literary acquaintance.
But one could also be seeking that totalising world view that both philosophy and science have as a selling point. And a history of ideas in both spheres is important to that project.
Not my style, but I get it. As I said, it doesn't market well, and I admitted as much. I'll only note that pragmatic doesn't mean good. If anything, our society has shifted too far into "because it's expedient for technology and the market's sake" and not toward actual understanding and context. But keep playing devil's advocate, I get it. I get the other side, so you don't need to, actually.
Yep, that's @Wayfarer. As far as I'm concerned, this approach to discussion is a crutch used in lieu of admitting he isn't clear on, or hasn't thought through, the topic at issue well enough to reply cogently with his own thoughts.
I always do (until it's clear nothing significant follows). :up:
Maybe it's uncharitable (or impolite) of me to say so, but after a decade and a half of exchanges with Wayfarer I am convinced that his "appeal to the history of ideas" is used to indicate that he disagrees with me because he agrees with some historical figure/s, rather than critically engaging my points and/or defeating my arguments. It's a rhetorical dodge, nothing more. Wayfarer is quite well read, no doubt, but, IME, he's much more skilled at arguing to a foregone conclusion (rationalizing) than validly arguing from clear, explicable premises (reasoning). Typical 'religious/idealist' mindset. No matter how interesting his citations often are, they're just lengthy footnotes to 'the reasons' he fails to give. :eyes:
Yeah, but it's not only other inmates of the zoo that matter, not by a long shot, especially if it's more like your
Quoting plaque flag
that matters most. It's someone who made a strong impression on you, or it's something in the zeitgeist, or it's the character of the people you interact with over and over, every day. Not just other philosophers. --- And if it is, we don't need the broader history of ideas but only the history of philosophy.
I'm sympathetic to the rest of both of your posts, but I'm still a little hesitant to put narrative front and center. I think it may be the fundamental mode of language use, and thus everything built on language use, but there are layers of life management below language, and I can't quite see language displacing those.
Nothing much for me to take issue with there, but I still have the issue I started with. What I haven't heard yet from anybody is some sort of full-throated defense of, I don't know, 'decentering' philosophy in philosophical discussion, not taking its self-image seriously, and treating it instead as only a part of Something Bigger, something like the history of ideas, the Great Story of Culture, whatever. --- I'm trying to keep an open mind, since my instinct is to treat these moves as some species of informal fallacy, which is, it happens, the sort of thing I find interesting sometimes, but I was looking for a more charitable take.
FWIW, I think that's already implicit in the narrative, which includes Nietzsche and eventually Heidegger, Rorty, and Derrida, pretty much explicitly melting philosophy into poetry, cultural criticism, etc. Even Comte way back thought of philosophy as a merely prescientific stage of human thought. It's what I try to gesture at with Shakespeare as a symbol for 'infinite personality.'
I think of Harold Bloom and Tristan Tzara and James Joyce (to name a few) as part of the same general story as the 'official' philosophers. I'm reminded of Hegel trying to put down Schlegel's notion of Irony in his lectures on fine art -- as if Hegel was afraid of being merely an earnest character in some joker's novel.
You might find this amusing if you haven't already seen it.
"Philosophy is the true home of irony, which might be defined as logical beauty," Schlegel writes in Lyceumfragment 42: "for wherever men are philosophizing in spoken or written dialogues, and provided they are not entirely systematic, irony ought to be produced and postulated." The task of a literary work with respect to irony is, while presenting an inherently limited perspective, nonetheless to open up the possibility of the infinity of other perspectives: "Irony is, as it were, the demonstration [epideixis] of infinity, of universality, of the feeling for the universe" (KA 18.128); irony is "the clear consciousness of eternal agility, of an infinitely teeming chaos" (Ideas 69). A literary work can do this, much as Schlegel's Lucinde had, by presenting within its scope a range of possible alternate plots, or by mimicking the parabasis in which the comic playwright interposed himself within the drama itself, or the role of the Italian buffo or clown (Lyceumfragment 42) who disrupts the spectator's narrative illusion.
Yeah, that's not bad. I've figured out what philosophy really is dozens of times, but I'm starting to think you can just not do that.
What I didn't get into was the necessary construction of the self from pieces of other selves. A person who reads philosophy (and literature) (and watches good movies/TV) has more to work with, though too much plurality can be dangerous and maybe paralyzing. As Kojeve might put it, a person who is into philosophy has already at least chosen the path of the hero of self-consciousness (of the species as much as the little self). This metaphor can be fleshed out in many ways. Am I a hazy, slippery, profound type, or a sharp, dry, clear type, contemptuous of anything that doesn't smell like math? To me it seems like the struggle (anxiety of influence) is to somehow evade easy categorization, which means inventing new categories, enlarging the game for the players that follow. That's part of the charm of the history, seeing how this trick is managed again and again.
I'm also partial to Hegel-and-the-gang's idea that it's really just one thinker leaping from mortal body to mortal body, sometimes splitting into adversaries, but eventually recombining, only to split again. The software runs on the crowd, enough of us always alive to not lose our progress in the game's attempt to understand itself.
:up:
It's beautiful, really. We can't help trying to find what philosophy is, because it's like finding our truest human nature or something. But fortunately we're inexhaustible in some sense.
I think there's a lesson to be learned. It's probably the most important in philosophy, and the one that causes the most arguments and misunderstandings. Most of the issues that raise a ruckus in philosophy are metaphysical. They are matters of point of view, not fact. I've made this argument many times before. Differences in philosophical fashion are more cultural, historical, sociological, or psychological than they are factual.
I think that is what Plato was trying to get beyond with the acceptance of dialectic as a necessity of combined ignorance. That spirit of dialectic can also be seen in the way Aristotle commented upon the views of others before arguing his own case.
Those views of dialectic are sharply different from the view that history went one way rather than another. And are those changes accidents of some uninvolved fate or the sequence of some kind of logic such as Hegel wrote?
Aye. I think if you found philosophy's essence, there would still remain a distinction between the found essence and the essence finding thought process. And in that regard the relation between those two would essentially remain unspecified. You end up equating philosophy with (inquiry+critical thought+reflective writing) in search of specifying its method, while simultaneously emptying its concept of content due to the variety of auspices it becomes equated to.
Always a view, never of anything. But that perspective comes from fetishising its essence, rather than the essence of philosophy (screw you Wittgenstein).
Less abstractly, I don't think there's any reason to believe philosophy behaves differently to sport. Which has no essence or method.
Quoting Wayfarer
It's not though. That goes against the norms of reason we usually follow in argument. If you are making a counterclaim or counterargument, you should be able to explain how it undermines your opponent's point. In that regard, what specific claim does your appeal to the history of ideas undermine? What force makes your claim need to be addressed on pain of being unreasonable?
The only guess I have is that you seek to portray an opponent's conclusion as a contingent event of thought; which it is, their thought just happened as part of the history of ideas. Nevertheless a claim can be a necessary consequence of another through rules of reasoning. In essence, "that's just your opinion man" vs "that's an opinion".
But he asked the question! Srap Tasmaner asked me, quoting from one of my posts, 'why should we bother with history of ideas' and said it was a 'genuine question'. To which I did my best to provide a genuine answer. And the response was:
Quoting Srap Tasmaner
So - how can I answer that? Can I guess 'what effect it is supposed to have'? All I can do is try and explain why I think history of ideas is important. The impression I got was that either I failed to make the case that the history of ideas was important, or that the questioner wasn't really interested in eliciting an answer in the first place.
Further to that, I thought my initial response raised some pretty fundamental points about the question, the answer to which was basically 'so what'? So what flaming hoops did I fail to jump through?
Maybe none! I think I saw you construe @apokrisis as being an idealist through that remark? Or was it just that the thinkers they reference might be construed as such? There could be an inherent idealism in construing the fundamental units of reality as sign-systems, but it isn't clear to me that you demonstrate that. Or that you sought to demonstrate it. Just kinda left hanging! Hard to tell whether what you said is intended as a counterpoint, an attempt to contextualise apo's remarks as part of the history of idealism or a claim that apo's an idealist for an unspecified reason.
The key philosophical point I wanted to make centred on the quote by Edward Conze, referring to the characteristics of what is described as perennial philosophy, and also on the loss of the sense of there being a qualitative dimension and 'degrees of reality'. I feel these are important philosophical questions, although nobody seemed to take issue with them.
Overall, another illustration of 'folks talking past one another'.
Fair enough. :up:
I agree the historical context is important to showing how philosophy follows fashions. It could be said hemlines rise, hemlines fall, thus philosophy reveals its essential cultural arbitrariness. It isn't a progressive enterprise building elaboration from solid foundations.
But that broad flip-flopping should be easy to recognise as the natural dialectic of lurching between extremes, as polar opposition makes ideas crisp in terms of all that they can "other". So for good metaphysical reason, we can see why there should be this kind of Hegelian history unfolding in philosophy as a discipline.
Furthermore, we should be expecting the triadic balancing act of a resolution that does build the solid foundations for the next level of elaboration.
And this brings me to the problem I have with your historical approach to contextualising Peirce.
You understand the historical development in terms of a simple realist vs idealist ontology. And you have picked a side that ought to be monistically the winner in the end. So you seek to assimilate Peirce to that reading of the necessary answer to final philosophy. But you don't really appreciate Peirce as in fact the step that finally helps resolve the realism vs idealism dichotomy in Western metaphysics. Your history telling is wishful rather than factual.
So a close historical reading is the way to go. There was the big set-piece debate of realism vs idealism dominating Western philosophy. And its resolution in the triadic systems relation of pragmatism/semiotics was very important and still unfolding in radical fashion, now spilling openly into scientific thought.
To seize on Peirce as a scientist who was a closet idealist is just shutting your eyes to the historically significant event taking place within philosophy at that time. You will never appreciate his place in the history of ideas.
Relating this to the OP, @Srap Tasmaner seems to want philosophy to be an open and unstructured kind of thing. A pastime with no real purpose or stakes. It is talk that is free and not to be constrained by grand ends.
I instead can see why a historically-rooted approach is correct. We are dealing with a grasping after something that defines the limits of our being, and have been doing that with surprising success since Ancient Greece woke up to the idea of living in a Cosmos that must express some universalised cause.
The Greeks worked their way to a dialectical method of inquiry that helped produce ever sharper focus. It was the logical machine that generates integration and differentiation, generality and specificity, in equal dichotomous measure.
Philosophy as a discipline emerges out of that self-structuring dynamic. It is properly Hegelian and has a historic destiny in reaching its ultimate limits. So a sense of how the debate has progressed is crucial to placing yourself in the "now" of philosophy as part of this arc from its past to its future.
Peirce's triadic systems logic is an obvious milestone of thought in this regard and shouldn't be trivialised by claims he was "really a closet idealist all along". He was the first proper semiotician and not the last prominent idealist before the dark tide of scientistic logical atomism swept through Oxbridge and outlawed metaphysics for a generation or two.
Interesting, but if anyone cares, no.
Quoting Srap Tasmaner
Quoting Srap Tasmaner
Quoting Srap Tasmaner
Still no.
First quote was me offering a couple of riffs on Clarky's response, drawing out ways of reading his response or extending it, not expressing my position. The word right before your quote begins was "Or".
Second was a response to you, and same deal. You suggested intro science classes exude an air of authority in presenting the results of settled science, but intro philosophy ought not. I took that as likely meaning that there's no equivalent set of results, nothing to be certain about. (In fairness I never went back and finished that paragraph. I mentioned how even what you learn in intro science classes might not be permanently true, but I never got around to finishing the point that it's still a reasonable pedagogical approach: teach them the not-really-certainties and let them find out how messy things are later...)
Quote three describes various ways I have read philosophy, which I distinguish pretty sharply from the doing of philosophy, as I would have thought was pretty obvious by now.
I don't know why quote four is there. How does that make the case I consider philosophy just a pleasant pastime? Anyway, that again was not a statement of my view, but of a view I expected someone to take.
Besides -- it sounded like you'd be disappointed if I thought what you said I did. Take my assurance that I don't and be happy!
There are plenty of reasons to go into the history of ideas. Off the top of my head:
It's a good way to rebut appeals to contemporary authority or appeals to popular opinion. Granted, such appeals generally appear on lists of common fallacies, but that doesn't negate the fact that they still carry weight in many contexts.
It's also a good way to argue against dogmatic viewpoints in some contexts. If I'm arguing for x, and my interlocutor's response is that x cannot be true because of y, where y is some widespread, dogmatically enforced belief that I think is false, then it makes perfect sense to explain how y came to be dogmatically enforced. For one, it takes the wind out of appeals to authority and appeals to popular opinion if you can show that the success of an idea was largely contingent on some historical phenomenon that had nothing to do with valid reasons for embracing that idea.
There might be valid reasons for supporting x. But the evidence for x might also not be particularly strong. Perhaps in our opinion, the evidence for x is far weaker than that for y, z, or even q. Yet if x has somehow contingently won approval from relevant authorities and/or popularity for some historical reason, this is inevitably going to influence how people judge x against other competing positions.
That's just how people are, and not without reason. If 99 doctors out of 100 tell you x is absolutely true, you should look twice at your reasons for buying into y before you eat the horse dewormer or whatever. But sometimes dumb ideas also get very popular.
The book Bernoulli's Fallacy is an excellent example of this sort of argument. It demonstrates some core issues with frequentism, but it also spends a lot of time showing how frequentism became dominant, and in many cases dogmatically enforced, for reasons that have nothing to do with the arguments for or against it re: statistical analysis.
The history of an idea can also show where a tradition went wrong in ways that simply looking at where the tradition stands today can't.
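The frequentist-vs-Bayesian contrast behind the Bernoulli's Fallacy example can be made concrete with a toy coin-flip estimate. This is a hypothetical sketch of the standard textbook contrast, not anything drawn from the book itself; the function names are my own.

```python
# Toy contrast: frequentist point estimate vs. Bayesian posterior mean
# for a coin that lands heads h times in n flips.

def freq_estimate(h, n):
    # Maximum-likelihood (relative-frequency) estimate of P(heads).
    return h / n

def bayes_posterior_mean(h, n, a=1, b=1):
    # Posterior mean under a Beta(a, b) prior; a = b = 1 gives
    # Laplace's rule of succession.
    return (h + a) / (n + a + b)

# Three heads in three flips: the frequentist estimate says the coin is
# certain to land heads; the Bayesian estimate is shrunk toward the prior.
print(freq_estimate(3, 3))          # 1.0
print(bayes_posterior_mean(3, 3))   # 0.8
```

The point of the sketch is only that the two schools return different answers from the same data on small samples, which is part of why the dispute is substantive rather than merely verbal.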
Quoting Srap Tasmaner
If one were to reject a "history of ideas" narrative structure for philosophy, what could one replace it with, and why?
Are you claiming to have no horse in the race? You seem to have gone through dozens and now deciding "whereof one cannot speak...", or something.
It's your thread. I was just expecting more clarity to justify historicism as a target here. You seemed right in fingering the problem of using it to close down debate. And I replied that it is what also keeps debate open by being the bookmark which tells where the great adventure has now got to.
Does it boil down to whether you view philosophy as a progressive project or just one damn cultural trope after another?
I agree with a whole lot of that.
But there is some tension here I don't think I've mentioned before, which makes it particularly odd in the cases that caught my attention: to explain why you hold the opinion you do by pointing to the history of the idea and its cultural uptake, is to discount your stated reasons for holding that opinion and substitute an historical explanation -- or at least supplement the given reasons with historical circumstances, forces, trends, what have you.
As it happens, @Wayfarer is hostile to explanations of an agent holding a belief in terms of causes of any kind; beliefs are explained solely in terms of reasons, from which they are rationally inferred, or in terms of causes in which case those beliefs do not count as rational. That's an exclusive "or". We had a whole thread about this, and he made his position quite clear.
There is, of course, an out -- which I know I've mentioned somewhere -- namely: other people's beliefs have causes; mine have reasons. Example: "I don't believe in God because I've considered the arguments and evidence; you believe in God just because your parents raised you to, all your friends do, and you've never questioned it." I think this is in fact a view people quite often take when someone disagrees with them. It might be so common as to be the default response.
On this forum, however, it would indicate a belief that only the position advocated is even rationally arrived at, and everyone who disagrees is just caught up in the zeitgeist, not making up their own minds at all but parroting the received view.
Now I'm even more confused, because surely @Wayfarer does not intend to claim that those who disagree with him are behaving irrationally; but if their beliefs are rationally inferred, then no historical explanation for their holding those beliefs is even possible.
This is right on the mark...it is not uncommon for people to be unable to conceive that someone could rationally disagree with the ideas that they think they have arrived at by pure reasoning. You don't agree? Then you must have a scotoma, or you must have failed to understand the argument or, as you say, be caught up in the growing tide of superficiality the culture is being driven by.
If someone frequently responded to my posts with sarcasm, hostility and pointless emojis, then that would cause me to believe that their posts were not worth the bother of responding to. I would think of that as an example of a rational cause. If my thinking was, however, influenced by an underlying neurological disorder or by intoxication, then I would not regard that as a rational act, but as the consequence of a physical influence. This is based on the distinction between logical necessity and physical causation, which I think is perfectly defensible. Do you think that is a valid distinction? And, furthermore, that the only empirical examples we can observe of such rational causation must be exhibited by rational agents?
As for historical and social causes - it is of course true that we are to some degree influenced by all of those as well, but I would dispute that we're wholly determined by them.
Quoting Count Timothy von Icarus
As it happens, that is very much my reason for invoking it.
Quoting apokrisis
I see 19th-century idealism as representative of the mainstream in Western philosophy. But then, idealism has re-appeared in cultural discourse in part due to the discoveries of early 20th-century science, although as Banno points out, it is still very much a minority view in academic philosophy.
Quoting Wayfarer
I'll address this.
Let's say I believe we ought never to have given up belief in and worship of the Greek gods. That's my position. I believe, and I want you to believe.
Someone asks me why I hold this belief.
Shall I say that people used to believe in the Greek gods? Does that look like an argument to you? It might do as a demonstration that such belief is possible, but no such demonstration is necessary; I am already all the proof you need; I'm a believer. What other point could I be making? That there was a time, unlike now, when quite a few people would have agreed with me? Still not an argument, but maybe a demonstration that my belief is not entirely idiosyncratic. So that's a little defense, but still not an argument in favor.
Shall I say that Western Civilization took a wrong turn when it abandoned the old religions and Christianity became ascendant? Does that look like an argument to you? Of course I think that's where things went wrong; that's when people stopped believing what I believe; that's when people started believing something else; that's when people started disagreeing with me. It's not an argument, but a restatement of what you already know, namely that I believe in the Greek gods, which perforce means Western Civilization went wrong when people stopped doing that -- unless I want to be the onliest special believer all by myself, and that's not on the table, because I am trying to convince you to join me.
Suppose I say that people only stopped believing in the old gods because Christians bullied them and tricked them into it. Is that an argument? Not quite. The claim is that people did not stop believing because they had good reasons to -- realized it was all bullshit, for instance -- but stopped believing for bad reasons or because they were forced. But it fails to give any good reason for their original belief! See how that works? You can have a stupid belief arrived at for stupid reasons and give it up for equally stupid reasons. Discovering that your reasons for giving up a belief aren't any good might make you rethink giving it up, might present a prima facie case for suspending your rejection of the belief, but that's still not in itself a positive case for holding the belief.
What if I say this: as a matter of fact the reasons the ancient Greeks gave for why they believed were pretty strong, and those reasons were never refuted, it's just that times changed, Christianity came along, a lot of the reasons behind the old beliefs were forgotten, but here I can explain them to you...
Now we're talking! I could just present the positive case by itself, but people would be suspicious: if the old religion was so great, why did everyone abandon it? At some point I'll need to offer some explanation for why people stopped believing in the old Greek gods, and that explanation will have to show that people did not have damned good reasons for jumping ship, but rather that there were non-inferential, historical reasons they did. That's not strictly necessary from a logical point of view, but the demand for such an explanation is reasonable.
Do I need to go on?
How is it not just another appeal to authority in that case? If a person claims idea X has merit because it is held by such-and-such a contemporary thinker, then pointing out that it arose from (or used to be opposed by) some other thinker of merit tells us nothing about its qualities, unless we are to take {believed by important thinker} as a property which speaks to the merits of an idea.
Quoting Count Timothy von Icarus
True, but nothing about historicism does this (unless we've all forgotten our lessons separating correlation from causation). All your historicism here would show is that idea y's dogmatic enforcement coincided with a historical phenomenon. It leaves completely untouched the question of whether the idea became popular because of the vagaries of fashion or because more and more people saw how compelling it was.
The notion I take @Srap Tasmaner's OP to be gently poking a stick at is exactly that missing step. Simply saying that idea y arose alongside, for example, an Enlightenment rejection of the supernatural says absolutely nothing about whether such coincidence was reasoned, accidental, or peer-pressured. That case is left entirely unmade.
Quoting Count Timothy von Icarus
Exactly. If frequentism is flawed, then those flaws are the reasons to dismiss the notion, not the history of its adoption. It's perfectly possible for a right idea to be adopted for the wrong reasons, so the reasons for an idea's popularity are not, in themselves, arguments against it.
The only exception is perhaps identification of bias in experts whose work one cannot otherwise assess directly. If I think a scientific idea is 'fashionable' I might have reason to be sceptical of weak experiments purporting to prove it, but that reason is there as a poor substitute for actually replicating the work (maybe it's in a field I have no access to labs or data for). In philosophy, no such limitations exist, we can all 'replicate' the workings of any of the previous thinkers.
EDIT - apologies @Srap Tasmaner, I see you've said almost exactly this already whilst I was typing. I didn't mean to tread on your toes (here on your own thread and all).
Not really. It's something that's let slide far too often, in my view. Were it an up-front insult, it would be moderated, but this is no less insulting. The only argument in...
Quoting Wayfarer
... is that @apokrisis has adopted this notion not as a result of reasoned consideration, but as a result of blindly following the fashion started by Russell and Moore.
It's poor form.
Quoting fdrake
Thank you both for at least understanding what the hell I was talking about. (And you too, @180 Proof -- I saw that, but I'm trying to be polite for god knows what reason.) Also thank you. As a matter of fact I've presented the general case against disagreement before.
Quoting fdrake
I did this all over again above, I hope you don't mind.
Quoting Isaac
Puzzled me too, but I won't tell @fdrake which battles to fight.
While I've enjoyed the responses to the wider interest I expressed in starting this thread, it has been frustrating watching the narrow point of the OP be so thoroughly missed. That's on me, I expect, but I'm glad a few of you understood.
I've probably told this story a hundred times now, but sod it, I'm doing it again... We used to have a saying when marking papers (derived, I think, from a comedy sketch, though I'm not sure of the origin) of labelling most as "writing everything they know about Napoleon". It refers to the common tendency to see the name Napoleon in the question and think "I know lots about Napoleon -- here I go", instead of reading the actual question and answering that. It appears no less common in real life than in undergraduate essays.
Why did you choose that as a hypothetical example? Is it because you have me pegged as a religious-or-spiritual type, therefore this must be typical of the way that I think? Because otherwise, I'm completely failing to see your point, although I suspect that finally your actual motivation is coming out.
So, to go back to your original post:
Quoting Srap Tasmaner
No - it's soteriological. My claim is that, from the outset in Plato, down to the end of the 19th century, there was a soteriological element in Western philosophy. (Of course, that is a rather obscure and academic word, so I should define it: concerned with doctrines of salvation. So the response might be, what is that, if not an outmoded religious myth? Hence the comparison with belief in the Greek gods, right?)
To which my reply is that it is the way that religion developed in Western history that makes it susceptible to that criticism. My view is that, regardless, there is something real and important in the religious consciousness, and that includes the religious aspects of philosophy, which for many reasons have been forgotten, misunderstood or abandoned in the transition to modernity (hence my criticism of 'scientism'). In the most abstract form, stripped as far as possible of the accretions of religious dogma, that is what I referred to as 'the vertical dimension' of existence. There is a qualitatively real good, if you like. If you recall the Robert M. Pirsig book, Zen and the Art of Motorcycle Maintenance, a lot of it concerned re-discovering a metaphysic of quality. I will go further and say that encoded in the traditions of philosophical spirituality there is a vision, or an intuition, of a different domain of reality, or a different way of seeing reality, which reveals that qualitative realm. I suppose it is that which puts me on the religious side of the ledger, although I would describe it more in terms of philosophical spirituality. The point of the history of ideas is to trace the genealogy of that understanding, which has considerable provenance.
Great. So an argument for 'real' and 'important' would be really interesting. The fact that other people have previously agreed, but now don't, is not such an argument. Not even close.
It was an arbitrary choice. Good reasons: it is a position with no known current adherents; it is widely known to have no current adherents; it is widely known to be a position that once had a great many adherents and enjoyed a prestige now long past. Biographical reason, why it came to mind: my computability textbook used Zeus as an explanatory aid -- Zeus can list all the integers even though there are infinitely many, but not even Zeus can list all the reals. Stuck in my memory. It's also just damned interesting that something once so important that the founder of our vocation was put to death for ever so obliquely challenging it is now just a bunch of old stories.
** Added: I looked back at the post you ignored all but a sentence of and remembered the other reason. Since it's no secret my sympathies are with science and naturalism, and I believe you generally have me pegged as a materialist or physicalist or some such, I found it amusing to make myself an advocate for an unlikely religion -- I ever so briefly considered going with Zoroastrianism -- and I thought it might be amusing to you as well, since it's so obviously out of character for me. **
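The countability point behind the Zeus example is Cantor's diagonal argument: any listing of the integers succeeds, but any claimed listing of infinite binary sequences (a stand-in for the reals) provably misses one. A minimal sketch, with an illustrative listing of my own choosing rather than anything from the textbook in question:

```python
# Cantor's diagonal argument: given any claimed listing of infinite binary
# sequences, construct one the listing cannot contain.

def diagonal_counterexample(listing, n):
    """listing(i, j) -> j-th bit of the i-th listed sequence.
    Returns the first n bits of a sequence that differs from the i-th
    listed sequence at position i, so it appears nowhere in the list."""
    return [1 - listing(i, i) for i in range(n)]

# A hypothetical listing, purely for illustration: sequence i is the
# binary expansion of i, least significant bit first.
listing = lambda i, j: (i >> j) & 1

missed = diagonal_counterexample(listing, 8)
# The constructed sequence disagrees with every listed sequence somewhere.
assert all(missed[i] != listing(i, i) for i in range(8))
```

Not even Zeus escapes this: the construction works against whatever listing he proposes, which is why the reals resist enumeration while the integers don't.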
Quoting Wayfarer
No. Really. If I wanted to do that, I would have put the argument in your mouth instead of mine. Stop psychoanalyzing me and just read what I write.
Quoting Wayfarer
Here's the problem. For purposes of this thread, I don't care what you think. In the argument from reason thread, yes. There we argued about what you think. This thread was not an attack on your worldview or your philosophy or your understanding of history or anything. None of that is relevant.
Maybe someday, maybe soon, I'll change my mind about this, but for now I still have enough invested in the ideas of logic and inference and argument that I am only raising in this thread an issue about your method of argumentation. That's all. The content of the argument is of no interest to me, not in this thread. In other threads, yes. Not here.
Here, I have only been concerned to understand how what I assume you must think is an argument is put together, because these posts I have described do not look like arguments to me. They have no logical connective tissue that I can see. Giving you the benefit of the doubt, and recognizing my past prejudice against historical reasoning, I asked for an explanation.
But you keep making substantive posts about your worldview, like this one. I don't care. I don't care what you're saying, I'm only asking about how what you're saying is put together so as to make an actual argument.
Am I being clear enough? It has nothing to do with my attitude toward your views, whether I agree or disagree. It's entirely about the logical form of the arguments you present, because I cannot work out that form for myself.
And it is not some modern prejudice of mine. You can thank Aristotle for noticing that how arguments are assembled is indifferent to the contents.
I like it.
I much prefer when people use their own words as often as possible rather than relying on philosophers as a crutch.
Quoting Srap Tasmaner
So I went ahead and did that. Your response was:
Quoting Srap Tasmaner
So, no response to anything I actually said, but then,
Quoting Srap Tasmaner
Quoting Srap Tasmaner
So in response to:
Quoting Srap Tasmaner
Answer: definitely not, but don't go to any further trouble.
:up:
It's quite painfully simple.
Someone says "the cat is on the mat"
You respond that positions like "the cat is on the mat" are part of a modern trend of seeing cats as being located on mats, but in the past people used to think of cats as being more likely on armchairs.
The question being asked is how such a response has any relation to the question of where the cat is.
I never said all historical arguments can be justified in this way, and certainly not that they're all good. You can use any method to make bad arguments. I said they make sense in certain contexts.
I'm not going to defend the idea that "people used to believe x" is itself an argument, although it might be interesting and tangentially related to an argument. But that hardly means all arguments from history lack weight or relevance unless you invoke some sort of overarching project vis-à-vis the history of ideas.
Use the frequentism example. You can't just "argue on the merits of Bayesianism or propensity" if your interlocutors are firmly entrenched dogmatists who keep saying "but look, frequency IS probability, just like a triangle is a three-sided shape. It's what the word means; it's an analytical truth." Something has to be done to address the foundations of the dogma. This is particularly true for ethics, where, for example, it used to be the norm to support nonvoluntary, painful medical treatment to "cure" homosexuality. You'll note that people often refer back to the earlier treatment of homosexuals when addressing contemporary issues with transgender individuals, because it makes for a good argument from analogy as well (another reason to bring up history). People have a very hard time seeing past their dogma; that is the nature of dogmatists. But a trip through history can show how the seemingly necessary (e.g. probability defined as frequency) is actually contingent.
No doubt it also helps for emotional appeal that the main champions of frequentism as dogma were eugenicists; it's logos and pathos and ethos after all.
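To make the frequentist/Bayesian contrast concrete: a Bayesian can assign a probability to a hypothesis after only a handful of observations, with no appeal to long-run frequency at all. Here's a minimal Beta-Binomial sketch in Python (the uniform prior and the function name `beta_posterior_mean` are my own illustrative choices, not anything from the posts above):

```python
# Posterior mean of a Bernoulli parameter theta under a Beta(a, b) prior,
# after observing `successes` out of `trials`. With a uniform prior
# (a = b = 1), zero observations still yield a well-defined probability
# of 0.5, something a strict "probability = long-run frequency"
# definition cannot even express.
def beta_posterior_mean(successes: int, trials: int,
                        a: float = 1.0, b: float = 1.0) -> float:
    return (a + successes) / (a + b + trials)

# After 3 heads in 4 flips the posterior mean is 4/6, not the raw
# frequency 3/4: the prior still carries weight.
print(beta_posterior_mean(3, 4))
```

The point isn't that this settles the interpretive dispute, only that the Bayesian calculus is a coherent alternative to defining probability as frequency.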
Or take: "n/0 has to be undefined or else bad things will happen." This could be met with "no, n/0 = ∞; geniuses and polymaths x, y, and z agreed. But more importantly, people did math fine all the time back then despite the problems you listed, so clearly it isn't the problem you say it is. DAX and other popular data analysis languages use n/0 = ∞ for legit reasons. Take the limit of 1/0.000000...01 and tell me what it is!"
I'm not going to defend the position that n/0 = ∞, but obviously there is a pragmatic argument that can be bolstered by the history of making division by zero undefined, because it shows that the problems being fixed don't really affect many applied uses of arithmetic anyhow, and even that n/0 = ∞ was better for some applied use cases.
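As a side note on that arithmetic point: IEEE 754 floating point, which most numeric environments follow, already takes the "defined" route for division by zero. A hedged Python sketch of that convention (`ieee_div` is my own illustrative name, since Python's built-in `/` raises an exception instead):

```python
import math

def ieee_div(n: float, d: float) -> float:
    """Divide following the IEEE 754-style convention: nonzero/0 gives a
    signed infinity, 0/0 gives NaN, ordinary cases divide normally."""
    if d != 0.0:
        return n / d
    if n == 0.0:
        return math.nan  # 0/0 is indeterminate
    # Sign of the result follows the signs of numerator and denominator
    # (IEEE 754 distinguishes +0.0 from -0.0).
    return math.copysign(math.inf, n) * math.copysign(1.0, d)

print(ieee_div(1.0, 0.0))   # inf
print(ieee_div(-1.0, 0.0))  # -inf
```

This is also what "take the limit of 1/0.000000...01" gestures at: as the denominator shrinks, the quotient grows without bound, and defining that limit point as ∞ turns out to be harmless for many applied uses.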
Also @Wayfarer
I think there is a place for this kind of argument, as a proxy for a much longer essay. I'm just gonna go hard on it to the point of self parody:
The birth of analytic metaphysics placed the meaning of words and their correspondence to the state of things as the essential character of the relationship between thought and being, or action and environment. The problems of metaphysics thus become articulated in terms of the connection between language items and world items. This means their discussion takes its cues from analysing the privileged relation between statements and the world; through the analysis of statements' truth conditions, and what impact ascribing truth to a sentence would have on the sentence's semantic content.
Thus the canonical examples of statements in these problematics ("the cat is on the mat", "the cup is red") force the adoption of a perspective where factual disputes over the nature of things must accord with the analysis of representative statements whose truth conditions mirror (or fail to mirror) the environmental activities they are articulated in conjunction with.
In that regard, pointing out a mere factual dispute over whether "the cat is on the mat" is true or false constrains the problematics surrounding the relationship of thought and being to orbit around the intuition that the two naturally mirror each other, and that the true order of things is to be found in the form attempts to state the truth take, rather than the gulf between thing and semantic content.
Thus focussing upon whether the cat is on the mat, as a paradigmatic example of the form of truth seeking dispute, brings with it a set of assumptions that render alternative problematics of the connection between thought and being next to impossible. They cannot be justified in the tacitly demanded terms.
...
And at that point there's just too much. So I do think there's some room for this argument style as an attempt to continually upend the chess board, if you're inclined to do that.
You can't argue anything if your opponents are 'firmly entrenched dogmatists'. I suspect they would disagree and therein lies the problem. As such, I'd say something needs to be brought to bear to demonstrate that (to their resigned satisfaction), and I still don't see how any potted history of the idea is going to do that. That the word used to mean something else doesn't have any bearing on any current claim to its meaning being an analytic truth. It's a good argument against analytic truths in general (one I'd be happy to go along with), but you've not made the case in favour of historical analysis being an appropriate tool to demonstrate the weakness of such a definitional approach.
Quoting Count Timothy von Icarus
Again, it makes an argument from analogy. I fail to see how it makes a good one, other than by its coming to a conclusion you already happen to prefer. As a step in a rational argument it doesn't seem to contain any data. "They used to do that with homosexuals" is an empty argument without your interlocutor already agreeing that homosexuals and trans people share the same status... and if they agreed on that, there'd be no argument in the first place. You couldn't argue against the incarceration of child molesters by saying "they used to do that to heretics". It was wrong to do it to heretics; it's right to do it to child molesters. The argument is in the case, not the history.
Quoting Count Timothy von Icarus
Can it? Or does it just seem to you to show that in cases you already believe? You may well gain some vainglorious satisfaction from the obvious righteousness of such an insight, but I suspect your 'dogmatist' interlocutors would simply say that previous uses were simply wrong, after all, people sometimes are. The argument that they weren't is the important bit, not the mere past occurrence.
Quoting Count Timothy von Icarus
Again, the quality of this argument depends entirely on the fact that these other languages work, not that they tried. They may have worked in history, work contemporaneously, or work hypothetically. It's the working that matters, not the place held on the timeline of ideas.
It is not...
Quoting Count Timothy von Icarus
...that matters here. It's the success. That might be historical, contemporaneous, or hypothetical (with a good argument); the demonstration required for rational argument is of it actually working, not of it having been once thought to work.
It seems you're only looking at history through the lens of one who already agrees with the points you want to make. From that perspective, of course history looks like it supports your position, it's confirmation bias, not compelling argument.
That's fair. I haven't helped matters by picking the most analytical proposition out there.
Let's say instead that the proposition in question were something more like, say, enactivism in psychology. I'm still not sure that switch renders the case any different. It still seems that saying "enactivism is part of a recent trend and people used to be more reductionist" carries no weight as a rational argument against it. Whether analytical or not, merely pointing out that a position is currently in favour or has recently lost favour doesn't seem to have any bearing on the argument itself, but does seem worryingly like a veiled insult (as I said, it's as if the argument is 'you only believe that because it's fashionable', or 'you're behind the times').
But maybe I'm still missing something. Does your counter still work against my example of something less sharply analytic?
In for a penny, in for a pound...
You're doing @Joshs not @Wayfarer, and they're actually quite different.
Quoting Isaac
Yes it is. I understand that most argument in philosophy is informal. I understand that inferential connections are sometimes implicit, even here, although here most people understand that's a problem, and explicit is better. I'm not only open-minded about these things, I actually like informal reasoning, so it's not like I'm demanding two-column proofs.
On the other hand, even informal arguments have a form and a content. Many faulty patterns of informal argument have acquired names we toss around (ad hominem, argument from authority, strawman, blah blah blah). Not really my thing, but it indicates that the difference between the form and the content of even informal arguments is widely recognized here, even though one could raise objections to that idea or at least muddy the waters considerably, if one so chose.
@Wayfarer my question to you in this thread was always and only about the form of argument you were employing, not the content of those arguments. Whether Peirce is an idealist is irrelevant to the specific question I raised.
My very first words even set out the form as a schema, with Xs and Ys and everything, and gave it a name, without committing to it being an informal fallacy.
Why didn't I just accuse you of committing an informal fallacy?
Because I recognized that you probably see what you post differently, that you probably perceive an argument in such posts that I do not, only you left it implicit without realizing that you had -- it happens to all of us, failing to connect the dots, even though they are clearly connected in our minds -- so I thought you might be able to explain how these little historical essays you post are not just non-sequiturs.
And because it occurred to me that there was an interesting larger issue here which members probably have quite different views on. (That turned out to be partially wrong because almost everyone was "Yay for history!")
And because I'm tired of playing the forum's logic cop. It's my own damned fault: no one appointed me to that post and no one wants me to do it. It narrows my thinking in a way that I have grown very weary of, but I need to be convinced there's a good alternative. I have no stomach for what @apokrisis called "talk that is free", philosophy as belles lettres. It's just not me, much as I'd like it to be sometimes. I thought this thread might be an opportunity to explore some alternatives. I think @Count Timothy von Icarus attempted something like that, but the social dimensions of discussion are not new territory for me. I was really hoping for something like a revaluation of all argumentation values. @Isaac's got one of those, but while I've come almost entirely around to his point-of-view on many things, I'm still looking for something a little different.
Thanks!
:up:
'Because trains are cool.'
Perhaps there's also a tacit assumption of a narrative of the forgetting or repression of...
It's a requirement for me that the approach I end up with is science-friendly. Narrative and metaphor have some traction, because you can do actual research on these things. But what Derrida did is adapt models drawn from Marxism and psychoanalysis, and those just aren't science. What you end up with is a free-for-all.
Hasty generalization!!! I want you to do it. :razz:
I expected a couple "The hell you are"s and maybe a "Get off your high horse," but not this treachery.
:lol:
This is also where I am at. Discourse is a game of sorts (not implying a lack of seriousness though), and like any game it has rules. The rules here are, like Ramsey's habits, not arbitrary, they seem to work [political] albeit to an increasingly lesser extent these days [/political]. You cried 'foul' on a move and that's a fair part of the game (I know you were much more charitable even than that, but I'll happily take that position myself). One of the great things about this game is that even the rules can be discussed, and that's what this thread's for. Far from the ref's decision being final, @Wayfarer gets to make his case.
But we can't even have discourse if there aren't any rules. It's a social enterprise. Without rules we have a noticeboard and a series of private blogs, not a discussion forum; and I'll happily stick my neck out and say one of those rules has to be that the response needs to modify the proposal in some way (lend it more support, make it more suspect, expose flaws, shore it up, link it up...) It has to do something to it, otherwise all we have is a soapbox.
Of course, the psychologist in me wants to say that very few people ever actually play this game, but rather use its general form to play a much more earnest game of power and group dynamics. That's why the rules are so often broken. It's not that they're unclear, or disputed. It's that they sometimes don't serve the actual purpose. Despite its great potential, much of language hasn't moved beyond birdsong.
But this isn't a psychology forum...
I agree with all of that, but would like an approach that doesn't require switching hats. Maybe that's a mistake, and being self-consciously multidisciplinary is the best way to get what I want.
A lot of this intellectual soul-searching comes from realizing I have been on the wrong side of arguments with you and @Joshs, in particular by defending the border between philosophy and science. I'm writing from the middle of a paradigm shift, so that's a mess, and it makes me resentful of returning to old habits of analysis. I think the points I want to make are more or less okay, but I only have the old way of putting them, and I no longer find that satisfactory.
I relate, but I also like to see this heroic identification with science from the outside, as part of the performance of that heroic role. Is it a form of asceticism, an epistemological veganism?
I sometimes think we tend to kneel beneath the god of engineering. It's not so much careful reasoning that convinces but naked power. I like the countercultural edge of philosophy (it's way too easy to side with the machines) and tend to respect the scientific status of someone like Husserl.
I'd also suggest that Freud and Marx were sometimes scientific. Popper is great, but his own status is akin to that of Freud and Marx as another instance of radical thinking about thinking (emphasis on the root metaphor.) 'Science' functions politically as a marker of authority and contact with the Real. Philosophy of science, articulating/determining this elevated species essence, is like theology (ultimately political too). As I see it, we are never done clarifying / inventing the concept of the rational / scientific. As a little mortal creature of my age, I have to trust experts like anyone. I'm biased toward those who can do math, those who measure, etc. So I get it. But theory makes observation possible, and even math depends on metaphor.
Aw. What did I get wrong?
My third time posting this! Enjoy.
I won't say that institutional science doesn't have its shortcomings and its blindspots, but that's just the nature of institutions. Science itself is not some closed-minded affair, but the best way we know of overcoming closed-mindedness. That's what I want to stay connected to.
I could answer but I've already gone way over the line discussing the posting styles of members here. I allowed myself to start this thread for the wider issues it might raise and never intended to get into a back and forth about how people write. I had my reasons for giving in and doing just that, but no more.
Good call!
:up:
I think 'philosophy' also works. Or really it's the idea of open-mindedness itself, right ? A metaphor too. Open, permeable, inclusive, ...
An intellectual institution must be large enough to contain its contradictions because dialectics. :razz:
Philosophy's a big tent, so it gives the impression of open-mindedness. But it's a fact that the practice of philosophy does not much resemble the practice of science. There are a handful of famous (to maybe a hundred people) changes of mind (half of those are Hilary Putnam), but otherwise?
I'm not dumb enough to think scientists come in to work, grab a cup of coffee and check the reports of last night's observations so they know which theory deserves their credence today. It's slow. But revision large and small is built into the enterprise. Over in philosophy, the revision we see has the character of an arms race. --- I meet your objection through a small adjustment to my theory; you raise a new objection, and I make a new adjustment. Whole different deal.
Watched an episode of Nova the other night about the footprints at White Sands. Cool stuff, but how old? Most were guessing 12-13,000 years because the oldest Clovis site dates to 15,000. Results of carbon-dating: 23,000 years. There will be debate, and some new tests to replicate the date, but eventually everyone will agree to reshuffle our understanding of the populating of the Americas. Nothing like this is even conceivable in philosophy.
An understatement of impressive magnitude.
Quoting Srap Tasmaner
Scientists seek truth, while philosophers argue the definition of truth. Interesting interplay.
You don't make any point by trivialising the argument. The issues at stake are considerably more subtle, and more significant, but I won't try to explain them again.
Quoting plaque flag
Only sometimes? :lol:
Quoting fdrake
:100: I say this is because the 'illusion of otherness' is a deep but unstated premise in post-Enlightenment philosophy, arising with the ascendancy of individualism. Natural philosophy, in that context, acts with the implicit presumption of the division of subject and object - hence the emphasis on objectivity and replicability as the sole criteria, assuming a correspondence theory of truth. The profound underlying difficulty is, however, that we're not actually outside of, or separate to, reality, as such - an awareness which is found throughout phenomenology and existentialism (not to mention non-dualism) but rarely, it seems to me, in Anglo philosophy.
But this is more bad history. Pragmatism arrives at a theory of truth based on the usefulness of a way of looking at the world. It finesses the dilemma of the epistemic cut by saying that it is the feature and not the bug.
Science took off when Newton threw up his hands and proclaimed "I feign no hypotheses." The idea of gravity as action at a distance made no sense. Descartes battled on with the realism of aether-type theories of jostling corpuscles. Newton moved forward by leaving the metaphysics a blank because the equations worked.
This was a key psychological moment. But of course science still found metaphysics necessary. It came back to try to fill in the blank. The latest go is gravity as an entropic force. However it did make a break that allowed science to understand its truth-making in terms of pragmatic modelling.
All this is relevant history that ought to change your position here. I point this out because the problem is not the application of history to philosophical argument. It is the pushing of narrow views of that history.
Of course one can always point to Scientism that butchers intellectual history by telling the story from its winners' point of view. And that creates the losers who identify as other in their memory of what had happened.
But even history is an institutional discipline that evolves its habits of truth telling. To give an accurate historical account of Peirce - as a juicy example - would involve a heck of a lot more genuine engagement in the history of ideas.
I don't agree. This sense of the division of self-and-other, the Galilean division of primary and secondary attributes, the Cartesian division of mind and matter - these are huge influences in today's culture and commentary on them is voluminous. It is not bad history, it's simply history.
Quoting apokrisis
You're quite right. The problem is the attempt to apply scientific criteria to philosophical problems.
Well said. I should probably clarify. When I talk about philosophy, I'm thinking of 'Shakespeare' -- something that ruthlessly transcends but also includes academia. Socrates earned himself a drink of hemlock. I definitely want to include thoughtcrime, 'inadmissible' views. I don't mean that I want to approve of them all. I mean that I want philosophy to be that radical of a concept. I don't like the idea of it being the pet of respectable people or limited to (shine of the crawls, sin of the cause, sign of the gross) what one can say on campuses.
The idea of radical thinking is (I claim) 'possibility rather than substance' -- a necessarily vague intention leading to unpredictable results, an identity crisis that drags a partially constraining history behind it. Socrates was a corruptor of the youth. At least I prefer him as that kind of unsentimentalized and undecidable figure -- foolosophy as pharmakon, as poisoncure poured in the porches of my near.
The consensus achievable in 'science proper' is beautiful. I was unworldly enough to go to grad school for pure math myself (not exactly science, but). Lately I'm studying Joyce. Ulysses is not not science ! More seriously, I personally prefer a broader conception of what knowledge is and how it's obtained and communicated. Is there knowledge in music? In visual art ? If not, that'd be logocentrism -- a term that of course preceded Derrida. It's not automatic of course that logocentrism is bad, and it'd be questionable to argue such a point. It's more about pointing out a horizon, gesturing toward the vastness of the space of possibility. [But I'm incapable of believing in ghosts, however fun it looks. So there's that. ]
Aye.
It amuses me that we agree on that but for completely opposite reasons. Ah well. Another time!
Yes, only sometimes. But often!
I make a point about the form of the argument.
Quoting Wayfarer
Ansel Adams had a young photographer friend, and once a year they'd get together to talk (and maybe drink, I don't know), and the young photographer would bring a stack of prints with him. Adams would go through them, giving his feedback, and sorting them into 'yes' and 'no' piles. One time, he stopped and said, "Every year you bring this one, and every year I put it in the no pile. Why do you keep bringing it back?" Answer: "If you could see the climb I had to do to get that shot --" "Doesn't matter how hard it was to take the picture," responded Adams. "It's still a lousy shot."
Doesn't matter how subtle or significant the issues are, you've still got to follow Grice's maxims, and you've still got to connect one point to another, just as you would arguing about where to eat.
I'm open to being convinced there's another approach available, but I'll tell you what's not going to work for me, that it just comes down to choosing sides. You write as if you're rooting for your team, and it's always a good time to make any point that supports your 'side', whether it's directly responsive to anything, whether it's even connected to the last point you yourself made. I don't call that dialogue but cheerleading.
:up:
We aren't outside of it, and it isn't in us. Co-given, entangled.
:up:
I'd say that philosophers want the truth about truth. But in that pursuit they have to question constantly whether they do or even can know what they are supposed to mean.
Perhaps the definition (of 'truth' or 'logic') is discovered through conversation research, sort of like patterns in the natural numbers. There's a constraint on our creativity, though it's hard or impossible to specify ahead of time, for that too is part of the conversational research.
In some ways, proper science is an escape from the treacherous mud of the most radical thinking (which turns like a snake to bite itself constantly). This doesn't mean that it's easier, of course. It's much easier to be a bad philosopher than a good scientist. And even good philosophers don't obviously help much with the passing out of gadgets.
Right. And would you agree that this insight is more typical of phenomenology and existentialism than Anglo philosophy?
Quoting Srap Tasmaner
Fair enough. But aside from a few places in the thread about the argument from reason - particularly in the discussion about the distinction between physical causation and logical necessity - a lot of what you write in response to my posts is not to me, but about me, presumably as a demonstration to others of what you regard as my bad form. This thread has sure seemed like that. I acknowledge that my general stance is contrarian with regards to philosophy as it is nowadays understood and taught, and I also readily acknowledge the shortcomings of my education and training with respect to many subjects that are discussed here. But I will continue to try and make the anti-materialist case.
Quoting plaque flag
You're no doubt aware that the Hegelian (and generally German) approach to science is radically different from modern scientific method - the Germans have that nice word, 'Geisteswissenschaften', often translated as 'sciences of the spirit', for which English doesn't have an equivalent. Of course, Hegel's work collapses under the weight of its own verbiage and I don't want to involve discussion of him, other than to say that, unlike modern scientific method, his notion of science includes consideration of the nature of the subject, in a way that, up until recently, modern scientific method has not. It is beginning to change with systems theory, embodied cognition, phenomenology, and so on, but that implicit exclusion of the subject is still influential in science and culture.
I'm tempted to say yes, thinking of early AP, but there are people like Sellars and Brandom and Braver, to name just a few.
Quoting Wayfarer
Indeed. I've tended to favor the Germans because they try to account for existence as a whole. Maybe there's something stormy and grandiose in some of it, but to me that's more good than bad. This stuff involves us. We risk ourselves (in our current formulation) when we open certain books --- though perhaps this risk diminishes with exposure.
Fair enough, although I think it's fair to say that the bulk of their work is directed principally or solely to their academic peer group. I don't know if much of it will filter through to popular culture.
I think we both already touched on the main reason why. Even though I think philosophy is science in some high grand sense, it can't be denied that a methodical stupidity has functioned brilliantly. As @apokrisis put it (quoting Newton), hypotheses non fingo, motherfuckers!
I studied real analysis for years (it has a severe beauty, and writing proofs is a craft). But how many engineers or random citizens trust their calculus because of real analysis proofs ? I think we just trust the familiar airplanes in the air and the bridges that have stood for decades and the pills that reliably make us feel good. O the brutal rhetoric of the skyscraper and the opiate! Math is respected because it's connected to 'magic' technology and not the other way around. We elite exceptions, unsullied by indoor plumbing, are excluded of course. [In other words, I confess my monkey addiction (so far) to all the usual machines. I love my M1 chip. ]
That is the history of the dialectic that pragmatism resolved, even if the Cartesian divide remains baked into modern culture for its own pragmatically comprehensible and historical reasons.
So the epistemic fix is in. Those are the facts of philosophical history. The Cartesian divide continues in popular thought. Those are the facts of the more general history of ideas.
You are ignoring the one and perpetuating the other.
But that is no surprise. There has got to be a reason why science is taught in the science block and philosophy is taught over in the humanities block, right? The culture wars are baked deep into even our institutions of free inquiry. They other the impressionable from the get-go.
I myself was disconcerted to find that my university treated psychology as either a BSc subject or BA subject. But I had to do physics and chemistry to qualify for a BSc. Yet the fine print also let me go mix with the unwashed and add on some philosophy as some light relief.
As to my route after that, I've never not been working in a mixed environment where science and philosophy are complementary rather than antagonistic. I just don't recognise this culture wars divide at the coalface of ideas.
No great scientific mind says "shut up and calculate" except in the Newtonian spirit of vaulting some metaphysical chasm to reach the next paradigm shift. The critical question from the scientist's point of view is the philosophical one of "what should we next pick as our measure?".
The supposed divide between science and philosophy is over-blown. Even PoMo got its impetus from anthropology and linguistics.
Pop culture is IMO way too visceral-mythic for any 'serious' intellectualizing. They don't care about Bertrand Russell's 'famous' beef with Hegel, never heard of Popper unless you put corn in it.
Fair enough, but I have observed in your case that your approach to philosophy has been that it provides alternatives to Cartesian dualism for the purpose of modelling and understanding organic life, rather than for its own sake. I mean, your over-arching model of the primacy of the second law of thermodynamics basically reduces life to an efficiency measure, don't it? :wink:
Quoting plaque flag
I learned in Buddhist studies about 'picture men' in traditional India, who used to travel from village to village with scrolls illustrating scenes from the Indian epics - the Mahābhārata and its various sub-narratives. They would put up a stand under a tree and put the scrolls up on it, entertaining the populace, who would all gather around to hear their telling of the great mythic stories. I suppose early Greek drama was another example. Heck, even today the cinemas are full of 'super-hero' stories which project archetypal themes (per Joseph Campbell) using unbelievably realistic CGI. It's all culture. It all filters through (although unfortunately a lot of today's is junk).
And the Russians. The Brazilians. Er ... anyone not Anglo? :chin:
The cosy "history of ideas" view on this would be that the Brits/Dutch were unified populations, secure in their community and seeking to express their individuality, while the Germans were having to forge a nation from its scattered people. French rationalism argued for a politics of state centralisation, hence Hegel's excitement about Napoleon as "the world spirit on horseback". The Brits, as a nation of shopkeepers, were more into the politics of decentralising liberalism.
Another example of how philosophy, like science, rather reflects its cultural context in terms of what is fashionable and talked about.
Boomer philosophers went through the hippie years. Eastern thought became high fashion in the "counter-culture" and still lingers here and there.
There is always a narrative to be told. And the telling is what reveals its own inconsistencies and contradictions. Even what I just said here should raise a storm of "buts".
And funny how philosophy came into focus as this kind of dialectical conversation. Then science fetishised it as epistemic method. It is almost like our brains operate on some kind of Bayesian reasoning algorithm, hazarding guesses to discover their consequences.
Well in this thread, yeah, and I feel bad about that.
Quoting Wayfarer
Which is fine by me. It's just hard to engage with you because every argument you present quickly morphs into all of your arguments. We start out changing an oil filter and end up taking apart the whole car.
Okay, occupational hazard. Issues connect to one another, arguments depend on one another, there are assumptions to suss out, all that. And there's a place for synthesis as well as analysis. I wouldn't lay down some rule that we only deal with one thing at a time and everything else is off-limits. Philosophy just doesn't work that way, and shouldn't. But we have to be mindful of the cats we're herding and take the opportunity to control the complexity when we can do so without doing violence to the discussion. The best discussions move up and down gradients of abstraction and connectedness, taking now the wider view, now the narrower, bringing in new issues at one moment and keeping things out at another.
Fair point, I'll take that on board.
That's exceptionally gracious of you.
That's your narrative and you are going to stick to it.
But no. I take the natural philosophy and systems science route by pragmatic choice, having discounted the less pragmatic alternatives.
A pragmatic epistemology does of course do the natural thing of seeing the world in terms of a pragmatic ontology. But hey. What do you know? It works better than the other options.
Rather than putting us outside the world in frustrated realist/idealist fashion, it puts us into the world as we are trying to make it for ourselves. We can find the world that indeed contains "us" as its complementary "other".
It is the view from the organism. It is the model of reality as a metabolism. It places us at neither a first person, nor third person, POV, but instead right in the thick of the meaning-making that is our semiotic Umwelt. It puts our hands on the controls in a way which is focused on the process that is constructing "us" along with our "desired kind of world".
So the Cosmos does have its own "pansemiotic" metabolism: the one that thermodynamics (upgraded from second law equilibrium narratives to the new science of dissipative structure theory) describes.
But life and mind stack up their further levels of actual semiotic metabolism on top of that as negentropic exploitation of cosmic entropy gradients.
Science and philosophy are products of the fourth level of semiotic code. Their metabolic reality is the rational structure that began with the Greek mathematical/logical turn and found itself eventually hitched to the rocket ship ride of fossil fuels and the industrial revolution. That is the dissipation-accelerating metabolism they serve.
This becomes easier to see, in a history of ideas fashion, when you start off down in the basement of biology where the algorithmic trick of semiotic regulation/the modelling relation/the epistemic cut first got going.
There is an organic thread from the start of life to where we are in history today.
So pragmatism is bigger than philosophy and bigger than science. It accounts for these dialectical practices in terms of an actual evolutionary history. They exist in their annoyingly culture-bound fashion as that "self" is a part of the "world" that is being fashioned as the new planetary metabolism: the arrival of the (likely short-lived) Anthropocene, as we are calling it.
I like this approach. The anti-Hegel movement can be read as an expression of egoism (an atomistic ideology of traditional liberalism) against the awareness of the sociality and temporality of reason. The little king of the castle is a ghost made of freewill behind a wall of screens. Even if this is silly, it goes with what really mattered -- the (relative) personal freedom and far more serious interest in technology, the thing itself (giving not only wealth but military power.)
I like the sound of those scrolls. An entire culture sharing an epic like that is nice. They had a language of references in common. That's maybe becoming more difficult as the memory of our world gets larger, and we have, seriously, at least a million channels to choose from. Every once in a while I hear about so-and-so who is apparently famous among the youth, but I've never heard of them. Probably the present has always drowned out much of the past. Is it worse in these days of live projection across a billion screens? I love my old books for the leverage they give me against my age.
Speaking of scrolls, I suppose we live in the supreme picture age. At their best, modern movies are supreme delivery systems for stories that are already great. But there's (as you say) lots of trash. What kind of pop culture did the Athens of Socrates have? Horseracing? Pornography? Celebrity gossip? I've read about the Rome of Nero and it sounds a bit like us, though our Colosseum brutality is simulated. I watched Extraction 2 recently, and the choreography of the violence was amazing and absurd. The star beats down about 40 (?) enemies in a row in an insane prison fight scene. No gladiator could ever live up to Hollywood's hyperreal forgeries. Coming soon: bespoke adaptive synthetic 'wives' who really do know what you want before you do.
I see the thread has moved on, and you've received quite a bit of heat for even broaching the subject, so I understand if you just want to leave it, but I'm interested in what you mean by switching hats. I don't see the distinctions so sharply as that. If language, discourse (and communication in general - non-linguistic included) are just tools, then what one is doing with them is just the process of living (which is the process of self-maintenance, entropy fighting, surprise-reduction - pick your metaphor). If you see a blacksmith using a hammer, it's unsurprising to find he's peening a rivet, not driving a nail.
In other words, we take our best guesses as to the goals of the people we're interacting with into account when we model their behaviour all the time; language is no different. I don't see it as a different hat to treat some linguistic expressions as social group badges, whilst others are almost one-to-one mappings to some worldly object. It's all part of the same enterprise - we reduce our surprise by trying to get others to agree with us about strategy, or by more closely aligning ourselves with them so we're not all pulling in different directions and 'surprising' each other with unpredictable responses.
Trying to figure out the way things really are, what is the case (as Van Inwagen delightfully put it - "even if your claim is that nothing is really the case, then it is the case that nothing is really the case"), as a social enterprise is just that process of alignment. That could be science trying to align our varied observations, or philosophy trying to align all the other stuff we think about what is the case.
Another way of putting this is from Quine. Since the evidence underdetermines the theory (and vice versa) we have two tasks in choosing which model to follow - one is a rational task (if a theory is actually overwhelmed by evidence to the contrary, then we ought to discard it), the other cannot really have any rational reasoning applied to it (which, of the many remaining theories, do we prefer?). I tend to exhaust task one first, then maybe feel a little uncomfortable about task two since I'd prefer a straight answer derived from a clear logical process, but there isn't one. Others tackle task two first, then (if we're lucky) might engage in something more pro-social in attacking task one, checking if it's overwhelmed by evidence to the contrary.
The problem is that at each stage there's this sense of wanting to align the view with that of others because we're uncomfortable with the surprise that too many unpredictable theories generate. Aligning task one has a relatively clear method (or at least boundaries of method). Aligning task two doesn't, and includes everything from the light threat of social ostracisation to inquisition-style religious pogroms. Progress as a social group obviously requires us to avoid moves toward the latter.
What I object to in many of the approaches to discourse here is what I see as unhelpful moves usually in this second task (choosing theories).
One such is the move you started this thread with - the implication that your choice is the result of some weakness, an easily-led gullibility to the latest fashion, whereas my choice is the result of a deep understanding of some golden-era intellectual canon.
The other (which has nothing to do with this thread, but since I'm on the subject...) is pretending task two is task one. Pretending there's some logical, rational way to choose between two competing theories even when neither is overwhelmed by evidence to the contrary. It's this latter I've had most trouble with recently.
(... then proceeds to reduce the whole of the move to naturalism to a single 'erroneous' assumption)
Come on! If you want respect for the subtlety and complexity of your approach, then perhaps show a little for those who see things differently to you and stop trivialising their reasoning by dismissing it all as 'fear of religion', or 'lack of awareness'. It's insulting. People who've chosen to follow a more naturalistic (or even materialistic) path are as mixed a bunch as those who've chosen a more religious one. They're not some homogeneous mass of people who've all made the same basic mistake, or lack the same basic insight. If I made the same lumpen analysis of all religious people, you'd be on it like a shot, so perhaps a little mutual respect might help.
I find philosophy is always like that. As soon as anyone says anything they are open to all the big questions - how do you know? what do you mean? why should we care? etc. But a small point in favour of history: it is interesting to read the thread from the beginning and compare where it started with where it has arrived. One might even claim that everything that is known is history, in the sense that it is known of the past, just as phenomena are of the present. Certainly the above quoted exchange would be hard to understand without the context of the thread.
I'm of the view that you're exceptionally quick to take umbrage. That's why I rarely respond. I'm aware that my kind of approach rubs plenty of people up the wrong way. By my response to Srap, what I'm saying is, I'll be more mindful of which threads, and which topics, to contribute to in future. I really don't enjoy antagonistic exchanges.
Would it?
If I posted, out of the blue, the exchange...
A: it's hard to dance with you because you're never focussed on the moment but always on improving your technical precision
B: Fair point, I'll take that on board
... would anyone have any trouble understanding what's going on?
The point is not that we need no context, it's that we already have the context. We've all lived, we've all read books, we all have our version of history...
The problem with historicism is not the addition of context, it's the removal of it - the attempt to focus attention on just one single aspect of the million threads of history and say this here, this one thread is the one which explains how we got here, or tells you what you need to know about X. It constrains and oversimplifies the complex contextual understanding we all already have by attempting to tie everything to (or at best, spotlight) this single thread.
The point, which started this thread, was the attempt, not to add context to an understanding of Peircean pragmatism, but to remove it, by shifting focus to this one thread from history (the move to analytical approaches post-Russell), rather than leave it as the complex tapestry of threads it already was in our present understanding.
Then I can only say that you need to work on your theory of mind.
Do you seriously think telling an entire swathe of serious-minded people that their carefully thought out ideas are just the result of a 'fear of religion', or that they just 'haven't understood the issues', isn't very antagonistic?
If you don't like antagonistic exchanges, the solution is to stop antagonising people by insulting their intelligence. We all read, we all think*. You don't have the monopoly on either.
* I should perhaps make it clear here that I'm talking about peers, not all of humanity, just in case this is interpreted as an argument in favour of excessive relativism.
I generally address my posts to the individual(s) with whom I am conversing (although it may be true that swathes of people will read them and take exception. As I said, I fully acknowledge my view of philosophy is contrarian, although in future I will try and avoid intruding on dialogues where my contribution isn't likely to be welcomed.)
Exactly.
Your first post after mine...
Quoting Wayfarer
Insulting @T Clark by suggesting his seeing no mystery is the result of a lack of wisdom, rather than the carefully considered conclusion I'm sure it actually is.
Responding to the person doesn't render insulting their intelligence any less antagonistic.
Yes.
What do you mean, "it's hard to dance with you"? I don't dance, and if I did, I would dance alone.
Oh, wait! In the context of the thread history, you are giving an example of something you think would be meaningful without the historical context. In context your meaning is clear, but out of context it would be bizarre.
Which part of "we already have the context" did you not understand?
Which part of "the attempt to focus attention on just one single aspect of the million threads of history"?
I mean, did you actually read my post at all? By all means disagree, but try to disagree at least with something I've actually said.
Thank you.
Still just responding to half of what I wrote I see. Do my paragraphs bore you that much you struggle to get to the end?
Thanks for looking out for me. I appreciate it.
Respect seems a simple thing, but sadly notable by its absence these days. But then my generation haven't exactly made a good account of themselves in other areas, so... maybe rudeness and blue hair will cure global warming... Who knows.
I've had to work at being less confrontational and more respectful as I've gotten older. As you can see from some of my posts, I still have a ways to go. The forum has helped in that regard.
Okay Boomer
I had to look that up! Does that make it self-fulfilling?
Sure, you're making an argument. This detraction can be leveled at all forms of argument and so it seems to be trivial. "You're only looking at the entailments of that proposition that support your argument," "you're only bringing up analogies that support your argument," "you're only discussing x scientific model that supports your argument," etc. etc.
You fail to see how it's a good analogy because you think trans people are more similar to child molesters than to homosexuals, or because you disagree with the shift to wider acceptance of homosexuality? Or are you just making the point that it's possible for someone to disagree with any analogy, regardless of its merits, and that it's also possible to make bad analogies? (This seems trivial to me). Or is it that arguments from analogy are inherently flawed? (This just seems wrong)
I don't see the broader point here. It's possible to write bad proofs and it's possible to believe that good proofs don't work. This objection seems like it applies to any form of argumentation.
Anyhow, I was merely trying to give some examples where the history of ideas may be relevant, not even making an argument. Frequentism jumped to my mind simply because I think Bernoulli's Fallacy is a good book, even if I don't buy all the arguments. It uses the historical rise of frequentism to both order and elucidate its mathematical arguments. Thus, you're not just seeing that "people thought about x differently in the past," but you're seeing both a mathematical argument for why frequentism doesn't work in all cases paired with examples of where prior thinkers went wrong and how that has influenced current dogma.
You can say the same thing about a syllogism. That someone could reply to "all men are mortal, Socrates is a man..." with "you can't know that all men are mortal!" doesn't amount to much, no?
Why is an argument from the history of an idea particularly bad?
There's something David Foster Wallace said about the appeal of fiction, and it's kinda heartbreaking since he ended up taking his own life: because we get to peek into their minds or otherwise get an explanation for why they do what they do, fictional characters are understandable, and it's pleasurable to think (or pretend) that real people might be understandable. What I don't think he said out loud is, It gives you the idea, or the vain hope, that someone might understand you.
I've been reading a lecture of James on "The Sentiment of Rationality." He makes a point, from physiology, that the pleasurable feeling of calm and order we associate with understanding and rationality is not just that of unimpeded thought and action -- which he emphasizes elsewhere -- but release under tension. It's impedance followed by free flow. Solving a problem, grasping an idea you struggled with, and so on, all obvious examples.
Together those tell a story about the friction in a conversation and why we engage in it anyway, but something's missing, right? We have communication in the first place because the cost of listening to you is lower than the cost of finding out everything for myself, you know, assuming you know something it would be helpful for me to know.
Talk is cheap. Listening is pretty cheap, but if you don't know anything I don't, it's not cheap enough, unless it's also a way to do something else, strengthen social ties, manage status, that kind of stuff.
Now we can say that we engage in the kind of conversations we do here because we're in the habit of sharing our knowledge and learning from each other, and that all adds up to a communal process of learning. Swell. It's just that in a given conversation, I won't know at first if you're any help, to me or to the project. So now there's this whole process of exploring your ideas just to find out if it's worth exploring your ideas. That's a lot of friction, and it's starting to look like a pretty heavy investment on a speculative basis. I'm gambling with my time and energy when I talk to you.
Anything up to here you'd disagree with?
Next steps then would be all about managing risk, reducing the time until I know whether the investment was a bust, improving the reliability of my guesses about whether I'm going to learn anything or otherwise aid the social project, and so on. And the rules of discussion would be about risk management. Make sense?
Did you use Yahoo! or did you get someone to help you ask Siri?
Not quite. Its merits are contingent on the interlocutor already agreeing with the point it's supposed to be demonstrating to them. What's the point in demonstrating to someone a point they already agree with?
For a logical argument to have persuasive force it is only necessary that I agree with the rules of logic. I could not, of course, but it's not a big ask.
For an argument from analogy to have persuasive force, like the one you presented, I'd need to already agree that the situations are, indeed, analogous. If I agree to that, there's little left to persuade me of. The same cannot be said for rational argument in general. It's not the case that merely by accepting rules of rational thought I've basically agreed with your argument.
Quoting Count Timothy von Icarus
Yes, examples from history can be illustrative, add colour to an argument. They are not the argument itself, that's the point being made here. The post which gave rise to this OP was nothing but history.
Quoting Count Timothy von Icarus
Again, as above. The commitments you require of an interlocutor for an argument from syllogism to work are little more than the law of identity. Those required for an argument from history to work are so close to agreeing with the proposed position anyway as to render it little more than window-dressing.
No, nothing.
Quoting Srap Tasmaner
Yes, totally. It's why I think the issue you raise here is so important, and I don't even think it necessary to shy away from commentary on the approach of different posters. If anything, it's necessary. For a project like this to work, there has to be some mutual trust, since, as you say, you can't know if what I say is going to be valuable until after you've invested the time to extract it, read it, and understand it - no small investment. I might be interacting simply to get a rise, to declare my group membership, to alleviate boredom... all three. None of which are of any use to you (unless you want to be in my group, of course, in which case me showing you which beliefs are required as membership tokens is useful).
To achieve all this needs a web of trust, like with peer-review. To build that web requires some interrogation of intent. I just don't see a way around that.
So yeah, adherence to the rules of rational discourse is a really good guide, partly because the rules are themselves a cost outlay; they show good intent. Most people have more important concerns, even when interacting online, than the edification of their peers, so for me to actually adhere to the rules to that end, rather than simply use the structure to declare/cement/advance my social status, is a cost to me (as it is a risk for you to trust that that's what I'm doing).
For different people, that cost is going to be greater than for others, depending on their circumstances and what they need from this place (or any other discussion) - hence, again, some psychologising is inevitable. [hide="Reveal"]An aside: I'm often asked about the 'status' of psychology as a science - it seemed a hot topic a few years back - and the answer I gave was that I didn't think it was one; 'why pursue it as one then?', was the standard response, and my reply is basically what we've just been talking about. If an area of physics is merely speculative, we can just ignore it until we have more data, or better methods. With psychology, we're holding models of how people think, how they'll respond, what their motives are... all the time. We can't not have a theory about this stuff. So no matter how bad our methods are, we'd better do our best with them, because it's happening anyway.[/hide]
What's problematic is that there's a tension between the Gricean requirement to interpret charitably and the need to build and encourage this web of trust, especially on an anonymous platform of short post format such as this one. If every interpretation maximises charity, then there's little incentive to risk the cost of an honest transaction (for someone seeking one of those many other goals); intent needs to be interrogated.
As is probably patently obvious, to the annoyance of many a peer, I favour a fairly robust interrogation, after which we can be fairly sure the exchange is one of mutual benefit. Others prefer the risks, but then I'm lucky in that I've little to lose from such an approach.
No, of course not. I Facebook-chatted a Snapchat question to my Tik-Tok followers, who replied by re-Tweeting an Instagram to my Whatsapp - you know...the usual way.
Good one! Should have thought of that.
(Heard a fascinating theory along these lines of the origin of organized religion: there have to be burdens, like dietary restrictions and so on, as bona fides of your seriousness about being a member of the group; and these are only necessary because human communities had grown large enough that you might not know right off whether someone is one of us or one of them. Religion then steps in as a kind of passport, offering proof of group membership by having these up-front costs. A shared religion indicates a level of trustworthiness, so then religion can even cross borders and enable the maintenance of trading ties and so on. But again, it has to cost you something more than professing membership or no one will think it a reliable indicator of your trustworthiness. Another way of handling the cheapness of talk there.)
One other thing that occurs to me, that comes off the idea of the sentiment of rationality being the feeling of release under tension, is that a lot of what we actually do is more rhetoric than logic, in this sense: if you think of storytelling as the art of withholding information -- so that the audience feels anticipation and is eagerly engaged, anxious for the next reveal -- then we make our little step-by-step points so that the audience will keep getting a little hit of the sentiment of rationality. I put it that way because it's like the way casinos take all your money by giving you occasional trivial payouts. Part of that manipulation, I've always heard, is that the win must come at irregular and unpredictable intervals -- by building in uncertainty about whether the next spin will be the next win, tension and anticipation can be maintained, but no one can maintain that state of tension indefinitely so you have to allow some occasional release.
Around here, that's something like uncertainty about where the steps are headed. When do I find out what the point of all this is? When do we get to the part I'm going to balk at? When do you admit what you're really driving at? And you can see the impatience building all the time; having suffered through a handful of points, feeling like their grasp of the issue is thoroughly established by the hurdles they've cleared so far, people start wanting to get to the big showdown. Part of what's going wrong here might be that step-by-step argument is too predictable, and while taking each step might be in itself satisfying, the process as a whole doesn't have the same hypnotic effect that gambling and storytelling do.
What does get people engaged is the surprise conclusion. You agreed to A and B, and voila! A and B entail C! Didn't see that coming did you! Then you get a big rush of rationality, which you'll have to struggle to shake off before you can examine whether A and B really do entail C. (And you'll hope they really do, because that was cool!) The thing about religion I described near the top, that hit me that way, as an awfully clever idea (which I'm not sure I conveyed very well).
Facebook. ::facepalm::
Yes, that is fascinating, I'd never heard that before, but it's similar to an issue I've worked quite closely on in group belief dynamics where some small contingent of the token beliefs (the membership badges, if you like) will be costly in themselves to profess. The idea is the same - a test of commitment. The difference (in my model) between that and the example you gave, is that I model group membership tokens as dynamic rather than institutionalised. They're more like starling murmurations, each group member trying to predict which ones will work by copying the others, but those others are just made up of people doing exactly the same thing, everyone copying everyone else. In small groups this leads to conservatism with occasional paradigm shifts. In large groups, chaos factors in and it can lead to tokens which no one intended, nor necessarily even benefits from.
Well, that's the theory anyway... never finished testing it.
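For what it's worth, the copying dynamic described above can be put down as a toy simulation. To be clear, this is my own illustrative sketch of "everyone copying everyone else", not the actual model being described; every name and parameter here (`mutation`, `n_tokens`, the update rule) is invented for the sketch.

```python
import random

def simulate(n_agents, n_steps, mutation=0.01, n_tokens=5, seed=0):
    """Toy token-copying dynamic: each step, every agent either copies
    a randomly observed peer's token or (rarely) drifts to a new one.
    All parameters are illustrative assumptions, not a fitted model."""
    rng = random.Random(seed)
    tokens = [rng.randrange(n_tokens) for _ in range(n_agents)]
    for _ in range(n_steps):
        new = []
        for _i in range(n_agents):
            if rng.random() < mutation:
                new.append(rng.randrange(n_tokens))   # idiosyncratic shift
            else:
                new.append(tokens[rng.randrange(n_agents)])  # copy a peer
        tokens = new
    return tokens

def consensus(tokens):
    """Fraction of agents holding the single most common token."""
    return max(tokens.count(t) for t in set(tokens)) / len(tokens)
```

On a neutral copying dynamic like this, small groups tend to drift to near-unanimity quickly (the conservatism), while larger groups take much longer to settle and are more sensitive to the noise term, which at least gestures at the small-group/large-group contrast in the post. Again, a sketch under stated assumptions, not the tested theory.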
Quoting Srap Tasmaner
Nice. I could see it as a kind of meta-narrative to soften the blow of having some fundamental belief shaken. If it's part of a known story where the whole point is the 'big reveal', then we're less likely to reject it because we know that narrative - being-surprised-by-the-plot-twist. It's a way, perhaps, of dealing with the necessary tension by putting it in a familiar story where the tension is released. A musician friend of mine talks this way about music too - tension in discordancy with a predictable (but only just) release in eventual harmony.
If we're set up right, it's actually quite enjoyable to have one's foundational beliefs shaken. But it comes back down to intent. The story form can be abused too, the set-up-and-reveal nothing but a sham to draw you in to a theory which has no A, B therefore (surprising) C structure at all. And drawn in we will be...
It's interesting you mention rhetoric and storytelling because it struck me, only from your mentioning it, how much the effects of my doing that (I do it a lot) are read differently in different settings. This is my first (and only) foray into social media, and outside of this forum my social circles (in terms of who I might make arguments to) are very limited. My wife (also an academic), my colleagues (all academics), and my clients (don't argue back!). It's weird having the wider diversity here and reading the different responses - weird in a good way. But response to rhetoric is one of the differences I've noticed. It's not always taken for what it is, often getting mixed up with the actual argument. I think maybe because when talking to people who know the basic form of what you're about to say, the rhetoric is more obvious. It stands out as embellishment because the substance is mostly already known (apart, of course, from the 'big reveal' at the end).
But then again, it comes back down to intent still. If people don't trust my intent, they're not going to bother sifting through the rhetoric. They're not going to see it as a well-meaning way of embellishing the story. And they'd be right to, because if I don't trust them, it's as likely to be a definition of social boundaries as it is a benevolent adornment. We do also argue to persuade, and sometimes the success of that persuasion is more important than the method.
Which comes back to that tension between charitable interpretation and 'enforcing' the rules to build a web of trust within which we can feel comfortable with these rhetorical embellishments and tension-building lead-ups. That's very much how I saw your OP, a (tentative) suspension of charity to enforce an absolutely indispensable rule. I'd like to see more of it, but it didn't go down well did it?
No, this is profoundly misunderstanding what logic alone can do for us. Logic just tells you that, if the premises of an argument are true, then the conclusion follows. Logic generally can't tell you anything about whether the premises are true. Most arguments are claims about states of affairs/matters of fact. You can't argue anything "just from logic," except (maybe) "a priori truths" that can be grasped from pure deduction alone (which plenty of people don't think exist).
"All historical arguments are good arguments.
Wayfarer's argument, which sparked this thread, is a historical argument.
Thus, Wayfarer's post is a good argument," is deductively valid.
And there isn't one set of "the rules of logic," for people to agree to either. There is a fairly well agreed upon set of principles for classical logic, and there are the widely accepted "laws of thought," but these don't allow you to phrase many of the arguments people want to make (i.e. arguments about modality, quantifiers, etc.), nor does everyone agree on them. Mathematics has not proven deducible from logic to date, and so even proofs don't "only require that [you] agree to the rules of logic." Hence, either logical pluralism or logical nihilism is the norm, with some folks still holding out hope that some One True Logic reveals itself.
You can apply logic to parts of an inductive argument, but such an argument necessarily includes claims about past states of affairs/past observations. If I say "cutting taxes won't result in higher government revenues per the Laffer Curve, because we have seen 3 major tax cuts since 1980 and each time revenues have fallen instead of increasing," that is of course an argument relying on historical fact. In many claims about the world, I would argue that deduction's primary role is to ground the statistical methods used to analyze past observations. People can always argue that past observations are in error, fake, poorly defined, etc.
You can put historical arguments into the form of deductively valid syllogisms. It doesn't mean they will be convincing or true.
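That validity is purely a matter of form, and so says nothing about whether the premises are true, can even be checked mechanically. Here's a minimal model-checking sketch; it's purely illustrative and every name in it is mine. A form is valid iff no interpretation of the predicates over a domain makes the premises true while the conclusion is false. The "all historical arguments are good arguments" syllogism passes the check; a fallacious cousin (affirming the consequent) fails it.

```python
from itertools import product

def extensions(domain):
    """Yield every possible extension (subset) of a predicate over the domain."""
    for bits in product([False, True], repeat=len(domain)):
        yield {x for x, b in zip(domain, bits) if b}

def barbara_valid(domain, w):
    """'All H are G; H(w); therefore G(w)': valid iff no interpretation
    makes both premises true while the conclusion is false."""
    for H in extensions(domain):
        for G in extensions(domain):
            if H <= G and w in H and w not in G:
                return False  # counterexample: premises true, conclusion false
    return True

def affirming_consequent_valid(domain, w):
    """'All H are G; G(w); therefore H(w)': the fallacious variant."""
    for H in extensions(domain):
        for G in extensions(domain):
            if H <= G and w in G and w not in H:
                return False  # counterexample exists, e.g. H empty, G = {w}
    return True
```

The checker confirms the form and nothing more: whether "all historical arguments are good arguments" is actually true of the world is exactly the part logic stays silent on.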
To get back to the original point here: do you guys think most science textbooks waste the student's time by going through the history of how a theory came to be developed?
Every in-depth treatment of GR/SR, quantum mechanics, or thermodynamics I've read starts with the history of the ideas in play. A survey of thermodynamics normally starts with Carnot, Clausius, and mechanism-based explanations of thermodynamics in terms of work. Most treatments will discuss the once widely held, but now thoroughly debunked, caloric theory of heat; what the theory was and which experiments ultimately led to the rejection of the theory and the positing of a new one.
Likewise, almost every review of relativity of any depth starts with a summary of Newtonian physics and discussions of the theory of luminiferous aether.
I've always thought that these reviews were done so that the student could follow the development of a position. Knowing which alternatives to a theory have been considered and rejected is key to understanding a theory because, especially for a novice, the dominant theory of the day is always going to look underdetermined by the evidence they are aware of. It's also true that knowing why a given element was added to a theory gives you much better insight into how to think about that part of the theory. If some constant was added simply because the mathematics for some project wasn't working out, it's good to know it.
For example, if a multiverse version of eternal cosmic inflation becomes the dominant view in cosmology, I'd argue that it'd be good for students to know that the driving reason behind that theory's adoption was concerns over the Fine-Tuning Problem. Why? Because some people might find the FTP totally untroubling and so might question why some seemingly "philosophical" question led science to accept a vast landscape of unobservable phenomena. Likewise, mechanism was rejected because Newton's gravity acted at a distance; knowing this is relevant when Einstein's theory replaces Newton's because the historical reason for rejecting mechanism goes away and locality is seemingly back on the table.
What would your preferred method of presentation be? Just presenting currently held facts and models? Talking about just experimental results and how they support or undermine a theory, without any reference to the history behind the experiment?
I don't think this works. We collect and categorize data based upon our current theories. The historical context of an experiment determines how it is performed and how it is understood.
Plus, old "debunked" theories have a habit of coming back in new forms. Wilczek's "The Lightness of Being" spends considerable time looking at old aether theories and why they were rejected, because he wants to revive aspects of the theory in a new format, to explain space-time as a sort of aether, a metric field. Explaining the history lets the reader see that only certain parts of the old aether theory were inconsistent with experimental findings.
The modern conception of physicalism was defined in terms of popular dualist and idealist theories in just this way. Defining physicalism in terms of causal closure and the denial of any sui generis forces makes no sense if it isn't explained in the context of its competitors.
No. My issue wasn't really with the use of history per se, but with how it was or wasn't connected to other points being made, which would hold for any sort of obiter dicta in a post. I left in the detail that it was a specifically historical point as an opening for defending a different view of what sorts of connections between points are required in an argument.
The general view is that there's good persuasion, which follows the rules of logic, and bad persuasion, which doesn't.
I'd rather switch that around and say logic is partially descriptive of at least some of the types of persuasion we find good, or think usually work, etc.
That there are valid arguments that aren't persuasive is obvious: if you don't accept a premise, it doesn't matter that the conclusion is properly derived. (There are multiple ways not to accept premises, too.) So persuasive arguments seem to be a subset of valid arguments. But it's not like that, because a lot of arguments aren't deductive. So really it's a matter of the two overlapping.
But I still think persuasion is primary, and what we see with non-persuasive but valid arguments is a kind of escape, a derivative use of a technique that originates as a type of persuasion. It's similar to how I've come to think there is a core use of language in the simple, direct account of what I know that you don't, basic narrative, but that form can be repurposed to lie, and, well, everything else.
Exactly. It has persuasive force. If we just swap out all the premises for letters and produce a long, non-obvious, logical argument that, say, if A > B and B > C then A > C, that has persuasive force. I can look at that and think "yes, that's right, A is greater than C in those circumstances". I've been persuaded by the presentation. The longer and more complex the argument, the more likely it is to draw out the entailments that one logical move has for other logical moves. I'm persuaded by the argument that I must accept the entailment, regardless of whether I accept the premises.
My point is that history alone has no such force, since it is inevitably selective. Thousands of things happened in the past, so pointing to A and B as precursors of C doesn't do anything on its own, because the argument would lie in your choice of A and B, not in the mere fact of their near contemporaneity to C.
Quoting Count Timothy von Icarus
I didn't claim there was, but that's beside the point. The point is about the level of, and likelihood of, commitment to shared foundational beliefs. It doesn't matter which brand of logic is used. In arguments it's mostly a kind of informal 'habits of thought' type of mash-up anyway. The point is that in discourse at this level, I'm quite likely to have a very strong overlap with you regarding my belief in those rules. I'm very likely to have the same set and to be strongly committed to them. As such, arguments which are based on them are likely to be persuasive (the same goes for a fundamental set of empirical beliefs, too, such as basic physics, real-world objects etc). Arguments from analogy, or from history, are not of this kind. They rely on a shared narrative about historical events or classifications which is so close to that of the proponent that agreement on them is usually only found in people who already agree with the proposition anyway (as I showed with your homosexual/trans example).
Quoting Count Timothy von Icarus
All facts are historical in that sense. This is not the type of argument the post was written about; it's not saying "we can't use any evidence that occurred in the past". That would be an absurdly uncharitable interpretation. What idiot is going to claim that? This, @Srap Tasmaner, might serve as an example of the costs of engagement. Why am I having to expend time countering an interpretation of an argument that a five-year-old could see was wrong? Why hasn't that interpretation been silently ruled out by all parties in this thread on the grounds that we're not stupid? We shouldn't be here.
Quoting Count Timothy von Icarus
This is all about theory, not history. The question of why a model was abandoned, or why a constant was added, is someone's opinion. Someone's theory. Again, from your perspective (you agree with the textbook, or trust the institution) that all seems really solid, but it's not the history that's done that, it's your belief in the authority of the person presenting it. The theory might have been discarded for reasons other than those the textbook claims; the constant might have been added for more rigorous reasons in someone's view, but others (the ones writing the textbook) disagreed. As I said with your other examples, if you already agree that theory A ought to have been replaced by theory B, then you're going to be reassured by a historical narrative about how theory A was only supported for so long because of dogmatism. If, however, you disagree, you're just going to disagree with the historical narrative as well. It has no persuasive force on its own because too many competing narratives are easily available. There's little to no reason to accept any particular one.
And to emphasise, this is not the case with arguments relying on basic rules of thought and empirical observation. There is not, in those cases, a myriad of narratives to freely choose from. One might well argue against a tenet of modern physics by claiming the maths is flawed, but one would be rightly wary of the commitments that would entail. Not so with historical analysis. I can easily say "No, things did not happen that way" and be committed to absolutely nothing else as a result. It's a free pass to disagree.
Yes, I think I'd agree (but maybe for different reasons?). Any activity which loses sight of life as a whole becomes unmoored and I don't think that's terribly healthy. We might well engage here in some fairly effete topics, but they're never entirely disconnected from life and as such carry some (if not much) potential consequence. It will be the case, from time-to-time, that persuading someone of the merits of your position is more important than the method by which you do so.
In those cases I don't see the goodness or badness of the persuasion being determined by the method, but by the circumstance.
That said, I wouldn't want to discard the importance of the fact that these are edge cases and in the main, the better arguments are those which follow a set of rules for how to get from A to B and are persuasive because of their adherence to those rules. We have come to trust, as habits, the fact that those rules tend to yield good answers (less surprising models) and it's that track record which makes arguments following those rules persuasive.
In philosophy (small 'p') I suppose the focus ought to be on those rules and the arguments using them. Ironically, in psychology, I'm most interested in why the other sorts of arguments are persuasive. An emotional appeal, for example, simply doesn't have anything like as good a track record of yielding useful models, so why on earth would we find such arguments persuasive? It doesn't seem, at first blush, to fit with the self-perpetuating modelling-relationship theory of cognition I also work with. So the interesting thing for me is marrying the two (plus things like group membership tokens, fear... all the other reasons we're persuaded by an argument that aren't those rational rules of thought). The 'why' of it can be explained in terms of evolutionary psychology (were one to be so crass as to attempt such a thing), but the 'how' of it is what interests me: what steps are taken, the mental process of becoming convinced by something...
I seem to have arrived at the same place as you. I want to know why Wayfarer (standing in merely as an example of the trope here) thought the argument persuasive. I want to know what mental steps he imagined the interlocutor taking as a result of the information given. To use your example...
Quoting Srap Tasmaner
...what follows in A's mind? What's the imagined next step?
Or the now archetypal example
A: "Peirce is not an idealist"
B: "He used to be thought of as an idealist, but then Russell re-branded him"
...then...what? What's the step that A is supposed to take next to become persuaded that Peirce is, in fact, an idealist? I've genuinely no (charitable*) idea [hide="*"]plenty of uncharitable ones to do with implied gullibility, golden-era romanticism, ...etc[/hide].
Seems fair to me. Although that doesn't seem like a "type" of argument vis-à-vis the history of ideas, except in the sense that it falls into the type of "bad" as respects having a bad connection between its premises and conclusions, or simply being a non sequitur rather than an argument.
But that makes perfect sense. There are cases where the history of an idea is quite relevant and others where it isn't, and you can present history that isn't even relevant to the idea at hand.
I think something like SR/GR is a good example because it's a theory very much defined by the failures of other contemporary theories, and because there were other consistent models that were arguably rejected for contingent reasons (e.g., if we allow that objects grow and shrink depending on their velocity, we can avoid some of Einstein's conclusions).
I'll just add that when people have been discussing a set of topics for a very long time, seemingly tangential points get brought up because of some broader, and at times inappropriate, context. This seems inevitable; you see it in old letters between patristic theologians as well, where some footnote has to explain why x is remotely relevant to some prior point because of an exchange two years earlier. I will agree that this sort of thing is entirely unhelpful for anyone following along.
lol, I in no way interpreted the original post as ruling out arguments from historical examples in general. Hence why I only replied to Srap Tasmaner re: the examples of frequentism and the common practice of explaining the historical conditions surrounding the emergence of scientific theories.
My response to you was what it was because you have repeatedly made the claim that arguments involving history aren't valid because "you can select just the history that proves your point." My point was that this can be claimed against all inductive arguments; that is, you are making an argument re: analogies and the history of ideas that generalizes.
Take your latest restatement:
You made the same argument for why historical analogies, in general, don't work. My response shifted because you appear to be making the wider claim that arguments from historical example do not work because cherry picking is possible (otherwise, why are historical examples related to the history of an idea and those used in analogies somehow unacceptably vulnerable to cherry picking?)
How does your argument in the quote above not apply to my example about the Laffer Curve? You could absolutely claim that I am cherry picking. My dataset has only three examples, all from the same country, in roughly the same era. However, taxes have been cut across human history, and presumably sometimes revenue went up after taxes were cut. Arguments about cherry picking are arguments against the truth of a premise; they are arguments about the applicability of the data. That cherry picking is possible is not a good argument against the use of any historical examples, nor is it specific to one type of historical example.
Do you see how I could have taken things like: "thousands of things happened in the past, so pointing to A and B as precursors of C doesn't do anything," as arguments against induction in general?
If I was accused of cherry picking re: the Laffer Curve, I could counter that the effects of tax cuts in the US, in the modern era, are more relevant than the wider population of all tax cuts across history. This would be to say that the Reagan, Bush, and Trump tax cuts are more closely analogous vis-a-vis a consideration of what tax policy should be in the US today.
But you argue right above that I have no good reason to trust a physics textbook as to why certain theories were adopted because:
What is different here? Presumably I trust an institution because they have a track record of producing truthful information. People can, and do, fake their data. Governments produce fake economic figures. And even if we're not talking about fake data, it is completely possible to cherry pick any empirical data, whether it be historical case studies for an IR paper or which medical studies you include in a meta-analysis.
So, we cannot trust any sort of historical narrative because the person presenting it might be lying or cherry picking, and yet for some reason we can trust some empirical data that other people present to us because...
If, as you say, "the question of why a model was abandoned, or why a constant was added is someone's opinion," and unverifiable, based solely on authority, then science is in a very rough place...
You also seemed to be making the much stronger claims that:
>Argument from analogy is not a good form of argument because anyone can disagree with whether the analogy fits.
In an analogical argument, "x is to y as a is to b..." is a premise. It was not clear to me how your point that "your interlocutor must agree that your premises are true for them to accept the argument" is unique to analogies. This is what I meant by "the same critique can be leveled at any argument."
Further, the claim that someone must "already agree with you" for an analogy to be successful goes too far. People can be, and often are, unaware of all the entailments of premises they accept as true. A good analogy can be persuasive and informative if your audience is listening in good faith.
>That some other types of argument (I'm not sure which) aren't vulnerable to this sort of disagreement over premises, so long as a person accepts the rules of logic.
Can you see my confusion? What arguments, aside from those using allegedly a priori, self-evident premises, are not vulnerable to having their premises challenged?
Only if the premises are true. Let's look:
"If 3 is greater than 9 and 9 is greater than 100 then 3 is greater than 100."
Convinced?
The longer and more complex an argument, the less feasible it is for a human being to ever work through its validity, develop a truth table, etc. Hence why we rely on computers so heavily with long logical statements. In general, we want to compress our logical statements down as much as possible, or put them in CNF for easy computation.
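To make that point concrete, here is a toy sketch (my own illustrative code, not any standard tool) of what mechanical validity checking looks like: enumerate every truth assignment and look for a countermodel. The 2**n blow-up in rows is exactly why real tools convert to CNF and hand the problem to SAT solvers instead.

```python
from itertools import product

def entails(premises, conclusion, variables):
    """Brute-force semantic check: the argument is valid iff the
    conclusion is true under every assignment that makes all the
    premises true. Cost is 2**len(variables) rows."""
    for values in product([True, False], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False  # countermodel: premises true, conclusion false
    return True

# Modus ponens: (a -> b), a |= b
print(entails(
    [lambda e: (not e["a"]) or e["b"], lambda e: e["a"]],
    lambda e: e["b"],
    ["a", "b"],
))  # True
```

With 30 variables this would already mean a billion rows, which is the feasibility problem the paragraph above is gesturing at.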
This is just a baffling statement and I'm going to assume you meant something else by it, like "an argument can be valid without being sound." When an argument is valid, it does not mean that any entailments it enumerates are true or should be accepted.
>If it is Monday, then Grover Cleveland is the President
>It is Monday
>Thus, Grover Cleveland is the President (proposed entailment/conclusion)
This is a logically valid argument.
Ummm. Look again?
I edited it to phrase it better and not leave it open to interpretations of just affirming the consequent.
The argument still commits the formal fallacy of affirming the consequent.
lol, you responded at the same time I was editing.
Yeah, it's not a good example for the point, since it reads as affirming the consequent the way many people might take it, with "must" read as 'if' instead of 'iff'. The point is that you can have an argument of the form:
If and only if a then b.
b
Thus, a.
This is not affirming the consequent. But it's a shit example of that because "will" and "must" can be taken as 'if' or 'iff', so I'll change it.
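The difference between the two forms can be checked mechanically. A quick brute-force sketch (hypothetical code of my own, just to illustrate): with a biconditional premise the inference to a is valid, while with a plain conditional it is the fallacy of affirming the consequent.

```python
from itertools import product

def valid(premises, conclusion, variables):
    # Valid iff no truth assignment makes all premises true
    # while the conclusion is false.
    for values in product([True, False], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False
    return True

iff = lambda e: e["a"] == e["b"]         # a <-> b (iff)
cond = lambda e: (not e["a"]) or e["b"]  # a -> b (plain if)
b = lambda e: e["b"]
a = lambda e: e["a"]

print(valid([iff, b], a, ["a", "b"]))   # True: biconditional form is valid
print(valid([cond, b], a, ["a", "b"]))  # False: affirming the consequent
```

The countermodel for the second form is a false, b true, which the biconditional premise rules out.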
Yes. And I've countered that point several times now, but you're still stuck at the beginning. It's not the same, because not all methods are so open, and not all methods are so narrowly shared. There are entailments resulting from denying a common form of logic, or an empirical fact, that are uncomfortable and which are not necessary when denying some interpretation of history.
Simply put, if I say "the ball is under the cup" and then I show you the ball, you could still deny my theory, but you'd have to bring in a mass of other commitments about the possibility of illusion, not trusting your own eyes... commitments you wouldn't like.
Likewise if you agree that 1+1=2, then I show you how that entails 2+2=4, you'd have to bring in a shed load of uncomfortable beliefs in order to deny that.
But if I say, "history shows that strong leaders always end up in wars", you could just say "no it doesn't" and walk away with virtually no additional commitments required to maintain that belief. History is so open to interpretation that virtually any theory can be held without issue. Not so with empirical facts, not so with informal logic (not so with formal logic either but that wasn't my point).
I'm sure to someone with your... how do I put this politely... confident way of thinking, the Facts of history probably are all written in stone and no doubt all these alternative interpretations are more of those 'conspiracy theories' your priesthood of disinformation experts are working so hard to cull. I can see how the argument I'm trying to make just won't mesh with some mindsets. It may be an impasse we can't bridge.
Quoting Count Timothy von Icarus
I'm talking about neither soundness nor validity, but persuasiveness. One can be persuaded of the soundness of an argument, or one can be persuaded of its validity. It's irrelevant (to this topic) which. The topic here is the means of persuasion, the methods used to persuade. Above, you have tried to persuade me of the validity of an argument form. Earlier you tried to persuade me of the soundness (or lack thereof) of one. If I constructed a long logical proof, the manner of its exposition would determine, to an extent, whether I was persuaded of its validity. In your example above, my degree of trust in the CNF you mentioned would be at least partially determinative of whether I'm persuaded by the result. The means of persuasion are not the same as either validity or soundness; they are orthogonal to both. The question being addressed here is how it is intended that certain forms of argument from history are supposed to persuade, the manner in which it is supposed they are to work.
Sure. And denying that we can trust the standard fare of physics textbooks re: the origins of relativity or thermodynamics also comes with a lot of commitments. You'd have to assume a lot of people were "in" on a misrepresentation and that they had all coordinated to keep to the same narrative across a wide array of texts, including falsifying and circulating the papers of the original people involved.
The "ball under the cup" example is rather lacking. Many phenomena explored by contemporary physics can only be observed using fantastically expensive equipment. Findings aren't deducible from mathematics alone. If you find results in contemporary physics credible, either you did the experiment yourself or you are relying on the authority of others and processes like peer review there too.
Your average person is in a much better position to vet whether a science textbook is telling them the truth about the history of quantum mechanics than they are to go out and observe entanglement and test Bell's inequalities. When was the last time you read something about cosmology and fired up your giant radio telescope to verify it?
I do actually agree with you in a limited way though. There is a real difference with some aspects of history, where the number of people who are motivated to develop their own interpretations is larger, the degrees of freedom for interpretation greater, the barrier to entry in advancing one's own theories (somewhat) credibly is much lower, and the ulterior/political motivations for advancing some arguments much greater. I don't buy that this is any reason to assume total nescience is at all rational though.
:rofl:
Maybe you could write a book on this topic and lay out your arguments systematically? Texts in science and philosophy almost universally review such history, and they're wasting a lot of time, right? So, you could radically change pedagogy for the better.
The only downside would be that neither you (nor I, for my small role in spurring you on to the project) could ever get credit for the idea. The fact of our contributions would be lost to the shifting sands of history, unable to be verified.
What? Why would people have to be 'in' on anything? Are you honestly having this much trouble understanding the concept of disagreement among epistemic peers? Some theories are popular, others aren't. Is that such a challenging concept for you?
Quoting Count Timothy von Icarus
Really, how?
Quoting Count Timothy von Icarus
Again, my interrogation is about the supposed mechanism of persuasiveness. I haven't even mentioned nescience.
Quoting Count Timothy von Icarus
You're confusing empirical facts with narratives about motivations, socio-political causes, zeitgeist... As above, empirical facts are quite easy to persuade others of, since we generally share means of verification and trust. Narratives, motivations, sociology, politics... those are not things we generally share methods of verification for, so the persuasiveness of arguments using those 'facts' is considerably less (for those who don't already broadly agree). So an argument which employs nothing else is a curiosity with regard to how the proponent intended it to work.
That's a pretty weak strawman. We're not talking about disagreements about scientific theories. What I wrote:
I've yet to come across any radically different versions of how thermodynamics, etc. were developed. Even books like Becker's "What is Real?" with a serious axe to grind still give the same essential outline for how QM developed.
But per your view, how can we actually know why a scientific theory was advanced or why others were rejected?
IDK, when Einstein says he added the Cosmological Constant to have his theory jibe with the then widely held view that the universe was static, I think that is a good reason to believe that is why Einstein added the Cosmological Constant.
The pioneers of quantum mechanics published papers throughout their lifetimes, conducted interviews, were taped during lectures, and wrote memoirs, all describing how the theory evolved. In many cases, their personal correspondences were made available after their death. Most of this is even free.
Now tell me where I can get access to a free particle accelerator and a Youtube on how to properly use it so I can observe particle physics findings first hand?
That Einstein added the Cosmological Constant to fit current models is an empirical fact. That in 1492 Columbus sailed the ocean blue is an empirical fact. That the Catholic Church harassed advocates of heliocentrism is an empirical fact. People have had sensory experiences of those things and reported them.
Most facts we accept aren't easy to verify personally. You can read about chimpanzee behavior extensively, but how easy is it to go and study chimps in the wild? When was the last time you wanted to learn something and ran a double-blind clinical study?
Do you replicate the experiments after you read a scientific paper? No. Then you're trusting the institution publishing it and its authors, right?
Plenty of people don't trust the scientific establishment. This cannot be a good criterion for justification.
That's more or less the idea. If logic stands above and apart from our practices, it hangs in the air. More later.
Quoting Count Timothy von Icarus
Given modus ponens as an inference rule. (And thus not a theorem.)
Actually constructing arguments requires some system of deduction, not just the definitions of the logical constants.
What is the relevance here? (And don't say it's a non sequitur, I thought we'd decided about those :rofl: )
I was responding to the claim that good arguments are just those arguments where "all that is necessary is that one agrees with the rules of logic." That, and the stranger claim that if an argument is in a valid form we should be persuaded by it and "must accept the entailment, regardless of whether [we] accept the premises."
Maybe none. I only skimmed the exchange you were having with @Isaac, and don't want to take sides. It's just that this caught my eye:
Quoting Count Timothy von Icarus
In other discussions, it wouldn't have bothered me, but since we're talking about what makes an argument acceptable, I thought it a somewhat misleading phrase.
Probably not important.
I honestly don't know how to reply. You seem to have placed a series of things-which-are-true, next to some unrelated quotes of mine.
I'll do my best to formulate a response, but I've very little idea what you're trying to say...
Quoting Count Timothy von Icarus
Did I say 'scientific theories' anywhere in my post? Why are you denying something I haven't said?
Quoting Count Timothy von Icarus
So? Are you saying there are no radically different theories of history, just because there are no radically different theories of how thermodynamics was developed? I don't really know how to respond to that. If you're really going to double down on a claim that there aren't any disagreements about historical narratives, then I just can't help. Perhaps read more than one history book...?
Quoting Count Timothy von Icarus
We can't. I'm just becoming increasingly baffled as to why you can't seem to grasp the idea that intelligent people disagree. Have you ever been to a university?
Quoting Count Timothy von Icarus
Really? When Trump said he cut taxes for the rich to stimulate the economy, was that a good reason to believe that's why Trump cut taxes? You may trust Einstein to be honest, but as far as the history of ideas goes, are you suggesting he's an example of the norm? That people are almost universally honest about their motives, with a tiny, insignificant number of outliers? I mean, that's a lovely world view you have, but...
Quoting Count Timothy von Icarus
The question was about verifying the narratives in textbooks on the history of ideas. Are you suggesting that such evidence troves exist for all ideas? Could we do this self-checking with the argument of the OP regarding post-Enlightenment thought? Do we have some Russell biography I missed where he says "...and then I deliberately re-framed Peirce as a realist to get rid of that damned idealism... grrr... hate that stuff"?
I don't know where you're headed by providing these hyper-specific examples which are not illustrative of the form in general. The last 50 years or so might be well covered. The last hundred is patchy at best; beyond that is basically little more than guesswork.
Quoting Count Timothy von Icarus
Baffling. So I say that you're confusing empirical facts with narratives about socio-economic causes etc., and you list a load of empirical facts... I really haven't a clue what that was supposed to do here. Yes, some things are empirical facts. Did you think I didn't think there were any empirical facts? I'm lost.
Quoting Count Timothy von Icarus
Five years ago.
Quoting Count Timothy von Icarus
Yes. Again, I've no clue what point you're trying to make here. People trust some institutions and not others. Is that a confusing concept for you?
Quoting Count Timothy von Icarus
What?
Unless you can make your arguments a bit clearer I can't see us making any progress.
Where have I made such a claim? No, forget that; more importantly, what is it about my posting history on cognitive neuroscience, Bayesian inference modelling, social dynamics theories, Ramseyan epistemology... that has given you the serious impression that that's the sort of claim I'm likely to have made?
Quoting Srap Tasmaner
That's a shame, because what was an interesting conversation we were having seems to have fizzled out and been replaced by yet another truly bizarre argument against positions no one in their right mind would have any reason to believe I'd ever hold. And this isn't even odd. Far from it; it's the standard pattern of threads (at least, those I'm involved in...).
You keep using strawmen. If I say, "we can be justified about some historical facts and narratives," you respond with "so, you don't get that people can disagree over historical facts and narratives?"
If I say, "we must sometimes rely on the authority of institutions and base our beliefs on trust because it is impossible for one person to conduct more than a minute fraction of all experiments in the sciences," you respond with "so you always blindly trust authority?"
No, I never implied anything of the sort. "Some x are y" is not equivalent to "all x are y," nor is it refuted by "some x aren't y." Actually, I agree with you more than you seem to think re: why we have reason to doubt some facts more than others and why we need to be open to revising our beliefs. TBH, this is a very frustrating trend in all our exchanges. I appreciate your effort, and I think you often bring up a bunch of good points about credulity and justifications for knowledge, but there is always this move to bleed out any nuance and turn things into binaries.
I will give you a specific example. I claimed the history of ideas is sometimes useful in explaining theories and making arguments about them. I said that this is the reason why introductions to a theory usually begin with a historical overview.
Obviously not. This is yet another "some x are y," being taken as "all x are y."
I think our main points of disagreement come down to:
A. I agree with your reasons for doubting historical narratives. However, I don't think the problems you point out are at all specific to history. A zeitgeist (paradigms) colors how people interpret empirical evidence; political pressures shape how scientific data is reported, or whether it is reported at all. Culture influences science; e.g., the role of culture/norms is the best explanation for why the replication crisis is such a massive problem for sociology but not as much for other social sciences (with some social sciences not faring any worse than the "hard" sciences).
Trust in both individuals/institutions and in the process of scholarship is just as essential for science. We're counting on others to call out cherry picking, fake data, etc. E.g., there is only one LHC; if you do not go to CERN you cannot observe super high energy physical reactions firsthand. Even if you do visit CERN, you cannot vet if they are doing what they say they are without a ton of specialized knowledge and permission to inspect the LHC in detail. By contrast, for some historical issues, a wealth of easily accessible data exists.
Point being, the degree to which we must rely on trust is variable, and I haven't seen a good argument for why historical claims necessarily require more trust than many scientific claims. To head off another binary, I am not saying all historical claims can be backed up; we can also have relative degrees of certainty about them.
The point that "anyone can make up historical claims," is trivially true for science as well (see Flat Earthers). I would absolutely agree that the sciences, in general, tend to have a better peer review process, and higher barriers to entry. It is harder to convincingly fake a scientific paper due to the unique vocabulary that fields employ, but this is a problem of degree IMO.
The question of how "hard science," "soft science," and research-focused humanities differ in terms of justification is a very interesting one, but outside the scope here. My brief take is that as you get into very complex systems, e.g. international relations, quantitative analysis becomes increasingly less convincing due to the nature of the data involved, making documentary evidence more relevant but also forcing us to look probabilistically at claims. Arguments are sometimes negative, and it often isn't hard to show that some historical narratives are highly unlikely, even if it is impossible to show that just one is right; good history often does this.
In almost every post ITT I have said "some historical arguments aren't good." This is the same reduction to a binary of all/none.
Ok, now we're getting somewhere! You agree that some beliefs about the development of a theory can be justified? Sometimes these are helpful for proving a point. In which case, our disagreement is simply a matter of degree. My argument is simply this: "if the history of an idea is sometimes relevant, and if we can sometimes have justified beliefs about the history of ideas, then sometimes arguments made from the history of an idea are relevant. Whether we accept or reject the argument should be based on the data supporting the premises and on whether the conclusion actually follows from the premises."
I take it we will disagree on how difficult it is to support some of these premises, no matter.
B. I think that knowing why a theory was adopted is central to the scientific project. If we cannot know why a theory developed, science is in big trouble precisely because there is a sociological element to the project.
IDK, you said the difference between historical claims and scientific ones was that the latter used empirical facts. I was just pointing out that this isn't the difference between the two, that historical arguments are also based on empirical facts. I was pointing out simple examples to show that, presumably, you do accept some historical facts, which could be used in premises.
I asked how historical arguments are different and you made an appeal to logic and empirical facts. But historical arguments can be put into valid logical forms and they are often based on empirical facts, so this doesn't seem like a difference in kind. Nor is it clear that all scientific empirical claims are easier to verify than many historical fact claims.
See:
I would ask though, why does science have so many fewer narratives? It seems to me like the reasons are largely social. This is why I mentioned Quine, Kuhn, and holism. Most theories are underdetermined. There are, what, nine major competing interpretations of quantum mechanics, all with identical empirical predictions? QM isn't unique in allowing the possibility of multiple interpretations; it's the way that science is practiced that closes off the proliferation of alternate explanations. This is why the history of ideas is so relevant in the sciences.
That's why I asked for clarification about it originally. See your posts below:
What I found weird was the claim that "[if] I'm persuaded by the argument that I must accept the entailment, regardless of whether I accept the premises," which seemed to imply that the logic alone was persuasive. But a valid argument with false premises isn't persuasive. I didn't, and still don't really know how to take the claim that: "For a logical argument to have persuasive force it is only necessary that I agree with the rules of logic." That doesn't seem true, and most of your posts seem to be arguing that you have to accept the premises of an argument to be persuaded, which I would agree with 100%.
Example: is anything flawed with the logic of the following?
[i]We should accept well-justified historical facts as true premises.
"Einstein created the theory of thermodynamics," is a well-justified historical fact.
Thus, we should accept "Einstein created the theory of thermodynamics," as a true premise.[/i]
I don't think so, per common rules of inference anyhow. But we shouldn't find it persuasive because a key premise is not true. So, I don't get how your argument is about logic rather than the ability to justify a certain class of premises.
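The split between validity and soundness in the example above can even be checked mechanically. Here is a minimal sketch (the `valid` helper and the two-variable setup are my own illustration, not anything from the thread): it enumerates truth assignments and confirms that an argument form preserves truth, while saying nothing about whether its premises are actually true.

```python
from itertools import product

def valid(premises, conclusion):
    """An argument form is valid iff no truth assignment makes
    every premise true while the conclusion is false."""
    for p, q in product([True, False], repeat=2):
        if all(f(p, q) for f in premises) and not conclusion(p, q):
            return False
    return True

implies = lambda p, q: (not p) or q

# Modus ponens (roughly the form of the Einstein example):
# P -> Q, P, therefore Q.
print(valid([implies, lambda p, q: p], lambda p, q: q))  # True: the form is valid

# Affirming the consequent: P -> Q, Q, therefore P.
print(valid([implies, lambda p, q: q], lambda p, q: p))  # False: the form is invalid
```

The checker happily certifies the Einstein argument's form as valid; it has no way of knowing that "Einstein created the theory of thermodynamics" is false, which is exactly the point that validity alone isn't persuasive.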
As I pointed out, that two things are analogous is normally itself a premise. Was the claim then that analogies cannot be put into a formal format?
IDK, I take it you meant "if there are lots of premises and a valid argument I can accept the conclusion even if some premises are false?" That makes sense, especially when we're working with claims we assess in terms of probability of their being true.
Also, why couldn't you put a historical argument into a long formal argument? You could absolutely take all your fact claims about history and assign them to letters and put them into a valid statement. All your arguments about the merits of historical claims have been about the ability to justify their factualness. That is, you reject the premises, so the argument isn't persuasive, so I didn't understand the digression into logic re: analogies and historical claims.
So still on topic.
My eyes glaze over when there's a lot of "That's not what I said," and "That's not what I meant."
To coin a phrase, Why should we talk about the history of this conversation?
Because there is more to philosophy than argument, and conversations that are all argument are not generally worth going over. But even here, there is other stuff: rhetorical questions, genuine questions, opinions wise and foolish, misunderstandings and understandings, eyes glazing and rolling, and occasionally even some communication. This, for example, is an opinion, not an argument, to be considered or not according to taste. I will make no attempt to prove it, and I recommend that no one bothers to try and disprove it. Sorry to clutter up your thread, but you asked, and I have an answer, and this is it.
Exactly. Both those arguments are about what we can be justified in believing. Absolutely nothing in this entire thread is about what we can/cannot be justified in believing. It is about the mechanism of persuasiveness. The 'rules' of argumentation. These are at best only tangentially related to the question of what we're justified believing.
For example, I'm justified in believing that you have a deep knowledge of physics. I'm justified in believing that because of your posting history. Now let's say you were arguing against someone who thought you knew nothing about physics because you don't hold to [insert some fringe theory here]. The claim is "your posts are all just parroting mainstream textbooks, the latest theories show you don't know what you're talking about". The response "I do, look at my posting history" is a poor, unpersuasive argument. It would be outside of the 'rules of discourse' because it doesn't address the claim. That has no bearing whatsoever on the fact that I might still quite rationally use your posting history to justify my belief that you do, in fact, know what you're talking about.
Do you see the difference? @Srap Tasmaner's post was quite carefully put together and focussed on the part that historical analyses such as the example response played in argument. That's why it was so disappointing to get a slew of responses from people who'd read the term 'history' and apparently no further.
So almost none of your comments are addressing the post (specifically the aspect of it I'm honing in on here) which is the persuasiveness of an argument, the method by which an appeal to the history of ideas is supposed to actually persuade, is supposed to be a response, in argument format, to a proposition. That's the issue.
Quoting Count Timothy von Icarus
The claim isn't that they are specific to history, it is that history is at the further end of a spectrum.
It is difficult for me to deny most empirical facts. Maybe not quantum physics, maybe not cosmology, but most basic empirical facts would require a very odd set of commitments for me to deny them.
By contrast, it is easy for me to deny, say, the Marxist analysis of history. It's done all the time by perfectly intelligent economics professors. Likewise, it is easy for me to deny the idea that religious study drove the growth of philosophy in the middle ages. Again, plenty of intelligent people deny such things.
Established rules of inference, logic, mathematics, and established empirical facts are difficult to deny and remain consistent. Narrative arcs from history (such as the history of an idea) are easy to deny and remain consistent. Note 'hard' and 'easy', terms I've been using throughout. Nothing about 'hard' and 'easy' denotes a binary. 'Hard' and 'easy' are two ends of a scale, the scale of difficulty.
Quoting Count Timothy von Icarus
I don't want to derail the thread (see above - this is not about justification, but about persuasion), but this is simply not true. Science has considerably more highly visible tests. It is virtually impossible to tell if Marxist historical analysis is actually right. If the rocket crashes, the rocket scientist's calculations were probably wrong. We may not understand the maths, but we can very often see when it doesn't work. The scientist says "this should now turn green" (or whatever) and it doesn't. The two are radically different. It's why old scientific ideas are totally dead, but old economic ideas, or metaphysical ones, or historical ones, are still very much alive and kicking; it's virtually impossible to resoundingly disprove them.
But this is only relevant to the discussion insofar as it affects the persuasiveness of arguments using those fields and how they form responses to propositions.
Quoting Count Timothy von Icarus
Good. That's a nice succinct argument. So my issue with it is that you haven't linked having justified beliefs about the history of ideas to arguments using that history being relevant. Why does being able to rationally hold justified beliefs about a subject automatically make it a relevant, persuasive, counter in an argument?
The range of rational reasons to believe a proposition is larger than the range of coherent responses to a contrary proposition.
Quoting Count Timothy von Icarus
Exactly. A potted history of ideas contains neither. It is the theory, not the supporting data, nor the logic connecting it to the conclusion. "Russell re-invented Peirce to sound more analytical" (I'm paraphrasing) is neither 'data' nor 'logic'. What 'data' could we find about Russell's intent (unless maybe he wrote a diary entry like "Hehe, today I intend to re-invent Peirce to sound more analytical - that'll show those damned idealists")? Analysis of historical trends is not data-heavy. It's speculation-heavy.
Quoting Count Timothy von Icarus
It isn't. Recall 'hard'/'easy'. Not 'on'/'off'.
Quoting Count Timothy von Icarus
Probably not. Again the argument is about how hard or easy it is to deny them. I don't doubt there are exceptions.
Quoting Count Timothy von Icarus
I think I see the problem. By 'a logical argument' I mean an argument in logic, of the form I presented. I don't mean an argument about facts of the world which happens to be 'logical' (meaning logically valid). I'm saying that it is possible to present an argument in logic (mathematics might have been a clearer example, on reflection), where one is persuaded to accept the conclusion merely because of the rules of logic one is committed to. The 'entailment' is not the same as 'the conclusion'. The 'entailment' is those other commitments that come along with accepting or denying a theorem. If I accept 1+1=2 (whatever argument demonstrates it) I am thus committed to also accept that 2+2=4; the one entails the other (depending, of course, on how it is proven, but assume a fairly standard approach).
Does that clear anything up?
Very much so.
Quoting Srap Tasmaner
Understandable, but speaks very much to the comment you made about cost. What is meant is the important part and if we're to make a reasonable assessment of cost, then our charity is limited by our degree of trust. It has to be (the alternative being limitless charity, or perhaps random selection). So how can that trust be built here? In academic circles, it's simply qualification. I trust my colleagues not to be saying something completely not worth engaging with because they expended an awful lot of effort getting the qualifications they have. They're unlikely to waste even a private correspondence on saying something pointlessly dumb.[hide="*"]not all of them though![/hide]
Here, I don't think there's any alternative than interrogating intent. That can be a bit personal, but what's the alternative? One could quietly make judgements based on other posts, but that seems like a rather weaselly way out, relying on others to do your dirty work. One could simply be super charitable to all, but I think we both agree that's simply not feasible time-wise.
Hence worrying about whether a post is indeed a 'proper' response (cost of engagement), and thus whether 'what was said/meant' is, in fact being addressed or rather simply misused. It's super annoying, and probably not worth the effort unless you're also (as I am) very interested in the nature of the responses.
Heh. This thread is about clutter.
I agree with your remarks in spirit, the charming and damnable heterogeneity of it all, but I still think there is a thread (heh number two) of persuasion running through all the sorts of things we say.
Quine reports that Burt Dreben once told him that great philosophers don't argue -- the idea being that it's all about competing frameworks. Give people an approach they like better and it doesn't really matter whether the old one is still more or less tenable, it just becomes irrelevant.
This thread is still very much on my mind, so I'll probably come roaring back in another day or two.
D. H. Lawrence's first book of poems was called "Look! We Have Come Through!"
Robert Graves reviewed it, saying, "Perhaps you have, and a good thing too, but why should we look?"
That was roughly the mood in which I wrote the OP.
Cool. If there was an emoji of bated breath I'd be posting it.
(Who am I kidding, I've never posted an emoji in my life)
Quoting Srap Tasmaner
Good attitude!
[quote=Robert Graves, In Broken Images]He is quick, thinking in clear images;
I am slow, thinking in broken images.
He becomes dull, trusting to his clear images;
I become sharp, mistrusting my broken images.
Trusting his images, he assumes their relevance;
Mistrusting my images, I question their relevance.
Assuming their relevance, he assumes the fact;
Questioning their relevance, I question the fact.
When the fact fails him, he questions his senses;
When the fact fails me, I approve my senses.
He continues quick and dull in his clear images;
I continue slow and sharp in my broken images.
He in a new confusion of his understanding;
I in a new understanding of my confusion.[/quote]
That's perfect. Temperamentally I'm much closer to Graves, but, as he suggests, it's not always very satisfying.
And there's reason to doubt the capacity of analysis alone to get us to the understanding we want. Even though I don't think I can shake the habit of analysis, I'd like at least to supplement it with a thinking closer to image and myth. (Some of the philosophy that has left its mark on me is like this, Wittgenstein, Sellars, Heidegger, others.)
So while in one sense I started this thread as a protest against insufficiently analytical argumentation, my real motivation is more like overcoming analysis as a paradigm, or at least embedding it within something more varied and more flexible, but without giving up the rigor and precision of analysis, the things that make it useful and powerful.
I feel like it's a mix of tradition, appeal to authority, appealing to well known, canonical thinkers, and a desire to ground theories in some sort of foundation. The history gives a nice ready made structure for a literature review, but it also seems to do more. There is a sense in which all theories are arguments and the rejoinders become important.
I'll come back later but just briefly:
First, we might have to agree to disagree on arguments. In general, I think that, if you agree with the logic being employed, accept the inference rules, etc., if the argument is valid, and if the premises are all true, the argument should generally be persuasive. The general "type" of argument doesn't tend to make it more or less persuasive to me, and I guess that is the big difference here: the idea that some types are inherently less persuasive.
I won't hold that this is absolutely the case. Gödel's proof of God works off pretty innocuous axioms, but it doesn't tend to convince people for just one example.
I agree that history is on the "more difficult to verify and falsify," side of the spectrum. I do think many questions of scientific and philosophical history are actually closer to the middle of this spectrum than many of the questions in social sciences though.
I mean, there are all sorts of reasons why any firm raises prices. Core concepts in economics are about how complex systems work in the aggregate, and this makes falsification very difficult because we admit that exceptions do exist and that other factors can overwhelm one sort of relationship.
International relations is even more fraught. It is in many cases easier to make a plausible argument for the core reasons why a given war occurred, or at least rule out many explanations, than it is to elucidate a common principle by which wars tend to occur.
But I also think some degree of progress gets made despite these issues. However, in these areas it's more about assigning probabilities to explanations than establishing certainty.
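The "assigning probabilities to explanations" picture can be sketched as a toy Bayesian update. To be clear, the rival explanations, priors, and likelihoods below are entirely invented for illustration; the only point is that new evidence shifts relative credence among explanations without delivering certainty, and "ruling out" a narrative means driving its posterior toward zero.

```python
# Toy Bayesian update over rival historical explanations of one event.
# All numbers are invented for illustration.
priors = {"A": 0.5, "B": 0.3, "C": 0.2}       # initial credence in each explanation
likelihood = {"A": 0.8, "B": 0.3, "C": 0.1}   # P(newly found document | explanation)

# Bayes' rule: posterior is proportional to prior times likelihood.
unnorm = {h: priors[h] * likelihood[h] for h in priors}
total = sum(unnorm.values())
posterior = {h: v / total for h, v in unnorm.items()}

print(posterior)  # A gains probability; B and C shrink but are not ruled out outright
```

Repeating the update as further evidence comes in is what lets some narratives become "highly unlikely" while none is ever proven with the finality of a crashed rocket.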
But it just isn't. This whole site is clearly evidence of that. Scores (if not hundreds) of people failing to convince others of positions they believe have valid logic and true premises. So the interesting question is: why doesn't it work?
Quoting Count Timothy von Icarus
That's really interesting. How would you go about assigning probabilities? I assume (from your love of Clayton) that we're not talking about some 'how many times this kind of thing was right' frequentism. So how? What makes one theory more likely to be right than another, and why?
It doesn't work because, even if there could be a fact of the matter as to whether the premises of arguments not subject to empirical testing are true, in the absence of the possibility of such confirmation their truth remains a matter of opinion.
I don't buy into strictly verificationist epistemology because it's self-defeating and no one actually goes by those standards for most beliefs.
For example, presumably we can agree that "in 1986 the New York Mets won the World Series, and have sadly not won it since," even if we don't follow baseball. This isn't a testable claim; we can't go back to 1986, and, while Darryl Strawberry and Keith Hernandez were great, I doubt they still have championship baseball skills we can verify.
Now people did observe the games, but presumably plenty of us didn't, and we still believe we can verify the claim from records. Of course, we have videos, which people used to generally accept as a sort of gold standard of evidence (seeing is believing), but it's increasingly easy to fake that sort of thing convincingly (and it could be done before).
But obviously we don't doubt many facts based on records for claims that aren't able to be repeatedly tested. Which is just as well because, how do we know the results of most experiments? Records. Necessarily, most people don't have the time or resources to even begin replicating some substantial share of all experiments across the sciences.
I feel quite confident in some fairly distant historical claims. That Saint Augustine "had the intention to make Neoplatonic thought coherent with post-Nicene Christianity" seems plenty certain. He left more work behind than anyone else from antiquity, thousands of pages that pass textual analysis as to their authorship. He seems to be making earnest attempts at what the claim says he is doing (he certainly convinced a lot of people), and we have surviving transcripts of his comments at various councils, and other letters referencing him. It's hard to think of another candidate theory from abductive reasoning that explains all that writing. Maybe something more specific would be hard to verify, but this seems more sure than plenty of dubious findings in peer-reviewed journals based on observations it's impractical to replicate.
The other thing is that: "the best way to ensure true future beliefs is to subscribe to verificationism," isn't a claim that can be verified by verificationism.
Quoting Count Timothy von Icarus
None of history is directly testable, but there are documents we presume to have been based on empirical observations.
In any case the subject is not historical claims, but phenomenological, psychological or metaphysical claims or questions, with which we apparently cannot do more than frame them in different ways depending on the presuppositions we start with.
Quoting Count Timothy von Icarus
Are you claiming that unverified beliefs about empirical matters could be more likely to be true than those we have verified? Say I believe it is raining somewhere without checking the weather reports for that region, or, even if possible, going there to see for myself?
Or say I can hear something that sounds kind of like rain and then believe it is raining outside; would you claim that that belief could be as likely to be true as a belief based on having gone outside to look?
Ah, gotcha. Yeah, this is true.
Do you think the fizzling might be somewhat a consequence of excessive politesse on your part?
I suspect that as a psychology professor you have insight into the topic of the OP that you haven't brought up in the thread. (And I understand there may well be ethical standards for someone in your position, and abiding by such standards requires limiting what you say.)
Thoughts?
I suspect other members might have a very different impression of my tendency to politesse...
Quoting wonderer1
Yeah... I'm fully retired now, so I can say what I like really, and I tend to do so without too much restraint. But I've never had the sense that there's much interest. People don't like psychology as a rule. I think there's something immediately offensive about someone claiming to know how you think. I'm more keen to just learn how different people respond to interrogation; that's my wheelhouse really (one of them, anyway). How people defend and attack beliefs in a social context - the rules of engagement, the tactics, the impacts... that sort of thing. It's a rare thing that a thread addresses this directly as this one has, but really, there's more meat to be found on the ones that are talking about something else.
But also, @Srap Tasmaner has probably heard my 'insight' on these matters to the point of fatigue and I fear if I use the word 'narrative' one more time in any post I might well inspire physical damage.
That said, if you have a specific question, I'm happy to risk it, but fair warning, the answer will be about narratives and won't mention Freud once, unless in place of an expletive.
LOL
If you have any particularly edifying examples, I'd be interested in taking a look. (But maybe PM?) I have autistic standards of politesse myself, so to me you seem fairly circumspect.
Quoting Isaac
It does seem to be an acquired taste, and some psychologies make acquisition much less likely. Still, there are those times when you can lead someone to a more accurate understanding of their own nature and change the rest of their lives for the better.
Quoting Isaac
This has been a big interest of mine for a long time as well, albeit from a strictly amateur and eclectically educated perspective in my case.
Quoting Isaac
I think I know what you mean. People behave in more informative ways in other contexts.
Quoting Isaac
I'm really enjoying participating on TPF, and I've already received a warning for bringing up a psychological topic, so perhaps later in a different context.
These days, I'm attempting to sing a different tune..
To wit, here's what I've been thinking about -- unfinished, but it's time to post something.
Being in the habit of telling each other what we know, I tell you something I think I know -- about the mind or reality or some philosophical thing -- but instead of thanking me, you disagree. This is shocking and bewildering behavior on your part. (Surprise.)
If I do not understand your position at all, that's the worst case for me, because what kind of action (i.e., talking) can I engage in, in response? Anything is better than this, so my first step will be to substitute for your position a position I believe I understand and can respond to. (There's a cart before the horse here. Have to fix later.)
I want to bring your views into alignment with mine, and that's why I make arguments in favor of my belief. But I probably don't really know why I believe what I believe, so I'll have to come up with reasons, and I'll convince myself that if I heard these reasons I would be convinced. But really I have no idea, since I already believe what I'm trying to convince you of; it's almost impossible for me to judge how much support these reasons give my claim. Finding reasons for what I already believe presents almost no challenge at all.
This is all risky behavior though, because I've opened myself up to more disappointments: you might reject my reasons themselves, or you might reject that they provide support for my "conclusion" so styled, or deny that they provide "enough" support, whatever that is.
Denying the premises is really the least of my worries, because we're talking roughly about intuitions -- making this the fourth recent thread I've been in to use this word -- which I'm going to gloss here as beliefs I don't experience as needing justification. If you share my intuitions, we still have to fight about the support relation; if you don't, I can just keep daisy-chaining along until we find something we agree on. This is routine stuff, have to have common ground even to disagree let alone resolve such a disagreement.
But that still leaves the support relation. Not sure what to say about that. If you start from the idea that some people will just "get it", we're still talking intuitions; as you spell out more and more steps between what your audience accepts and what they don't, this is what logic looks like. The usual view, of course, is that "being logical" makes a connection a candidate for a step in the argument; the thing is, I think we spell things out only to the point where the audience agrees, which means something they accept without reasons -- and here we're talking precisely about the support relation that holds between one belief and another, and the sorts of things I come up with are just things that sound convincing to me as someone who already believes, which means my process for producing reasons is a kind of pretend.
It's entirely possible that logic is some kind of refinement of such behavior, a constraint placed on it, which is how we tend to think of logic, the guardrails of sound thinking. Dunno. One thing I think the description above gets wrong, now that I've written it out, is that the support relation really shouldn't be presented as another belief itself, but as a rule or habit for passing from one idea to the other. (I think empiricists and pragmatists would agree on that.) So the issue at each step I have to spell out is not whether you accept a proposed connection, but your behavior -- do you pass from antecedent to consequent as I predict or desire?
I have an atypical perspective on communication and strategies for communicating and I don't know how likely it is that I can convey much understanding of it, or that others will be able to make use of it. However, I'll give it a shot.
It seems that for me an aspect of being on the autism spectrum, is a lack of a model for 'the generic person'. This manifests as me tending to be very quiet IRL around people I don't know, because I tend not to see clear ways to express myself without some specific knowledge of the other person's way of looking at things.
I think an aspect of how I have learned to cope with autism is to be somewhat hyperattentive (in some regards) to what individuals say, and what that tells me about how that individual thinks about things, and (to some degree) what 'subconscious hooks' in their thinking I can make use of in conveying things to them. IOW, to have much ability to communicate fluently with someone I need to know something about how they specifically are likely to connect the dots.
For example, because of our exchanges in the past, in talking to you I can refer to Capablanca as making use of the subconscious/intuitive hooks of other expert chess players, to convey an understanding of a particular endgame, by setting up the relevant chess pieces in a particular way. An aspect of communication for me is a sort of planting of seeds in people's subconscious, such that an intuitive recognition might occur at some point. I see it as analogous to Capablanca setting up the chess position. If you had not written the things you did about Capablanca I'd guess that I wouldn't be writing this, because I would not know how to convey what it is I'm trying to convey to you.
Inasmuch as I'm talking about a communication strategy, I'll point out that it is often a long game strategy where I'm not expecting to have much impact on a person's thinking in the short term. In many cases I don't have much expectation of seeing results, because I'm relying on the other individual's life experience to fill in the 'intuitive dots' and perhaps result in an epiphany at a later date. I don't even expect people to recognize that I've set them up to have whatever epiphany they might have.
I've seen plenty of evidence for the effectiveness of this style of communicating in changing people's intuitions to some degree, though I'm not going to present the evidence because it would be too much like presenting psychological case studies of people I care about. Besides, if things work as I think they do, I think it likely that you will develop a recognition of how this style of communicating can be effective without anything additional from me.
Very interesting!
One of the things @Isaac and @Count Timothy von Icarus seem to have been arguing about for some reason circled around this "generic person" who is the target of the logically valid argument, the argument that any rational agent ought to accept.
One thing I was thinking about -- going back to that thread of yours -- was the difference between someone who shares your intuitions, so no argument is necessary, and someone who doesn't. My first thought was the thing about intuitions being tacit knowledge, and if that were the case, to explain something to someone who doesn't "get it" what you have to do is spell it out, you have to demonstrate some of the little steps you had skipped over. And that's very much the feel of doing things logically, clear little steps, everything implicit made explicit.
But of course that's wrong. Not everything is made explicit. Not everything can be made explicit. More importantly for this discussion, not everything needs to be made explicit; you only need to spell out as much as the other person needs to "get it". How much is spelled out, how much made explicit, is sort of negotiated.
At least that would be the plan, but when the plan fails, we point to the step-by-step-ness of our chitchat as if that's proof that we're right. And I'm saying the step-by-step-ness is an artifact of our negotiation process, not some standard of truth and justice. If I weren't talking to you, I'd hold the same beliefs without the step-by-step demonstration.
In short, yes, there's the generic person, the rational agent, like homo economicus, but we only pretend to craft our arguments to suit him, or we only invoke him when things go wrong. He represents an idea about what we do when we talk, but not even an ideal we try and fail to realize. --- I think this is one of those things everyone assumes is true (the way we use logic and respond to it) that if you could show them what that would really look like if we did it, they'd realize it's nothing like what we actually do.
Or I'm barking up the wrong tree. We'll see.
Very very insightful, and you are recognizing things that I only recognize as making a lot of intuitive sense, as a result of you having put things in your own words. It's so cool that your background knowledge allows me to communicate with you about such things, with such productive results.
I suspect I'll have more to say after I've had some time to reread and cogitate more on your response, but I wanted to say that much for now.
Feeling some poetry...
From The Prophet by Kahlil Gibran, On Teaching:
Nice. There are times when the obvious truth of this really hits you, and it's just as true that we learn an enormous amount from other people. Somehow.
Also, I think it turns out I'm re-inventing the approach of Mercier and Sperber in The Enigma of Reason. Just read the introduction and there were lines that could have been in my post, which is odd. Now I don't know if I should wait to read the book until I've worked out some more of this on my own. (Call it Gibson's Dilemma: there's a story that William Gibson bolted from a screening of Bladerunner because he was in the middle of writing Neuromancer and it was too close.)
Very chancy business, this life of the mind.
I just bought the book, after looking at the Amazon description. But I've had a lot of time to think about this sort of stuff on my own. I do see some wisdom in you waiting awhile to develop your own view a bit more, in order to be better able to critically evaluate the book.
I will say that what I read of the book's focus on human interaction matches up well with my view. In fact, in discussions of free will, I've often referred to myself as an interactive determinist. The interaction part is important. Anyway, I could go on about this at length, but I'm hopeful Mercier and Sperber will give me tools for communicating about it more effectively, so I'll hold off for now.
And I'm glad Gibson bolted from Bladerunner because I loved the uniqueness of his vision.
Alas, so have I. I remember -- this may have been 25 years ago -- arguing on the defunct ANALYTIC-L mailing list that producing reasons for your beliefs is (just) a practice of ours. I was very Wittgensteinian back then.
I was much impressed with David Lewis's Convention some years ago, his attempt to ground language in game theory, and after that I began to think of Wittgenstein as a man trying and failing to invent game theory. I wanted to do something similar for logic and reason -- modus ponens would fall out as a Pareto-dominant strategy, that kind of thing.
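For anyone who wants that toy picture made concrete, here's a tiny sketch of the Lewis-style setup (the payoffs are invented and the game is the simplest possible one; this is just the flavor of a convention as a Pareto-dominant coordination equilibrium, not Lewis's actual formalism):

```python
# Toy coordination game in the spirit of Lewis's Convention: two
# players pick a signalling convention; matching pays off, and one
# convention (A) is assumed slightly better for both. Payoffs are
# invented for illustration only.

PAYOFFS = {  # (row_choice, col_choice) -> (row_payoff, col_payoff)
    ("A", "A"): (3, 3),
    ("A", "B"): (0, 0),
    ("B", "A"): (0, 0),
    ("B", "B"): (2, 2),
}

def is_nash(row, col):
    # An equilibrium: neither player gains by unilaterally switching.
    r, c = PAYOFFS[(row, col)]
    best_row_deviation = max(PAYOFFS[(alt, col)][0] for alt in "AB")
    best_col_deviation = max(PAYOFFS[(row, alt)][1] for alt in "AB")
    return r >= best_row_deviation and c >= best_col_deviation

equilibria = [(r, c) for r in "AB" for c in "AB" if is_nash(r, c)]
# Both (A, A) and (B, B) are stable conventions, but (A, A)
# Pareto-dominates: everyone does better under it.
```

The point being that both conventions are stable once adopted, which is why "it's just a convention" doesn't make the rules arbitrary in practice.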
I wanted to provide a social explanation for reason, but leaving it more or less intact -- and this is the aporia that Lewis ran into, that he couldn't directly link up the convention account of language to the model-theoretic account he was also committed to.
So recently I've decided that if I have to give up the timeless truth of logic to get to a social grounding for reason, something consistent with psychology and naturalism, then I'll just have to give in to full-bore pragmatism, no more mysterious third realm for logic and good riddance.
I don't know what you had in mind regarding a social explanation for reason, but I do see there being a very strong social explanation for reason, in that logic is deeply tied in with our use of language. I speculate that logic becomes a matter of undeniable intuition as we are grasping the relationships between language about reality and reality itself. Because our intuitions about logic develop alogically when we are young, as recognition of patterns in how language relates to facts about the world, by the age we start thinking 'metalogically', those intuitions have the 'feel' of apriori knowledge.
That is very much a matter of intuitive speculation though, and not something I feel equipped to make an evidential case for. So please point out any holes that might seem obvious to you.
Roughly just an "arguing first" view -- that logic is not a handy tool waiting to be used, pre-existing our use of it when arguing, but that the rules of logic come out of what we do when we discuss and argue. That looks impossible because what other criteria could we have for whether I win the argument or you do, for whether my argument is better or yours? --- Still, I'm convinced (at the moment) it has to be done.
Quoting wonderer1
I'd be hesitant to put it that way. I don't quite want to just block anything that smacks of a representational view of language -- at the very least I'd want to know why we are so strongly inclined to think of language as representational.
Also, if you think of logic as kind of a distillation of language, something implicit in it, I think that's going to turn out to be wrong. As above, I think there's a strong impulse to think of language this way, as carrying along logic inside it as its necessary skeletal structure, but natural language is much more subtle, much more flexible, and also much wilder than logic. But I also don't think it's simply a mistake; something about the way language works, or the place of it in our lives, almost demands that we misrepresent it, so to speak.
But now I'm speculating about your speculation...
Absolutely. The thing about psychological theories is that everyone has them; you have to, otherwise your strategies when interacting with others are random. We don't just throw darts blindfolded when deciding how to respond, we have a theory about what our actions/speech is going to do, how it's going to work. That's a psychological theory.
People mistakenly disparage psychology for merely attempting such modelling, but we all do it by necessity. Psychology is simply trying to develop other methods of doing so, something supplementary to just passively taking in the experiences in your small social circles and then 'having a reckon' about it. If psychology fails, it is its methodology that's at fault, not its objectives. So in order to be useful (unlike many other sciences), psychology only has to be better than guesswork, because there isn't an 'I just won't have a psychological theory then' option.
After I left academic research, I worked for a risk management company, and that was my exact job pitch - it's better than guesswork. That's enough, apparently, to be worth a consultancy fee.
Quoting wonderer1
Ahh. The mods are kittens really. I'd have about two thirds of the membership banned within the first day of having such powers invested in me - but yes, have a care. Sneak it in to a thread about something else...!
Very nice. I'll respond simply by highlighting the differences and similarities with my own thinking...
Quoting Srap Tasmaner
I'm with you regarding the habits, but I think the expected behaviour is more agreement than thanks. I think what we're looking for is confirmation that we've got it right. That can take the form of an agreement, but also the form of a passive student (after all, if they accept what we say, they've agreed we're right). Minor difference (but you'll see there aren't any major ones). Then, yes, either way - "you don't agree! Now how am I going to predict your responses?"
Quoting Srap Tasmaner
Yes. Not only easing the discomfort, but this is also the most profitable policy for reducing surprise. If actual agreement (top priority) doesn't reduce surprise, then we can at least fall back on predictable narratives about conflict. If we see our 'opponent' as 'one of those types' and substitute a set of beliefs we think we know the causes of (erroneous ones), we can settle in to a little vignette which we know the script for. Recognise any of this from this very thread? "Someone mentioned history in a vaguely negative sense! I know the argument in favour of history, I'll substitute that for an argument against whatever this lunatic is actually saying".
Glance through the major conflict threads, you'll see hundreds of examples. The counter-argument isn't against the actual argument given, it's against the script that the interlocutor ought be following, given that they're 'one of those'.
Quoting Srap Tasmaner
This is brilliant. If I could have explained it that well I would have saved myself a lot of trouble. The underlined is the part we deal with here. Of course your reasoning seems convincing to you, it already convinced you. It's what motivates the majority of the posts which begin with "Obviously..." It's such a strange beginning to a post, yet so common (I'm sure I've done it - this isn't exculpating). By the very nature of the activity you're engaged in, it 'obviously' isn't obvious, but here it is proclaiming that what you're expending all this effort demonstrating to (presumably) an epistemic peer, is, in fact, obvious.
I'd say more about this section because it's very much in line with my thinking, but unfortunately that limits rather than extends what there is to say. Just 'yes'.
Quoting Srap Tasmaner
Possibly. But that 'daisy-chaining' isn't at all risk free either. I think denying premises can become a serious worry when the denial is unexpected. There are premises which we hold but expect people to genuinely hold the opposite of (like economic theory, what exactly happened last Tuesday, who's to blame for the 'state of the world', etc.), and there are premises we don't expect people to genuinely hold the opposite of, so it's easier to simply assume disingenuousness (moral sentiment, aesthetic judgement...). Coming back to what you said earlier, these areas are, not by coincidence, the same areas where we don't have a good set of reasons for why we believe what we do. As such, we lack a script for the persuasion game.
Quoting Srap Tasmaner
Yep. It sounds like you've reached the same point I have. What makes a support relation convincing between two beliefs we already held (but presumably didn't have the support relation for before hearing the argument)?
So currently (work very much in progress) I have it turned on its head. It's not that I'm looking for the support relation that my audience will accept as leading to the conclusion. It's that I'm selling the whole package of support relations as a whole (the more the better). So we might already agree that all support relations are just that, but that this whole package is less messy than that, or has fewer surprises (uncertainty), or whatever - depending on our rhetoric.
So if we come unstuck at step 7. it's not that there's disagreement about whether step 7 is a supporting step, it's over whether step 7 fits with step 6, 5, 4, etc. Does it make a good story? I think of it like characters in a book, the author is saying "and then he thought this, and then he thought that, and then he thought..." and you (if you dispute it) are thinking "hang on, he thought this other thing two pages ago, this guy just isn't very realistic..."
Quoting Srap Tasmaner
I'm responding paragraph by paragraph (a bad habit of mine). I see you're pretty much saying what I've just said already - at least that's how I've interpreted it. I'd add that habits are heterogeneous, I don't think there's a single set, just an 'acceptable' set. Broadly, we're looking for predictability, adherence to one of the known sets, not going 'off script'.
That's a piece I was missing.
Quoting Isaac
Interesting. One thing I forgot about is kettle logic. [hide="(For people who don't happen to know this one.)"](Freud's analogy for the 'logic' of dreams. Comes from a joke about a neighbor returning a borrowed kettle with a hole in it: he defends himself by saying, (a) it had a hole in it when I borrowed it, (b) it doesn't have a hole in it, (c) I never borrowed a kettle from you. --- Kettle logic is actually enshrined in our legal system; briefs will often present mutually inconsistent arguments for the same result and they don't care which one the court accepts.)[/hide]
There's still something a little off though.
If I make some claim, I might expect you to agree. (Remember our "same as me" discussion, my weird insistence that this would be the cheapest and fastest way to model you?) But suppose you don't. I said that presenting some reasons is an attempt to bring your views into alignment with mine, but that feels both obviously true and a little weak. If I now know that you disbelieve P, I should be able to model you just fine, so that's not the whole story. (Keep reading, progress below.)
When you disagree, there is also the surprise that I've been modeling you wrong, and it feels like one of our first responses is to get a quick sitrep on that failure -- to assess just how much damage this response does to my model of you, to figure out how wrong I was. This you can definitely see on the forum: people go from noting your disagreement right to "You mean you don't think you're conscious? You can't smell the sunrise and see the flowers??" That incredulity is a siren going off at the model-of-you desk. Oh, and we need to make sure the failure is confined to you, that I haven't been getting all kinds of stuff wrong.
But once things settle down again, likely through the emergency deployment of narrative, why do I try to change your views? That could actually be the same as what was going on above -- an attempt to determine whether I've gotten more than you wrong. Are you in fact right? Do I need to update to ~P? So I request a report from the modeling team -- why is P in the model anyway? (It's fun writing as the clueless executive. I literally don't know why P is in the model! There are some nerds somewhere who take care of that stuff...) The modeling team -- working on a deadline -- throws something together and sends it up and I show that to you. "This is what the boys down in modeling say about P, and it sounds pretty good to me." That will look like an argument, and if I didn't have you around, but were only entertaining a doubt of my own, that might be that. But now my trust in the modeling department has weakened, so by showing you their report, I'm also checking up on them, testing them. "Look, you seem to know something about this P business. Here's what my boys are telling me. Is this any good? Did they miss the boat here?"
Around here (TPF) it's almost a certainty that your answer will be "This report is crap. Your modeling team got this one wrong." But by saying this, you've now disagreed with more of my model, and even though my confidence may have been shaken, I don't just reset to impartial open-mindedness; I may have fallen from 95 to 93.8, that's all, so your responses are still being discounted by default as overwhelmingly likely to be wrong. By disagreeing with my Official Reasons, you're just pigeonholing yourself as an anomaly for me, making the case that my model only failed to recognize how perverse you are, while getting almost everything else right.
Through these first few exchanges, there's been no sign of the need to bring your views into alignment with mine, only a brief flirtation with bringing mine into alignment with yours. --- Actually some of the initial incredulity-driven tests might amount to "Surely you misspoke," so there's that.
There might be something else going on here though. When I recognize that you had a genuinely different view of what I assume is the same body of evidence, that piques the curiosity of the modeling team. "How did he come up with that?" There might be a bad algorithm there worth knowing about and avoiding, or there might be an interesting inference technique there we didn't know about, and even if it doesn't change our view in this case we're always on the lookout for new inference tech. So there's going to be a strong need to know why you had a thought that I didn't. Oh, and of course this plays directly into my need to model you better! My model of you was inaccurate; I need to update it with a model of the crappy inference algorithm you're using, in case I talk to you again.
Still no sign of needing to change your mind though, even though it looks like that's what arguments are for. The only thing I can think of is some hand-wavy thing about cooperation in the general project of all of us staying alive. I might (will! do!) prefer not to have to maintain a desk just to keep track of your screwy views and it would be easier and cheaper to bring you back into line with "practically everyone". --- Or, at least, assign you to one of the narrative departments. I just don't have the manpower to track every rando's views individually.
That's actually not bad, and less hand-wavy than I thought.
It's very good. I love the way you are off and running with this.
I do plan on responding to your earlier posts in a more in-depth way. However, I spent all day yesterday on the forum, and my 17-year-old son and I are going on a two-week tent camping road trip to Yellowstone next Saturday, and I have a lot of organizing and packing to do. (I don't suppose you live somewhere between Indiana and Wyoming? It would be fantastic to be able to talk to you in person. Although I do like forum style communication a lot, because it allows for input from a wide variety of perspectives.)
Anyway, I'm going to have to limit my forum time for a while, and clearly you are well set up to do a lot of very productive thinking without further input from me.
I've been there! Just in a touristy way, not camping. It is beautiful.
My favorite Yellowstone story. Driving along with the in-laws, and we see some cars pulled over, which is a sign there's something to see, so we pull over. There's a bunch of tourists standing around in a little picnic area and a couple of park rangers standing over by some trees talking to them, because behind the rangers out in the meadow is a grizzly bear. So this dude is standing with his back to a tree, and the meadow, answering questions about the bears and being educational. The other ranger is off to one side where he can see everyone and also glance over toward the bear. "Uh, Bill," and a nod toward the meadow, where it turns out the grizzly has covered some ground since the talker last looked. He turned to glance over his shoulder and visibly jumped! "Okay, everyone, you all need to move back now, that's it, move on back now, DON'T GO IN THE WOODS!" Just ever so slightly lost his cool as this grizzly ambled toward us, it was awesome.
That is awesome.
I just got the bear spray I ordered last Friday. I'm hoping we get to see grizzlies in the wild. The Lamar Valley in Yellowstone has been called the Serengeti of North America. Spending at least one day there watching wildlife is part of the plan. Yellowstone (and Grand Teton NP) is so huge that I suspect two weeks isn't going to seem like enough time.
Joke I heard while I was there about the little bells people wear especially on particular trails: What's the difference between grizzly scat and, say, black bear scat? Well, they look almost identical, except the grizzly scat has these little bells in it.
I have visuo-spatial strengths, and often I have the experience of thinking, "How can you not see that?", because it is difficult for me to imagine what it is like to lack the visuo-spatial abilities I take for granted. On the other hand I have weaknesses in processing speed, and would be a horrible umpire, with people yelling at me, "How can you not see that?"
So an aspect of communicating skillfully, for me, is to develop some sense of where an individual's cognitive strengths and weaknesses lie. With at least some sense of those, I can try to capitalize on the strengths of the individual and work around the weaknesses to improve communication. Perhaps we all do this subconsciously to some extent, and pragmatically we don't tend to have much option other than to go with our intuitions on such matters. I just wanted to point out this complicating factor, because I'm a complicator and that's what I do. :wink:
Perfect! For me. Just moments ago I realized I meant to say something about Whitman over in the thread where I'm pissing on the law of non-contradiction:
And what you posted is almost exactly what I wanted to say. Ask me a question and I'll be responsible for the answer I give, as a person, as a moral agent by society's reckoning, but that doesn't mean it was "I" who answered. Some cognitive lieutenant piped up and said, "I've got this one, boss." We have many many specialty departments, and one of them produces the answer I (the person) give.
This is perfectly clear in some linguistics research. You can identify a race between concurrent processes -- maybe one applying the "-ed" rule and one looking up the irregular preterite -- and whoever gets there first wins. Availability bias is obviously like this too, and suggests multiple sources of the answers we give, the things we say.
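If it helps to picture the "race", here's a toy sketch (the lexicon, timings, and noise are all made up; real dual-route models of the past tense are far more sophisticated):

```python
import random

# Toy dual-route model of past-tense production: an irregular lookup
# and a regular "-ed" rule run as a race, and whichever finishes
# first wins. The lexicon and timing parameters are illustrative,
# not taken from any real psycholinguistic data.

IRREGULARS = {"go": "went", "run": "ran", "think": "thought"}

def past_tense(verb, rng=random.Random(0)):
    # Each route gets a simulated completion time; lookup is usually
    # faster for stored irregulars, but never finishes for verbs
    # that aren't stored.
    lookup_time = rng.gauss(100, 30) if verb in IRREGULARS else float("inf")
    rule_time = rng.gauss(150, 30)
    if lookup_time < rule_time:
        return IRREGULARS[verb]
    return verb + "ed"
```

With noise in the timings, the rule occasionally beats the lookup, which is one story about where overregularizations like "runned" come from.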
I've complained about it, recently, with respect to my own posting habits, when I notice that I'm giving a type of answer out of habit, even if it's no longer representative of my thinking. We say stuff, and sometimes the stuff we say strikes even us as someone else talking with our voice.
Yeah, just did some pissing of my own. I need to throw my Kindle in the toilet now, and try to break my TPF addiction.
Have a great trip! We'll be here when you get back.
I feared as much...
Quoting Srap Tasmaner
Yes. So we can come at this from an information-first perspective and say that I'm using you (or vice versa) to update my beliefs about some external state, say a simple situation in which you've witnessed an event I didn't see - let's use last night's match as an example (what was Wenger thinking sending Walcott on that early? - no, let's not go there again). So I have virtually zero certainty about what happened, you tell me, and I update my beliefs. So far so simple, but if I have some certainty, I'm still going to use the same policy (maximise my model evidence), only this time I have a more complete model. The consequence of model evidence maximisation policies is that they tend to be confirmatory. Take perception as a simpler example. If I think what I'm seeing is a table, I'm not going to scan the whole scene like a dot-matrix printer, I'm going to go straight to where the legs should be, confirm there's four, and retire ("yep, table - called it"). So transferring to communication (still in enquiry mode for now), I'm going to use you to update my model, but only under my model evidence maximisation policy for whatever I already slightly believe. That means I'm interrogating those bits of your belief that will confirm mine.
So, my checking is directed, I'm not offering up all my model, nor am I interested in all of your model. I'm only interested in that specific bit of your model that might most efficiently confirm (or possibly rule out) my model.
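A toy version of that directed checking, just to make the policy concrete (the prior, likelihoods, and "scene" are all invented; this is the flavor of the idea, not a real perception model):

```python
# Directed confirmatory checking: given a prior hypothesis ("table"),
# sample only the observations expected to bear on it, rather than
# scanning the whole scene. All numbers are made up for illustration.

def posterior_table(prior, observations):
    # Simple Bayes update for "table" vs "not table", assuming
    # conditionally independent leg observations.
    p_leg_given_table = 0.9   # chance of a leg where one is expected
    p_leg_given_other = 0.2   # chance of a leg-like patch otherwise
    p = prior
    for saw_leg in observations:
        like_t = p_leg_given_table if saw_leg else 1 - p_leg_given_table
        like_o = p_leg_given_other if saw_leg else 1 - p_leg_given_other
        p = p * like_t / (p * like_t + (1 - p) * like_o)
    return p

scene = {"leg_spot_1": True, "leg_spot_2": True,
         "leg_spot_3": True, "leg_spot_4": True,
         "elsewhere_1": False, "elsewhere_2": False}

# Directed policy: query only the four spots where legs should be,
# ignoring everything else in the scene.
queries = [scene[k] for k in ("leg_spot_1", "leg_spot_2",
                              "leg_spot_3", "leg_spot_4")]
confidence = posterior_table(0.7, queries)
```

Four quick confirmations at the expected spots and the "table" hypothesis is effectively settled, without ever looking at the rest of the scene.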
So when you say...
Quoting Srap Tasmaner
... I think this is right. Basically, my first pass is going to be so honed in on model evidence maximisation that it's almost more a test of how useful a source you are, a check to see if you're going to have the confirmatory evidence I need. When you give something completely unexpected in response, you're no longer a useful source for model evidence maximisation. The policy has to change. So...
Quoting Srap Tasmaner
... now it's about you the external state, not you the source. Coming back to perception, I've checked where the legs should be, disaster! (for the model), not only do I find no legs, but no ~legs either. Nothing. It's dark. My saccade policy has failed. Now it's about the external state 'darkness'. What's going on and what can I do to remedy this failure of my model optimisation?
I think here is where most interaction sits in philosophy-type conversations (politics, social arrangements, etc. too). The cost of using you as a source is so high, since our respective models are so misaligned, that my best surprise minimisation strategy might well be to fix you. I need you thinking like me so that I can use you as a future source. You're no use to me constantly surprising me with completely left-field models, and I don't doubt my own models so much that I'm going to 86 them and insert yours wholesale.
And I think this is where habits of thinking come in. The effect of this practice over time in communities is to hone in a basic set of thought habits that at least keep us all vaguely useful to each other as model evidence maximisation sources.
Or in your own words...
Quoting Srap Tasmaner
I'm not leaving until Saturday, and now I'm done with trip prep for the day. So perhaps I'll get somewhat caught up with responding to you before I go.
Exactly. Furthermore, it seems worth pointing out that everyone has one, but some are based on looking into the evidence and some aren't. (Not that I need to tell you that.)
Quoting Isaac
"If psychology fails" seems to me a question that would come out of an excessively dichotomous way of looking at psychology. I'd think a more relevant question is whether there is progress in psychology. From my perspective it's pretty undeniable that progress is ongoing in psychology. We are fantastically complex creatures though, so of course progress takes time.
Yes, that's sort of what I was trying to get at. No matter how bad our methodology is, it can't be abandoned, unlike, say, the study of black holes prior to the equipment needed to measure them. In the latter case we could say, "oh well, we'll never know" and go about our daily lives ignoring the issue. In the case of psychology, that's not an option: we have psychological theories, we act on them every day, our political policies are built on them. Even a modicum of slightly scientific analysis is better than none. A 1% replication rate is 1% more certainty than we had before.
Quoting wonderer1
Yes. And ever changing. If psychology is affected by culture (and I'm certain it is) then what was true yesterday in psychology might not be true today. We're playing catch up.
Should we consider philosophical questions largely in isolation or should we be thinking in terms of a larger picture, e.g. "where is human thought coming from and where is it going?" Are there patterns in philosophical thought such that we can see where we might be headed from where we have been?
Is the "Great Conversation," the canon of philosophy, simply a collection of influential works that happen to cite one another, or is it an example of an unfolding dialectical process? Do we continue to study the works of Plato, Kant, etc. because there is something truly great about the primary sources? Or do we keep going back to reshape their work for our times, in a way "reshaping," the history itself? If the latter, is there any discernible pattern to how this is done?
Is "philosophy... [its] own time apprehended in thoughts?"
For skeptics of the speculative attitude, I'll just throw this quote out there on speculative history more generally and add a bit more below:
From M.C. Lemon's Philosophy of History
Additionally, if we believe science tells us true things about the world then presumably we believe that at least one human project does undergo progress. To be sure, we don't think it always gets things right, but we also tend to think that a biology textbook from 2020 should be much closer to the truth than one from 1960 and that one from 1960 gets more right than one from 1860.
If we can make progress here, such that human beliefs hew closer to the truth over time, why not in other areas? Why not in some or all areas of philosophy?
Humans are goal driven and can accomplish their goals. Indeed, a big trend now is to ground the emergence of meaning in an "essentially meaningless," physical reality in the goal oriented nature of life itself. Groups of humans also accomplish intergenerational goals, e.g., the great cathedrals rose over several generations. Whole nations have at times been successfully mobilized to accomplish some goals, e.g., the standardization of Italian and German out of several dialects in the 19th century, or the revival of Hebrew as a spoken language after 2,000+ years. This being the case, what stops us from recognizing a broader sort of global "progress," or a more narrow sort of progress in some areas of philosophy?
If the reason progress is impossible is because "progress" can't be statically defined long term, is there any pattern to how we redefine "progress" over time?
I don't want to derail the thread, if we want to have a thread on the philosophy of history we can, these questions are more rhetorical. Obviously the relevance of history changes depending on how you answer them. There is an argument to be made that focusing on arguments in isolation is akin to putting all your effort into finding out the best way to walk and making the most accurate maps, while completely ignoring the question of where you are walking from or to and why.
Just as an example, the cooption of Peirce by the logical positivists is relevant re: questions on ontology writ large if we see logical positivism largely as a reaction against the influence of Hegel. The move wasn't entirely reactionary though, it doesn't go back to mechanism, but instead moved to an empiricism so radical that I honestly find it closer to idealism than today's popular mechanistic accounts of physicalism. In this, and many other ways, it is more a synthesis of Hegel with other contradictory strands in philosophy. Hegel was sublated and subsumed within the new "logical positivism," and this helps us see why logical positivism was born containing the seeds of its own destruction. A set of implicit contradictions was there from the beginning, just waiting for folks like Quine to make manifest.
If ideas and theories don't simply "evolve," due to natural selection towards truth (i.e. Fitness vs Truth Theorem), but rather advance through a dialectical model, then history is a good deal more "active," in how thought develops in all contexts. Saying these turns are "necessary," might be a bridge too far, but they also aren't as contingent as in a "natural selection-like" theory of how knowledge progresses. Something like an attractor in systems theory might be a better way to conceive of the dialectical advance, maybe blended with the idea of adjoint modalities and the way a proof of one object serves as a proof of others (a key intuition in the development process of category theory from what I understand).
Obviously the above example would need a lot more fleshing out than I want to put into it to be compelling and I certainly don't want to presuppose Wayfarer was thinking anything like this. Not all speculative history need be quite so Hegelian.
That is exactly the sort of position I was hoping someone would advocate -- but for some reason even you hedge here and don't advocate it -- and why I didn't feel comfortable just branding @Wayfarer's lectures on history non-sequiturs.
Quoting Count Timothy von Icarus
And this is just obviously right.
Here's two points, one from the thread, and one kind of its background:
(1) Lots of people say history has pedagogical value, that you can understand ideas better if you know their history, what they were responding to (as in the second quote), the whole context, even what came after in response.
(2) Some people hear "the Enlightenment" and think, "Greatest wrong turn in history, still sorting out the mess it made," and some people think "Finally! That's when we got on the right path, the only trouble is staying on it."
I think one of the issues @Isaac was raising is that (2) exerts a considerable influence over how you enact (1). Are you going to put the Enlightenment into a story in which it's the good guy, disrupting Bad Old Tradition (especially religion), or the bad guy, depersonalizing nature, atomizing everything, destroying the tried and true holistic understanding of things (and banishing God to fairy land)? @Isaac's suggestion is, I believe, that there is no 'objective' context to recover to understand the Enlightenment; however you describe that context, before and after, is going to be shaped and colored by the story you're telling about it.
And that's likely just true, but may leave some room for comparing stories, judging them more or less comprehensive, more or less true to the (cherry-picked) facts, just the usual stuff. I mean, of course we do that. But the calculus changes here if you recognize that all you have the option of doing is comparing stories (and what they present as evidence for themselves) to each other; it's obvious with history, but true everywhere, that you don't have the option of judging a story by comparing it to what it's about, 'reality' or 'what really happened'. Comparing stories to each other might give some hope of 'triangulating' the truth, until you remember that this triangulating process is also going to be shaped and colored by narrative commitments, just like the material we're trying to judge.
Thanks for bringing us back to the topic. More interesting points in there than I've responded to.
Great point! Unfortunately, in school and even at university, science and especially math isn't taught like this: not only how the mathematician or scientist came to the conclusion, but how the scientific community accepted the result. There simply isn't the time. Hence you are taught the theory, the proof, the conclusions. And that's it, then forward. Not much, if anything, on how it was done, what the objections were, possible earlier errors, etc.
Knowing the history, the older ideas, the now-ancient technology and methods used makes it all far clearer. It's not just that you learn by heart to use the science/math like an algorithm to answer a certain question.
Good point about the moving target. Furthermore, I'd think propagation of psychological understanding itself contributes to the target moving.
My reticence isn't so much due to the fact that I find speculative history to be wrong-headed or hopeless, but rather that it's almost impossible to advance such a theory in a convincing and well-supported manner while also keeping the argument short. It's also the type of thing where coming up with criticisms is much easier than convincing positive arguments, and I wouldn't want to get side tracked defending the particular merits of some one theory when the question is more about the merits of speculative history and how it interacts with philosophy in general.
The "we went off on the wrong track here," type arguments aren't necessarily without their merits, but these tend to be arguments about how there is some "truth" or "ideal" out there and how we can discover/actualize it, rather than being an attempt to describe progress as a whole.
This is certainly true to some degree. There is no one objective frame in the first place, because different people have different opinions within their own eras, and oftentimes these are diametrically opposed. The same people also view the same events differently at different times. Nor are trends ever absolute; every period of "romanticism" has its rationalists, every period of "rationalism" has its romantics.
That said, I don't think this leaves us unable to analyze intellectual history at all. We can observe that Renaissance thinkers "rediscovered" classical culture in an important way. We can spot major swings in US culture when comparing the 1950s and 1960s and be quite confident in describing real differences in trends. The problem is often one of degree; we can overemphasize some trends, etc. There are different reasons for turning to history, so how deeply we explore nuance is something that gets determined on a pragmatic basis.
Moreover, the problem only seems so intractable if we insist on seeing history in terms of agents and intent, a stage where individuals are running the show. "How can you privilege this or that voice? How can you be sure this description of events isn't self-serving?" These are certainly valid questions, but questions about individuals' original intent are only paramount if we think man is firmly in the driver's seat of history, that there is one proper unit of analysis in human affairs: the individual. I think this is a major mistake.*
There is plenty of work in the social sciences to suggest that institutions have goals that aren't continuous with those of the individuals that compose them, that organizations exhibit emergent forms of intelligence in problem solving, and that "group minds" are a useful way of understanding some emergent behaviors. From ant colonies, to lymphocytes, to neurons, we see patterns in how complex systems work, and these seem to apply to human social organizations. So, just as no one ant knows what the hive is doing, the same can be true for us vis-a-vis history.
This is what Hegel gets most unambiguously correct. He is at once an early progenitor of complexity studies and still on the bleeding edge in his willingness to follow it to its logical conclusions. What "a man" thinks doesn't drive history in the long run, but rather what "mankind" thinks. We are but accidents of a social "substance," i.e., "Spirit." His teleological claims about where Spirit is headed are less supportable.
Seen from above, the various threads of philosophy over the years look akin to the "parallel terraced scan" performed by lymphocytes as they dynamically explore a massive sample space in an attempt to solve problems. Some areas get explored more thoroughly than others, some lines of inquiry receive more resources at one time, and multiple lines work in parallel.
IMO, the problem of sorting out bias is not the central problem when considering history, although it is a real one. The larger problem is that we're in the role of a neuron having to explain what the entire brain is doing, or a fish being asked to explain the behavior of its school. This is why we have needed to build such a massive apparatus of data collection and analysis, and so many separate fields of inquiry in the social sciences. Our narratives are akin to neuronal action potentials or honey bee dances; they're the way individual components of the system talk to one another.
However, this doesn't doom civilization's attempts at self-knowledge any more than a human being's mind being the work of small components precludes us from having a sort of emergent, imperfect self-knowledge. Sure, a sole neuron is never going to understand the brain alone, but then the neuron doesn't work alone either. History writ large is communicated, and its information processed, by systems of people, not individual people in isolation. I think the correct analogy is a perceptual system, a mind mulling something over, not a map that gets pieced together by individuals.
But if this is true, then science and philosophy are also not a mapping process, but more akin to group cognition. In the big picture they are just another link in a great chain of systems whereby being encodes being, representing itself to itself. This chain continues to ever higher levels of emergence, from the most primitive genomes, to nervous systems, to language, to cultures, and upwards, with each system undergoing its own form of natural selection and evolution in a sort of fractal recurrence.
To what end? We can consider that the earliest life didn't "understand" the universe so that it could survive, but rather survived because it somehow encoded relevant information about the environment into its structure. In life, "knowledge" pre-dates goals (which only makes sense: you have to know something to have goals). But goals aren't irrelevant to survival in intelligent life; they just take time to emerge. We as individuals have goals, organizations have goals, but my guess is that we have yet to reach a point where the highest-order organizations we are part of can have goals.
And perhaps goals undergo their own sort of selection process?
Exactly. Sort of like how the visual cortex doesn't work with any of the original light waves that are the "subject" of sight, and the components of the auditory cortex don't have access to, or communicate with, sound waves. Narratives are the action potentials of history.
*(Interestingly it is also a mistake that human beings make almost universally vis-a-vis nature, both:
A. Early in the development of civilizations, i.e., animism is ubiquitous, seeming to occur across cultures until a civilization develops some form of philosophy that starts to look for abstract principles that determine how nature works. E.g., "the river floods or doesn't flood because it wants to, the rock falls because it wants to," etc.
B. Early in human development. Research shows that young children are far more likely to describe events (including those involving only inanimate objects) in terms of agency than adults.
This is an interesting parallel. Do people in a more advanced society need to retread the mental routes their ancestors have taken to reach the same developmental stages? Or maybe it is a coincidental similarity?)
Okay this is the perfect example.
What will you say about the difference between the 50s and the 60s? Let's say this comes up in a discussion here on the forum, and broad strokes are acceptable. You want to describe the difference, how will you do that? What words will be in your description?
There are to start with the two obviously opposing views, which I won't rehash in any detail. The 60s was either a time of liberation or of everything going to hell. But suppose you don't want to say either of those because you're doing philosophy, you're being scrupulous, you don't want to rely on an explicitly tendentious description of the 60s, so what will you say instead?
You might just state some facts, by "facts" here meaning statements about the 60s you assume will be for the most part uncontroversial. More young people read Marx than in previous generations, and more claimed to have read Marx. Young people in considerable numbers publicly protested many government policies relating to the war in Vietnam. Many people protested racialized laws and police practices especially in segregated Southern states and large cities throughout the country. Blah blah blah. We're going to aim for neutrality here.
My issue is not to nitpick over how neutral you can manage to be, but this: the more neutral you manage to be, the less likely it is that what you say has any direct connection to the larger argument you're making. That won't be true for all cases, obviously; if someone claims to prove that young people have never taken to the streets, that argument lands on a factual claim which can be refuted with a counterexample.
But the cases I was interested in look more like this:
A: The Industrial Revolution was a mistake.
B: Why do you say that?
A: The steam engine was invented in the 18th century and the first commercial use of Watt's improved design was in 1776 by an ironworks...
Etc etc etc.
By being scrupulously neutral in your description of history, you force the reader to 'connect the dots', to figure out what inferences you intend them to make. In a case like this it's obvious there's some connection, and depending on the rest of the paragraph many more connections might be implied or inferred, but none of that is actually stated. This is exactly the point I was addressing in the OP.
So to make the point you're making clear, in many cases, you'll have to give up on this scrupulous neutrality and give in to being at least a little tendentious. In a lot of cases. Where you only need facts to support your argument, no. But if you need something taken as something for it to hook into your argument or your claim explicitly, for it to be anything more than obiter dicta, then you're down in the trenches offering an interpretation.
Exactly. This is true with the sciences too. If we want to challenge the existing paradigm in any way we need to make more theoretical arguments. If we retreat to only looking at the well-verified, replicated empirical observations, we can only say certain types of (generally uninteresting) things.
You can't explain empirical results coherently without some degree of theory-ladenness in the first place. If you avoid making any theory-laden claims then at best you implicitly cede the role of explanation to the dominant theories, at worst you just have an incoherent list of observations.
But I also get why people simply make historical links between ideas without advancing a theory. I think the appeal here is due to how our cognition functions. When I read "Spinoza" or "Descartes," there is a whole rich web of interconnected concepts attached to those names. I don't have to unpack all those concepts to use them. Sometimes it's easier to do analysis on compressed data (e.g. it is easier to see that 10*10^900 > 10*10^899 in this format than by looking at the numbers written out in decimals). The names become a vehicle of tremendous data compression. Which is just to say that "x also thought y" may be able to do a lot of heavy lifting in tying together concepts, but only if people actually share the same reference points, which they often don't in philosophy because the field is too big.
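The arithmetic aside can be made concrete in a few lines of Python (a toy illustration of my own, not anything from the thread): comparing the "compressed" forms means comparing two small exponents, while comparing the "expanded" forms means materializing numbers hundreds of digits long.

```python
# Toy illustration: comparing compressed vs expanded representations.
# The specific numbers are just the ones from the example above.

a_exp, b_exp = 900, 899  # exponents in 10 * 10**900 vs 10 * 10**899

# Compressed comparison: a single comparison of two small integers.
compressed_result = a_exp > b_exp

# Expanded comparison: build the full numbers, then compare them.
a_full = 10 * 10**900
b_full = 10 * 10**899
expanded_result = a_full > b_full

# Same answer either way, but the expanded form is a 902-digit number.
assert compressed_result == expanded_result
print(len(str(a_full)))  # 902
```

The point carries over by analogy: a proper name like "Spinoza" plays the role of the exponent, letting you reason over a huge conceptual payload without unpacking it.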
IMO this works because the "parallel terraced scan," does indeed explain how the mind works in key respects. This is also why sentences in specialized sub-disciplines sometimes appear to be saying completely trivial things as an excuse to drop in names or arcane terms (granted, this also happens to paper over lack of substance or due to bad writing skills).
I also think people like historical narratives of how science, math, etc. develop because we are innately geared towards remembering people, conflicts between people, social interactions, etc. as a social species. Hence the desire to "tag" abstract ideas to some individual or group.
Yeah I'm not contesting any of that.
Let's put it another way: suppose you're making some argument and you have in mind a particular interpretation of the 60s that would support your claim; but instead of presenting that version, you present a scrupulously neutral presentation of the 60s at the point where your tendentious interpretation would hook into the larger argument you're making. The reader either gets what you're (not) getting at or they don't.
But what you've done is suppress your reason for referring to the 60s at all by moving to the scrupulously neutral version, and you've done this instead of just not reaching for the 60s in making your argument. You're trying to have your cake and eat it too, and violating Grice's maxims. It's not about whether the point you're making is persuasive or worth considering or 'legitimate' in some sense; it's the roundabout way of (not) making the point that is at issue.
I do the same thing you did, where you suggested that 'an argument could be made ...' I use that one. I also use 'Some might argue ...' I think that's acceptable when it's really and truly not my position but a position I want to talk about or I want someone else to talk about. (I tried it several times in this thread.) I also use that when I'm not sure what to say about it except that it's a position that occurred to me is possible.
But it might be a habit worth breaking, or at least it might be better just to directly say what those little phrases are standing in for. I think I'm happier when I just say things like "I think there are three options here..." and then lay them out. No confusion there about whether I'm advocating in a deniable way, etc.
Probably true in most cases, but there is a place for strategic ambiguity if it's done right. Heraclitus, Zen Koans, Biblical poetry, Hegel- ambiguity can allow a work to be more dynamic.
E.g., given his background in theology, which let him see how the Bible was interpreted and reinterpreted over millennia, his love of Heraclitus, and his expectation that his own words would be studied by future generations, I get the feeling that Hegel sometimes intentionally wrote like such an asshole.
Maybe philosophers writing in mystical poems will make a comeback some day... we could turn to the renewable energy source of Russell spinning in his grave.
A surprising answer! Good for you.
Nietzsche has that line, "Most philosophers are bad writers because they show you not only their thought, but the thinking of their thought."
He might have been bullshitting though.
I'll say this though: the style you're describing can be perfectly appropriate as a pedagogical tool, and it's one reason what sprang to (your) mind was wisdom traditions, where someone is definitely the master or teacher and someone else is the student.
That is not appropriate here, where we are all collaborators, and that's why you won't find this kind of thing in science either. What we do is a cooperative venture. There are no masters here to gnomically bring us to enlightenment.
BTW, I'd love to give you a clearly stated thesis I feel comfortable defending on this topic. I have been equivocal because I don't have a well-developed theory re: the history of ideas writ large. I don't mind defending speculative history, e.g. this post, but the topic of how philosophy progresses seems more difficult because lone individuals can have such a huge effect on the discipline.
However, one thing I would note is that the problem you mention here:
..seems like it stems from the common tendency to conflate "truth" and "objectivity." This is hardly surprising given the continued influence of logical positivism, which advocated the position that objectivity does get us closer to truth, and that complete objectivity becomes equivalent to truth at the limit.
I don't think this is true, though. Statements can be objective but also flat-out false, while it's also possible to give a biased account of a phenomenon that is true. We tend to think of truth as a binary, something is true or it is not (the law of the excluded middle), but objectivity is something we define by degree. IMO, your example is a good indication of how people tend to undermine themselves by seeking a standard of "objectivity" that isn't worth aspiring to. Sometimes trying to be more objective can actually drive us away from truth. The "view from nowhere" is a contradiction; one doesn't see without eyes and one doesn't understand without judgement.
So, I think you make a good general point about ways in which arguments can be poorly formulated, poorly written, but I think it also touches on a larger issue: there is a tendency to pursue objectivity at the expense of accurate representation. E.g., I think "The Twilight War" is in general a quite accurate, well-researched description of the contentious relationship between the US and Iran since the Iranian Revolution.* However, it is also quite biased; it largely looks at the relationship through the lens of how the US saw the conflict, using largely declassified/leaked US documents, interviews with members of the US government, etc. The book is also biased because it ignores the larger context in which the events it documents occurred. Due to its scope it can't explore the foreign relations of either nation as a whole, the Iran-Iraq War, etc. But would we be better served by a book that attempts to get closer to the truth by eschewing interviews and documents, or issues of "intent," and instead limiting itself to quantitative analyses of relevant metrics? Absolutely not. We're describing international relations, not a math problem.
(For the interested, "Guardians of the Revolution," and "The Shia Revival" are good English-language takes from the Iranian perspective).
The quotes below might be relevant as well:
For example, during the heyday of logical positivism, some textbooks on mechanics came out that proudly proclaimed that they lacked any diagrams. Everything would be explained in terms of equations, because equations, being more abstract, and allegedly less subject to being shaped by the human sensory system, were thus more objective (and so closer to a "true" representation). However, there is no obvious reason I can think of why a bunch of algebraic statements should be a "truer" representation of what the world is actually like than a diagram.
I guess this sort of gets at what @Isaac was saying before about some presentations being more convincing. Maybe mathematical arguments are more persuasive; this is certainly a reason why the social sciences and modern management lean into quantification and data collection so heavily. The question is, should they be? Might the process of turning complex social phenomena into a series of values in some SQL database actually get us further away from accurate representations?
I'm not a huge Clayton fan so much as I think his book addresses issues of major import for the sciences that had not previously been addressed in an accessible way.
In terms of things being "more likely to be true," I tend to think of this in the Bayesian/subjective probability sense, i.e., "what level of certainty can we put on each hypothesis," and "how are our hypotheses related, does evidence against one cause a cascade that makes us doubt other hypotheses."
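The "cascade" idea here can be sketched as a toy Bayesian update (all the hypotheses and numbers below are invented for illustration): evidence that is unlikely under one hypothesis drags its credence down, and because credences must sum to one, the rival hypotheses are pushed up in the same step.

```python
# Toy subjective-Bayesian update over three rival hypotheses.
# Priors and likelihoods are made-up numbers, purely for illustration.

priors = {"H1": 0.5, "H2": 0.3, "H3": 0.2}

# P(E | H): how likely the observed evidence E is under each hypothesis.
likelihoods = {"H1": 0.1, "H2": 0.6, "H3": 0.5}

# Bayes' rule: posterior is proportional to prior * likelihood, normalized.
unnorm = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnorm.values())
posteriors = {h: p / total for h, p in unnorm.items()}

# H1 drops from 0.50 to roughly 0.15, and its lost credence is
# redistributed to H2 and H3: doubt in one hypothesis cascades.
```

This is obviously a cartoon of real scientific reasoning, but it shows why "what level of certainty can we put on each hypothesis" and "how are our hypotheses related" are the same calculation seen from two sides.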
However, unlike Clayton, I don't think Bayesianism solves all the problems we're facing. These problems seem fairly intractable, to the extent that I've started to wonder if it would be worth simply advancing a sort of "virtue epistemology" for the sciences. Something Aristotelian, like "if you want to be a good scientist, these are the traits you should aspire to." Rather than keep searching for a foundation that doesn't exist, and building new foundationalisms on sand, we admit that the problem is open-ended and instead try to build a set of different foundations that we can use in a context-dependent manner.
I have no idea how you got that out of what I wrote or what you quoted. I was making very close to the opposite point, that you need to commit to an interpretative presentation for the history lesson to hook into a larger argument. Reciting only facts leaves out how those facts contribute to the argument and why what they contribute matters (unless that's clarified elsewhere, obviously). It turns reasons into non-sequiturs.
Here, for free, I'll give you another reason you might not express explicitly what makes particular facts relevant to the case you're making: they're not. This can play out a couple of ways, but the result is the same: the connection between the facts recited and the point you're making doesn't show up because there isn't one.
I was agreeing with you, sorry if that wasn't clear. My point was, people turn their arguments into lists of facts because of a widespread perception that objective = accurate. The type of bad argument you're describing is the result of forgetting why we want to be objective in the first place.
I was just adding that this is a bad thing to do not only because it isn't persuasive or clear, but also because objectivity itself gets you further away from accurate representation. More a "yes and," point.
Thank goodness! Yes that makes perfect sense, and I see now you were filling in a possible motivation.
I was deeply confused. Apologies if I misread you.
One simplistic way to put the divide is to say that analytics focus on problems, "the problem of identity," "the problem of free will," while continentals focus on proper names, "what does Heidegger think about identity," "how should Derrida's critique of Hegelian free will inform discussions today?" etc.
This is simplistic because continental philosophers do indeed focus on and specialize in problems, and analytics have started paying more attention to biography, but it does reflect a real distinction.
For the continental traditions, which see man as essentially limited, fixed within his historical context, it makes no sense to "talk only about the problem" or "focus on just the argument." To do this is to presuppose a level of objectivity that isn't possible, a view from nowhere which must always be beyond man's reach.
Thus, how the different camps' progenitors are claimed by each camp (attempts to turn the idealist pragmatists into "analytics," for instance) is then exactly the sort of thing that is of key importance.
Of course, for the analytic, this seems in danger of "devolving into literary critique," or worse, "being suspiciously French." :lol:
IDK, mostly stuff we already covered, but it was interesting how the history of the split is explained in terms of the (comic) British bipolar ambivalence about Europe (i.e., the fact that it seems like it's bound to demand that all citizens do their part to row it out into the Atlantic, or failing that, expand the Channel one of these days). This just sort of spread to the rest of the Anglophone world through common texts.
Yeah that was part of the motivation here. I was acknowledging my historical preference for Anglo-American philosophy, with its rejection of historical approaches, and inviting arguments from a more continental approach. It's right there in the OP.
There ended up being no clash of schools but it became clearer for me what norms of discussion, or perhaps reason, were being violated, so that's something.
Compare:
[quote=Richard J. Bernstein, Beyond Objectivism and Relativism: Science, Hermeneutics, and Praxis, 1983] Cartesian anxiety refers to the notion that, since René Descartes posited his influential form of body-mind dualism, Western civilization has suffered from a longing for ontological certainty, or feeling that scientific methods, and especially the study of the world as a thing separate from ourselves, should be able to lead us to a firm and unchanging knowledge of ourselves and the world around us. The term is named after Descartes because of his well-known emphasis on "mind" as different from "body", "self" as different from "other".[/quote]