Arguments From Underdetermination and the Realist Response
Introduction:
Broadly speaking, an argument from underdetermination is one that attempts to show that available evidence is insufficient to determine which of several competing theories is true. That is, many different theories might be able to explain the same evidence, hence any move to choose between theories must be underdetermined, i.e., not determined by the evidence. Within the class of such arguments, there are many that go a step further. These will often purport to show that for any body of evidence, there will always be an infinite number of different explanations that are consistent with that evidence.1
Since the start of the 20th century, these arguments have increasingly played a pivotal role in philosophy, often leading to radical conclusions. Yet even those who have resisted such conclusions have often found these arguments difficult to refute. Such arguments were not unknown to pre-modern thinkers, and yet they were generally considered fairly easy to dispatch. Hence, we are left with a bit of a mystery. Why did arguments from underdetermination begin to seem insurmountable?
In this post, we will try to answer this question. As we shall see, it is not that the ability of underdetermination to produce support for radical skepticism was lost on pre-modern thinkers. Rather, it was that different starting assumptions tended to render such arguments fairly innocuous. To illustrate this point, we shall explore a typical Thomistic response to arguments of this form.
Underdetermination and Skepticism:
Arguments from underdetermination have played a defining role in contemporary thought. There are, for example:
David Hume's argument against causal inferences and explanations, as well as his hugely influential Problem of Induction;
Ludwig Wittgenstein's rule-following argument, as well as Saul Kripke's influential reformulation of it;
W.V.O. Quine's argument for the inscrutability of reference;
Quine's holist arguments for the underdetermination of theories by evidence, as well as similar arguments for forms of theoretical underdetermination made by J.S. Mill and expounded upon by Pierre Duhem;
Thomas Kuhn's arguments about underdetermination at the level of scientific paradigms;
As well as many others, including Feyerabend's epistemological anarchism, Goodman's new riddle of induction, etc.
It would not be an overstatement to say that these arguments have been, in many ways, the defining feature of the Anglo-American, empiricist-analytic philosophical paradigm. It is also hard to overstate the radical nature of some of the conclusions that have been drawn from these arguments. Bertrand Russell, for instance, suggested that if Hume's Problem of Induction could not be resolved, then "there is no intellectual difference between sanity and insanity."2 Richard Rorty and others, drawing from Quine and Wittgenstein, argued that truth and knowledge themselves would have to be radically redefined. Truth, rather than being the adequacy of the intellect to being (the view held, in varying forms, for most of history), would instead have to be understood in terms of a new belief's coherence with our preexisting beliefs, or as merely a compliment we pay to beliefs we find good to hold. Similarly, these arguments have been used to argue against even modest forms of scientific realism, resulting in a radical anti-realism, which claims that scientific truth is merely a social label bestowed upon well-entrenched practices.3
Obviously, not all thinkers within this tradition have agreed with such radical proposals, yet they have often found it hard to counter them. Were a typical Enlightenment or Victorian-era philosopher transported to our time, I do not think it is a stretch to say they would find these skeptical conclusions shocking, a radical reversal of the faith in reason and empirical methods that dominated in their epoch. So too, we can well imagine that pre-modern scholastic thinkers would be astounded by shifts such as the attempt to eliminate causation from science (the very thing that, for them, defined science), or the various moves to radically redefine truth.
On this last move, the redefinition of truth, it is worth noting just how radical the paths charted in contemporary thought might seem. We could consider here how Quine's more radical successors took their epistemological holism to require a shift to a coherence-based, pragmatic model of truth, where truth is a function of how well a new belief fits in with our web of preexisting beliefs. Such changes, and many have been suggested, are of course rarely presented as rejections of truth, but instead as necessary redefinitions (this being true even of most proponents of deflationary theories of truth). Yet we could well imagine our time travelers objecting: "But this is just equivocation. Truth is thought's grasp of being. These redefinitions amount to denying truth, and then recommending some other thing to fill the gap. The equivocation just softens the blow. On your view, we are no longer capable of scientia (knowledge); we are left with a mere system of conceptual and linguistic manipulation." That is, the charge would be that this is really just radical skepticism and epistemic nihilism masquerading as a mere adaptation.
However, there is one group who might not be particularly surprised by this course of events: the original empiricists. After all, Sextus Empiricus, from whom the name "empiricist" is derived, was part of the explicitly skeptical Pyrrhonist school of ancient thought. A key idea within that school was that of equipollence, which exists when opposing arguments or theories appear to have equal strength or plausibility. In this way, it is quite similar to underdetermination. If we have attained equipollence, then we are not led to assert one conclusion over all others, and so we are not compelled towards any particular belief.
Rather than viewing equipollence as a sort of epistemic crisis (which was often how arguments from underdetermination were initially viewed in modern thought4), these skeptics strove to attain equipollence. The idea here is fairly straightforward. If we are not forced to any particular conclusions, then our passions will also not be raised by any particular conclusion. This allows us to achieve ataraxia, a state of serene calmness. By contrast, strong beliefs were thought to promote anxiety and distraction, the opposite of the state of dispassion sought by these thinkers. To be clear, the argument here is not that all arguments are equally strong, or that truth is, in principle, impossible to attain.5 It is rather that we can reason to the view that many arguments seem equally strong (which in turn does put the reliability of most truth claims in question).
It is useful to clarify here what made empiricism unique. It is not that it used sense observations to try to explain the world. If this alone made a philosophy empirical, then essentially all philosophy would be empiricist. Nor was it a commitment to the proto-scientific methods of the ancient world. Aristotle, labeled as a dogmatist by the ancient empiricists, is often considered to be the father of many sciences. Of course, modern empiricists do sometimes claim Aristotle as one of their own on account of this fact. Yet, if empiricism is construed in these terms, then the overwhelming majority of all thinkers would be "empiricists." Scholasticism and its modern variants would be empirical, as would Hegelian philosophy, etc.
Rather, what made the ancient empiricists unique, aside from seeking equipollence, was their means for doing so: rejecting causal and metaphysical explanations. That is, they stuck to descriptions of phenomena and observed patterns. The early empiricists were physicians who employed a practice of recording case studies in order to keep track of patterns. Rather than posit causes that would explain why some cases improved and some did not, or why some treatment was efficacious, they simply guided their practices based on the consistency of the patterns they observed. This allowed for conclusions like: treatment with this herb works consistently in these cases, but not explanations for why this was so. Prima facie, this is actually somewhat at odds with the scientific method as popularly conceived, since one could hardly develop hypotheses to test without violating the prohibition on causal theses.6
Hence, it is perhaps unsurprising that a later philosophical tradition modeled on these thinkers would eventually find itself mired in (or perhaps, set free by) skepticism. Indeed, this connection was clear to some in the tradition, such as Hume, although it certainly became obscured as "empirical" was increasingly used as a synonym for a scientifically informed philosophy and world-view (with science overwhelmingly being understood in realist terms in the 19th and early 20th centuries).7 For some, this skeptical shift, and the move towards anti-realism, has been a positive outcome. Somewhat akin to the ancient empiricists, these thinkers hold that the death of realism and certainty is an improvement, often because it is thought to free us by allowing us to think in new ways. Against this though, we might consider that, if realism is true, our ignorance of reality would itself represent a limit on freedom.
At any rate, for most, the skepticism resulting from underdetermination has been seen as a serious threat and challenge. Yet responses have often involved work-arounds that might themselves be seen as skeptical. Here, it is worth employing Saul Kripke's distinction between different responses to skeptical problems: straight solutions and skeptical solutions. The straight solution aims at showing that the skeptic is simply wrong, that their skepticism is unjustified. A skeptical solution, by contrast, allows that the skeptic has a point, but will maintain that our ordinary practice or belief is justified because, contrary appearances notwithstanding, "it need not require the justification the sceptic has shown to be untenable."8 For example, Kripke's own skeptical solution vis-à-vis linguistic meaning claims that language is not meaningful because of metaphysical facts, but only on account of shared behavioral regularities related to social norms and public agreement.9
Again, the difficulty here is that the solutions often seem quite skeptical, e.g., words never refer to things, there is never a fact about exactly what rules we are following, etc. Here, it is worth considering what one ought to do when faced with an argument that has an absurd conclusion. The first things to do are to check that the argument is valid and, crucially, that the premises are true. I would argue that contemporary thought, particularly analytic thought, has far too often done only the first. Because it holds many empiricist presuppositions beyond question (indeed, dogmatically might not be too strong a word), it has not generally questioned them.
Yet if an epistemology results in our having to affirm conclusions that seem prima facie absurd, and if, further, it seems to lead towards radical skepticism and epistemological nihilism, or an ever-branching fragmentation of disparate skeptical solutions and new anti-realisms, that might be a good indication that it is simply a bad epistemology. Indeed, an ability to at least secure our most bedrock beliefs might be considered a sort of minimal benchmark for a set of epistemic premises. Yet, due to the conflation of empiricism with the scientific method, as well as modern culture's preference for iconoclasm, novelty, and creativity, the starting assumptions that lead to these conclusions are rarely questioned. With that in mind, let us turn to the realist responses that, in prior epochs, made these arguments seem relatively insubstantial.10
The Realist Response:
A Thomistic or Aristotelian response to arguments from underdetermination would deny the assumptions that make such arguments plausible in the first place. It is only because of the presupposition that no metaphysically substantial premise can enter into our epistemology that such arguments are able to gain momentum. This, paired with the demand that all relevant evidence must be third-person and external, arguably makes explaining knowledge (a first-person experience) impossible from the outset. Indeed, it should be unsurprising that such limitations result in theories of truth that claim that truth has no metaphysical import; such a conclusion has been simply assumed as a premise!
These different assumptions show up in a key difference in how sense knowledge is understood. For the empiricist, perception is merely raw, inchoate sense data, from which patterns can be derived. For Aquinas and Aristotle, there is a real form in things that is received by the senses, and then abstracted and known by the intellect. The latter process explains the phenomenon of our understanding anything at all. From the realist perspective, we start from a world populated by trees, rabbits, etc. When Quine concludes that we can never tell which word in a foreign language refers to rabbits, because other stimuli always accompany the presence of a rabbit and could be referred to instead, he is involved in a sort of question-begging. He has assumed, from the outset, that there are no things with determinate natures for us to name. Yet if this is assumed, it can hardly be shocking that we are forced to conclude that words cannot refer to things, for we have already decided that there are no things to refer to.11
Here, the empiricist might object: "That's a fine thesis, but how can you prove that this form or actuality exists?" The key assumptions that underwrite these notions boil down to this:
1. Things do not happen for no reason at all. Things/events have causes. If something is contingent, if it has only potential existence, then some prior actuality must bring it into being. It will not simply snap into being of itself. Our experiences are contingent; thus, they must be caused by something that is prior to them.
2. Being is intelligible, and to be is to be intelligible. Every being is something in particular. That is, it has a form, an actuality, that determines what it is (as well as the potential to change, explained by matter). This actuality determines how a thing interacts with everything else, including our sense organs and our intellects. If this were not the case, interactions would be essentially uncaused, and then there would be no reason for them to be one way and not any other (i.e., random).
That is it. These are assumptions, but they do not seem to be particularly objectionable ones. Indeed, if they did not hold, if being were unintelligible and things did happen for no reason at all, we might suppose that philosophy and science are a lost cause. Of course, much disagreement here results from a misunderstanding of the nature of form, potentiality, etc. Sometimes, these are taken to be something like terms in a scientific theory, rather than a set of metaphysical principles. Yet a thing's form is simply that which makes a thing anything at all. The basic idea is this. We perceive a world around us. It is not a world of indistinct sense data, but a world full of things, particularly living things that continually act to sustain themselves, to keep being what they are (i.e., to maintain their form). Such perceptions must come from somewhere, since they cannot occur for no reason at all. They have causes. Whatever acts on our senses, and is understood by our intellects, has a prior actuality that activates our potential to perceive and understand.12
This is not a naive realism. Perception can be thought of as the interaction between the immediate environment and our sense organs. For instance, light waves carry the form of a tree to our eye.13 Perception is, in a sense, the experience of this interaction, but the interaction relates us directly to the objects known by the senses. Knowledge of trees, an understanding of what a tree is, comes from the presence of this form in our intellect after it has been abstracted from the senses.
A denial of the transmission of form would essentially be a denial of the idea that any prior actuality in the things we perceive and know makes it to our minds. Yet if this were the case, it hardly seems that perception could be of anything. If perception is not caused by the prior actuality of things, it would seemingly be uncaused. More importantly, it could not involve the appearance of anything; rather, any such appearances would simply be free-standing appearances that are of nothing in particular (in which case, the appearance/reality distinction seems to collapse, and it turns out that phenomena just are reality).
However, it is clear how confusion can arise here. We do experience error. We can, for instance, mistake a fake apple for a real one. However, crucially, this is because the fake apple has (at least partially) the form of a real apple. Likewise, even if we were to experience seeing an apple due to some sort of electromagnetic stimulation of our visual cortex, we would still experience the sight of an apple because something carrying the form of an apple (or something very much like it) is interacting with us. The point here is that perception emerges because we possess a determinate actuality (a nature) that interacts with other determinate natures. No perception occurs in a vacuum. Placing a healthy, experiencing human body in an absolute vacuum would result in death; consciousness requires a quite narrow range of environments. It is the entire system (object, environment, and person) that is required for sensation. Error can occur within our thoughts, but it will always be an error with, in principle, distinct causal origins.
The second key realist assertion is that the intellect is able to abstract this form. This abstraction is not merely a sort of pattern recognition. In abstraction, the form of what is known becomes present in the intellect; the intellect is, in a sense, identical with the thing known. This explains the act of understanding. This does not imply that we come to know everything about the actuality of the form. Indeed, we will never know everything about any sort of thing. As Aquinas famously put it: "all the efforts of the human intellect cannot exhaust the essence of a single fly." Nonetheless, we know what a fly is. We understand it. It is this phenomenological experience of understanding that is the key datum which epistemology is supposed to explain. Empiricism, by rendering this experience off-limits, is essentially sawing off the branch on which any epistemology must hang.
Form and its transmission explain two crucial things. First, they explain why we perceive anything at all, why we experience one thing and not another, and how we come to possess understanding. Second, they explain why we experience things that are outside of ourselves. Much more can be said here. In-depth accounts have been given of how these principles might work in terms of the mechanics of perception. However, this basic understanding is all that we need to critically weaken many famous arguments from underdetermination. It is not the case that anything at all (or nothing at all) can be responsible for our understanding. Understanding must flow from causes, from a prior actuality (form). If the cause of my understanding of apples is not apples, it must still be something that contains the form of an apple. Likewise, an understanding of number (dimensive quantity), of magnitude and multitude, presupposes some understanding of a measure, which must be defined in terms of distinguishable qualities (i.e., virtual quantity). To know three ducks or half a duck requires the measure "duck," and so it is for all quantities, although we can abstract quantity from substantial form.14 Our understanding of number, then, must come from a prior actuality that is abstracted from the senses.
To be sure, we might always be mistaken about the things which we only partially understand, or how and why different things interact. Nonetheless, there must be actualities that are responsible for our understanding and we seem to have absolutely no good reason to think these actualities are anything other than trees, ants, flowers, stars, etc. (i.e., all the things known by the senses and understood by the intellect). When we learn something new about these entities, e.g., when combustion theory replaces phlogiston theory as an explanation of fire, we are still involved in understanding the same entities and natures. What is important to keep in focus here is that theories, models, etc. (as well as ideas and words) are how we know, not primarily what we know. What we know, the form, remains the same.
Arguments to the effect that we can never know from our own experiences alone which sorts of arithmetic operations we are performing rely on denying the interior act of understanding as a valid datum in epistemology. Yet, prima facie, epistemology, the study of knowledge, is precisely the field that is supposed to explain this phenomenon. That is, it is supposed to explain how we know what we know, and secondarily, the possibility and causes of error. Excluding all noetic experience makes the main goal impossible by default, leaving only the secondary goal of elucidating error. In such a context, skepticism is inevitable because the only valid topic is error. Of course, another irony here is that external justifications themselves always rely on experiences that are first-person, and which require an act of understanding to be intelligible. When the intellect is disrupted, as during a stroke, sense data is still received, and yet without the action of the understanding there is no knowledge (as described by victims, who can no longer recognize everyday objects or words).15
This helps explain a huge disconnect in modern epistemology. Through advances in quantitative methods, computing, and instrumentation, we can now make predictions far better than ever before. We are better able to collect data, and to use this data to forecast the future, or to predict the outcome of a particular action. We can explain prediction, but not understanding. Indeed, we seem pressed ever closer towards either a sort of anti-realist pragmatism bordering on epistemic nihilism or an ontic structural realism that claims that the quantities of our predictive models just are all that exists. Yet hasn't this outcome been inevitable? We have removed the act of understanding from the data of epistemology. Next, we removed all the metaphysical infrastructure that explained how such an act was possible, on the grounds that it was extraneous to explaining prediction and pattern recognition. The result is that the underdetermination of sheer prediction becomes unanswerable, and skepticism reigns.
On a closing note, I would suggest the possibility that, on further investigation, given the strict epistemic presuppositions often in play, many arguments for underdetermination might be revealed to be themselves underdetermined. That is, their truth might be underdetermined by the evidence we can muster. In particular, this is likely to affect arguments from phenomenological underdetermination (the idea that false beliefs can "feel like" true ones).
---
1. This is akin to the idea that, for any finite set of points on a graph, there will always be an infinite number of functions that will pass through every point.
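To make the curve-fitting analogy concrete, here is a minimal sketch in Python (the data points are made up for illustration): any polynomial fitted through a finite set of points can be perturbed by a multiple of a polynomial that vanishes at every point, yielding as many distinct curves as we like, all agreeing with the same evidence exactly.

```python
import numpy as np

# Hypothetical data: three observed points (any values would do).
xs = np.array([0.0, 1.0, 2.0])
ys = np.array([1.0, 3.0, 2.0])

# The unique degree-2 polynomial through these three points.
p = np.polyfit(xs, ys, deg=2)

# A polynomial that is zero at every observed point:
# z(x) = (x - x0)(x - x1)(x - x2).
z = np.poly(xs)

# For any constant c, p(x) + c*z(x) also passes through every point,
# so finitely many observations never single out one curve.
for c in (0.0, 1.0, -5.0):
    fitted = np.polyval(p, xs) + c * np.polyval(z, xs)
    assert np.allclose(fitted, ys)  # each rival curve fits the evidence
```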
2. Bertrand Russell. A History of Western Philosophy. (1946) pg. 699. Russell, of course, also extended Hume's thought on causation to propose that causation itself should be wholly eliminated from scientific discourse.
3. E.g., the work of David Bloor and others working from the sociology of scientific knowledge paradigm.
4. Recall Russell's statement about Hume's conclusions collapsing the distinction between sanity and insanity.
5. The ancient Academic skeptics did utilize similar methods, mostly focused on phenomenological underdetermination (i.e., that false beliefs feel like true ones), to argue against our ability to grasp truth.
6. Thus, the emergence of a strong anti-metaphysical movement within early-20th-century empiricism was a pivot back towards empiricism's roots (sometimes a conscious one).
7. Indeed, this conflation of empiricism with science is probably the key reason that the epistemic presuppositions of empiricism go unchallenged regardless of the apparent absurdities they lead to.
8. Saul Kripke. Wittgenstein on Rules and Private Language. (1982) pg. 66.
9. E.g., we do not mean tree by "tree" because the form of trees is present in our intellects and signified by our utterance.
10. Another intriguing explanatory connection here is the historical linkage between empiricism and the now hegemonic political ideology of liberalism. Liberalism itself often relies explicitly on skepticism about human nature and the human good for its justification.
11. Of course, the intelligibility of Quine's argument requires that we understand what a rabbit is.
12. See: St. Thomas Aquinas, Summa Theologiae I, Q.84-86; De Veritate, Q.2, A.6 & Q.3, A.3; and Aristotle's De Anima, Books II-III (as well as St. Thomas's commentary).
13. This can be conceived of in terms of a triadic semiotic relationship. Nathan Lyons's Signs in the Dust: A Theory of Natural Culture and Cultural Nature (2019) includes a detailed description of the mechanics here, relying on Aquinas's notion of intentions in the media.
14. See: Aristotle. Metaphysics. (Book X, Ch. I)
15. Indeed, in some forms of phenomenology, a preference for immediate experience, and a bias against any sort of conceptual understanding seems to elevate the experience of the stroke victim or infant to a sort of ideal. The sage and scientist are said to be lost in abstractions, while the unformed or damaged mind becomes an idealized epistemic goal.
Comments:
I want to nitpick these examples on the basis that they're underdetermined -- or, the flip side of "underdetermination" is confirmation bias. There's some reason for the selection of examples, and that selection of examples may justify what you're saying as "this is where I'm coming from", but how are we to know that these are good examples of underdetermination such that Aquinas or Aristotle or the pre-modern mind had answers to these questions if we just dropped the questions and read Aquinas, Aristotle, and the ancients only?
This is something I thought while reading MacIntyre. Yes, I see what you're saying, but like Heidegger you're sort of inventing a whole mindset that is "pre-modern", and justifying it with many quotes -- but at the end of the day if you haven't spoken to people from the pre-modern era then, my brother in christ, you cannot make claims about how pre-modern people think no matter how many texts you read from that era.
It elucidates how we think, but it may not be the panacea for the problems contemporary philosophy faces.
It looks soothing -- but ultimately, when someone points back to some ancient or medieval thinker as the person who saw it all, I think we're kind of fibbing to ourselves.
Perhaps with good purpose, and definitely with good thinking -- but it's more imaginative than the statement reads. We're attempting to reconstruct the thoughts of people we can't talk to, yes. Especially in the history of philosophy -- that we even have a moon-shot chance of doing so is itself amazing. But something I've learned from doing Epicurus studies is that humility is important in approaching anyone pre-Gutenberg. Aquinas may be so well read because he wrote just before the Gutenberg press, rather than because he had the insight into the real nature of things, and he provided a soothing picture of the world as a harmony.
I see what you're saying. The reason I picked these is that they are influential and fit the basic idea, and because they generate theses that are fairly radical (and so relevant and interesting). Basic underdetermination of theory is considered by medieval writers (and 19th-century guys) but it isn't that exciting because its consequences are contained (generally only affecting weaker quia demonstrations "from effects," but I didn't want to get into that). There were arguments from underdetermination in the ancient world and the Middle Ages though. I am pretty sure Epicurus appeals to the underdetermination of theory by data re astronomical models in one of his surviving letters, and Aquinas mentions the same issue vis-a-vis astronomy in the Summa. Islamic occasionalists also argued from underdetermination, etc. It's just that the scope of underdetermination for certain sorts of things was limited by the assumptions.
IIRC the Commentary on On the Heavens has more on this. The idea that one might "save appearances" by tweaking a theory, a big idea for guys like Quine and Kuhn, was known to the medievals and considered problematic, but only so problematic because it would only affect a certain sort of model that tries to reason from observed (generally distant) effects back to principles. Empiricism (speaking very broadly of course) sort of had the effect of making all knowledge come to be affected by this difficulty, because now all knowing fits this sort of pattern recognition/internal model building structure. That's partly why I don't think it's impossible to go across the eras here; it's the same problem, expanded to new areas because of a change in upstream positions.
Sure, it's a real limitation. But wouldn't this apply, to a lesser degree (and perhaps not even that much lesser) to talking to people from different backgrounds across a language/culture gap today? In many ways, because of the historical lineage, we might have more in common with a pre-modern Western thinker than someone extremely steeped in some parallel tradition of thought. And then the same issue could be said to apply to greater or lesser degrees across a whole range of contexts, e.g., for even saying what we ourselves would have thought about something years ago, or for generalizing about what "Americans" or "Chinese" think today, let alone in 1980 or 1890.
Nevertheless, I still think plenty can be said with careful analysis. And note, the topic is not super broad. We can have a quite good idea about how people thought about arithmetic in the past because they both wrote about it in detail and it's not a super broad subject.
I think another ameliorating factor is that there has been an unbroken, and fairly robust/large Thomistic and Neoscholastic tradition dating all the way back to that era. And so, even if we cannot say what the medievals would have thought, we can say what people steeped in their texts have generally thought, and it has generally been that underdetermination, while interesting and relevant in some areas, shouldn't support the radical theses that have been laid on it.
Plenty can be, and has been, and ought be said in the future.
I think it's broad in that you're talking about any and all arguments from underdetermination, and using the ancients to say they have solutions to those arguments.
Specific when thinking about the pre-moderns, yes -- there's a great dialogue there to engage with, and I think medieval and ancient philosophy ought be given more time. I.e., I favor the historical method -- but that does not then mean that those of a previous era who did not see the modern problems as interesting thereby solved the contemporary problems.
Yes, we can focus on what they were talking about, but if Sartre is who we're interested in, and all this remembrance of another philosophy, another tradition, says such a thing, then "radical theses" are what are being pursued. The wondering isn't about what is generally comfortable for thought, but about problems for thought.
Yes, there's a connection through the tradition of Thomism, at least. And, honestly, it's an amazing connection in that it's a line of flight that has managed to develop in spite of the historical contingencies.
It's cool, but if it doesn't address what others are thinking then it's not a panacea. Ultimately I don't see the world as a harmony, for instance -- I think it's absurd.
Your ameliorating factor ameliorates some doubts, but what if I think that Hume, Quine, Wittgenstein, Feyerabend, et al., have a point? Do I just need to read more Thomas Aquinas to see the error of my ways?
Well, I pointed out that many advocates prefer these results, just as the ancient skeptics thought skepticism was a preferable outcome. But, for the many who find them deeply troubling (e.g. Russell on Hume), it might be helpful to consider past approaches. For instance, Russell moves to eliminate causation (a move that was quite unsuccessful), and yet from other traditions there are some strong solutions to the Problem of Induction that rely precisely upon rejecting Hume's deflated notion of causality (indeed, they generally agree with Russell that causes serve no real function when they have been deflated to this degree). Likewise, there are a lot of people who bemoan how scientific anti-realism, and arguments that science comes down to sociology and power relations, have been used to pernicious effect in public debates on vaccine safety, global warming, GMO crops, etc., and who are looking for solutions to underdetermination here.
That is, there are many who see these primarily as problems to be overcome, hence, old solutions should be interesting.
I also think insights into the difference would be just as useful for empiricists who want to defend such views (although presumably not all of them, because some of the skeptical solutions contradict one another). It would allow them to give a better explanation of why both common sense and long-standing ideas in Eastern and Western thought needed to be rethought, and of which assumptions need to be defended (there is a potential circularity here worth noting too, because the phenomenon of understanding is often itself removed as a proper datum of epistemology because it is said to be underdetermined!).
This seems useful to me because sometimes you see this sort of thing dealt with using simple appeals to "old is worse, new is better," which doesn't seem like a particularly good heuristic in philosophy, particularly when very old ideas are often recycled and become the new cutting edge.
For instance, I would think one option would be to say: "Yes, epistemology should properly be the study of prediction or error. The experience and possibility of 'knowledge,' whatever it might be, should be a topic of psychology and phenomenology, not epistemology and philosophy of science." Indeed, this is sort of what some views do, reducing learning and knowledge to statistics.
Quoting Count Timothy von Icarus
I recently listened to Nathan Jacobs' podcast on realism and nominalism, which I thought gave a good overview of the territory. I started but have not finished his follow-up podcast, "The Case for Realism," in which he argues for realism and explains why he abandoned nominalism for realism. So far it has been good, and has tracked some of the same points you are making.
Quoting Count Timothy von Icarus
There is a good exchange on this point between Robert Pasnau and Gyula Klima, where Pasnau takes up skepticism arguendo against Aristotle and Klima responds.
Quoting Count Timothy von Icarus
I would want to add that the realism quandary is also internal to "predictionism." The one who predicts is attempting to predict ad unum (towards the one, actual, future outcome). Without that future-oriented determinacy, whether actual or theoretical, the "predictionist" cannot function.
I am looking forward to following this thread. I think it will be especially hard to keep it on-topic given that it touches on so many neuralgic subjects which could lead us far afield of the OP.
You don't seem to be engaging the OP at all.
Quoting Moliere
Such is philosophy. You do the same thing with the philosophers you appeal to and interpret.
There is lots of substance in the OP. Why not address that substance instead of trying to undercut it with ad hominem gesturing towards Aristotle or Aquinas? Count spent a fair amount of time on this. I would want to honor that.
I want to see some responses that show evidence that the OP was actually read. Your post doesn't manage that. It could be recycled for any Aristotelian-Thomistic OP, regardless of content. Therefore, if one reads your post they will be given no insight into the content of the OP, since your post in no way reflects it. They will only be able to infer that the OP involves the thought of Aristotle and Aquinas.
Is it enough to say
"Modern philosophy has problems. These medieval thinkers didn't have these problems. This is because modern philosophy invented this problem for itself by stripping out all the thoughts which earlier thinkers relied upon in making such inferences. Therefore, we should adopt these earlier approaches, given the incredible progress knowledge has made -- there is a disconnect between ability, and these supposed modern problems that we can pass over by reading the older solutions" ?
Does that demonstrate having read the OP?
My thinking is with respect to underdetermination and its value -- what I read were some solutions to underdetermination based on a generalization of a few select authors, rather than an engagement with what I might say in favor of underdetermination, for instance. So I wanted some sort of reason why these solutions are even appealing at all.
For myself I don't feel a deep need to argue for underdetermination because to me it explains why we go through all the hoops we do in making scientific inferences -- we don't just see the object as it is, we frequently make mistakes, and go about looking for reasons to justify our first beliefs while discounting possibilities not on the basis of evidence, but because they do not fit. This is inescapable for any productive thought at all -- but it has the result that we only have a tentative grasp of the whole.
Basically we don't need Hume's rendition of causation to point out that underdetermination is part and parcel of scientific practice: hence all the methodological hurdles one must overcome to be justified in saying "this is a scientific conclusion"; if it were something we could conclude without underdetermination then the scientists would be wasting their time, in my view.
I think it is a mistake to draw these philosophies towards some sort of anti-scientific agenda. At least, they're not anti-scientific when I speak on them -- more like I'm very interested in the truth of how science actually works, and I don't want the cartoon version but to really understand what's going on (and, in that pursuit, noting how the goal is itself almost infinite, if not fruitless, in that we never really finish philosophizing about science to where we finally have The Answer, but it still provides insight).
I mean, that's fair. I said above my position is to default "the other way", so it may just be that the article isn't addressed to me. I like digging out old ideas and trying them in new ways -- that's a time honored philosophical practice. I suppose it just doesn't appeal to me is all.
Epistemology should be of practical use in the world, and in the real world we are nearly always deriving conclusions from limited information. IBEs (inferences to the best explanation) are the practical ideal.
One of the more graceless posts I've read here on the forum.
Thanks! :up:
So, I think Pasnau is right that the identity doctrine has, at least vis-à-vis Aquinas in particular, more often been used to deal with representationalism and subjective idealism. However, it is used explicitly to counter empiricist skepticism by a number of the Neoplatonists. Gerson has a good article on this, appropriately titled "Neoplatonic Epistemology." That empiricism and academic skepticism died out, in part perhaps because of these arguments, is why St. Thomas doesn't have them as major contenders to rebut in his epoch.
I am not sure about the rhetorical strategy of continually expressing perplexity about the doctrine you are expounding on or its use by people you are criticizing. I think though that in this case it actually suggests a real confusion it probably doesn't mean to imply. The argument for why form in the intellect (the intellect's move from potency to actuality) cannot be unrelated to its causes comes from the idea that: a. every move from potency to act has a cause in some prior actuality; b. causes cannot be wholly unrelated (i.e. arbitrarily related) to their effects (completely equivocal agents) or else they wouldn't be causes in the first place and what we'd actually have is a spontaneous move from potency to act. Form is just that which makes anything actual to effect anything at all, so form is, in one sense, always present in all causes (granted there are analogical agents). Arguing for this doesn't require question begging and presupposing the doctrine, it requires upstream premises (I see now that Klima appears to have hit on this in more detail).
Yeah, that's true. Even seemingly very abstract and deflationary, formalistic approaches that make everything Bayesian have a sort of unresolved kernel of voluntarism, in that the agent has some sort of purpose for predicting, or else they just collapse into mechanism.
I hope this is not what you take the earlier approach to underdetermination to be, because that's certainly not what I was trying to convey. As noted earlier, underdetermination was acknowledged; rather, it is some of the more radical theses that flow from it that are contained. For instance, the move where "x is true" becomes merely "hooray for asserting x" seems fairly destructive to ethics and epistemology. Hence the point about "dressed up nihilism."
The basic idea is that deception is always parasitic on reality (actuality) because what determines thought must always correspond to some prior actuality.
So, consider the point about the apple. It's not denying that we can be fooled by fake apples or that some sort of sci-fi technology might be able to use EM stimulation to get us to experience seeing an apple. It's that both the fake apple and the stimulus contain the form of the apple. The "brain in the vat/evil demon" argument is generally trying to show that we have absolutely no (sure) veridical perceptions/knowledge and thus no grounds for actually saying how likely it is that we are so deceived. But the point here is that this absolute prohibition on meaningful knowledge doesn't hold up given some fairly straightforward assumptions about things not happening for no reason at all. Even the illusion must derive from something actual. The deceiver's manipulation has to carry intelligible structure (form) from somewhere.
Nonetheless, in theory, if we were brains in vats then all the biological species and weather phenomena, elements, etc. we know could be false creations that don't really exist outside some sort of "simulation." (I would just point out here that this is basically magic, not sci-fi, and magic tends to do damage to philosophy; that's sort of the point.) The things we know could be compositions and divisions of other real natures that exist in the "real world" but not our "fake world." And this still seems to leave open a very extreme sort of skepticism. Yet it's not the totalizing skepticism of the original demon experiment, where there is no ground on which to stand to argue that this is implausible.
Here, there is a related argument about the teleology of the rational faculties. The intellect seems to be oriented towards truth. If it weren't, then there would be no reason to believe anything, including the brain in the vat argument.
Likewise, a common argument in early 20th century empiricism was that we cannot be sure that the universe wasn't created seconds ago along with all our memories (also from underdetermination). But this also rests on the assumption of either a spontaneous move from potency to actuality or else a voluntarist God who does arbitrary things (i.e., not the God of natural theology, but a sort of genie).
Well, if extreme forms of underdetermination are successful, the scientist is wasting their time. They cannot even know if they have actually run any of their experiments or what the real results are, because an infinite number of possibilities/experiences are consistent with their thinking the results are one thing when they really aren't. The Academics used phenomenological underdetermination to motivate a sort of nihilism.
No, not really. No mention of underdetermination or realism. You're basically assuming that the OP is about something that it doesn't claim to be about, hence the ad hominem nature. The OP is about underdetermination and realism. That's the core.
Quoting Moliere
So you think Hume, Wittgenstein, Quine, Sextus Empiricus, etc., offer poor arguments for underdetermination? That's intelligible. What is your alternative argument for underdetermination?
Quoting Moliere
Okay, well that's a good start for an argument for underdetermination. :up:
I would want to actually look at some of these premises you are alluding to. For example:
1. We don't just see the object as it is
2. We frequently make mistakes
3. We frequently go about looking for reasons to justify our first beliefs
4. We have only a tentative grasp of the whole
5. Therefore, Underdetermination explains why we go through all the hoops we do in making scientific inferences
I don't see how (5) follows from your premises. Here is something you might want to revisit regarding premises like 1, 2, or 4:
Quoting Count Timothy von Icarus
This is a really interesting objection. Is an IBE underdetermined? Remember that the conclusion is not, "X is the explanation," but rather, "X is the best explanation." I actually don't see why underdetermination would need to attend IBEs.
A significant part of this thread will turn on what exactly is meant by "underdetermined."
I think it depends on how far underdetermination is allowed to roll. If you pair these arguments, their reach is far greater than scientific theories. The term is most associated with the underdetermination of scientific theories, but as noted in the OP it has been used for substantially broader effect.
If some of these arguments go through, then the "best" explanation is not "the most likely to be true (as in, corresponding to reality)," but rather "the explanation I most prefer," or "the explanation society most prefers, given its customs."
An inference to the best explanation normally relies on the security of some prior ideas. But what if causes are underdetermined? What if induction is out? What if theories cannot even be "explanations" in the conventional sense? Underdetermination of scientific theories seems to me like the most benign of these.
I didn't say they must lead that way, or even that they are designed to. I said that, historically, they absolutely [I]have[/I] been used on both the right and the left to push such agendas. And yes, this is normally in a sort of corrupted, naive form, but some propagandists, radicals, and conspiracy theorists have a very good grasp on this stuff and have become quite adept at molding it to their causes. On the left, it's tended to be used more for things like casting doubt on all findings related to sex differences, or often the entire field of behavioral genetics.
That seems likely. :up:
Quoting Count Timothy von Icarus
I think this is good. I don't mean to derail the thread with those papers, but they are related to these central theses of your OP:
Quoting Count Timothy von Icarus
I'd say that if people want to object to the OP then this is a good place to begin.
At play is the classic clash of metaphysics-first vs epistemology-first paradigms, and it's fairly hard to find a rope such that both sides can grab on and start tugging in different directions. This is actually why I think folks like @Moliere are ultimately tempted to take the shortcut of relativizing Aristotle or Aquinas.
So at the birds-eye level, we have something like:
A. If (1) and (2), then underdetermination is false
B. (1) and (2)
C. Therefore, Underdetermination is false
...that's at least the argument coming from your vantage point. There are arguments for ~C coming from other vantage points above.
Quoting Count Timothy von Icarus
Regarding the definition of underdetermination, we seem to have at least three options:
I think everyone agrees that (U1) does occur. (U2) may be a point of contention, yet (U3) is probably the point of contention that the average opponent would be more comfortable defending. So perhaps (U3) helps narrow the thesis in question, and if so, then I still think the arguments you have offered are decisive.
Hrrrmmm, I don't think so. But fair that I misread you, then -- in part at least. There's still something here that I can see that wasn't conveyed on my part.
Basically my thought is that if anti-realism is true, that has no effect on the value of science. It'd be like saying that because dancing is not really a thing, dancing is not valuable: no, the value question is separate from the descriptive question. If science doesn't "reveal reality," but rather makes us aware of which parts we are interested in manipulating, it will still chug along regardless of the philosophical interpretation of the science.
Quoting Count Timothy von Icarus
Mkay. Then I suppose I'd just say that if it's been used by both sides so has the "realist" side been mis-utilized by the same actors.
All the various phrenologies which basically justify social hierarchy are what I have in mind there, or "race science" or eugenics.
So I'd rather put the bad actors to one side since they'll use either argument that they see fit, but this does not then reflect upon the philosophy if we are treating it properly.
Still thinking on a return to your OP, this is just what leaped out for now.
And the medievals are the ones who have a better solution to underdetermination and realism, yes? Is the outline that I gave of @Count Timothy von Icarus 's argument entirely wrong, just unrelated whatsoever?
They acknowledge it, as Tim put it, but don't draw the radical conclusions.
I'm sort of saying "Well, what if the radical conclusions are true, after all? Maybe it's the realist philosophy of science which is wrong, then" -- I'm a realist, but not a scientific realist, exactly. It's too provisional a discipline to draw metaphysical conclusions from, even if we'll want to pay attention to its limited conclusions while thinking about nature.
Quoting Leontiskos
Underdetermination is the theory that theories are not determined by the evidence, but rather are chosen in order to organize the evidence, and in some way act as a selective pressure on which evidence is relevant to consider.
1-4 are observations of human beings attempting to generate knowledge which fit with this belief -- basically an IBE, or really just a set of reasons why I think underdetermination is a good default position. I.e., I don't have a deep quandary with denying causation as a metaphysical reality. That's because causation isn't real, but is how we decide to organize some body of knowledge.
Closer, or does that just read as more of the same to you?
Sure, and I want to see arguments for and against underdetermination, for and against realism. It makes no difference whether Aristotle or Aquinas or Aladdin are the ones who made the good arguments. Focusing on individuals at the expense of the arguments is no help, especially at the very beginning of a thread.
So let's look at your arguments:
Quoting Moliere
Okay, that is an interesting definition. :up:
Quoting Moliere
No, I think this is all quite helpful. We now have alternative definitions, arguments, considerations, objections, etc. :up:
(I will come back to this when I have more time. I was mostly trying to expend some effort to try to keep the thread focused on the central theses and the arguments).
Yes, but just to clarify, and I realize the OP didn't do this very well because I thought it had gotten too long, the point isn't that underdetermination doesn't ever exist. It clearly does. If we do the Monty Hall problem, the evidence we get from seeing one goat doesn't determine that switching will get us to the prize (although it does determine that it is more likely to).
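To make that concrete, here's a quick throwaway simulation of the game (a minimal sketch in Python; the trial count is arbitrary):

[code]
import random

def play(switch):
    doors = [0, 1, 2]
    prize = random.choice(doors)
    pick = random.choice(doors)
    # The host opens a door that hides a goat and wasn't picked.
    opened = random.choice([d for d in doors if d != pick and d != prize])
    if switch:
        # Move to the one remaining closed door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == prize

trials = 100_000
print(sum(play(True) for _ in range(trials)) / trials)   # ~0.67 when switching
print(sum(play(False) for _ in range(trials)) / trials)  # ~0.33 when staying
[/code]

The goat reveal shifts the odds to about 2/3 in favor of switching, but at no point does the evidence determine which door actually hides the prize.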
It's the global sorts of underdetermination, the ones that result from excluding the act of understanding as a valid datum of epistemology, that are resolved. That exclusion includes a denial of our having any real grasp of principles, such that all demonstrations are necessarily either merely from effects (and so open to underdetermination) or merely hypothetical (and thus also underdetermined).
Unfortunately, I think we'd have to look at each individual case to see if it is resolved, but it seems to me to affect many of those listed. For instance, consider the argument that there is no fact of the matter about what rules we are following, because all our past actions are consistent with an infinite number of possible rules. This would also imply that there is never a fact of the matter about which rules nature is "following" (or which would describe how it behaves). For example, "gravity has always worked like x" is also consistent with "gravity spontaneously changes after a given time." If any sort of formal causality is axiomatically barred, then there is no way to deal with this.
Obviously, these sorts of conclusions have a follow-on effect for science. It isn't just that "language is not meaningful because of metaphysical facts, but only on account of shared behavioral regularities related to social norms and public agreement," but this also holds for science (and of course, for any language science is expressed in). I think it's easy to see how such conclusions help lend weight to the parallel arguments from underdetermination that are used to argue for a redefinition of truth, one which, in its more deflationary forms, seems to me to come close to epistemic nihilism.
So, we don't "resolve underdetermination tout court," rather, we resolve some specifically pernicious instances of its application. And then, when it comes to scientific theories, the problem of underdetermination is less concerning because our knowledge isn't just a sort of statistical model, which if radically altered, has "remade the world." When we shift paradigms, it isn't that the old world of trees, fire, stars, and sound is revealed to be illusory, and a new socially constructed world has taken its place. We are still dealing with the same actualities as apprehended through new conceptual means. And crucially, while there might be many ways to correctly describe something, these will be isomorphic. When underdetermination becomes more pernicious is when it denies this isomorphism, such that scientific findings become "sociology all the way down" or "power struggles (will to power) all the way down."
I think you're misunderstanding by "extreme forms" here. I don't mean anti-realism, but rather those sorts of "Boltzmann brain" type arguments that conclude that it is more likely, or just as likely, that the world will dissolve at any moment or radically alter its behavior, as to maintain in its reliable form. This implies that science isn't even likely to be predictive or "useful" on any consistent timescale, and I don't see how that doesn't make it a waste of time.
IDK, my reading would be that denials of any knowable human good ("moral/practical anti-realism," which is often aided by other forms of anti-realism) have tended to be destructive to politics, applied science, and ethics. That a key concern of contemporary politics, and a constantly recurring motif in our media, is that our technology will drive our species extinct or result in some sort of apocalypse or dystopia because it is "out of anyone's control," suggests to me a fundamental problem with the "Baconian mastery of nature" when combined with anti-realism about human ends and the ends of science. If the aim of science is to improve our causal powers, but we are also driven towards a place where we are largely silent on ends, that seems like a recipe for disaster, the sort of situation where you get things like predictable ecological disasters that will affect generations of future people but which are nonetheless driven on largely by unrestrained and ultimately unfulfilling appetites.
Phrenology was discredited because it was thought to be false. But if "true" and "false" are themselves just social endorsements, then truth cannot arbitrate between racist, sexist, etc. scientific theories. So, sure, both forms are open to abuse, but only one can claim that abuse isn't actually abuse, and that all science is about power struggles anyhow. If science is really just about power or usefulness, then there is strictly speaking nothing wrong about declaring sui generis fields like "Jewish physics" just so long as it suits your aims and gets you what you want.
Well, it's not actually my main point. Only the last third or so is about a particular solution. I would summarize it this way.
Arguments from underdetermination are extremely influential in contemporary philosophy.
They have led to many radical, and seemingly skeptical theses.
These theses are perhaps more radical than we today recognize, when seen from the perspective of Enlightenment and pre-modern prevailing opinion.
These types of arguments were not unknown in the past, and were indeed often used to produce skeptical arguments.
The tradition most associated with these arguments, ancient Empiricism, sought skepticism on purpose, as a way to attain ataraxia.
Thus, we should not be surprised that borrowing their epistemology leads to skeptical conclusions.
Hence, if we do not like the skeptical conclusions, we should take a look at the epistemic starting points that lead to them.
Indeed, if an epistemology leads to skepticism, that might be a good indication it is inadequate.
The Thomistic response is given as one example of how these arguments used to be put to bed. I use it because I am familiar with it and because the Neoplatonist solution is quite similar. (But the Stoics also had their response, etc.).
I do think that solution is better, but the point isn't to highlight that specific solution, but rather the genealogy of the "problem" and how it arises as a means of elucidating ways it might be resolved or else simply understanding it better.
---
Had I more space, I might suggest some Indian thinkers here. They have the idea that the sensible world is indeed, in an important sense, illusory (maya). However, they do not see this as barring access to the knowledge that really matters, which grounds our approach to happiness and ethics, or even to a sort of first principle. Likewise, the "arbitrary world" is able to be eliminated. This also has to do with their starting points, which are in some ways quite similar to the Neoplatonists. So skepticism also loses its bite in these contexts.
Quoting Count Timothy von Icarus
I wouldn't propose radical skepticism, but also it's not a possibility I feel the need to deny. It is, after all, logically possible -- it's just entirely irrelevant to the task at hand.
Generally I treat radical skepticism as a special case rather than a case we generalize from, except for the cases where a philosopher is purposefully arguing for or utilizing it towards some other philosophical question (so, Descartes and Hume are the "good" kind of radical skeptics; the freshman philosophy student who just heard about the possibility of solipsism isn't -- rather, that's a sort of rite of passage that all people interested in philosophy bumble over).
Basically I think such arguments are sophomoric, in the literal rather than pejorative sense, and someone would have to present a radical thesis to make it credible; i.e. the "default" position isn't radical skepticism, to my mind, and so isn't so worrying. Sure it's logically possible, but so are a host of irrelevancies just like it. Where's the bite?
Quoting Count Timothy von Icarus
Heh, this is something we're wig-wamming our way about here because it seems we both believe things like "it's a good idea to talk about ethics, especially with respect to what science does" and "Jewish science is a pseudo-science", but we keep on reading these bits of evidence towards our respective views :D
All to be expected, but I want to say that I think it possible to be a skeptic towards scientific realism and still realize it's important to direct science ethically -- in fact, because there's no Architectonic of Science that one must follow, we are free to modify our practices to fit with our ethical demands.
I think there's a fundamental problem with reducing reality to science, and with prioritizing the mastery of nature in our understanding of what science does. But then this might be something of an aside with respect to underdetermination. (Heh, the rhetorical side of me thought: in fact, because underdetermination is true, we should see that science's activity is a direct result of our ethical commitments rather than an arche-method of metaphysical knowledge that's value-free.)
It's descriptive, but not value-free, if that makes sense. Science is always interested for some reason, even if that reason is "I just think snails are cool and like to study their behavior because they make me feel happy when I'm around them."
Quoting Count Timothy von Icarus
Phrenology was always a pseudo-science. It has all the characteristics -- the theories follow the form of confirmation, and their proponents don't try to disconfirm them. They held some social significance which allowed people to justify their position or actions to others. They were vague and easy to defend in light of evidence.
Now I'll go this far: If underdetermination, as a theory, leads us to be unable to differentiate between science and pseudo-science, and we believe there is such a thing as pseudo-science (I do), then we're in a pickle.
But, just as you have a theory which takes care of underdetermination within realist parameters, I'd be able to defend our ability to spot pseudo-science on the social model of the sciences -- i.e. it's not just me, but all the scientists together who say what science is. "Jewish Science" wasn't even as clear as phrenology; it was definitely a racist category for expelling Jewish scientists from the academy. That it resulted in expelling people whom we still consider scientists -- like Bohr -- is an indication that it's not a science, even if "Jewish Science" happened to achieve the aims desired.
I.e. though underdetermination complicates the question, it's still addressable by my lights without a realist science.
Quoting Count Timothy von Icarus
Ok, fair. It may just not be for me, then -- here I'm saying "but I like the skeptical conclusions", and so the rest kind of just doesn't follow. The motivation isn't there for me.
But you were talking about a lot of the things I think about which is why I replied. I see I missed a good chunk of the essay just because of what grabbed my attention, though.
Yes, I think that is a good elaboration. :up:
Quoting Count Timothy von Icarus
Right, and not to get ahead of things, but it is this crucial move that is especially interesting (and is also seen in thinkers like Nathan Jacobs). We eventually come to a point where one must decide whether they prefer the skeptical premises even at the expense of the absurd conclusions. Your "good indication" sums up the complexity of this, because this approach is not demonstrative. That's not necessarily a problem, and perhaps there is no strictly demonstrative alternative, but it is worth noting how the inference at this crucial juncture is rather sui generis (and could even perhaps be construed as coherentist, depending on one's appraisal of the reductio, and of reductio arguments in general).
I see your point: by labelling X an IBE, underdetermination may not apply. Labeling it the explanation would be underdetermined.
But I suggest that in the real world, we operate on beliefs, which are often formed by inferring to the best explanation from the facts at hand (background beliefs will unavoidably affect the analysis). We make errors, of course, but a proper objective is to minimize these errors (more on this, below).
Quoting Count Timothy von Icarus
Forgive me if I misunderstand, but this sounds a bit fatalistic to me - in that it seems to imply the quest for truth is irrelevant or hopeless. I suggest that we have a deontological duty to minimize false beliefs and maximize true beliefs. To do otherwise is irrational, and this includes embracing an explanation simply because one prefers it (there are exceptional cases where this might be appropriate, but I'll leave that aside).
Even if we were perfect at this, the resulting beliefs would still be "underdetermined", but ideally they will be our best explanation for the set of information we have. There will necessarily be subjectivity to it (we each make a judgement, and it will be based on our background beliefs - many formed the same way, others the product of learning). This is a proper objective for critical thinking.
Yes, that's right.
Quoting Relativist
Well, do I believe that every one of my beliefs is "the explanation," or do I believe that some of my beliefs are "the best explanation"? I think we believe that some of our beliefs are only IBEs, and therefore we believe that some of our beliefs are more than IBEs.
Quoting Relativist
I would basically argue that some theory which is believed to be underdetermined is not believed. So if I think there are only two theories to account for a body of evidence and that both are exactly 50% likely to be true, then I psychologically cannot believe one over the other.
So I think we would need to get more precise on what we mean by "underdetermined." For example, why do you think "the resulting beliefs would still be 'underdetermined'"? Does that mean that the person might change their mind when they reconsider the evidence from a different point of view? If so, then I would say that that possibility to change one's mind (and one's ratio or angle of perspective) is different from one's belief being underdetermined. On Aquinas' view, that form of 'underdetermination' is essential for free will. Apparently it is possible to read the complement of "underdetermination" as fatalism or determinism.
I ought say that underdetermination, to my mind, is at odds with a strictly empirical epistemology -- it's more of a rationalism of empiricism. "Yes, we have to go and see, but..."
It highlights that the mind is at least partly responsible for our knowledge -- we don't have a blank slate which is imprinted upon by reality, à la Locke.
Agreed that we need to establish what "underdetermined" means when we're talking about beliefs. I've been treating "underdetermined" as any belief that is not provably true (i.e. determined = necessarily true). Under this extreme definition, nearly every belief we have is underdetermined. I also agree that we ought not to believe something that has a 50% chance of being false.
Most of our beliefs are not provably true, so I have labelled them IBEs. I don't see how else one could claim to have a warrant to believe it. So if you say your belief in X is "more than an IBE" - is it really, if it's not provably true? Or is it still an IBE, but with strong support?
Quoting Relativist
So would you say that some of our beliefs are provably true?
(I would say that, but I am just verifying that you would also say such a thing.)
This sounds to me a bit like post hoc rationalization, as if one is going to decide on a theory and then allow their theory to be "a selective pressure on which evidence is relevant to consider."
The difficulty here is that you seem to be redefining "theory" to be something that precedes rather than follows after evidence, and such is a very strange redefinition. For example, on this redefinition someone might say, "I have a theory...," and this statement would be indistinguishable from, "I have a prejudice..." The basic problem is that 'theory' and 'prejudice' do not mean the same thing. We distinguish between reasoning and post hoc rationalization, and yet your definition seems to have made such a distinction impossible. It seems to have made impossible a distinction between "following the evidence where it leads," and, "engaging in selection bias in favor of some a priori theory."
---
Quoting Moliere
I think this is one of the places where the problems become more apparent. For example, if underdetermination requires that there be multiple possible and inadjudicable theory-candidates, and nevertheless pseudoscientific theories do not belong to this set of viable candidates, then there must be some real way to separate out the wheat from the chaff. Even if one thinks this is possible they have already abandoned full-throated underdetermination in favor of an underdetermination that is nevertheless determinate vis-à-vis determining which theories are scientific and which are pseudoscience. They are doing something akin to "stance underdetermination," which is a species of an "underdetermined subset theory." I.e., "Within this specific subset a quasi-global underdeterminacy holds, but apart from that subset it does not hold." All of these theories struggle mightily to say how or where the specific subset ends and the complement-set begins. The task is so difficult that few such proponents even really attempt to answer that challenge.
What's the argument here: "There is no problem with identifying pseudoscience because in these examples scientists came around to calling out the pseudoscience?"
Why exactly will science always tend towards correctly identifying pseudoscience? Will this always happen? What's the mechanism?
Anyhow, on some anti-realist views, legitimate science just is whatever current scientists say it is. Science has not always been quick to identify pseudoscience. Lysenkoism wasn't considered pseudoscience within the Soviet bloc. Scientists said it was legitimate. Millions of people died before it was rejected. Arguably, all simply pointing to some infamous cases where pseudoscience was eventually identified does is show the norms of science change.
Some ideas identified as pseudoscience (largely for being wholly unfalsifiable and dogmatic) were in wide currency for lifetimes (e.g. aspects of Marxist political economy, aspects of Freudian psychoanalysis, and even aspects of liberal capitalist political-economy have been accused of this, while graphology is another long-lived example with less import). Hence, I am skeptical of the idea that scientists will just know, without some sort of notion of how they would know.
The 19th century was rife with pseudoscience, and I think developments in scientific methods and the philosophy of science played a significant role in curbing this.
I think it is a pretty dismal view. So too for the Nietzschean idea that the desire for truth is "just one among all the others." But the underdetermination of scientific theory is only ancillary here. The arguments for a more widespread skepticism or relativism that I am familiar with tend to instead rely on a more global underdetermination of things like all rules/rule-following, all causal/inductive reasoning, or the underdetermination of any sort of solid concept/meaning that would constitute the possession of knowledge, which is a step up (or down) from simple scientific underdetermination.
Good question. We have beliefs that follow necessarily from other beliefs/facts, so they're provable in that sense. It seems inescapable that we depend on some foundational beliefs. So nothing can be proven without some sort of epistemological foundation. What are your thoughts?
I think we should keep the underlying logical limitations in the background of our minds, but we shouldn't let this undermine practical critical thinking for making judgements and arguments. As one example: a lot of people embrace some conspiracy theory because it explains some facts, and defend their judgement on the basis that it's not provably false. It's a distortion of inference to best explanation. This is a tangent from the theme of your thread, but it's an issue I consider extremely important.
That seems right to me. In an Aristotelian sense we would speak of demonstrative arguments and non-demonstrative arguments, where demonstrative arguments have premises which are foundational and certain, whereas non-demonstrative arguments have premises which are non-foundational (and therefore also not certain). We could also apply that distinction to the inferences rather than the premises if we are not counting an inference as a premise.
So when you said:
Quoting Relativist
...I guessed that this meant that some of our beliefs are provably true, and are not IBEs. If that guess is correct, then apparently you must hold that some foundations are certain. If that guess is wrong, then it would seem that you hold that all beliefs are (unprovable) IBEs. Do you disagree with any of that?
Now for Aristotle one certain foundation that we are capable of having is the principle of non-contradiction, and therefore an example of a provable conclusion would be, "This entity before me is either a tree or it is not a tree."
Good questions. I gave three characteristics, but they are generalizations which aren't always strictly true in the universal way.
Also, I don't think scientists will always do so -- it could have been the case that, for instance, Jewish Science "won" vs. Nazi Science, at least hypothetically, and my suspicion is that it could uncover true things but it'd be because they waited until a person of the right designation said it rather than because the first person who noticed it said it.
But I'd argue that since reality is wider than these races' thoughts, at least on the philosophical level (which a fascist would not allow), maybe we should listen to the other people who had the thought first rather than wait for one of our "master race" people to pillage the thought and say it?
I don't think there's a mechanism in scientific practice, though -- not like a bike with a chain or a conveyor with a gear etc.
It could be the case that in the future it falls to some stupidity. At times I feel like it's still doing so, given science's marriage to Capitalism, but also -- that's what we have to do now to live, and "pure" knowledge still gets funding sometimes.
Quoting Count Timothy von Icarus
I agree here, too!
I'm beginning to wonder if "underdetermination" is wider than I take it to be. "Boltzmann Brains" aren't something I'd even associate with "underdetermination" -- I largely think of "underdetermination" with respect to the philosophy of science. So a (scientific) theory is underdetermined by the evidence it references. That doesn't make it false; it just means that an observation doesn't determine the theory, not even a large set of observations.
Eventually the theory determines what you're looking for once it's a good scientific theory. It's starting to point out patterns we can talk about, predict, describe, and agree upon.
But that, in turn, means that one must -- to make scientific progress -- ignore many irrelevant facts.
And sometimes those deemed irrelevant happen to be relevant.
****
So, again, I feel we're agreeing on the basics but disagreeing on the interpretation. I'm still wondering why or where.
And, still -- I also wonder if it's just not "for me" -- I'm an absurdist who accepts causation isn't real. Many people baulk at that and wonder about what you're wondering.
Both can be good philosophies, but it's hard to find a bridge.
I think this is the scary part of underdetermination.
I was taught that evidence leads to conclusions.
That's still true! They do!
It's just more complicated than that. It's not like I can just gather the evidence and then know the conclusion -- that's because we're limited, we're human, and can only form provisional thoughts, justify them as best we can, and present them to others to critique.
I agree that "theory" and "prejudice" are different -- but i'd say that this is the difficulty being presented by the under-determination of a scientific theory by its evidence.
At the point of scientific revolutions empirical justification is what decides things.
But at the secondary education level the theory is what decides things, since it's very likely the student is wrong.
So, for the sake of clarity, the Boltzmann brain comes in because our experiences and memories are consistent with both our living in the world we think we do and with our being Boltzmann brains that might dissolve at any moment. The evidence we have doesn't determine our embracing one theory over the other.
Actually, given some multiverse formulations, it seems that we are vastly more likely to be Boltzmann-like brains (there are many similar variants) than citizens in a lawful universe. Or, even if we are in a seemingly lawful universe, it would be vastly more likely that we are in one that has just randomly happened to behave lawfully by sheer coincidence for a few billion years, and will turn chaotic in the coming moments. In which case, while the case is underdetermined, we might conclude that our being Boltzmann-like is vastly more likely.
Now, if we are hardcore Bayesian brainers, what exactly is the wholly predictive mind supposed to do when available data forces it to conclude that prediction is hopeless? It's in a pickle!
(This flaw in multiverse theories that fail to place any real restrictions on the "multiverse production mechanism" (e.g. Max Tegmark's view that all mathematical objects exist) is, IMHO, completely fatal to attempts to offer up the multiverse as a solution to the Fine Tuning Problem, but that's a whole different can of worms.)
Is this different from the Cartesian scenario, in your mind?
Quoting Count Timothy von Icarus
Heh.
I'm a "Copenhagen interpretation" dude, but only by habit and because it made more sense than the others.
In my mind, at least, Boltzmann brains can be treated the same as Evil Demons.
Quoting Count Timothy von Icarus
I'm not, and would be surprised to hear you express that I am.
I've not pushed it, but have been against Bayesian epistemologies.
Not Bayesian analysis, tho. Bayes' theorem has many relevant truths and uses.
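For instance, here's a minimal sketch of the kind of use I mean (Python, with toy numbers of my own -- purely illustrative, not anyone's actual epistemic state):

[code]
# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)
def posterior(prior, sensitivity, false_positive_rate):
    # Total probability of the positive evidence, by the law of total probability.
    evidence = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / evidence

# A rare condition (1% base rate), a decent test (90% sensitive, 5% false positives):
print(posterior(0.01, 0.90, 0.05))  # ~0.15 -- a positive result helps, but
                                    # leaves the hypothesis far from settled
[/code]

That kind of base-rate arithmetic is genuinely useful.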
I just don't think Bayesian epistemology does much to explain knowledge or inference. It's a bit of a "just so" theory, in which case why not phenomenology?
Quoting Count Timothy von Icarus
Given that I've said I'm an absurdist, it probably isn't, or shouldn't be, surprising that "The Fine Tuning Problem" is something I'd "pass over" as a bad problem, though I would only address it in a thread on that problem because, holy moly, you've been gracious to me (which I appreciate), but that would take us way off track.
[I]"..I guessed that this meant that some of our beliefs are provably true, and are not IBEs. If that guess is correct, then apparently you must hold that some foundations are certain. If that guess is wrong, then it would seem that you hold that all beliefs are (unprovable) IBEs. Do you disagree with any of that?"[/i]
I don't think ALL beliefs are IBEs:
- We have some basic, intrinsic beliefs that aren't inferred. Example: the instinctual belief in a world external to ourselves.
- We also accept some things uncritically (no one's perfect).
Re: certainty - that's an attitude, and it may or may not be justified. Justification doesn't require deductive proof. Consider your example: "this entity before me is either a tree or it is not a tree." Solipsism is logically possible, so it could be that there actually isn't anything before you. We can justifiably feel certain despite the logical possibility that we're wrong.
I have some questions about this. I hope it is not too disruptive to raise them.
1. What happened to falsification, which is based on the argument that there can never be enough evidence to establish a theory? Falsification is much easier and can be conclusive when positive proof is not available.
2. There are several criteria, I understand, that are applied in order to choose between two competing theories - Occam's razor, elegance, simplicity etc. Kuhn suggests that the wider context - sociological, technological, practical considerations - all have influence here.
3. If an alternative theory explains more data than the existing theory, it is preferable. If it explains less data, the existing theory is preferable. If both explain exactly the same data, how are they (relevantly) different?
4. Is there really anything special about our making decisions based on less than conclusive data? (We even have a special word for this - "judgement" - admittedly it is not always used in this way.)
Okay good, so let's look at this quote of yours:
Quoting Relativist
In that last sentence you seem to equate "not provably true"/undetermined with "IBE." Now you are implying that something that is not provably true might not be an IBE if it is not inferred. Do you stand by your decision to label beliefs that are not provably true IBEs?
The more central question can be restated with your claim, "Under this extreme definition, nearly every belief we have is underdetermined." The "nearly" makes me think that some beliefs are not underdetermined, but I'm not sure if you really hold that.
Quoting Relativist
In English "certitude" connotes subjectivity, whereas "certain" and "certainty" need not. When I said, "premises which are foundational and certain," I was using 'certain' in this objective sense, which is quite common. For example, you that some beliefs follow necessarily from other beliefs/facts. I might ask, "But do they really follow necessarily?" You might answer, "They certainly do." Your answer would not mean, "I have a high degree of certainty or a high degree of certitude that they do follow necessarily." It would mean, "They objectively follow necessarily."
Now if you want to try to find a different word than "certain" to describe the phenomenon in question, you can do that. I think "certain" is actually the correct word, given that it couples the knower and the known in the proper way needed to speak about first principles.
Quoting Relativist
Well the same problem crops up here. What is certain and a feeling of certainty are not the same thing, just as justification and a feeling of justification are not the same thing. One's being justified and one's deeming themselves justified are two different things.
No, not all unprovable truths. I was being careless in my wording. More precisely: most of our rational, acquired beliefs are IBEs. (My objective had only been to contrast this with the notion that our beliefs are somehow proven deductively; in most cases, IBEs are the best we can do, and that's perfectly fine.)
Quoting Leontiskos
I do hold that we have some beliefs that are not underdetermined. The belief that the object before me is a tree or not a tree is not underdetermined. Properly basic beliefs (e.g. there is a world external to ourselves) aren't underdetermined, because they aren't determined through reasoning at all, so the term seems inapplicable (however, arguably, they are determined by the environment that produced us. This aspect is what makes them properly basic - a variation of Alvin Plantinga's reformed epistemology).
Quoting Leontiskos
I understand the semantic distinction, but are the attitudes actually distinct? (Remember that I suggested certainty is an attitude). Some may insist there is a parallel distinction of attitude, but I'm not convinced.
This is associated with my view toward "Bayesian epistemology," which depends on assigning an epistemic probability to every belief. It is a practical impossibility to do this consistently. We can only apply some subjective, coarse-grained level of certainty (which may vary from one day to the next). So it seems to me that our epistemic judgements aren't sufficiently fine-grained to truly have an attitude distinction that matches the semantic one. Example: I'm certain the sun will come up tomorrow, despite there being a nonzero probability the sun will go nova.
Quoting Leontiskos
This seems similar to someone believing a proposition to be true vs the proposition actually being true. All we can ever do is to make a judgement: there is no oracle to inform us that our judgement is correct. One or more people may examine the reasoning and concur, but this only elevates a subjective judgement to an intersubjective one. Similar with the feeling of certainty: it's subjective, and so is the analysis that leads to the feeling. When we're certain of something, we believe we've arrived at objective truth - that's what it means to be certain.
Okay thanks, that makes sense (although I may come back to this dichotomy between deduction and abduction).
Quoting Relativist
Okay, so you are saying that some beliefs are determined and are therefore not undetermined, such as the belief about the object before you; and that "properly basic" beliefs are neither determined nor undetermined. It seems that by "determined" you mean something like "deduced," and that this is why a "properly basic" belief is not determined. The same would presumably hold for foundational beliefs in general.
Quoting Relativist
My point here is that certainty need not be an attitude, and is not always an attitude in English. When someone says, "They certainly do," they are not expressing an attitude. See for example <this entry> from Grammarist, where they point to feeling (certitude) vs factuality (certainty).
Quoting Relativist
I think you are still running roughshod over the difference. There is a difference between believing a proposition to be true vs the proposition actually being true, and this is tracked by the fact that people are saying different things when they say, "I believe it is true," and, "It is true." Similarly, when you say, "That's what it means to be certain," what you are saying is, "That's what it means [for someone] to be certain." But again, "certain" is not always predicated of persons. It is very often predicated of propositions. For example, from the Grammarist entry: "It's a near certainty that the 17-member nation eurozone won't survive in its current form." This is not a predication about an attitude or a subjective state.
Now one could stipulate that "certain" or "certainty" is always subjective, or always a matter of attitudes, or always person-indexed, but that's not actually how we use the word in English. Sometimes it pertains to an "attitude" and sometimes it doesn't.
I think you're discussing the semantics of everyday language - many people say "I believe X" to convey a degree of uncertainty. There could be an implied "but I could be wrong". Some people also say they "know" something to be true, conveying absolute certainty, not the philosophical sense of knowledge. Everyday language is imprecise. In a conversation, one might need to clarify.
I'm using the terms more precisely- using definitions that dovetail epistemology (dealing with beliefs and their justification) and psychology (what a belief IS to a person).
[quote]For example, from the Grammarist entry, "It's a near certainty that the 17-member nation eurozone won't survive in its current form." This is not a predication about an attitude or a subjective state.[/quote]
It's a statement of belief* by whoever formulated it and becomes a belief* of any person who reads it and accepts it. If no one had ever formulated it, then it wouldn't be a belief* held by anyone.
* a subjective state of an individual.
Perhaps you're thinking, "it would be true even if no one had formulated it". But what exactly would you be referring to as the "it" that is true? The statement? Does the statement exist independently of human minds? Do all possible statements have some sort of independent existence? In my opinion, statements only exist in minds.
Good question. Many philosophers hold that falsification's role as the core criterion of science was badly weakened by an argument from Quine (and refined by others)... get this... from underdetermination.
That is, every seeming "falsification" can always be explained away in other terms that do not falsify a theory, and the choice between these explanations will be underdetermined by the evidence.
Plus, historians of science were quick to point out that, pace Popper, theories, and particularly paradigms, are often falsified and yet, rather than being abandoned, are shored up with post hoc explanations. For example, Newton's physics was falsified almost immediately when applied to astronomy. But rather than reject it, astronomers posited unobserved, more distant planets to explain the irregular orbits of visible planets. These were eventually identified with improvements in telescope technology, but they were originally unobservable ad hoc posits.
To be sure, falsification is still a very useful criterion. Quine's argument doesn't mean there aren't distinctions. Marxist theory, or arguably the anthropology of liberal political-economy, is unfalsifiable in a stronger sense. Literally any observation of human behavior is easily rendered explicable by the theory itself, and challenges to the theory can be explained by the theory (just as challenges to Freudianism were a sign of a "complex"). Falsification remains a useful criterion, but not a particularly strong one in light of the underdetermination problem.
Right, and those "other criteria" are what are often used to suggest a fairly robust anti-realism, i.e., "sociology all the way down" (with the world merely offering some "constraints"). Occam's razor is often poorly translated from Ockham into a straight preference for reductionism, but I suppose that's a different sort of problem. Except for this: implausible reductions and eliminations (e.g., eliminating consciousness or all mental causality) are often justified in terms of "parsimony" paired with the claim that any difference between reduction/elimination and its opponent theories must be "underdetermined by empirical evidence." This is precisely why "parsimony" wins the day.
This is a very tricky question when we get to philosophy of mind and language. I'd question whether many skeptical solutions actually do explain the data equally well. They often succeed by making constricting assumptions about what can count as "data" in the first place.
More broadly, outside the context of scientific models (which is less of an issue), your current experiences are consistent with the world, and all of our memories, having been created 5 seconds ago, no? And they are consistent with all other human beings being clever robots, and your living in an alien or AI "zoo" of sorts. But surely there is a metaphysical and ethically relevant difference.
"Special" how? It's certainly a very common experience. My interest though lay more in the use of underdetermination to support radical theses in philosophy, not so much basic model underdetermination. In part, this is because the historical comparison isn't that illuminative here. The ancients and medievals knew about and accepted model underdetermination ("saving appearances"), but the more interesting thing is that they didn't think this general form of argument led to much wider forms of underdetermination as respects rules, causation, induction, word meaning, free will, etc.
But if you are saying that everything is believed and nothing is known, then I don't find that to be epistemologically precise. According to standard epistemology some things are known and some things are merely believed, and belief is a necessary but insufficient condition for knowledge. It feels as if you've created a non-standard semantics where there is only belief and never knowledge; where "certain" and "certainty" always denote subjectivity or an attitude and never knowledge; where there are only opinions and never facts, etc.
Quoting Relativist
But it's not. That's the whole point of the article and the distinction. If it were a statement of belief then "certainty" and "certitude" would be identical. But they're not. "X is certain," is not a statement of belief, just as, "I know X," is not a statement of belief. You seem to be making a move where you say, "They only think it's certain, or they only think they know it. Really they don't, because certitude is always an attitude and knowledge doesn't really exist." That looks like a tendentious move to me. It also undermines the basic idea that we can know things as simple as, "This object in front of me is either a tree or else it is not a tree."
Quoting Relativist
I think that's fine, but I don't think it follows either that statements are not about anything more than minds (nominalism), or that minds never know truth. At a very simple level, the way we linguistically distinguish facts from opinions highlights the way that facts are not subjective in the way that opinions are subjective, and that they exist all the same. That is: there really are facts (truths), and they really are something different than opinions. If everything returns to attitude, then it seems that there is nothing other than opinion.
I see two interesting questions, here. One is whether the sort of "probabilism" that you are proposing is coherent, given that it eschews knowledge. It may be that probabilism without knowledge is like branches without a trunk.
The other question has to do with the modern move where the subject is cut off from reality by fiat of premise. For example, if we can never get beyond our attitudes and make truth- and knowledge-claims that are not merely belief- or attitude-claims, then of course a kind of Cartesian skepticism will obtain. If every knowledge-claim is rewritten as a matter of the subject's attitude or nominalistic beliefs, then realism has been denied a hearing.
(I will be offline for a number of days, but will return. Thanks for the good conversation. :up:)
So the system worked, in the end. True, one has to be patient. True also that there is no time limit on such waiting. In the mean time, opinions will differ and arguments will rage. Nothing wrong with that.
Quoting Count Timothy von Icarus
Yes. But I don't see that anti-realism is a necessary consequence of the applicability of these criteria.
Quoting Count Timothy von Icarus
I can't see that the consequence is inevitable. Surely, it will depend on the details of the case.
Quoting Count Timothy von Icarus
This could mean that the theory is underspecified, couldn't it? Or not even suitable for assessment as though it were a "scientific" theory?
Quoting Count Timothy von Icarus
One difference is that there is not the slightest reason to take any of those possibilities seriously. They are all fantasies. "Here be dragons".
Quoting Count Timothy von Icarus
Speaking of the general form of argument, these arguments look to me very much like re-heated old-fashioned scepticism. What's new about it?
It's been used in some novel ways. For example:
Sure, arguably it is "underdetermined."
Maybe, but it's a not unpopular opinion that the existence of the universe and its contents are an inscrutable "brute fact." Now, if inscrutable brute facts can "just happen" without causes, then there is no reason why they should be one way and not any other. So why would it be any more or less likely that a universe just "happens to be" with a first state that looks something like cosmic inflation rather than a first state full of memories and people?
I will grant that this is only a problem for those asserting the "brute fact" view. But if we assert, to the contrary, that the cosmos had a determining cause, then presumably this is a realist claim. In which case, maybe an inability to dismiss anti-realism will bother us.
Mashing Hume, Wittgenstein, Kripke, Duhem, Quine, Kuhn, Mill, Feyerabend, and Goodman together and calling them "sceptics" ought ring alarm bells with anyone.
And those two supposed foundations - "that things do not happen for no reason at all" and "that everything has an explanation" - at least some discussion might be worthwhile, rather than mere assertion.
But the Law of Diminishing Returns applies here. It's harder to critique than to make stuff up.
:up:
Yes, but you are conflating arguments from underdetermination and skepticism. The examples you listed are given as examples of the former, not the latter. These arguments are not necessarily skeptical. Wittgenstein, for example, is often read as offering a purely descriptive account, not a skeptical or metaphysical account. However, historically these arguments have often been extended towards anti-realism and skepticism, or positions that would have been considered skeptical and/or radical up until the 20th century. I do, in fact, point out that there is disagreement as to whether redefinitions of truth itself are mere modifications, or an equivocation that essentially denies truth and knowledge and offers up an alternative "pragmatic equivalent" such as "'true' is just an endorsement of beliefs we feel it is good to hold."
Kripke's skeptical solutions, for instance, are not straightforward skepticism, and I do not present them as identical. They are also not strong refutations of skeptical claims, however. Arguments to anti-realism from underdetermination are often, but not always, skeptical (e.g., "there is no fact of the matter we can empirically verify..."). They try to reduce the burden of proof they face, arguing merely for undecidability or skepticism instead of making positive metaphysical claims. But this is not always so; sometimes they make stronger metaphysical claims (often on the assumption that empirically verifiable differences are necessarily equivalent to ontological differences).
Second, just because a philosopher says, "I am not a skeptic," doesn't make it so. Hume accepted the label gladly. Contemporary philosophers tend to run away from it. But by prior norms and standards, many of these claims, or their extensions, would be considered skeptical (and are still often considered so by some contemporary thinkers). You are missing this nuance.
The second one is not "everything has an explanation." It's:
If you want to argue that things/interactions/events are (and are one way and not any other) "for no reason at all," without causes, feel free. Likewise, if you want to object that something can come from "nothing in particular" (nothing determinate) or from "nothing at all," feel free.
My semantics is standard in epistemology. Here's what the Blackwell Dictionary of Western Philosophy says:
Starting from Plato's dialogue Theaetetus, knowledge has been thought to consist in three necessary conditions: belief, truth, and justification. Traditionally, the focus is on the nature of justification... In 1963 Gettier showed that these three conditions do not really explain what knowledge is. For I may hold a justified belief which is true but which I believe to be true only as a matter of luck. Such a belief cannot count as knowledge. Epistemology since then has been debating whether the original conditions need to be modified, or whether further conditions must be introduced.
So if you KNOW X, this means you BELIEVE X, X is true, you are justified in believing X, and your justification is not a product of luck.
You said, "'I know X' is not a statement of belief." Well, it IS a statement of belief in standard philosophical discourse. I realize that in colloquial speech, some people consider belief and knowing to be something distinct, but that actually muddies the water.
You're also drawing a distinction with "certainty" and "certitude" but this also seems colloquial. We're still dealing with beliefs, even if there is no logical possibility that the belief is false.
Quoting Leontiskos
Our colloquial way of speaking is vague, and implies distinctions that are not real. An opinion is a belief. Colloquially, if we say "that's my opinion" - we may be qualifying the nature of the belief, but it's a vague qualification. A person might say this when he knows the basis for his opinion is weak, but another person might formulate an opinion only after a good deal of analysis. In either case, they're stating a belief.
A TRUTH is a statement that corresponds to some aspect of reality. Of course there are truths. Truth is what we all want to have in our possession. The issue is: how do we assess whether or not some statement is true? A justification is a reason to believe the statement is true. Some justifications are better than others. If it's derived from deductive reasoning, you're on very solid ground (although you're still dependent on the premises being true). The point I've been making is that we rarely use deduction; more often we use abduction - it's an imperfect guide to truth, but it's usually the best we can do. But it's better than choosing beliefs randomly, or jumping to conclusions without considering all the facts.
Not so much, although your walking back on scepticism is positive.
Your main argument is that underdetermination only seems feasible because of the rejection of your two metaphysical principles. From this you deduce that "...the skepticism resulting from underdetermination has been seen as a serious threat and challenge". But what you call "underdetermination" here is very different to the use of that term in Duhem and Quine. Where they showed us specific cases in which we could not decide between competing theories, you suppose that we can never decide between any cases unless we accept your two premises. Doing so lumps together quite disparate approaches, flattens the philosophical landscape, and reduces complex positions to caricatures. Your "argument" consists in labelling.
We ought look at those two premises closely.
The first supposes that every event has a cause, inviting your rhetorical ploy 'If you want to argue that things.. are "for no reason at all," without causes, feel free.' The issue here is not that there are uncaused events, so much as that a method that supposes explanation in terms only of ultimate cause is no explanation at all. Consider the extreme case, where god is the cause of all things. That the coin we flip comes up heads is supposedly explained as "the will of God"; but that explanation will work equally well if the coin had come up tails. Regardless of what happens, the explanation is "God caused it to happen that way", and so we never learn why this happened and not that; this is no explanation at all.
If presenting a cause is to function as an explanation, it must say why this event happened and not some other event. Saying that "things/events have causes" is trivial, indeed frivolous. What makes talk of causes useful is their role in setting apart what happens from what might happen.
"God wills it" satisfies your rejection of "underdetermination", but at the cost of providing no explanation at all.
The second supposes that there is always an explanation. Given my argument above, this is again trivially true. "God wills it" will explain everything, and yet is no explanation at all. But more, what you are setting aside here is humility, the capacity of admitting "I don't know". You would prefer an explanation that is complete yet wrong to one that is incomplete yet right; you would prefer a complete lie to a partial truth.
You picture yourself as defending rationality when you are denying it. Underdetermination is a feature rather than a flaw, marking the difference between rationality and dogmatism.
My point, in so few words. Nice.
At this point I'm wondering if the difference in positions is that I think of "underdetermination" with respect to scientific theories, especially -- rather than applying to the radical skeptical position.
Well, that's how it is used, apart from Tim's modification. Any finite set of observations can be satisfied by innumerable general theories.
Tim argues that scepticism follows from his version of "underdetermination", and so is using that term in an unconventional way. He seems to think that since evidence can't determine which of the competing theories is true, we cannot choose one theory to go on with - as if the only basis for choosing were deductive.
But of course we can choose Einsteinian relativity over Newtonian mechanics, for all sorts of reasons that are not merely deductive.
Added: But of course Wittgenstein and Quine are not sceptics in the way Tim suggests. Each provides a way to move on without the need for deductive certainty.
Well it's not surprising that we see eye-to-eye, is it? :D
Do you think this a bad interpretation @Count Timothy von Icarus?
For some reason, a little while ago, I re-read Hume's Enquiry, and realized that he is not at all the sceptic that he is painted to be. His rejection of what he calls pyrrhonism is emphatic. (He does not believe that it can be refuted but argues that it is inconsequential, and recommends a stiff dose of everyday life as a remedy.) He distinguishes between pyrrhonism and "judicious" or "mitigated" scepticism, which he thinks is an essential part of dealing with life. See Enquiry XII, esp. part 3.
PS I read the conclusion of this section (book-burning) as a rhetorical trope, designed to show how extreme views can lead one into absurdity. That fits better with his actual analysis.
Quoting Count Timothy von Icarus
I don't think that a meaningful estimate of that is possible, unless we know the range of other possibilities. In any case, as every lottery winner knows, extremely unlikely or improbable events occur all the time. In the end, though, one has to consider whether it is more plausible (not likely or probable) that all our memories are false, and that no evidence actually points to a real past, or that at least some memories are true and some evidence is good. In any case, what difference would it make if it were true that the first state was full of memories and people? This is a blank space, which can only be filled up with fantasies and dreams.
Quoting Count Timothy von Icarus
Well, yes and no. If we think that we have to dismiss this anti-realism by means of argumentation in the traditional fashion, we have a problem. But if we analyse the terms and context of debate, we may not be so bothered. However, these arguments can have an effect on how we perceive the world. Many people find the vision of the world proposed since the 17th century extremely depressing and can get quite miserable about it. Others find it full of wonders and possibilities and find it extremely exciting. Neither view will be much affected by traditional argumentation.
This is a misreading. See the summary I provided above for Moliere. That particular response is offered as merely one example of why the form of the argument, though known from antiquity, was not considered to be widely applicable in the past.
No I don't; that would be a bizarre "deduction." The comments on the difficulties with some of these positions are all in the first half of the paper, before those premises are presented.
A relevant section would instead be:
(Note: Kripke himself allows that the claim seems absurd.)
I write that the wider application of this form of argument, and the fact that it is thought to be extremely strong, has often led to theses that would have seemed radical for most of history. I also point out that thinkers have thought that these do pose a serious challenge (e.g., Russell's quote on Hume). I do point out that others have been enthusiastic about the ability to support a variety of radical theses, seeing this as liberating, or supporting a move to "pragmatism."
Some of the redefinitions of truth do strike me as dressed-up epistemic nihilism with a hand wave to "pragmatism" as a modern panacea. I do think this particular trend is quite problematic. What this generally reduces to in practice is: "those with power decide what is true." Or at least, this eventuality is not kept out.
No I don't, I say those are the two core premises related to the example. It's a relevant example because the Neoplatonist response is similar. The Stoics, however, had a different response, etc.
I define what I mean in the first paragraph. The arguments listed follow that general pattern. The term is associated with the "underdetermination of scientific theory," but that does not mean there are no other arguments from underdetermination. I mention the ancient Empiricist version and (briefly) the even broader Academic ones for instance.
Good, so you agree with it.
It doesn't mention anything about "ultimate causes." For the purposes of the arguments in question, the relevant causes are the causes of our perceptions, the act of understanding, etc.
A dichotomy of:
A. Contingent events just happen; and
B. A sort of occasionalism where God (or a necessary cause) is necessarily the [I]only[/I] cause.
For instance, in the example cited, "God wills it," is not the only cause. What is the positive argument that:
" 1. Things do not happen for no reason at all. Things/events have causes. If something is contingent, if it has only potential existence, then some prior actuality must bring it into being. It will not simply snap into being of itself. Our experiences are contingent, thus they must be caused by something that is prior to them."
Entails that "God wills it," is the cause of everything?
I suggest nothing of the sort. Obviously, we can have a sort of voluntarist approach to theories, or invoke principles such as parsimony, etc. However, the appeal to parsimony is a particularly pernicious principle in the hands of empiricists precisely because the main data of experience are excluded from consideration, and then phenomena such as consciousness or causes are recommended for elimination on the grounds of parsimony.
Funny, this is generally precisely how I've seen his approach to skepticism described. Others have just generally found his conclusions, particularly in ethics, more problematic than he does.
I don't think "skepticism" has historically been used only to describe extreme positions such as: "I can know nothing at all; the sky is falling!" For instance, this hardly describes the paradigmatic ancient skeptics. I would think Hume's approach actually isn't that different from the ancient skeptics however. Their skepticism is also largely pragmatic. What do you think makes them radically different?
Presumably, it would remove some of our warrant for thinking things will continue on as they have in the past, seeing as how there is no past.
I am not sure about that. The arguments I have seen for it seem to me to use questionable premises. If the premises are (likely) false, there is no good reason to accept the conclusion.
If you think I am calling Wittgenstein himself a skeptic then this is incautious reading. The general pattern of the rule-following argument has been expanded in that direction by later, less cautious thinkers.
Perhaps I wasn't clear. The distinction I'm focusing on is the one that he himself adopts - between what he calls pyrrhonism, which is probably actually closer to Cartesianism, and his "mitigated" scepticism. (Given the history of philosophy taught in Philosophy 101, it seems very odd that he doesn't mention Descartes.) I think that he is really quite close to the old tradition, without the Stoicism (so far as I can discern). (I owe that understanding to you.)
Quoting Count Timothy von Icarus
That reading is a misunderstanding of Wittgenstein. I do not think you include Wittgenstein among the incautious sceptics.
Quoting Count Timothy von Icarus
I classify the expectation that things will continue as they have in the past as a default position, in the absence of positive evidence for anything else. We don't need a warrant.
Ah, gotcha. That makes sense.
Sorry, that got shifted out of place. That was a reply to @Banno who thought I had Wittgenstein with the "skeptics." I do not think this is so. But some positions that appeal to Wittgenstein, such as the cognitive relativism thesis, lead to a sort of broad skepticism about our ability to understand others (outside our own time and culture, or even tout court). This is a misreading of Wittgenstein to the extent it is attributed to him I think, but I think most advocates don't say this; rather they claim that if you follow out Wittgenstein's insights, with their own additions, you get cognitive relativism.
I don't get this. I think there must be a typo or something here. ?
Broadly speaking, I see a tempting reading of the PI that is relativistic. But I don't think it's what he intended. If he had, he would, surely, have not talked about our form of life, but about our forms of life. To put it another way, there are many differences between practices and forms of life which are more or less difficult to understand and communicate between. But, if we are all human beings, it is possible to grasp the other's form of life or practice. If it is really not possible (and how would one prove that?), then the other is not a human being. That's what lies behind his extraordinary remark that "if a lion could talk we could not understand him." The twist in this story is, of course, that there is a good deal of mutual understanding between humans and lions even in the context of an extreme language barrier, and I see that as based on a shared form of life.
Quoting Count Timothy von Icarus
Perhaps this is more a labelling problem than an actual disagreement. But his position is scepticism de-fanged, if it is scepticism at all. The implication of what he says, to my mind, is a rejection of the doctrine. But I can see that others might not see it that way. Mind you, I'm not even sure that Descartes was really a sceptic.
There's no doubt that "God wanted it to happen" is empty, as it stands. But if our framework is that God controls everything, we can produce different explanations according to what happens. If the coin lands tails and I lose the bet, I can say "God is punishing me for my sins". If the coin lands heads and I win the bet, I can say "God is rewarding me for my virtues". My reason for rejecting these explanations as empty is that neither explanation will stand up to standard scientific experimental scrutiny.
Quoting Banno
I don't think it is as bad as that. Surely "every event has a cause" is not really an assertion. It is a methodological decision. It is not that we can always identify the cause of an event, but that we will approach every event on the basis that there is a cause to be found. It is a presupposition that the world is not disorderly. If we do not find one, we attribute that to our failure, not the failure of the principle. We would file such a case in the "pending" tray.
Where?
You appear here to have gone to great lengths to explain what your argument is not, without explaining what it is. Your reply is in such broad terms as to say very little.
Added: links and citations are conducive to clarity. It might be helpful if you did not remove them.
Causation, generally, is perhaps another idea that has hung around well past its use-by date. It made sense to Aristotle but suffered badly under Russell and continues to be difficult to characterise.
Science doesn't look to causes so much as to predictability. It's not about event A causing event B but about the relation between A's and B's, especially when that relation is expressed in an equation.
From my point of view, what we call reality does not validate our theories. In science, we work with phenomena and references, not with the aim of adapting our thinking to something we call reality. In scientific work, theories already work with phenomena and references, and this work is a kind of large mechanism in which phenomena and references function, for example, thanks to our technology and the operational practices of scientists that form the background to theories. A scientific theory is not something that is first created and then expected to be validated by something called reality; a theory is already involved in historical phenomena and references that give it a kind of validation and legitimacy.
In this sense, competing theories are not absolutely distinct from one another, since they have a historical foundation of other already legitimized theories that support them. This historical foundation is the work of scientists that precedes the creation of new theories. This historical foundation is also a practical-technical foundation in which new theories are embedded.
We must set aside the belief that our theories describe reality and instead see that they function with it, like the great machinery mentioned above. Thus, two competing theories can receive legitimacy and validation for different reasons, which may simply be socio-historical; neither is more true than the other; rather, they may simply mesh better or worse with the entire historical and technical-scientific apparatus.
We must see science as just that, as a gigantic device of increasingly specialized practices carried out by human beings, and not simply as theories that fit reality. This device contains operations, references, phenomena, norms, legalities, practices, technologies, etc., where the theory/reality division has no place and is a mistaken and simplistic view.
It seems to me that "neither provable nor disprovable" is the beginning of the story, rather than the end. I mean that proof of the kind we require for specific causal explanations is inapplicable. The proposition is not in the business of asserting truths, but of articulating the conceptual structure in which specific causal connections are discovered and asserted. If you are looking for some sort of justification, that lies in the success of our attempts to find causes - and more than that, our determination to find what order we can in the world, so that when full causal explanations are not available, we wring from the data whatever order we can. So we switch models and go for statistical explanations.
Quoting Banno
There is a reservation here, because statistical laws don't really predict anything about individual cases. I've never quite worked out what probability statements say about them. It certainly isn't what I would call a prediction. However, they do come in very handy when it is a case of making decisions in a risk/reward context. Betting may be a bit iffy, but insurance is perfectly rational.
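To make the risk/reward point concrete (a toy illustration with made-up numbers, not actuarial data): suppose a 1-in-1,000 yearly chance of a £200,000 house fire. The statistical law predicts nothing about whether my particular house burns this year, but it does fix an expected loss of 0.001 × £200,000 = £200. Paying, say, a £300 premium can still be perfectly rational, because what I am buying is not a prediction about the individual case but protection against a ruinous outcome I could not absorb.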
But perhaps more important in a philosophical context is that predictability is not enough. Plato, at least, would insist that the goal is understanding, not mere prediction. I think he has a point.
Quoting Banno
I hadn't thought that the move to equations amounted to actually abandoning causal explanations. But I can see that it is a very different model from the Aristotelian model.
You can still play a make-believe game of REALISM if it. . . to you. . . feels more intellectually useful in deriving the results you desire for whatever ends. It may even prove more useful in TEACHING the next generation of physicists.
Some of those reasons could include looking for further technological advancements, empirical adequacy, unifying power, simplicity, aesthetic appeal, counterfactual restrictions on what is possible, etc.
__________________________________________________________________________________
Ontology has been superseded by empiricism by a wide margin, but empiricism. . . as a result of the many unfalsifiable notions that we can create ourselves. . . also proves to be limited. What is after empiricism?
Perhaps it's a study in normativity. The choices in logical frameworks we make and the theories we prefer to continue research in, even if those haven't found a leg up on others.
BECAUSE THESE DEBATES DO NOT DIE and they probably just transfer themselves to the new popular domain of philosophical discussion within the confines of the previous but with the ability to do things the previous domain could not settle.
I wasn't so much thinking of statistical laws as the basic equations of physics.
Some folk tend to think of F=ma as setting out how the force causes the acceleration. But what's actually happening here is that Newton defined the very notion of force as mass times the rate of change of velocity. F=ma is not a description of cause and effect so much as the presentation of a new way of talking about motion. He didn't find a hidden cause - force - he set up a way to calculate changes in motion. He found a new way to talk about the stuff around us, not a new thing in that stuff. It was a change in semantics, not in ontology.
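To put the definitional point in a worked instance (my own toy numbers): given a 2 kg mass observed to accelerate at 3 m/s², the force is not a further hidden item we discover behind the motion; it is computed from the definition, F = ma = 2 kg × 3 m/s² = 6 N. Once mass and acceleration are given, the "force" is fixed by stipulation - which is just the semantic point above.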
Quoting Ludwig V
Yep.
That underdetermination stuff is a feature, not a problem. It's about being unhappy with a determinate causal answer such as "God willed it" and looking for more, doing the experiments, using your imagination, seeing what happens when you do this or that...
If we followed Tim we would still be in the monasteries.
Yes, I get that point. Are you saying that we should stop talking about causes altogether, or that we need to re-think the concept of causation?
Quoting Banno
Setting aside the local issue about God, I understand you to be saying that underdetermination is the space for research and discovery, rather than a prison of doubt and uncertainty. Is that fair?
I would be inclined to agree with you. But then I find that it is still alive and kicking.
Quoting substantivalism
If you want to argue with someone, it is best to start from where they are at.
Quoting substantivalism
Too right. That creates an interesting, and difficult, field of understanding what's really going on.
I'd favour the more humble point, that cause is overrated if it is considered to be the only, or even the most important, explanation. When causation is master, non-causal explanations are forced into causal form, as when ethics is seen as mere biology, or maths as psychology; non-causal structures and patterns are missed; or, worst case, folk mistake the absence of a causal explanation for the absence of any explanation at all.
Quoting Ludwig V
Yep. That willingness to live with and investigate the precariousness inherent in the absence of deductive certainty is more than just science; it's the human condition: "I don't know, but I'll take a look."
Something like that, anyway.
OK. It's just that causal explanation, along with the metaphor of the machine, has been such an icon of what science is about that I find it hard to grasp the alternatives (apart from statistical explanations).
Quoting Banno
That sounds good. Actually, there are reasons for thinking that deductive certainty is not all that it is cracked up to be.
Oh, yes. But when one looks closely, it turns out to be difficult to say what sort of thing a cause is, and to describe actual science in causal terms. Like the scientific method, we know what it is until we try to say how it works.
Quoting Ludwig V
Yep. The OP's framework assumes that genuine explanation must bottom out in metaphysical causes. But this misses how much successful science operates at other explanatory levels entirely.
That seems right. Efficient or proximal causation is the basis of mechanistic modeling. That kind of modeling tends to isolate the subject from its environment. For any event or change to occur there is presumably a whole network of conditions that constrain the ways in which that event or change can unfold. The most universal global condition seems to be entropy.
Cheers.
The mechanistic model is a great example of a case taken far beyond its context. We seem to understand how causation functions in a game of billiards; the picture gains such a hold on us that we presume the same sort of causation is at work in thermodynamics, or quantum mechanics, or psychology or sociology or even ethics.
I think logical structure, linguistic and semantic relations, normative and evaluative judgements and so on all come into how our explanations are structured. However, I still think that when it comes to explaining any natural event, efficient cause and general conditions form the backbone that carries the flesh of "structure, linguistic and semantic relations".
Also, I should note that I am addressing only explanations of natural, non-living phenomena here. I acknowledge that explanations of animal and human behavior may be given in terms of reasons instead of causes. The overall set of conditions under which actions are taken by animals and humans will also obviously come into play in any explanation. Reasons as well as causes are constrained by global conditions.
Here you go:
"Arguments from underdetermination are extremely influential in contemporary philosophy.
They have led to many radical, and seemingly skeptical theses.
These theses are perhaps more radical than we today recognize, when seen from the perspective of Enlightenment and pre-modern prevailing opinion.
These types of arguments were not unknown in the [ancient] past, and were indeed often used to produce skeptical arguments.
The tradition most associated with these arguments, ancient Empiricism, sought skepticism on purpose, as a way to attain ataraxia.
Thus, we should not be surprised that borrowing their epistemology leads to skeptical conclusions.
Hence, if we do not like the skeptical conclusions, we should take a look at the epistemic starting points that lead to them.
Indeed, if an epistemology leads to skepticism, that might be a good indication it is inadequate.
The Thomistic response is given as one example of how these arguments used to be put to bed. I use it because I am familiar with it and because the Neoplatonist solution is quite similar. (But the Stoics also had their response, etc.).
I do think that solution is better, but the point isn't to highlight that specific solution, but rather the genealogy of the "problem" and how it arises as a means of elucidating ways it might be resolved or else simply understanding it better."
Now, the term "skeptical" here is a tricky one. I would call many, but certainly not all, forms of anti-realism and eliminativism vis-à-vis core subjects such as consciousness, truth, and causality "skeptical" in a broad sense. But skepticism is not epistemic or moral nihilism. There are "skeptical solutions," which is pointed out. So, just as @Ludwig V rightly points out, Hume is a sort of skeptic, but he is hardly a nihilist. Most skeptics aren't nihilists. The ancient skeptics generally weren't, and neither were the Indian skeptics in general.
But I would question whether or not some modern forms of skepticism that appeal to "pragmatism" as a sort of panacea solution actually avoid a sort of nihilism. That's a broader topic.
It would be rather silly to remove them. As I believe I have pointed out before, I have to run an antiquated browser for other purposes and the pop up quote button doesn't work on it. I'll be honest though that I also forget that it's there on other devices out of habit.
Again, this relies on the unsupported claim of an essential dichotomy: "either causes need to be abandoned or else there is only ever one cause, 'God willed it.'"
It should be obvious that this is a straw man. Nonetheless, the history is interesting here because the rise of the "New Science," nominalism, and empiricism was very much an effect of a new theology that positively wanted to make "God wills it" the proximate cause of all things. The idea of self-determining natures was originally rejected on the theological grounds that anything being any thing in a strong sense (i.e., "God cannot make a cat to be a frog while being a cat"), or the notion of final causes, violated absolute divine freedom. Historically, the theological case comes first, and then later arguments are developed explicitly to support it. This view is adopted by the advocates of secularism later. In part, the demand for secularism comes from skepticism about human ends that is generated by the reduction towards a wholly inscrutable divine will. This sort of skepticism was a cornerstone of liberal theory, for instance.
Even the language of the New Science reflects this. The appeal to "natural laws" and the "obedience" of all things to them (rocks as much as men) is motivated by a voluntarist theology of sheer divine will. It's from a theology of command and obedience. God makes the law, things must follow. This is why the sort of occasionalism you are referencing, while long popular in Islam (which had already turned to voluntarism), only became popular in the West in the early modern period.
The huge gap between Anglo-empiricist and Continental thought is in some ways a continuation of this earlier theological division.
Well, here we run into one of the great difficulties of any genealogical account in this area. What is meant by "cause" varies considerably. I'd argue that it is precisely the empiricist epistemic presuppositions, and the metaphysics that tended to go along with them in the early modern period, that deflate causation into nothing but mechanism (there has been a push in philosophy and physics to restore notions of formal causality on this point, e.g., Max Tegmark's ontic structural realist account or Terrence Deacon's approach to biology). The empiricist reduction of causes to mechanism or mere temporal patterns is not how I intend the term.
I think Hume, as he so often does, provides a devastating diagnostic account of what causation amounts to given these suppositions, which is "not much." But as per the point made above, this shift on causes was not made on the grounds that "otherwise, God is the direct cause of everything and all causation must reduce to 'God wills it.'" Quite the opposite, particularly in the Reformed tradition influential in Scotland.
I'm not sure whether you mean the Aristotelian solution or the Neoplatonist one. Either way, I don't think we can assume that we can lift one part of a coherent system of thought and make it work in our context. More than that, there are, in my book, two versions of empiricism. One of them has been popular in philosophy and leads to the empiricism of appearances, ideas or sense-data. The other is mostly unspoken but is the foundation of science; this version understands experience in a common-sense way and doesn't posit theoretical objects that boast of being irrefutable and turn out to prevent us from understanding the stars or anything else.
Quoting Count Timothy von Icarus
A fair point. It looks as if we need to be a bit careful what we take from those times if we want to avoid the same sceptical conclusions.
Ah yes, I mention this. But I think this leads to an unfortunate and common conflation where the second sort of "empiricism" is appealed to in order to justify the first sort, such that all scientific progress is called on as evidence for the superiority of the first sort of empiricism, and a rejection of empiricism is said to be a rejection of science.
As noted in the OP, if we go with the second version, then Hegel is an empiricist, and figures like Aristotle, Albertus Magnus, and Archimedes would be more "empiricist" than the original Empiricists.
This way of justifying the first sort of empiricism isn't just flawed on the grounds that it equivocates. As far as I am aware, it has no good empirical support either. The Great Divergence whereby Europe pulled dramatically ahead of China and India in economic and military development doesn't track well with the (re)emergence of empiricist philosophy. Areas where rationalists dominated did not lag behind in military and economic might. Many famous inventors and scientists did not hold to the first sort of philosophy. And I have never heard of an experimental study finding that having more empiricist (first sense) philosophical views makes one a more successful scientist or inventor.
It's easy to see how the two often become mixed together, though. I think this is especially acute in metaethics, where empiricist epistemic presuppositions essentially amount to metaphysical presuppositions. "Examine the sense data; there are no values (or universals, or facts about meaning, etc.) to be found." But of course, our lives are full of apparent universals (wholes) and values. The critic can rightly claim that these, in fact, seem to be everywhere. Phenomenology seems to find them, as did the philosophy of the past. So all the heavy lifting seems to be done by what is assumed to be admissible from experience.
At any rate, in arguments such as J.L. Mackie's "queerness argument" against values, I think it's clear that the epistemological presuppositions do all the work, and essentially assume the conclusion. But the conclusion that all prior talk about ethics, goodness, beauty, etc. is a sort of "error" is a radical, and in a sense skeptical, conclusion. Yet to my point in the OP, if our epistemology leads us to this - to dismiss claims as seemingly obvious as "it is bad to have my arm broken" or "it is bad for children to be poisoned at school" as lacking any epistemic grounding (i.e., not possibly being facts) - then I'd say this is an indication that we simply have a bad epistemology. Doing science does not require such views. This is particularly true given that the same basic arguments once used to dispatch goodness and beauty have since been leveled at truth. With the deflation of truth into emotivism, such an epistemology becomes straightforwardly self-refuting.
The same sort of thing that happens with "empiricism" happens with "naturalism." Both have been equated with accepting or rejecting science to such a degree that virtually no one says they aren't a naturalist. Yet this just leads to a huge amount of equivocation, where "naturalism" can mean anything from the extremely expansive to "only reductive, mechanistic materialism." I think it is, in general, an increasingly useless term. It's also subject to Hempel's Dilemma, where "natural" just comes to mean "whatever there is good evidence for."
Naturalism is about causation. It posits that we should proceed with the assumption that natural causes are there to be found.
And British empiricism died a long time ago
If we explain naturalism in terms of "natural" we need to explain what "natural" means, and this is generally where there is equivocation. What makes a cause "natural?" (Also, it seems to me that people who are on board with eliminating causation still consider themselves naturalists; yet if there are no causes this definition won't work).
When naturalism is defined as the view that "the natural is all there is," then for something to be "natural" is just to say it exists. If ghosts and spirits and magic exist, then they must be natural. It's the same issue as Hempel's Dilemma.
You could say Augustine was a naturalist because he warned not to go around explaining everything with miracles, but to look for natural causes. If he understood what natural causes are, I don't know why the term would seem strange to you.
Yes. It is odd that those theories are often classified as idealist.
Quoting Count Timothy von Icarus
I can see your point. But it's only an over-view. It needs a slightly more detailed argument.
Quoting Count Timothy von Icarus
Oh, I think that Hume's argument is a bit better than that. There is some value to recognizing that statements of value (evaluations) are not in the same logical category as statements of fact.
Quoting Count Timothy von Icarus
The problem is that if one gives up using all those terms, and tries to concentrate on the issues rather than the labels, other people will pin them on us according to their needs.
I don't think so, because it is a statement of knowledge, and knowledge is not belief. It entails a belief, but it is not a statement of belief. If I say to you, "I baked you a loaf of bread," this is not a statement about yeast. It is a statement about bread, and bread includes yeast, but it is not a statement about yeast. You are trying to make belief central in an inappropriate way, and part of that is your idea that "I know X" is a statement of belief.
I think you are required to reject all of traditional epistemology, because traditional epistemology presupposes the possibility of knowledge and you reject (or else redefine) the possibility of knowledge. For traditional epistemology, there is a form of certainty that is not merely subjective attitude and which pertains to knowledge. For you, there is no form of certainty that is not merely subjective attitude.
Quoting Relativist
When we are talking about knowledge we are not really dealing with beliefs. Belief is a vacuous aspect of knowledge. There is no need to "deal with" what is vacuous.
Quoting Relativist
Here's an argument to show how you are equivocating:
Quoting Relativist
If you think there are truths then can you give an example of a truth?
Quoting Ludwig V
Sure, but that isn't new to Hume. The separation of practical and theoretical reason was centuries old. It's precisely the assumption that there are no final causes (and perhaps no facts about goodness) that allows for a novel move here. Prior thinkers hadn't missed the difference between "ought" and "is;" yet they thought there could be descriptive statements about the good and beautiful (just as we can speak about what "ought to happen" given purely descriptive predictive models).
Here is a pretty typical intro to philosophy text in the empiricist tradition:
Note, it is assumed that we can "describe truth," but only "interpret" value. "Facts" are juxtaposed with value here. "Description" is juxtaposed with "judgement." Obviously, in the Continental tradition, you're more likely to see the claim that both are interpretive and involve judgement. Here, this is even expanded to "healthy."
Anyhow, there is a difference between:
"People ought to be kind to their mothers." and
"It is good for people to be kind to their mothers."
You can see this in the fact that if you replace "good" in the second statement with "common" you get a straightforwardly descriptive statement: "It is common for people to be kind to their mothers." Now, if the claim is that the latter can be said to be "describing a fact," yet the former cannot, I'm not sure how this isn't just assuming: "One can never describe facts about value" (which tends to flow from "there are no facts about values"). An appeal to "judgment" makes no difference here, because this applies to what we think is true sans issues of value as well.
Yet, if we allow for "facts about values" we might get something like:
X is better (more choice-worthy) than Y. ("is," fact)
Therefore, choose X. ("ought")
When framed this way, it seems a bit strange to object that we cannot move from "X is more worthy of choice than Y" to "X should be chosen over Y." Yet, if someone is really a stickler on this, we can always include an ought premise to the effect of: "one ought always choose the better over the worse." I am just not sure why this is needed. It seems to me akin to demanding that every logical argument have the additional premise "we ought to affirm the true over the false" tacked on to it. Granted, I see no problem in adding either, since they seem obviously true.
Yes, and I think 'value' fails to be a neutral word here given the way contemporary philosophy is prone to the verb (subjective) form of the word. The idea is that the act of valuing is made, not found, and is therefore ephemeral. So perhaps the first shift is to move from the act of valuing to the recognition of value; from deeming worthwhile to recognizing intrinsic worth. There is an indoctrination into the idea that one should never speak about what has intrinsic value or worth. One must be shaken out of that doctrinal slumber. ...The word 'good' is not as easy to subjectivize.
Quoting Leontiskos
You're wrong - in terms of standard philosophical discourse. I provided the definition from the Blackwell Dictionary of Western Philosophy that categorically states that knowledge is belief (belief that is adequately justified and true). Here's another source:
Ever since Plato's Theaetetus, epistemologists have tried to identify the essential, defining components of propositional knowledge. These components will yield an analysis of propositional knowledge. An influential traditional view, inspired by Plato and Kant among others, is that propositional knowledge has three individually necessary and jointly sufficient components: justification, truth, and belief. On this view, propositional knowledge is, by definition, justified true belief. This tripartite definition has come to be called the standard analysis.
-- Moser, Paul K.. The Oxford Handbook of Epistemology (Oxford Handbooks) (pp. 15-16). Oxford University Press. Kindle Edition.
Quoting Leontiskos
I'll give two examples:
I. "My name is Fred."
I believe this to be true, and I have strong justification to believe it (it's the name on my birth certificate, the name my friends and family have always called me, and the first name on a variety of legal documents). My justification is sufficient to categorize this belief as knowledge - so I can also say I know this to be true. A TRUTH is nothing more than a true statement (or proposition), so I can confidently say "My name is Fred" is a truth.
Now let's suppose you believe me - that my name really is Fred. Your justification is weak, compared to mine (you're just taking the word of a strange guy on a public internet forum), so you cannot claim this to be knowledge on your part. In spite of the fact that you can't know this to be true, you do believe this to be the case and this also means you regard "My name is Fred" to be a truth.
II. Consider Goldbach's conjecture:
Every even number greater than 2 is the sum of two prime numbers
This statement has not been proven, and it hasn't been disproven. This implies that one of the following is a true statement:
G1. [b][i]Every even number greater than 2 is the sum of two prime numbers
G2. There is at least one even number greater than 2 that is NOT the sum of two prime numbers[/i][/b]
No one can claim to KNOW G1 or G2, because there's no proof of either. But one of them has to be true in spite of this - and therefore one of them is a TRUTH. My point is that something can be true, even if no one can actually know it.
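A quick sketch in Python (the helper names are my own, purely illustrative) of how G1 can be checked case by case without those checks ever adding up to knowledge of G1:
[code]
# Checking Goldbach instance by instance: every check can succeed,
# yet no finite run of checks settles which of G1 or G2 is the truth.

def is_prime(n: int) -> bool:
    """Trial division; fine for small n."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def goldbach_witness(n: int):
    """Return a pair of primes summing to even n > 2, or None."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None  # a None here would refute G1 (i.e., witness G2)

# Every even number tested has a witness pair, which supports belief
# in G1 - but belief, not knowledge, since the conjecture is unproven.
for n in range(4, 101, 2):
    assert goldbach_witness(n) is not None
print(goldbach_witness(100))  # e.g., (3, 97)
[/code]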
No, your source did not say that knowledge is belief. Go back and have another look.
Quoting Relativist
Neither did it say that knowledge is belief that is adequately justified and true. You keep playing these word games where you equivocate and stretch the meanings of words, omit certain semantic ranges, inaccurately portray what a source says, etc. That sort of tinkering causes a lot of problems when precision is needed.
But my original point holds: saying that "I know X" is a statement about belief is like saying "I baked a loaf of bread" is a statement about yeast. Equivocation is occurring.
Quoting Relativist
So your argument here is, "I believe X is true and I have strong justification to believe it, therefore it is true [or, therefore I know it]." But why do you think those two conditions are sufficient? Those conditions obviously fail to generate knowledge in certain circumstances. And this idea of "strong" or "adequate" justification is not even in keeping with that broad sort of Gettier epistemology. It looks like a subset, something like probabilistic internalism.
But again, rather than falling into the rabbit hole of contemporary epistemology, my claim is that the traditional epistemic opinion is that knowledge is possible - that I can know and know that I know certain things. I don't see how you would be able to accept such a view.
We could look at the three Gettier conditions:
On your approach where everything is reduced to belief, we get something like this:
Yours is far from the Gettier model. You have three beliefs; the Gettier model does not. And no one thinks these three beliefs of yours generate knowledge. The Gettier model requires more than just belief, which is why your belief-reductionism is incompatible with it.
Now I think this form of skepticism is becoming common, so it's understandable in certain ways. My point is that it is a significant deviation from traditional epistemology. If one locks the subject within their own beliefs, then knowledge is impossible. This presumably includes even probabilistic knowledge.
Yes, it did. Let me be clear: the sources did not say (nor did I claim) that all beliefs are knowledge. Rather, both sources are saying that knowledge constitutes a subset of one's beliefs. I'll also clarify that we're discussing propositional beliefs/knowledge.
I gather that you use the terms differently, but I've consistently been discussing what the standard philosophical terms are. If you are unable or unwilling to understand this, then there's no point continuing the discussion.
Well that's a rather different claim, isn't it? "X is Y" is not the same as "Some X is Y." Philosophical discussion requires linguistic precision. That sort of conflation, over and over, is unphilosophical.
Yes, indeed - even millennia old. So why do you think that the fact/value distinction is a distinctive error of empiricism - or even an error at all?
(As I remember it, Aristotle even asserts that "Reason, by itself, moves nothing". That's what motivates his construction of the practical syllogism.)
Quoting Count Timothy von Icarus
I think you are over-simplifying here. That decision was a re-configuration of the distinction between theoretical and practical reason, not, or at least not necessarily, an abandonment of the ideas of purposes and values. Oversimplifying again, final causes are not the province of science, that's all.
Quoting Count Timothy von Icarus
It is true that the interface between fact and value, or between theoretical and practical reason, is more complicated than is usually recognized. We do not always draw a clear distinction between the two, so one can always turn an evaluative statement into a factual statement - and there are many concepts that combine the two. Yet we can also disentangle them. "Murder" combines fact and value, but I think everyone understands how to distinguish between the two aspects. "Abortion is illegal" is, in one way, a statement of fact and not of value (unless one is arguing that one ought to obey the law). But we can also ask whether abortion ought to be illegal. We can also, I think, see the difference between the "ought" of expectation and prediction ("we ought to get home in three hours") and the "ought" of moral or ethical principles ("you ought to be on time for this appointment"). The factor that can create confusion is that we usually expect people to meet their moral and ethical obligations.
Quoting Count Timothy von Icarus
Surely you can see that those two statements have very different force? One implies an instruction or command, or recommendation. The other doesn't. "It is common for people to take a summer vacation" is an observation which does not have the force of a recommendation or instruction, while "It is good for people to take a summer vacation" does not imply that it is common and is compatible with it being rare to do so, but it does imply that one should. When the surgeon holds out his hand and calls "scalpel", it's an instruction and the surgeon expects the nurse to put one in his hand; when the nurse holds up a scalpel and asks what it is, the same word is a description - there is no expectation that the nurse will put it in his hand.
Quoting Count Timothy von Icarus
No, a logical argument does not require that premiss. If the argument is sound, it is sound whether or not people affirm the conclusion. It is true that when we are trying to explain the force of these arguments, we try to explain that, and why, we ought to affirm the conclusion. It's a knotty problem.
I'm not denying a difference between commands and recommendations and descriptions, just the idea that descriptions involving values are actually commands or expressions of emotion. Such theories do violence to language.
The idea that "good" always refers to something like "thou shalt" is a product of Reformation volanturist theology, the tradition that shapes Hume. To say that all value claims are about "thou shalt" isn't to observer an ironclad law of philosophy or language. It's just the (originally explicitly theological) premise that shaped Hume's context, i.e., "there is no intrinsic value (teloi) because intrinsic value would be a constraint on the divine will. Thus, value must be about divine command."
"This is a great car," does not mean "thou shalt drive my car," or even "I should drive my car," just as "this is good (healthy) food" does not directly convert to "thou shalt eat this food," or even "you ought to eat this food." This is even more obvious when we move to the beings that most properly possess goodness. "Peter is a good man," need not mean "thou shalt choose Peter," or "I recommend Peter." It can, but it needn't; it can be merely descriptive.
Centuries of war waged against intrinsic value in the language haven't been able to paper over these issues. While "that's a good tiger" might seem a bit odd in English, descriptive value statements made in slightly different ways are still common and natural. Hence, "that tiger is a perfect specimen," or "that is a perfect tiger," is generally about the tiger as tiger, not recommending the tiger or commanding us to do anything vis-à-vis the tiger. So too, "that is a pathetic, miserable bush," isn't telling us to do anything vis-à-vis the bush, but is normally telling us something about the bush as a bush.
So let me ask a pointed question: does the descriptive statement "x is y" essentially mean "you ought to affirm that 'x is y' is true"? If not, then why, if y is "good," would it automatically change to "you ought to do y"? To be sure, we ought to choose the good and avoid the bad. But we also want to affirm truth and reject falsity. And yet we don't say that "x is true" becomes equivalent to "affirm x," and so "x is good" shouldn't be subject to this sort of transformation either.
On the same point, "ought" is often taken to imply duty, and I think this is the same sort of deficient theological extension. "This food is good, you ought to try it," and "she likes you, you should ask her out," do not imply "you have a duty to eat this food," or "you have a duty to ask her out."
I'm pretty sure we've discussed this before. Hume's variant would be something like saying only the animal (sensible) and vegetative soul ever move the body. The rational soul (or at least a fractured part of it) is acknowledged, but powerless. Hume doesn't argue for this position in Book II, though; he just stipulates it as a definition. He does not take up the influential arguments for the appetites associated with reason, but simply declares they cannot exist. But I consider the phenomenological and psychological arguments made for such appetites to be quite strong, and Hume's declarations to be quite destructive, so I have no idea why we should take them seriously.
The fact/value distinction in Hume (see Book II) is justified in a circular fashion from this premise. Reason only ever deals with facts. It can never motivate action (stipulated, not argued). Value must motivate action. Therefore, value is not a fact. He will use this in Book III to claim that when we investigate "vice" (disvalue) or presumably "virtue," we can never actually experience them anywhere. I can only say here that this seems obviously false and that millennia of thinkers disagreed. We live in a world shot through with value. We experience obscenity, depravity, cruelty, etc.
Hume's argument, that "virtue and vice" don't show up in our "sense data" is extended into the seeming reductio claims of later empiricists and phenomenologists, that we also don't experience cats, trees, the sun, etc., but only an inchoate sense stream, and so that these too are less real abstractions. If the one argument from abstraction is valid, I don't see why the other isn't, although the conclusions seem absurd. They suppose that the healthy man is surrounded by unreal abstractions and that the person having a stroke and the infant "see reality as it is" by experiencing it in an inchoate fashion. The sage is most deluded and the infant and victim of brain damage lifted up, just as the earlier argument makes the psychopath, the damaged and malformed soul, into the measure of the moral truth of the world. But as Hegel points out in the chapter on sense certainty in the Phenomenology, this process bottoms out in the completely contentless.
1. All men are mortal
2. Socrates is a man
3. Therefore Socrates is mortal
The conclusion does not establish an equivalence between Socrates and being mortal. Neither does the statement "knowledge is belief" entail an equivalence between knowledge and belief. Regardless, it appears we've gotten through this misunderstanding.
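To make the logical point explicit (a formalization of my own, not from either poster), both the syllogism and "knowledge is belief" are universal predications, not identities:

\[
\forall x\,(\mathrm{Man}(x) \to \mathrm{Mortal}(x)),\ \mathrm{Man}(s) \;\vdash\; \mathrm{Mortal}(s)
\]
\[
\text{"Knowledge is belief"} \;\approx\; \forall x\,(K(x) \to B(x)), \quad \text{i.e. } K \subseteq B,\ \text{not } K = B
\]

The conclusion predicates mortality of Socrates without identifying him with mortality, just as the subset reading predicates belief of every item of knowledge without collapsing the two classes.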
Quoting Leontiskos
I agree that "Strong" justification, per se, is not sufficient for knowledge. But if one believes that knowledge is possible, one would then have to agree that there are SOME justifications are sufficient for knowledge. Does "my name is Fred" qualify for knowledge? It doesn't really matter, because I was simply trying to illustrate the relation between knowledge and belief.
I do believe knowledge is possible (analytic truths, for example), but I also believe it is rare - because Gettier conditions are nearly always present. If one chooses to define knowledge more loosely, with somewhat less deference to Gettier conditions, then he would consider knowledge to be more common. But whether or not the term (knowledge) can be applied to some specific belief seems to me to be of no practical significance.
What is of practical significance (IMO) is the importance of making an effort to seek truth through good epistemological practices. What I've been arguing is that inference to the best explanation (IBE) is usually the best we can do. I doubt that any IBEs can constitute knowledge, but that doesn't mean we should treat all inferences as equally credible. Conspiracy theorists draw inferences from data, but they tend to cherry-pick data to fit a prior prejudice, while ignoring or rationalizing data that is inconsistent with the theory. They are not applying good epistemological practices.
I wasn't suggesting that descriptions involving values are actually commands. I was pointing out that descriptions involving values are also commands, or, more accurately, have the force of commands, etc.
Quoting Count Timothy von Icarus
I didn't limit that list to commands, or recommendations, but was gesturing towards a connection between certain descriptions and action (or inaction).
Quoting Count Timothy von Icarus
The theological premiss may have been the first version of the idea. But, given that he does not mention it, I think we can be reasonably sure that it was not Hume's premiss.
Quoting Count Timothy von Icarus
I'm sorry, I seem to have misled you. I was not saying that any description was synonymous with any command, in the sense that one directly converts to the other. I was saying that many descriptions have the force of commands, or recommendations, or (Hume's favourite) approbations or even expressions of emotion. (I'm assuming that you are reasonably familiar with the concept of speech acts.)
Quoting Count Timothy von Icarus
There are cases where it doesn't make sense to describe them as good, such as "This is a good disease." I surmise that's because their badness is built in to the concept. In other cases, like "tiger", it may be because they have been known to kill us, and are dangerous. They are very good at hunting; the catch is that they are perfectly capable of applying those skills to hunting human beings. In yet other cases, the oddity may be because there are no criteria for evaluating them. I suspect "planet" may be such a concept; "oxygen" may be another.
Let's think about dog shows. The rules for these competitions include descriptions of the various breeds. The rules for a Shih Tzu and for an Old English Sheep Dog are different, but they specify what makes one competitor better than another in each class. To evaluate a Shih Tzu by the criteria for an Old English Sheep Dog is a solecism or a misunderstanding or nonsense.
The project here is to specify criteria for the evaluation of each competitor. Identifying a dog as a Shih Tzu tells us that the specimen conforms to the criteria, to a greater or lesser extent, and so justifies the classification; but the rules also justify evaluating one specimen as better or worse than another. It is essential to the project that description and evaluation are distinct activities, linked by the rules. This is a formalization of a conceptual structure that is very common in our language, and which presupposes that description and evaluation are distinct. Another field with a similar, but different, structure is the legal definition of a crime (murder, say, or theft).
Quoting Count Timothy von Icarus
1) No. 2) Because "good" is an evaluation and "x is y" is a description.
Quoting Count Timothy von Icarus
The whole point of the distinction is that a (pure) description is not equivalent to an evaluation. But some concepts have both descriptive and evaluative components; sometimes, in specific contexts, a description may be treated as an evaluation.
I don't think that "x is y", of itself, suggests that we should affirm it or should not affirm it, except in specific contexts. Sometimes we should and sometimes we should not. (Cf. Kant on truth-telling.)
More later
Quoting Count Timothy von Icarus
OK. Would you mind explaining what the arguments are that you consider to be quite strong? I'm intrigued by the idea of appetites associated with reason.
[quote="Count Timothy von Icarus;1011289]The fact/value distinction in Hume (see Book II) is justified in a circular fashion from this premise. [/quote]
Hume wraps up his premiss in some rather confusing flourishes, but he realizes that no set of facts can provide a deductive proof of any statement of value and sets out to provide an alternative explanation. Are you saying that he is wrong about that?
[quote="Count Timothy von Icarus;1011289]We experience obscenity, depravity, cruelty, etc. [/quote]
Hume doesn't disagree with you. On the contrary, he argues that morality is based on our responses to those experiences, on how we feel about them. He realizes that those responses can't be validated by deductive reasoning, but believes that, nonetheless, they are the basis of morality. I think that's an over-simplification, but not unreasonable as part of a more comprehensive theory.
The biggest weakness in his argument, in my view, is that he seems to think of experience and our reactions to it as something given. He doesn't distinguish between what our experience is and how we have learnt to interpret it. That knocks a big hole in the idea that experience is the foundation of knowledge and, indeed, of morality. That doesn't rule it out as a contribution to or a factor in our knowledge and morality.
[quote="Count Timothy von Icarus;1011289]Hume's argument, that "virtue and vice" don't show up in our "sense data" is extended into the seeming reductio claims of later empiricists and phenomenologists, that we also don't experience cats, trees, the sun, etc., [/quote]`
Yes. Actually, it was Berkeley that first articulated that argument, and critics made the same criticism. It didn't impress Berkeley and it doesn't seem to impress modern idealists either. I don't know why Hume is not also subjected to it.
I'll respond to the rest later but I wanted to point out a potential miscommunication:
Right, I am aware of the distinction. But it isn't a "logical distinction" in the sense that it is something that is discovered about how logic works or syntax works. It is a distinction made based on a certain metaphysical theory (generally, anti-realism). It's akin to how emotivism claims that the logical function of "good" is equivalent to "hooray for." The distinction follows from the metaphysics in the same way that someone who accepts the Doctrine of Transcendentals will acknowledge a distinctive logical function for One, Good, and True and their derivatives (Something, Thing, Beautiful, etc.), in that they are transcategorical and that they are conceptual/logical (as opposed to real) distinctions that add nothing to Being but which are coextensive with it.
Now, certainly that distinction is very helpful, if you accept the metaphysics. But it's the metaphysics driving the recognition of the logical distinction (in each case).
As noted earlier, I don't think "good" always indicates or approves of an action. On something like an Aristotelian account, the goodness of actions is always parasitic. Goodness is primarily descriptive there and grounded in final causes, and particularly in beings (organisms). Even in common language today though, "good" often seems to be used in strictly descriptive ways. Nor do I think that what makes a claim "evaluative" is generally clear.
"That's hot" can be a claim recommending action. It can also be merely descriptive. "That's too big," is often a claim recommending action, but it can also be descriptive. Context determines if it is taken to recommend action or not. But more to the point, no one thinks that because "that's too big," or "that will break it" might recommend action, that they are not also, and often simultaneously fact claims and descriptions. Their being evaluative in one context doesn't remove their descriptive nature.
Anyhow, perhaps I interpreted this wrong, but you seemed to be supporting the general fact/value distinction in light of the logical distinction. If so, I would say this argument is circular. It would be like arguing for moral realism on the grounds that the Doctrine of Transcendentals makes a different logical distinction re "Good."
Note that the move to subjectivize value here could just as well be made for all descriptions. We could reinterpret all descriptive claims to the effect that "x is y" as "I believe/feel that x is y" (indeed, some have recommended this). This would, IMO, do violence to natural language in the same way that it does violence to natural language to assume that if "y = good" a claim always ceases to be a fact claim, or to assume that if "y = good" the *real* meaning is "hooray for x." Some people make a differentiation between first person declarative and third person informational statements. I find this distinction more useful, but it cuts across claims of value and "facts" and does not presuppose the two are exclusive.
Sure, there isn't always assertoric force. But in every language I am aware of, assertoric force is the default. In any case, there obviously often is assertoric force. If there is, then "x is y" is equivalent with "it is true that x is y." Now, we might not believe that "x is y," but surely if it is really true we ought to affirm it, right?
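As a gloss of my own (not the poster's wording), the equivalence being appealed to here is just the disquotational T-schema:

\[
\text{"}p\text{" is true} \;\leftrightarrow\; p
\]

so that asserting "x is y" with assertoric force and asserting "it is true that x is y" come to the same thing.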
Although, I suppose it's true that for the values anti-realist "y is true" never implies "affirm y," and the move to affirm y must always come from irrational, inchoate sentiment. I am not convinced that this doesn't result in nihilism and misology if taken to its logical conclusion, however. Whereas the counter to the effect that we have a "sentimental" desire for truth qua truth ("all men desire to know") is just reintroducing the rational appetites with the adjective "sentimental" tacked on.
I might have been unclear. I am referring to the section in Book III where he says that we never sense (touch, smell, see, etc.) vice or badness.
"Take any action allowd to be vicious: Wilful murder, for instance. Examine it in all lights, and see if you can find that matter of fact, or real existence, which you call vice. The vice entirely escapes you, as long as you consider the object."
This is very similar to his claim that we never sense causes. But, prima facie, a valid response is to say that when one sees a ball smash a window one has just seen a ball cause a window to shatter. I am not sure what Hume expects a cause or badness to "look like" or "feel like" or whatever. We certainly do directly experience the appetites (pain, revulsion/nausea, beauty, craving), and on many accounts of goodness it is being qua desirable. Under that view, this division seems bizarre. The appetites are part of sensuous experience and relate to what is sensed directly and immediately. I am aware that Hume's empiricism denies this by claiming all of these are "impressions of reflection," and thus, by definition, internal, private, and subjective. But this is another case of an argument from mere stipulation. In reality, touch is continuous with pain; we don't cross a threshold that neatly divides them, for instance.
Obviously, Hume is writing to his own context. In that context, morality tended to be thought of in terms of rules (which shows up in all the examples he uses). When I say Hume is being influenced by theology, I don't mean that Hume adopts this framing because of his personal theology. He might have been an atheist. I mean that he is going with the ideas dominant in the Reformed/Calvinists context he lived in. I bring it up because if one rejects that source for the framing, one might question if it is worth sticking with its categories. The idea that "good" involves something like "thou shalt" or that "ought" primarily denotes duty or obligation (or even action), is a product of that context.
Likewise, one need not suppose that Hume rejects final and formal causality on theological grounds to accept that he is writing in a context where final and formal causality have already been excised from "scientific/philosophical discourse" primarily on theological grounds. Obviously, atheists sometimes defend mechanistic causality with religious zeal, but they often do so because they see it as a historical product of science. My only point is that it wasn't. It's a genealogical argument. Of course mechanism might be justified on other grounds; this only attacks the claim that it is primarily "scientific." There weren't experiments run to "rule out final causes" (indeed, biology, medicine, the social sciences, all still rely on them); it was a theological/philosophical position to exclude them from consideration. This sort of thing still comes up in stuff like the Libet experiments.
I think this discussion is getting too complicated. I would like to set aside the historical debate. However, I can't resist two observations. I don't expect you to agree with me, but I think we can make more progress by focusing on the core issues. This post is about sorting out the focus, setting aside debates, not because they are not worthwhile, but because one cannot deal with everything at once.
Quoting Count Timothy von Icarus
It's true that Hume was not involved in the ejection of final and formal causality from physics, but he was writing in the context of that decision. Whether that decision was made primarily on theological grounds is another question. I don't have the expertise to say whether that was so or not, so I won't argue the point.
Quoting Count Timothy von Icarus
This is a regular technique for the empiricists, isn't it? There's always a catch. Here, it is "as long as you consider the object": our attention is directed away from the context. Certainly Berkeley is very fond of this move, though he doesn't let it get in the way of a good argument. I don't set much store by it. But consider the end of that section.
That doesn't sound like moral anti-realism to me. On the contrary, what he seems to think he has found is a foundation for virtue and vice that is consonant with his methodology. There are problems with it, of course. First, there is the let-out clause "if favourable to virtue and unfavourable to vice." I'm sure many people would point out that our sentiments are often not particularly favourable to virtue and unfavourable to vice. In addition, there is the Euthyphro question, whether the gods love piety because it is good or whether piety is good because the gods love it. On top of that, there is Moore's fallacy.
Hume is more complicated, even slippery, than he is usually thought to be. I think standard representations of him are misrepresentations. But I don't think it's a black-and-white issue.
Quoting Count Timothy von Icarus
I've never understood metaphysics and I don't know enough about the doctrine to dissect this. But it looks as if metaphysics and logic reflect each other here, and that someone who accepts the doctrine of transcendentals agrees that there is a distinction that is at least very similar to the modern fact/value distinction.
Quoting Count Timothy von Icarus
So I guess you don't buy the argument. So I won't let it distract me.
A theory is valid only within the precision of its data. Given that the precision of data is limited, anything one can imagine could happen beyond the limit of that precision, which is here referred to as spiritual reality.
Quoting Ludwig V
It strikes me as a sop thrown to common moral sentiment in the context of his broader philosophy. Of course Hume doesn't deny the appearance of evaluative facts (sentiment), or that they seem important to us (indeed, nothing else could be "important"). Yet what his starting point has led to is a move that privatizes and subjectivizes value such that the distinction between reality and appearances collapses. On this view, what is "truly desirable" is just whatever happens to be desired. There are only appearances, so appearances just are the reality (for the person experiencing them). It leads to a sort of Protagorean relativism that he tries to paper over with an appeal to "common sentiment." But all desire ultimately bottoms out in inscrutable, irrational, and irreducibly private impulse. And if one differs from "common sentiment" there are no "reasons" to go along with it.
Obviously, the older ethics grounded in final causes also starts from the good as what is desired/sought. However, it says that there is a truth about what is actually "most desirable." If someone's highest desire (what they currently believe and feel, i.e., appearances) is to be a wastrel who drinks all day and scrounges off their parents, the older view would deny that this is what is actually "most desirable" given a true understanding of the good and healthy appetites. Hume has eliminated this distinction. He can appeal to the "general point of view," yet this is really just an appeal to "whatever people currently say is good." The "general point of view" in the context of Brave New World sees that dystopia as eminently desirable, and so apparently it is. Slavery, child brides, etc. would be acceptable so long as general sentiment holds this to be so. Hume cannot claim that any socio-historical "general point of view" is better than any others without smuggling reason back into the picture (nor does he seem to have a strong argument for why this view should have any hold on the egoist).
This is what makes it an "anti-realism." Our desires (appearances) become the measure of goodness. Hume's change here is analogous to making "whatever we happen to believe" the measure of truth (for each individual), which is exactly what Protagoras does for truth tout court (as opposed to only practical reason).
Now, we might ask, on this view, are things we regret "bad"? I think an honest reading would have it that a late-night shot of tequila is "good for us when we want to drink it" and only becomes "bad" when we wake up hungover. Cheating is good when one does it, and becomes bad if we later regret it, etc. We may, of course, desire things based on how we think we will feel in the future (such a view doesn't preclude "thinking ahead"). However, if goodness just is sentiment then it must shift as our sentiment does. Otherwise, it would be the case that extra-subjective facts about what we will desire determine what is good (and so not sentiment and appetite, but facts as they relate to sentiment and appetite). That's a small but crucial distinction. We either go with the horn that makes "good" just "whatever is currently desired," or we start allowing goodness to rest in fact-related causes of sentiment/appetite (which presumably relate to "what man is" and "what rational creatures are," i.e. telos).
Quoting Ludwig V
They are pretty similar. I'd say the later view is just the old one with the reality/appearances distinction collapsed, often paired with a denial that there can be any consistent relationships between how the world is and how man is, and what any individual man will find most fulfilling (i.e., a denial of human nature). I tend to find denials of human nature farcical, because they invariably have to be walked back with so many caveats as to simply reintroduce the idea of a nature in some modified form. It is clear that man is a certain sort of thing. We do not expect that our children might someday soon spin themselves into cocoons and emerge weeks later with wings, because this is not the sort of thing man does. We know that we will fall if we leap off a precipice, and we understand that we are at no risk of floating away into the sky when we step outdoors. Things possess stable natures; what they are determines how they interact with everything else. That doesn't wash out individuality, it allows it to have some sort of ordering so that it isn't arbitrary. But again, theology lurks in the background here. Man was seen as in the image of God, and a view of divine liberty had emerged where liberty is most fully revealed in inscrutable arbitrariness.
The desire to know truth for its own sake is the most obvious. The desire for truth over falsity, which is a desire and so evaluative, is IMO a prerequisite for any rationality at all. Elsewise, we ought only affirm what we otherwise feel like affirming. A "good argument" and "good evidence" would otherwise just mean "arguments and evidence that affirm what I am already predisposed (by whatever irrational sentiments I just so happen to have) to affirm." To have "good argument," "good reasoning," and "good evidence" ordered to the end of truth/knowledge, and not only accidentally related to it, presupposes the appetite for truth qua truth. Otherwise, we ought only affirm truth over falsity whenever it just so happens to fulfill or lead towards an unrelated, irrational desire we happen to possess.
So too for practical reason:
That is, it seems implausible that the nihilist/anti-realist would want to be ignorant of the Good if it truly exists. To be sure, it is often the case that we are unhappy about what we discover to be true. In some cases, we might even prefer not to be informed about the details of certain events. However, it does not seem plausible that a person would prefer to be deluded when it comes to the fundamental nature of the world and their relation to it. But this is, of course, indicative of an appetite to know what is best itself, a sort of open-ended, rational appetite for knowledge and goodness as such.
It is also precisely this open-ended desire that makes it possible for us to transcend current beliefs and desires, to always question them. Likewise, it is what allows for any sort of coherent second-order volitions, such that we desire to have or not have certain other desires. Such a capacity is essential to any sort of rational freedom, since otherwise we would just be mechanically pursuing whatever desires we just so happen to have started out with (with reason as a wholly instrumental computational tool that simply tries to find its way towards these predetermined ends). Any sort of rational freedom requires an ability to judge which desires are worthy of pursuing, and also which should be uprooted (this is the whole idea of virtue being the development of habits of acting rightly, but also of desiring to act rightly, such that one enjoys what one does).
Now, the fact that a truly self-determining freedom requires rational appetites doesn't prove that such a thing must exist. Perhaps we lack it. However, the desire for freedom itself (freedom as good in itself) does suggest exactly this. No doubt, some men deny that they have any appetite for truth, what is "truly good," or freedom. Yet such an appetite surely exists in many. It's precisely what has motivated men across the ages to do things like to give up all their wealth, foreswear sex and children, and retreat into the desert to live as Sufi ascetics, or what motivates Marxist atheists who deny an afterlife to embrace suffering and anonymous deaths in order to further the struggle towards "what is truly best."
Quoting Count Timothy von Icarus
So I guess you don't buy the argument. In that case, it is irrelevant.
Quoting Count Timothy von Icarus
I'm open to examples.
Quoting Count Timothy von Icarus
A lot depends on the details. What is the goodness of actions parasitic on? If goodness is primarily descriptive, like theoretical statements, how come it can move us to action, as in his paradigm example, "Dry food is good"? But the key questions are 1) whether "good" is univocal, like "red", or changes its meaning according to context, like "real" or "exists" or "large"; 2) whether Aristotle (and Aristotelians) are right to posit a Single Supreme Good; and 3) the role of those things (activities) that are good in themselves or good for their own sakes, like theoretical reason, music and friendship.
Now I've mentioned "real", I need to say that I don't feel a great need to answer the philosophical issue, in relation to values or anything else. There are certainly values of several sorts, moral values included; the questions are about their nature and their role in our lives. I don't see that recognizing a distinction between facts (which also exist) and values brings that into question in any way.
Quoting Count Timothy von Icarus
An explanation of what you mean by "in strictly descriptive ways", possibly including examples, would help enormously.
Quoting Count Timothy von Icarus
I agree with you. We've been using both "descriptive" and "evaluative", not to mention "fact" and "value" and "is" and "ought", on the assumption that we have a common understanding. Which may well not be true. But the context of our discussion is morality and ethics, so that kind of evaluation is obviously the focus. That should help a bit.
Quoting Count Timothy von Icarus
Yes, this is difficult. One could well say that the difference between description and evaluation is the use made of sentences in a context. Then we would need to say that descriptive statements are statements whose use in standard contexts is descriptive, and similarly for evaluative statements.
But we need to think, for example, about what kind of arguments or evidence supports or undermines a descriptive vs an evaluative statement (use of a sentence), and what kinds of role they play in our language and life.
Quoting Count Timothy von Icarus
I thought I was trying to articulate a logical distinction. What is the general fact/value distinction as distinct from the logical fact/value distinction?
Quoting Count Timothy von Icarus
Maybe so, but that's a different issue, isn't it?
Quoting Count Timothy von Icarus
There are certainly important differences between the two. But if they cut across the fact/value distinction, how are they helpful?
Quoting Count Timothy von Icarus
Well, if x is y, then you do well to answer the question "Is x y?" in the affirmative. But asserting that x is y just because you believe it is seems, well, a bit odd. Am I supposed to assert everything I believe? How often? What happens if I don't?
Asserting what one believes is one activity that expresses one's beliefs. But then, any action for which x being y is a good reason expresses one's beliefs just as well. When x being y is not relevant, it doesn't come up.
Quoting Count Timothy von Icarus
Oh, I see, this is about the rational appetites. Well, I'll acknowledge a desire for truth. But I don't think there is anything special about that desire. Like others, it can be excessive or deficient. Like others, it has to take its place among our other desires and values. A being that was devoted to truth and nothing else would not last long in this world; I don't think I could recognize it as a human being.
But there's another point about this. Like many others, I think that our emotions and feelings (sentiments) can be rational; that's why they can be assessed as irrational. But their connection with actions means that they are reasonable, as opposed to logical. It is reasonable to fear the lorry that is hurtling towards us and reasonable to get out of its way. In that context, the emotivist's account of morality looks rather different, doesn't it?
Quoting Count Timothy von Icarus
Perhaps so, but how is it relevant?
Quoting Count Timothy von Icarus
Perhaps so. But I think we should evaluate the idea for its own sake, in our context, rather than anyone else's. Rejecting an idea just because of its original context seems a bit like prejudice to me. Actually, what I was trying to say was something vaguer, more like "statements of value can be major premisses in a practical syllogism" or "statements of value (and so of desire) explain the motivations for action in a way that statements of fact do not."
Right, and that's what I've been driving at: it seems that you think IBE's are the only option, and IBE's do not constitute knowledge.
Quoting Relativist
If there is no pole of knowledge then I don't see how one IBE can be better than another (because no IBE can better approach that pole).
Similarly, if we know what ice is then we have a pole and a limit for the coldness of water. If we don't know what ice is, then the coldness of water is purely relative, and there is nothing to measure against. I would argue that knowledge is prior to IBE, and that IBE is parasitic upon knowledge. Thus if you make IBEs the only option, then there is nothing for an IBE to be parasitic upon or subordinate to, and this undermines IBEs themselves.
(@Ludwig V)
This is a bit tangential, but John Henry Newman has some interesting argumentation vis-à-vis Hume, law, and will:
Quoting Newman, Grammar of Assent, Chapter 4
Part of what Newman is doing here is arguing that, in the more primary epistemic sense, law has to do with will and not with nature. He is turning Hume on his head, and will continue to do so.
Thank you for that quotation.
I guess Hume will survive being turned on his head. It is no more than what he tried to do to the "Schoolmen". But Newman's argument is reminiscent of Berkeley's, and, presumably, gets to a very similar conclusion. I'm not greatly bothered by his argument about the uniformity of nature, since I don't consider it to be a truth, but a methodological presupposition governing the search for order in the world.
No. I said IBEs are usually the best we can do. Whether or not they constitute knowledge is irrelevant to my point.
Quoting Leontiskos
Here are some questions about which rational answers can be given (IBEs), but the answers do not constitute knowledge:
Have aliens visited Earth?
Is Christianity true?
Was the 2020 Presidential election stolen from Trump?
Did Epstein commit suicide, or was he killed?
Is autism caused by vaccines? By Tylenol?
Is my name actually "Fred"?
Quoting Count Timothy von Icarus
Well, no, it doesn't.
"God did it" is one way to end the causal chain; there are of course others. That chain is set up by framing physics in terms of causal sequences of events.
And that is not what science describes.
You make use of two merely rhetorical moves: an accusation of straw-manning, and a slide into historical theology. Sophisticated topic-changing that appears learned while dodging the philosophical issue.
Fine.
Right, but over and over I have been inquiring into whether there is anything other than IBEs, and over and over you keep shying away from that point.
For example:
Quoting Relativist
Earlier you gave this as an example of knowledge that is not an IBE, and now it is an IBE and not knowledge.
So it looks like you hold that there is no knowledge; only IBEs. That's what I've been pointing up from the beginning.
Here's where I stated my position on knowledge:
Quoting Relativist
That answered your question about whether there is anything other than IBEs. A deduction from other beliefs is another form of justified belief besides IBE.
These questions were evidently important to you, but they were a tangent to my points about IBEs being of much more practical significance, because they pertain to critical thinking in everyday life.
Quoting Leontiskos
You had rejected my assertion that my belief that "my name is Fred" constitutes knowledge. I wasn't interested in debating the point (because it's irrelevant to MY issue). So the second time, I was only asserting it to be a justified belief.
How one defines knowledge is a triviality, because it's just semantics. One can define it with or without considering Gettier conditions, and with varying degrees of strictness in applying it - all for the purpose of making claims that one has "knowledge", or not. That's trivial.
My topic is critical thinking, and because most rational beliefs are IBEs, it is of more practical significance to do a good job of this. It also provides an approach to debating an issue.
Here's something you said that I'd like you to explain:
Quoting Leontiskos
Does your "tentpole" comment refer to the mere fact that knowledge exists, are you suggesting IBEs that aren't based on knowledge are all equivalent, or something else entirely?