Negative numbers are more elusive than we think
Most people today are comfortable with the idea of negative numbers, so much so that we teach it to our children in their primary years. Yet negative numbers didn't really become "accepted" among mathematicians in the Western world until the 19th century. I would like to argue that there are good reasons to still be wary of the way we treat negative numbers, not in the sense of mathematical rigor, but in our intuitions and, potentially, our philosophical treatment of such entities.
My first argument is that our intuitions of what negative values mean, and especially the operations between them, are sloppy and imprecise. Take the ubiquitous example of a count of apples. It's obvious and natural to us what it means for me to have a positive number of apples: it's something we can count. It's less obvious what it might mean for there to be an amount which is less than nothing. Most people's first instinct when applying negative numbers to such a situation is to introduce debt. Debt is a fine use-case of negative arithmetic, but offering it as a tangible realization of negative numbers in a realist sense, the sense in which we count positive numbers of apples, is rather insidious.
The main difference I perceive is that negative numbers require a context within which to function, unlike positive amounts which I seem to be able to measure or count in any situation. That context could be your bank balance, any sort of relative measurement such as sea level or thermometer, or perhaps direction when it comes to velocity. I think that this required context does make negative numbers at least seem one step removed from the naturalness of the positives.
The next, and more egregious, example of bad intuition when it comes to negatives is the operations between them, primarily multiplication/division. Addition and subtraction aren't so bad; finding the difference between two temperatures, for example, is a natural case of subtraction between positive or negative values. But things get tricky when you try to interpret multiplication by a negative, particularly through the lens of multiplication as repeated addition.
The usual way people intuit this is either by taking forward/backward steps while facing right/left on the number line (each choice determined by the sign of each number) or by considering gains/losses of credit/debt. Hopefully I don't have to fully describe the reasoning. Suffice it to say, there is a huge disconnect and logical gap between these formulations and what multiplying by a negative really does, which is flip the sign.
To demonstrate my point, consider this: in the usual examples of negative numbers in nature (temperature, debt, sea level), when does the ×(-1) operation occur? For example, sea level may go up and down (add/subtract) in increments, but does the sea level ever flip from above to below? If you withdraw $100 from an account holding +$50, you may end up with -$50, but that wasn't a ×(-1) operation; it was a subtract-$100 operation, which happens to yield the same result.
My claim is this: our ancestors, who thought mathematics had true things to say about the world, had good reasons to be wary of positing negative numbers as ontologically equivalent to positive numbers. It wasn't until they shifted their perspective of mathematics from "truthbearer" to "useful tool" (roughly) that negative numbers started to become accepted. And for good reason, they clearly work and make mathematics more pleasant to work with. However, it's easy for us to then get ahead of ourselves and dismiss valid concerns about such things, simply because our sloppy reasoning happens to yield correct results.
What is the lesson, then? Can recognizing this help progress math further? I don't think so; mathematics gets along just fine without requiring our intuitions to be satiated. Perhaps it's simply an exercise in clear thinking.
Comments (88)
I think this answers your own question, Jerry. What is the square root of -1? We haven't a bloody clue, so we call it "i" to disguise our ignorance. Funny thing is, engineers use "i" all the time to build suspension bridges and skyscrapers which safely carry thousands of us every day. It's just our way of recognising that the universe is cleverer than we are. We are relatively stupid beings, but (paradoxically) we are intelligent enough to discover interesting things we can't understand.
That skates over the philosophical problems of counting with natural numbers.
[quote=Russell]"But," you might say, "none of this shakes my belief that 2 and 2 are 4." You are quite right, except in marginal cases -- and it is only in marginal cases that you are doubtful whether a certain animal is a dog or a certain length is less than a meter. Two must be two of something, and the proposition "2 and 2 are 4" is useless unless it can be applied. Two dogs and two dogs are certainly four dogs, but cases arise in which you are doubtful whether two of them are dogs. "Well, at any rate there are four animals," you may say. But there are microorganisms concerning which it is doubtful whether they are animals or plants. "Well, then living organisms," you say. But there are things of which it is doubtful whether they are living organisms or not. You will be driven into saying: "Two entities and two entities are four entities." When you have told me what you mean by "entity," we will resume the argument.[/quote]
[quote=Wittgenstein]However many rules you give me -- I give a rule which justifies my employment of your rules[/quote](Remarks on the Foundations of Mathematics [RFM] I-113).
We might end up saying - "This is just how we count - and anything else doesn't qualify as 'counting' as we do it". If we can get no further with justifying counting with natural numbers then we can take the same dogmatic 'Just How We Do It' approach to negative numbers.
But numbers don't refer. They are a way of doing things. They are for making sure you haven't lost any of your goats, for sharing the bread out evenly, for tracking the debt on your credit card, for measuring the rise in global warming.
Perhaps your discomfort comes from a misplaced reification of numbers.
As for multiplying by a negative, it's not hard to find examples.
______________________
What I find intriguing is that there seems to be a group of folk who cannot move past the statement of the rule to its implementation, in the way explicates.
This seems to be a common characteristic of @Metaphysician Undercover, @Harry Hindu and perhaps @Bartricks.
There is a way of understanding a rule that is not stating it but is seen in making use of it.
Think of 2 representing the height of a mound of dirt and -2 representing the depth of a hole beside it. Perhaps the unit of length is a stick used to dig the hole in the first place.
An early geometer might find irrational numbers easier or more 'real' than negative numbers, because surely the diagonal of the unit square has a length and a negative length is absurd.
Maybe this helps :
You admit
-50 = 50 -100
Rewriting (using only multiplication of positive quantities),
50 - 100 = (1)(50) - (2)(50)
By the distributive property (hope you're OK with that),
(1)(50) - (2)(50) = (50)(1 - 2)
And finally (if you agree that 1 - 2 = -1),
(50)(1 - 2) = (50)(-1)
So subtracting twice the given amount is multiplication by -1.
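A quick numerical check of the derivation above (a Python sketch; the variable name is mine):

```python
# Check each step of the derivation numerically.
amount = 50

assert amount - 100 == -50                              # -50 = 50 - 100
assert amount - 100 == (1 * amount) - (2 * amount)      # positive multipliers only
assert (1 * amount) - (2 * amount) == amount * (1 - 2)  # distributive property
assert amount * (1 - 2) == amount * -1                  # since 1 - 2 = -1
```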
Friend, there are many interesting questions and debates involved with the foundations of math : the nature of infinity, Russell's paradoxes, Godel's incompleteness theorem, the various schools of mathematical thought, etc. The existence of negative numbers is not one of them.
Plato's famous admonition "Let no one ignorant of geometry enter here" should have included parenthetically "(or arithmetic)".
In this case, I suggest that you think of zero as potential. If you're counting apples, 0 represents the potential for some apples, but no actual apples. When we allow the potential for apples to be a real representation of apples, zero apples, we can build an equality system around that potential, such that any number of apples can be negated to zero with an equal negative amount. That's a basic equation.
However, zero takes a much more complex position when numbers are used for order (ordinals) rather than for quantity (cardinals). Since it must be positional within an order, it cannot represent a complete lack of order. But if we give it a position within an order, it becomes prior to the first, which is really incoherent. Then any proposed negative order is just an exercise in incoherency.
Quoting Jerry
Multiplying and dividing with negative numbers is a bit tricky. There are differences depending on the convention employed, as the concept of "imaginary numbers" demonstrates.
@Banno seems to think it's just a matter of following whatever set of rules serves one's purpose. But that's ridiculous, we can't just choose our rules depending on the consequence we desire. When incompatible, or contradictory, rules exist within the same field of study (mathematics), then there is a problem of incoherency. And using contradicting rules depending on what is desired, is simply wrong.
This is an example of introducing context to make sense of negatives, which I described here: Quoting Jerry
Now it could be the case that regular counting has its own context, which I feel is alluded to indirectly by , although I don't yet understand the meaning of the quotes they provided. To quickly reiterate, it's not that I think negative numbers can't refer to things in nature, it just seems like extra steps are needed to make them make sense, which makes them somewhat different from positive numbers.
Recall that in many logical systems, if a contradiction is derivable, i.e. if 'zero' in that language is proved to exist, then every well-formed formula in that language and its negation are derivable via the principle of explosion, which implies that the well-formed formulas of an enumerable and inconsistent language are isomorphic to integers with additional structure, i.e. they form an abelian group.
Of course, in mathematics 'zero' isn't normally used to mean contradiction (in physics and accounting the opposite is often true), and we don't regard the integers to be unhealthily inconsistent. So the analogy between logical and numerical negation might at first glance appear to be syntactical rather than semantic, but they nevertheless have strong semantic similarities, for both numerical and logical negation are interpretable as denoting the control of resources by an opponent in a two-player game.
The difference is, the integers and their equations were invented chiefly for the purpose of expressing draws in games (such as balanced production and consumption), whereas logic with the principle of excluded middle was invented for the purpose of expressing games without draws.
My point isn't quite that there aren't applications of multiplying by a negative; physics has them all over the place, and computer programs can also make heavy use of them. My point is more about how some of the intuitions behind the rules don't match the applications. Yes, we can interpret (-5) * (-1) as a "$5 debt" being "lost", hence $5 credit, and that rule gives us the correct value, but it doesn't match the usual "flipping" interpretation of multiplying by a negative. Furthermore, that "flipping" interpretation of a negative doesn't occur in the other usual examples of temperature, sea level, height of dirt as describes.
The point is that we are using sloppy intuition to justify the rules of negatives, intuition that clearly didn't convince mathematicians of the past, and perhaps there's some value in recognizing that.
In game semantics, the flipping refers to changing the perspective from which the game is viewed. Say, in the game of chess, where a theorem denoted W represents the winning positions for white and ~W the winning positions for black. There isn't anything transactional implied when changing sign.
What "intuition" are you looking for? And what is this "flipping" interpretation you seem to see?
You keep providing perfectly fine interpretations of performing operations with negatives, then immediately recoiling in fear. Why? Why do negatives give you the creeps?
Some folks like to think of math as the study of patterns. Consider the pattern of values on the right side of the equal signs, then complete the last equation :
-5 x 4 = -20
-5 x 3 = -15
-5 x 2 = -10
-5 x 1 = -5
-5 x 0 = 0
-5 x -1 = ___
(Hint : the values are increasing by a constant amount)
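The pattern above can be generated mechanically (a small Python sketch):

```python
# Walk the second factor down from 4 to -1; each product
# increases by a constant 5 as the factor decreases by 1.
for n in range(4, -2, -1):
    print(f"-5 x {n} = {-5 * n}")
# The final line completes the pattern: -5 x -1 = 5
```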
Oh, and my earlier comment can be improved upon - I was tired and writing in haste. Nothing wrong with the math, but here's a better explanation: multiplying by signed numbers is identical to repeated additions or subtractions, but only if we start from 0. Your mistake was starting with a balance of $50. So subtracting $100 (2 x 50) seemed the same as multiplying the initial amount by -1, which made no sense.
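That explanation can be made concrete (a Python sketch; the function is my own illustration, not standard):

```python
def times(a, b):
    """Multiply a by b via repeated addition/subtraction, starting from 0.

    A positive b adds a to 0 that many times; a negative b
    subtracts a from 0 that many times.
    """
    total = 0
    for _ in range(abs(b)):
        total = total + a if b > 0 else total - a
    return total

assert times(50, -1) == -50   # subtract 50 once from 0
assert times(-3, -2) == 6     # subtract -3 twice from 0
assert times(-4, 2) == -8     # add -4 twice to 0
```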
If you can't handle negatives, you better avoid irrationals or complex numbers. You're clearly not ready for those. And don't even look up transcendentals. Ooh, the mind reels.
:lol:
[math]\pm \sqrt 4 = \pm 2[/math] cm.
+2 is the real solution and -2 is, in high school math, an extraneous solution
However, I've always wondered about a (mathematical) universe that contains a most intriguing square with sides = -2 cm. A mirror dimension perhaps.
[quote=Wikipedia]Anadi has no beginning, but has an end ([math]- \infty[/math])[/quote]
In our world then, 0 is the smallest number in geometry.
I get it, and I agree that negative numbers are something like one metaphor away from the counting numbers.
I think it's better to talk about synthesizing otherwise incompatible intuitions as we (naturally) want to be able to find products and not only sums involving negatives. We use different metaphors, each effective in its space.
More general point: I think even the counting numbers require a context. We just learn them so well and use them so much that we no longer feel their strangeness or notice the context.
They're widely accepted for the simple reason that they're useful in solving equations like 5 - 6 = x and they don't cause catastrophic contradictions, at least none that I'm aware of.
Yes, backwards is one metaphor, and another one is flipping. Consider [math] f^{-1} [/math] for the inverse of [math] f [/math], undoing it, like a 180 degree rotation (which is its own inverse). Math is surprisingly crammed with metaphors, poetry.
an instance of '2' can be any of (2,0), (3,1), (4,2) , ...
Here, the numbers in a pair (a,b) can be thought of as denoting the scores of two players A and B.
Negation switches the scores the other way around
-2 := any of { (0,2), (1,3), (2,4), ... }
Zero represents tied results where A and B's scores are identical, and these results lie on a 45° diagonal line (call it the 'zero line') running through the centre of the positive quadrant of Euclidean space, dividing the quadrant into two non-overlapping 'victory zones', one for each player.
The magnitude m of a general score (a,b) is its distance from the zero line, and measures how much the winning player won by. Hence we can view this as the score of an adversarial zero-sum game of tug-of-war between A and B, with rope length m, along the axis perpendicular to the zero line.
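The score-pair picture can be sketched in a few lines of Python (function names are mine, and I reduce each pair to a simplest representative rather than carry full equivalence classes):

```python
def normalize(a, b):
    """Reduce a score pair (a, b), standing for a - b, to its simplest form."""
    m = min(a, b)
    return (a - m, b - m)

def negate(a, b):
    """Negation just swaps the two players' scores."""
    return (b, a)

def add(p, q):
    """Add two scores componentwise, then reduce."""
    return normalize(p[0] + q[0], p[1] + q[1])

two = (3, 1)                              # one representative of 2
assert normalize(*two) == (2, 0)
assert negate(*two) == (1, 3)             # a representative of -2
assert add(two, negate(*two)) == (0, 0)   # 2 + (-2) = 0, a tied game
```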
Compare to the case of 'Complex Number Games'. In contrast,
i) A game with scores (a,b) is written a + j*b, where j is the imaginary unit.
ii) Either or both of a and b can be positive or negative, which means A and B face a common opponent C.
iii) B's score is perpendicular to A's due to multiplication by j, which means that A and B might play cooperatively.
iv) The magnitude n of the score (a,b) is the Euclidean length, i.e. sqrt( a^2 + b ^2). This represents the total reward with respect to an n-square-sum three player game.
v) The phase angle of the result determines how the reward is distributed among A, B and C.
vi) The imaginary unit j serves as negation for three-player games, dividing the 2D Euclidean space of real-valued score outcomes into the following quadrants (where a quadrant is taken to include its clockwise-next axis and exclude zero):
{A doesn't lose and B wins, A loses and B doesn't lose, A doesn't win and B loses, A wins and B doesn't win}
Multiplying any of these quadrants by j yields the next quadrant to the right (using circular repetition).
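The quadrant-cycling behaviour of j can be checked directly with Python's built-in complex numbers (a sketch; the particular score values are mine):

```python
# Multiplying by the imaginary unit rotates a score 90 degrees,
# cycling through the four quadrants; doing it twice is
# multiplication by -1, a 180-degree flip.
j = 1j
score = 3 + 2j                    # A scores 3, B scores 2

assert score * j == -2 + 3j       # next quadrant
assert score * j * j == -score    # j*j = -1: the two-player flip
assert abs(abs(score) - (3**2 + 2**2) ** 0.5) < 1e-12  # Euclidean magnitude
```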
Good point! In Physics, changes in value can only proceed "forward" (positive) one-step-at-a-time. But in meta-physical*1 Mathematics, we can imagine the number-line as a whole, and see both forward (future) and backward (past) at a glance. Likewise, we can imagine Time as a number-line, allowing us to follow it back to the beginning of time . . . and beyond. That's why Physicists can only work on the here & now, while Cosmologists & Sci-Fi-ers can speculate on Multiverses-without-beginning and Many-Worlds-without-location. Such conjectures are mathematical concepts instead of physical observations. :smile:
*1. Meta-physics, in this context refers to abstract mental processes, instead of concrete material objects. Hence, has nothing to do with ghosts or spirits. Numbers, ratios, & relationships are mental concepts, not physical things. So, they can act in ways that are physically impossible, such as to go backward & forward in time, outside the momentary Now. To infinity and beyond . . . .
NUMERICAL VALUES EXTEND TO INFINITY IN BOTH DIRECTIONS
Danke kind person!
To reiterate, negative numbers are simply a backward extension of the pattern that is seen in positive numbers via the subtraction operation.
In physics, since it's essentially materialistic, negative numbers appear where direction matters e.g. in velocity (in one direction it is +, in the opposite direction it is -) or broadly speaking in vector quantities.
Another point to note:
Many (children's) books on mathematics go out of their way to provide an intuitive explanation for numbers & operations performed on them.
So -4 × 2 = -8 is easily grasped as adding -4 twice (-4 + -4 = -8), negative numbers simply being a different kind of number.
However, from the books I read, -4 × -2 = 8 is rather difficult to comprehend intuitively. What does adding -4 negative two times mean? It's just a pattern, that's all, and nothing in our everyday experiences can be used to convey the meaning of this particular calculation to children and adults alike.
[quote=Ms. Marple]Most interesting.[/quote]
Flipping (reflecting) alias rotating (turning) by [math]\pi[/math] radians is a good geometric way to grasp what negative numbers are.
:up:
Yeah, like flipping directions of travel. It's just convention which direction we pick out with the minus sign (like picking which side of the road we all drive on.)
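The flip-as-rotation picture can be checked with Python's complex numbers: multiplying by e^(i*pi) is multiplying by -1, up to floating-point error.

```python
import cmath
import math

# Rotating 3 by pi radians in the complex plane lands on -3,
# because e^(i*pi) = -1.
z = 3 + 0j
flipped = z * cmath.exp(1j * math.pi)
assert abs(flipped - (-3)) < 1e-9
```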
We could in a sense treat negative numbers as demonic/diabolical/evil numbers for this reason, oui? Are there any physical constants apart from the charge of an electron that are negative?
For a long time folks tended to think of real numbers as magnitudes or the lengths of lines. Squaring [math] x [/math] was actually drawing a square. 'Quadrature' was similarly boxing up area.
Cognitive scientists believe that children are ready to learn negative numbers by the age of 11 or 12. That's sixth grade in US schools.
There are a host of websites dedicated to explaining multiplication of negatives to children of this age. The very first one that came up when I googled was something called "How to Adult". Here are a few ways they explain it :
I like this one :
I guess if it tickles you to contemplate negatives all day, have at it. There are just so many deeper notions in math to ponder, it seems a bit silly to me.
Try something else/new. I don't get it! :groan:
Sorry, I'm done. A basic math course might help.
Then again, it might not.
No problemo! Bonam fortunam homo viator!
If -4 x 2 equals -8, then -4 + -4 ought to equal zero. "-4 x 2" means negative four taken twice, and that is negative eight. But "-4 + -4" means negative four added to negative four, and to add necessarily takes you in the positive direction, while subtraction necessarily takes you in the negative direction. Therefore adding negative four to negative four ought to bring you to zero.
Quoting Agent Smith
What you say here is somewhat incoherent. When you multiply negative four by two, you simply take negative four twice, you do not add negative four two times. Multiplication does not involve addition, whatsoever, it is a distinct operation. It may be that you were told that multiplying is a matter of adding the number a specified number of times, but it is really not a case of adding. To take something twice, or three times, or four times, is not the same thing as adding. That multiplication is not a simple operation of addition becomes more evident when you do powers, or exponents.
[quote=Ms. Marple]Most interesting.[/quote]
Neti, neti! Please continue mon ami, please do! Take me/us for a ride in your fancy car. I'm so thrilled.
You know there is a problem with multiplying and dividing negative numbers, which results in imaginary numbers, adding another whole level of complexity to numbers. What does it really mean to take a number a negative number of times, 2 x -1 for example. That's a negation of an operation. So two times negative one ought to equal zero, because we've taken two and negated it once, to make zero. If we negate two twice, 2 X -2, we ought to have negative two. What if we take negative two twice though, "-2 x 2" ? That looks like -4. So does "2 x -2" mean something different from "-2 x 2"? Multiplication isn't necessarily reversible.
That's a totally different take on negatives. Is it possible to construct a coherent/consistent system of operations ( +, -, ×, ÷) on negatives using that?
I don't know, I'm not a mathematician, and I wouldn't want to try. Here's a couple things to keep in mind though. The symbols "-" and "+" mean something different when they are used to indicate negative and positive numbers, from what they mean when they indicate operations, addition and subtraction. Also, the numerals mean something different when they are used to represent an order (ordinals), from what they mean when they represent a quantity (cardinals).
When we use numerals to represent quantity, zero allows for the potential for a quantity of the specified type of thing, as none of that thing. This allows that a negative quantity of the thing has valid meaning, as the potential to negate a specified positive quantity. But when we use numerals to represent an order, zero doesn't receive any coherent meaning. In general, "1" would represent the first of an order, and it doesn't make sense to place zero as prior to the first because this would negate the order altogether, as no order. Nor would it make sense to place zero as prior to the order, as the potential for that order, because then it cannot represent a part of that order.
The logical thing would be to use zero as the dividing point between the order, and the reversal of the order. So zero would, in a sense, represent the potential for the order, and also the potential for the inverse of the order. But it cannot appear within the order as part of the order, neither forward nor backward. Now the negative numbers would represent an inverse order which is exactly opposite to the positive order. It would be impossible for operations on the positive side to cross over to the negative side, or vise versa, without correcting for the reversal of the order. The means for correction being specific to the type of order being represented.
So for example, take "order" in the most general sense. Two minus three seems to imply a crossing of the order's boundary. But we cannot allow that, without setting up the conditions for the order's reversal. What is three places prior to the second place? This is one step before the first (the first being the beginning). Notice the order now, 2, 1, -1. The -1 signifies one place in the inverse direction, one step before the beginning. Zero cannot occupy a place here because that would annihilate the order altogether. What happens with "2-2"? That means two places before the second place. And this negates the order altogether, giving zero a place, but only when the order is completely annihilated altogether.
I think the common convention is to just give zero a place in the order, no different from ten or any other number. But this denies any real separation between negative and positive numbers, making them all just a part of an endless order, upward and downward. No beginning to the order. Then, when operations are carried out, the numerals tend to represent quantities, and the real meaning of zero relative to quantity (as described above) is lost, because of zero's faulty positioning as part of an order. So there is a conflation of numbers representing order (in which zero makes no sense as part of), and numbers representing quantity, where zero of a specified quantity makes sense. That's a type of equivocation.
However, as noted, it could also be the case that positive numbers also do operate within a context, just a more invisible one. What I want to consider is that there are actually two different mathematical concepts that are being conflated when we look at the nature of positive numbers. And it is that of magnitude and signedness.
My claim is that when we do ordinary math like counting, we aren't actually operating on "positive" numbers per se, but rather unsigned numbers, or magnitudes. And that is the context we operate in normally, that of magnitude. However, when we want to consider negative values, we introduce a new context, that of signedness, and this is the "weird" context that makes negative numbers seem one step removed from unsigned numbers.
Of course, when we do math, we don't really make such distinctions between unsigned or signed numbers: numbers are always signed, and so, distinct from their oppositely signed counterparts. That is to say -1 isn't 1 with - sign, -1 is a completely distinct entity from +1. What I wonder then, is if there's any merit to make such distinctions, perhaps from a philosophical perspective. Clearly, the math works out and doesn't care about our intuitions. But if we can get a finer grasp on the nature of such entities, it could inform our philosophical considerations.
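The magnitude/signedness distinction can be sketched with a toy type that refuses to go below nothing, in contrast to ordinary signed integers (the class is my own invention, purely illustrative):

```python
class Magnitude:
    """An unsigned count: an amount we can measure, never less than nothing."""

    def __init__(self, n):
        if n < 0:
            raise ValueError("a magnitude cannot be less than nothing")
        self.n = n

    def __sub__(self, other):
        # Subtraction is only defined while the result stays countable.
        return Magnitude(self.n - other.n)

apples = Magnitude(3)
assert (apples - Magnitude(2)).n == 1

try:
    apples - Magnitude(5)       # no such count of apples exists
except ValueError:
    pass

# Signed integers, by contrast, carry on into the new context:
assert 3 - 5 == -2
```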
Actually, the Hindus introduced negative numbers around 628 to represent debts. Positive numbers represented assets. Euler, in the latter half of the 18th century, still believed negative numbers were greater than infinity.
(I can't wait to see all the action when you guys move on to FRACTIONS :scream: )
That the Hindus did commendable work on negative numbers is part of mathematical canon. Thanks for the memory refresher!
:up:
As someone mentioned elsewhere, negative numbers are typically built within set theory as equivalence classes of pairs of natural numbers, so they are very much one level up.
So -2 := { (2,4), (3,5), (4,6),...}.
It's also possible to declare that every number in a given system has an additive inverse. Then the negative sign is like a function f that transforms a number into its additive inverse, so that, for instance, -x = f(x) and x + f(x) = 0 for all x. Note that -x also has an additive inverse, which is f(f(x)) = x.
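That description translates directly into code (a trivial Python sketch):

```python
def f(x):
    """The negative sign as a function: send x to its additive inverse."""
    return -x

for x in (0, 1, 7, -3, 2.5):
    assert x + f(x) == 0   # defining property of the additive inverse
    assert f(f(x)) == x    # the inverse of the inverse is x itself
```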
-2 × -3
= 0 - (-3) - (-3) [subtract -3 twice from 0]
= [(+6) + (-6)] - (-3) - (-3)
= +6 + [(-6) - (-3) - (-3)]
= +6 + [(-3) - (-3)]
= +6
:snicker:
Can I ask you where you got this from? I know Euler played fast and loose with infinite series, but I can only find this bit about negative numbers mentioned on an obscure Wikipedia comments page. Since Euler is one of the greatest minds mathematics has ever seen, this seems like an odd mistake.
I've read that about Euler in more than one source myself, so it's out there if you decide to hunt it down. It's not that Euler was stupid, but maybe the reverse. Things were that unsettled then. Functions, continuity,...not strictly defined yet...
That Euler and other great mathematicians thought such things was the whole point of this thread in the first place. Is there no insight to be gained by understanding why the idea of a negative eluded such minds for so long? Also, although the rationals contain the integers, fractions are simpler as a concept, just given that they've a far longer history in mathematics. So this fraction meme you guys are doing is backwards.
Quoting Real Gone Cat
I forget whether or not it was Euler who made that claim, but mathematicians also argued against them in terms of ratios. It seemed ridiculous to them that the ratio of a greater to a lesser (1 : -1) could be the same as a lesser to a greater (-1 : 1). I can't precisely pinpoint the mistake being made there, though there obviously is one.
Quoting Pie
For me, these kinds of constructions raise a lot of questions about the ontology of mathematical objects. Does the fact that certain entities are "prior" to others in these formalisms have any bearing on how we view physical reality? Like how magnitudes (positive numbers) seem natural, but signed values seem synthetic. Also:
Quoting Pie
:up: Had Euler really never heard of debt before? And would our examples of holes and sea level and temperature convince him otherwise?
I venture that he'd have had no problems understanding that intuition. The issue was probably multiplication and the shape of the number line?
"God made the (positive) integers." That feels right to me, but in the end we have to settle on formal systems...or sacrifice the norms that make mathematical conjectures relatively unambiguous in the first place.
Found a reference citing the sequence
[10 / (1/n)] as n -> infinity,
with the claim that it eventually becomes negative.
(don't know how to code math symbols, and don't have the time to look it up).
If that sequence is due to Euler, it's a classic case of Euler's lack of rigor regarding infinite sequences and series. Yes, as n -> infinity, 1/n -> 0 and the sequence approaches infinity.
But n -> infinity does NOT imply that 1/n will become negative at some point. That's not the way limits work! 1/n never becomes negative, and so the sequence itself never becomes negative.
What stories like this point out is that there was a considerable lack of understanding regarding infinite sequences and series at that time. Even so great a mind as Euler's made a hash of it. And the nonsense continues to this day. A claim that 1 + 2 + 3 + ... = -1/12 was made recently on this very forum. In tracking down the source, I came across an explanation that started with the ridiculous statement that
1 - 1 + 1 - 1 + 1 - ... = 1/2. That's insane! I often teach that particular series to Calc II students as an example of a divergent series. We should know better now.
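You can see the divergence in a few lines of Python (my own quick illustration). The partial sums of Grandi's series just oscillate between 1 and 0 and never settle; the value 1/2 only shows up under a *different* notion of summation (Cesàro means, i.e. averages of the partial sums):

```python
# Partial sums of Grandi's series 1 - 1 + 1 - 1 + ... oscillate
# between 1 and 0, so the series diverges in the ordinary sense.
terms = [(-1) ** k for k in range(10)]           # 1, -1, 1, -1, ...
partial_sums = []
s = 0
for t in terms:
    s += t
    partial_sums.append(s)

print(partial_sums)   # [1, 0, 1, 0, 1, 0, 1, 0, 1, 0] -- never settles

# The Cesaro means (running averages of the partial sums) do
# approach 1/2, which is where the "= 1/2" claim comes from:
cesaro = [sum(partial_sums[: k + 1]) / (k + 1) for k in range(10)]
print(cesaro[-1])     # 0.5
```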
To me it's not that surprising. I'm assuming you've studied real analysis. Imagine trying to do that without the axioms of [math] \mathbb{R} [/math].
To format math, just use "math" where you'd otherwise use "quote" (in the proper brackets) and proceed with the usual Latex code.
Which gives rise to the obviously correct supposition that mathematics is empirically based. Possibilist and fictional, but still empirically based.
:up: re the math symbols.
What surprises me is that Euler was so far ahead of his contemporaries in most areas, but seemed to have weird blind-spots from time to time.
I still don't see the confusion over negatives and their operations, but then I "do" math every day. Oftentimes familiarity makes it difficult to see how others must view the same. Thinking about this particular topic, one of the clearest explications (for me) comes from business, accounting to be precise:
It's well-known that profit = revenue - cost (clearly, revenues are positive, costs are negative). So finding new revenue is positive for a company (adding a positive is positive). Losing revenue is bad for profit (subtracting a positive is negative). Adding a cost is also bad (adding a negative is negative). Finally, removing a cost increases profit and is positive for the company (subtracting a negative is positive). Removing multiple costs (multiplying negative by negative) is also a positive.
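Those bookkeeping rules can be written out with signed values. Here's a quick Python sketch with made-up numbers, just to show the sign conventions in action:

```python
# Signed-value bookkeeping (illustrative numbers only):
# revenues enter as positives, costs as negatives.
revenue = 800
cost = -300                  # a cost, stored as a negative value

profit = revenue + cost      # adding a negative lowers profit
assert profit == 500

# Subtracting a negative (removing the cost) raises profit:
assert profit - cost == 800

# Removing two -100 costs changes profit by (-2) * (-100) = +200,
# the "negative times negative is positive" rule in action:
assert (-2) * (-100) == +200
```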
I don't know if it helps, but that's (probably) my last comment on this. Maybe I'll start a topic proving square circles exist. Hmm ...
Mathematical Thought From Ancient to Modern Times, Morris Kline, Oxford University Press, 1972
:up: Thanks
Had an exchange with Pie about this. As you are obviously aware, Euler was one of the greatest minds mathematics has ever known, but infinite sequences and series were not always his friends. I'm not an Euler scholar, though I have several works discussing his many contributions. I must have missed this one.
A very good question. Might this understanding help in current studies in QM? Are there blind spots in our conceptual apparatus that prevent us from comprehending quantum entanglement?
On the other hand, would this be yet another philosophical journey into the past, analyzing what others did centuries ago, but with no relevance to the modern world?
I wouldn't count on broad support from the mathematical community for such a quest.
It seems odd that if I say there's a square with an area of 4 cm[sup]2[/sup], one value for its side, viz. -2, is treated as if it were nonsense. A square with a side length of -2?! Pffft!
And yet, when introducing children to the concept of negatives, we use a line to wit the number line.
In one sense, geometry's anti-negative numbers and in another sense it is pro-negative numbers!
What up with that? Anyone have any ideas?
In Euclidean geometry, there is no such thing as a length magnitude of -2. Negation only indicates the direction of the magnitude in relation to a coordinate system. Hence it isn't surprising that by convention constants aren't signed.
To paraphrase and restate what I said earlier, the evolution
Whole Numbers -> Naturals -> Integers -> Rationals-> Reals -> Complex Numbers
accommodates increasingly general uses of arithmetic, which in my opinion and following Wittgenstein's general philosophy, is best understood in terms of games of increasing generality .
The starting intuition that makes the Whole Numbers so compelling initially, coincides with the picture theory of meaning and the reference theory of meaning: Whole numbers are used to denote the process of counting, whereby a number is assigned to a particular object without consideration as to how the object relates to other objects or how the object is used; relative to this semantics, the concepts of 'zero' objects and 'negative' objects make no sense. Also, recall that the whole numbers and integers have the same cardinality. So in the context of counting, they are equivalent.
The Naturals mostly cling to this early intuition, but introduce a 'zero object' to accommodate the concept of balance, say when using weighing scales, and also to denote the situation that exists prior to counting anything.
The previous introduction of zero motivates the construction of Integers with additive inverses, which leads to rejecting the earlier intuition outright; instead of using whole numbers to refer to entities, they are used to represent interactions between two entities, whereby an equation can express the net result of their interactions. So the shift from Nats to Ints marks the shift from denotational semantics to inferential semantics; but this is strictly in the context of exactly two interacting parties, which is reflected in the fact that the negation operator exactly reverses the direction of a given interaction, respecting the law of double negation, e.g. -(-1) = 1.
In a three-player game, say between Alice, Bob, and Carol, an interaction from Carol's perspective has 2 dimensions: a vertical dimension whose positive and negative values respectively denote Carol giving to and receiving from Alice, and a horizontal dimension representing Carol giving to/receiving from Bob. Thus Carol has 4 combinations of directions to consider, which implies that negation for three-player games must respect a law of quadruple negation, motivating the construction of complex natural numbers.
The rationals generalise the integers by providing denotational semantics for divided objects, e.g. a cake eaten by two agents, and the reals generalise the concept of divided objects to the concept of processes of dividing, albeit in a flawed way. The complex numbers over the field of reals accommodate everything previous.
Fractions are pure nonsense, from the get go, so we'll probably never get there.
Quoting Agent Smith
This is the very problem I referred to. The number line shows us an order, and this order gives zero a place. But zero has no place within an order, because it would mean that there is a position of no order within that order, which is contradictory. Set theory suffers this problem, which I discussed extensively with fishfry, who insisted that a set with no order is a coherent concept.
In common usage though, negative numbers are used to represent quantitative values, and here zero has a justified meaning. So it is the equivocation in usage, between "negative numbers" representing quantitative values, and "negative numbers" representing positions in an order, which causes a problem.
I think it's fair to assume that mathematicians prefer concepts to generalize as smoothly as possible, but it's just not always clear how to make a metaphor work smoothly in every possible context. I think we agree that negative numbers are intuitive for adding and subtracting. The tricky part is multiplication. If one gets to the point of realizing that multiplication by [math] i [/math] is a rotation by [math] \pi /2 [/math], then one can have a pretty sensible Cartesian plane.
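Python's built-in complex numbers make the rotation picture easy to play with (a quick illustration of my own): multiplying by i turns a point a quarter turn counterclockwise, and two quarter turns land you at -1 times the original, which is the geometric reading of "negative times negative".

```python
# Multiplying by i rotates a point in the plane 90 degrees
# counterclockwise; multiplying by -i rotates it clockwise.
z = 3 + 4j                        # the point (3, 4)

assert z * 1j == -4 + 3j          # quarter turn counterclockwise
assert z * 1j * 1j == -z          # two quarter turns = 180 degrees
assert z * 1j ** 4 == z           # four quarter turns = full circle
assert z * -1j == 4 - 3j          # quarter turn the other way
```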
To fix the conceptual issue with taking square roots, you can just embrace the idea that magnitudes are positive and ignore the negative numbers altogether. When teaching algebra, it's sometimes convenient to ignore the complex numbers and speak of no (real) solutions. Complex solutions may not make sense in the practical context, and it's the same with negatives.
Consider dual numbers, which are useful for automatic differentiation...but not much else, so far as I'm aware. You can train neural networks more intuitively with these, but forward-mode differentiation is not as computationally efficient as backprop when there are many parameters.
https://en.wikipedia.org/wiki/Dual_number
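Since dual numbers came up, here's a minimal sketch in Python (my own toy; the names `Dual` and `f` are made up). The whole trick is that ε² = 0, so carrying (value, derivative) pairs through ordinary arithmetic computes exact derivatives as a side effect:

```python
# Dual numbers a + b*eps with eps**2 = 0: forward-mode
# automatic differentiation "for free".

class Dual:
    def __init__(self, value, deriv=0.0):
        self.value, self.deriv = value, deriv

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps, since eps**2 = 0
        return Dual(self.value * other.value,
                    self.value * other.deriv + self.deriv * other.value)

    __radd__ = __add__
    __rmul__ = __mul__

def f(x):
    return x * x * x + 2 * x      # f(x) = x^3 + 2x, so f'(x) = 3x^2 + 2

x = Dual(4.0, 1.0)                # seed derivative dx/dx = 1
y = f(x)
assert y.value == 72.0            # f(4)  = 4^3 + 2*4
assert y.deriv == 50.0            # f'(4) = 3*16 + 2
```

No limits, no symbolic algebra: the derivative just falls out of the product rule baked into `__mul__`.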
:down:
You are getting lost in metaphors and intuitions, as if a checkmate were illegitimate because the bishop involved was never baptized.
:up:
Consider also the joys of being a crank. If I can make a case that all the geniuses got it wrong, then what's that make me ? "All of math is a contradiction [,and only the great Me can see it]." To be fair (sounds like we've both taught/teach math), students (who want a grade to get a degree to get a job) often actually struggle, thankfully mostly with a humility that makes it possible to correct them.
So -4 = [math]i \times i \times 4[/math]. A u-turn of whatever +4 is; repeat the offense and it's back to square one! :lol:
Nice!
As is obvious to you, zero is hard, but not impossible, to grasp. Remember the calendar starts from 1 AD and so, technically, this year is 2021 Anno Domini! Go figure!
Even then, what seems to be an important pattern caught my eye. Numbers were initially 1 dimensional (the number line) i.e. the reals. Then they became 2 dimensional (the complex plane). Do you discern a pattern? It appears that our number system is as of yet still incomplete!
I don't quite understand the counter to analyzing past mathematicians views on this though. Why else do we study philosophy, especially the ancients? Historical inventory for sure, but we often learn things ourselves by studying their thoughts (of course, sorting the good ideas from the bad). Furthermore, history does repeat itself. It seems we accept negative numbers now on a similar footing as whole numbers, but complex numbers are still pretty hotly debated as to whether we should consider them as real as the real numbers. Whether or not it's an important issue that requires support from the mathematical community, that's why I'm on a random philosophy board and not writing letters to my local university or something.
:up:
Note that you can also go the other direction too using [math] -i [/math].
https://en.wikipedia.org/wiki/Octonion
I think this supports my case that the lack of formalization of real analysis was part of the problem. Of course we can't divide by zero, but we can take limits, which is more reliable when they've actually been strictly defined. Weierstrass is one of my heroes.
:up:
I dabbled in nonstandard analysis for a week or two, but induction on hyperreal integers was so unintuitive that it felt like cheating. In case you haven't looked into it, a hyperreal is an equivalence class of sequences of real numbers which are equal on an ultrafilter (a weird kind of collection of subsets of [math] \mathbb{N} [/math]; you can't actually construct one of these ultrafilters, only prove they exist somewhere out there). Anyway, there are all kinds of infinities and infinitesimals in this system, and it's just as solid as the ordinary real numbers logically. Cool... but I couldn't take it seriously, lost heart. Too weird, too unreal.
You are correct. This is revealed on page 253 from a work by Wallis in 1655. My reference is later in the book, page 593, concerning what Euler thought in 1750+.
Quoting Jerry
Good point. Apokrisis mentioned this in a previous post, regarding Penrose's fascination with complex numbers.
Most mathematicians seem to just take zero for granted, with zero understanding of what "zero" means. But of course, as I explained, the meaning of "0", as it is commonly used by mathematicians, is ambiguous.
How true. It means nothing to me. :sad:
How abso-fucking-lutely fascinating! :up:
Only in the sense that they have so many exact, formal systems that successfully employ zero that you'd want to know which successful specification of the concept was context relevant.
It's like the pot calling the kettle black when metaphysicians chide mathematicians for ambiguity.
If your goal is deception, ambiguity is a very useful tool. Therefore successful employment of the term does not indicate that the term is not ambiguous.
Edit: Whoops, gettin' old. There are two points that are exceptions: zero and infinity.
Zero is an essential singularity of this function.
It is mind-blowing if you're into that stuff, but I'd say it's not at all surprising.
Cool fact.
I never seriously explored complex analysis, but I'm dimly aware of some spectacular theorems.
What's :scream: to one is just :yawn: to another, eh? I quite like that! The most likely reason is that, sensu amplo, some of us are from another frigging planet/time! Been there, done that! :meh:
It seems important to distinguish difficult from boring, oui mes amies?
:snicker:
One of the strangest elementary features of the complex plane is the point at infinity. No matter which direction you go, if you keep moving outward, like beyond an expanding set of circles centered at zero, you approach a "point" at infinity. This is "true" since whatever is out there corresponds via projection to the north pole of the Riemann sphere. :cool:
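The projection is simple enough to compute directly. Here's a little Python sketch of my own using the standard stereographic-projection formula: z = x + iy maps to (2x, 2y, |z|² - 1) / (|z|² + 1) on the unit sphere, and as |z| grows, the image approaches the north pole (0, 0, 1) from *every* direction:

```python
# Stereographic projection of the complex plane onto the unit sphere.
def to_sphere(z):
    n = abs(z) ** 2
    return (2 * z.real / (n + 1), 2 * z.imag / (n + 1), (n - 1) / (n + 1))

assert to_sphere(0j) == (0.0, 0.0, -1.0)       # origin -> south pole

# Head far out in several different directions; every image
# clusters near the north pole (0, 0, 1):
for direction in (1, -1, 1j, 1 - 1j):
    far = 1e6 * direction / abs(direction)     # distance 10^6 from 0
    x, y, h = to_sphere(far)
    assert abs(x) < 1e-5 and abs(y) < 1e-5 and h > 0.999999
```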
There is no point at infinity in the complex plane. That point is by definition outside the plane. To allow it in is to break the rules of the structure. There is no north pole in the Riemann sphere. This is a simple result of the incommensurability between the curved line and the straight line. A tangential line can never actually touch the curved line at a point, because the curved line requires multiple points to express its curved nature, in relation to the straight line. So there cannot be a "point" on a curved line, in the same way there can be a "point" on a straight line. Likewise, there is no centre point of a classical two dimensional circle, as indicated by the irrationality of pi. The one dimensional and the two dimensional are fundamentally incompatible.
This is why zero, like infinity, has no place within ordinal numbers, and must be excluded. "Order" is something other than the numbers which represent it, and at those supposed points, zero and infinity, which mark the ends of the order, the represented order is excluded.
Quoting Metaphysician Undercover
I admire your certitude. It must be nonplussing to watch the world of science evolve using a flawed intellectual mechanism. :confused:
There's a lot more than one flawed intellectual mechanism out there. And what has happened is that these flawed intellectual mechanisms have led us to dead ends in the evolution of the world of science.
Dead ends are where our attempts to understand can go no further due to the faulty principles being applied, like the dead end which has been reached in quantum mechanics. Dead ends are an integral part of evolution because evolution is a process based in trial and error. The dinosaurs got bigger and bigger, but bigger wasn't necessarily better. In the case of the evolution of scientific understanding, we get the opportunity to look back and find those faulty intellectual mechanisms, and how they led us in the wrong direction.