Reading "The Laws of Form", by George Spencer-Brown.
Laws of Form responds to a long-standing mathematical paradox concerning infinite collections. At the end of the nineteenth century, the founder of set theory, Georg Cantor, found that the infinite is itself differentiated: infinite sets come in different sizes. Moreover, if one collected together all the infinite sets, one would again obtain an infinite set, but one whose size, including itself when counted, must be larger than any countable (cardinal) number (Davis, 2000, p. 67). This problem led Russell to consider paradoxes in logic and the membership of sets. Extraordinary sets are self-including, such as the set of all things that are not sparrows: this set itself also belongs to all things that are not sparrows. Ordinary sets, on the other hand, have no such self-referentiality, for instance the set of all things that are sparrows, which, clearly, does not include the set itself. But what about a set containing all ordinary sets? Would that not at once have to be larger than the number of all the sets it contains, as it, itself, is one such set (Davis, 2000, p. 67)? As Russell (1919, p. 136) states:
The comprehensive class we are considering, which is to embrace everything, must embrace itself as one of its members. In other words, if there is such a thing as everything, then, everything is something, and is a member of the class everything.
Whitehead and Russell's (1910) Principia Mathematica proposes a stopgap intervention by excluding such paradoxes from the domain of logic: sets cannot be members of themselves!
Spencer-Brown's biography placed him directly into this debate: he had worked with two of the foremost logicians of the time, Russell and Wittgenstein. His solution to the problem took shape while he worked on practical electrical-engineering assignments.
https://livrepository.liverpool.ac.uk/3101665/1/Spencer%20Brown%20submission%20(1)%20(1).pdf
The above is not the book, but something of a biographical note.
Here is the Book:
http://www.siese.org/modulos/biblioteca/b/G-Spencer-Brown-Laws-of-Form.pdf
No special rules. Read, comment, ask questions, as thou wilt. We might get somewhere, and we might not. In your own time, then...
Comments (138)
I don't know if it's really up my alley, what with its background in mathematics, but I feel that the more mathematically-literate members of the community might have an interest in it. But I will endeavour to absorb some of it; hopefully some spark of inspiration might be communicated.
A poor idealist's Tractatus?
1. The pdf I linked won't allow quotes.
2. My keyboard does not have the cross symbol.
Not fatal, but annoying.
For 2 I think we could use brackets, thus:
[ ], [[ ]], [[ ] [ ]], [a], [[[a] [[b] [c]] [ ]]] Not very clear, and it might be better to alternate square and curly by depth, thus:
[{ } { }], [{[a] [{b} {c}] [ ]}]
I still don't like it much, any better ideas? Does mathjax or whatever it is have the solution?
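For what it's worth, the bracket workaround maps neatly onto nested tuples. Here is a small sketch of my own (not anything from the book): a tuple stands for a cross, its elements are its contents, and strings stand for variables.

```python
# My own stand-in for the cross (not the book's notation):
# a tuple is a cross, its elements are its contents, strings are variables.

def show(expr):
    """Render a nested-tuple expression in the square-bracket notation."""
    if isinstance(expr, str):
        return expr
    return "[" + " ".join(show(e) for e in expr) + "]"

mark = ()                       # the empty cross
print(show(mark))               # []
print(show((mark, mark)))       # [[] []]
print(show((("a",), mark)))     # [[a] []]
```

Not pretty either, but at least it keeps the nesting unambiguous without needing any special symbol.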
Quoting Banno
Boolean logic is developed from the very simplest foundations, and then extended with imaginary values. But I don't think anyone ever solved an actual problem using Tractatus...
Come along for the ride. Maybe you can help us get as far as you did, Maybe we can help each other get a bit further...
I think if you download said pdf and open it in an Acrobat application, you can do a lot more with it, like copy text from it, etc. I happen to subscribe to Adobe Acrobat Pro for about 30 bucks a month as I'm a tech writer (and it's hellishly expensive to actually buy) and it's a tool of the trade, but there are other PDF apps that allow you to copy text, which you might not be able to do inside your browser. Also, if you download a PDF and save it, you will find that MS Word can open it, thereby converting it to text (although I don't know how it would cope with all the special characters and typesetting in this particular text).
In respect of the Cross symbol, there *might* be some combination of characters that stands for it, although that's totally off the top of my head. Worth researching it though.
I'm still um-ing and ah-ing about whether to really try and get into it, as my back-list is perennially full of things I should have read already. But that video series is golden - apart from the splendid voice, he's a discovery in his own right; he seems an exceedingly erudite and learned gentleman in the literary arts.
[math]\left. {\overline {\, a \,}}\! \right| [/math]
From MathType. Place math and /math, enclosed by square brackets on either end and try: \left. {\overline {\, a \,}}\! \right|
I'm not a logician, mathematician, or electrical engineer, but I am somewhat informed on the philosophical concept of Form. Especially as it applies to essential or causal Information --- To Enform : the act of creating recognizable forms : designs ; patterns ; configurations ; structures ; categories. Generic Information begins in the physical world as mathematical ratios (data points ; proportions, 1:2 or 1/2) in a starry sky of uncountable multiplicity. Hence, we begin by clumping cosmic complexity into symbolic zodiac signs relating to local significance. In an observing mind, that raw numerical data can be processed into meaningful relationships (ideas ; words). Or, in a mechanical computer, those ratios are analyzed reductively into either/or (all or nothing) numerical codes of digital logic : 100% true vs 0% true. This is probably the most elemental form of categorization, ignoring all degrees of complexity or uncertainty.
The Wikipedia article on Brown's book, Laws of Form, notes a primary requirement for the human ability to know (grasp intellectually) any Form in the world: first, "draw a distinction"*1. Rather than sketching an arbitrary encirclement, this precondition seems to assume that the categorizing mind is trying to "carve nature at its logical joints". First a particular "form" (thing) must be selected (differentiated) from the universal background (the incomprehensible multiplex) of manifold Forms (holons) adding up to a complete system (universe ; all-encompassing category). A holon (e.g. steak) is a digestible bit or byte from a larger Whole Form (e.g. cow), a comprehensible fragment. Human Logic requires a rational (ratio-carving) knife & fork for its comestibles. But, is the world indeed inherently logical in its organization, or do we have to use the axe-murderer approach : whack, whack?
Semiotician Gregory Bateson defined Information as "the difference that makes a difference"; referring to personally significant meaning in the subjective mind. Plato's theory of Forms defined them, not as phenomenal objects, but as noumenal categories of thought : "timeless, absolute, unchangeable ideas". Aristotle went on to classify human thoughts into distinguishable categories*2. More recently, modern neuroscientists have attempted to discover how the human brain filters incoming sensations into recognizable "classes of things"*3 (e.g. dog vs cat ; apple vs orange). Each of those categorical Forms is a meaningful distinction for the purposes of a hungry human mind.
Brown's book is over my head, but the notion of logical categories seems to be necessary for understanding how the human mind works as it does. And that need for pre-classification may provide some hint as to why we tend to overlay the real world with an innate template, in order to begin to understand its complexity of organization. First, we draw a circle around a small part of the whole system. Then, with manageable pieces, we can add them up into broader categories, or divide them into smaller parts, right on down to the sub-atomic scale, where our inborn intuitive categories begin to fall apart, becoming counter-intuitive. Hence, the weird notion of Virtual Particles. Is there a natural limit on our ability to encapsulate? Or can we go on imagining novel Forms forever? :smile:
*1. Laws of Form :
"The first command : Draw a distinction"
https://en.wikipedia.org/wiki/Laws_of_Form
Note --- In mathematics the distinctly-defined categories, of things that logically go together, are called Sets. However, so-called set theory paradoxes are not necessarily logical contradictions, but merely counter-intuitive. Does that mean the human mind can imagine sets or categories that don't fit into the brain's own preformed pairings?
*2. Aristotle's Categories :
Hence, he does not think that there is one single highest kind. Instead, he thinks that there are ten: (1) substance; (2) quantity; (3) quality; (4) relatives; (5) somewhere; (6) sometime; (7) being in a position; (8) having; (9) acting; and (10) being acted upon
https://plato.stanford.edu/entries/aristotle-categories/
Note --- Perhaps the "single highest kind" of category is the universe itself.
*3. Category Learning in the Brain :
The ability to group items and events into functional categories is a fundamental characteristic of sophisticated thought. . . . . Categories represent our knowledge of groupings and patterns that are not explicit in the bottom-up sensory inputs.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3709834/
Note --- Our incoming sensations are typically randomized by repeated interactions & reflections (e.g. echoes). So the brain/mind must sift out the grain from the chaff. Hence, evolution seems to have winnowed the winning organisms down to a few with the "right stuff" for correctly categorizing the fruits & threats of the game of life. Those inputs may include novel Forms that our ancestors never encountered in eons of evolution. So how can we make implicit Logical patterns explicit enough for categorical assimilation?
First, we have the text, and we have the videos from Wayfarer above, and I at least am not going to attempt much further exposition or teaching as such, nor actually to use the system - I am not competent to.
The meat of the book is a formal system. Spencer-Brown might have claimed, "The" formal system. The reason for saying that is that he starts from as near to nothing as possible, and from this almost nothing manages to 'prove' many of the axioms of other formal systems in common use. And this is one of the difficulties of the book: a laborious effort is undertaken to show the very simplest, most obvious things that we have taken for granted since forever. One tends to read along thinking one has understood, and then one reaches a blank incomprehension at some point...
Formal systems always begin like the voice of God, commanding the world into existence: "Let there be light!" So the text intends you to create a universe in your mind of a particular form, and although it necessarily does so through an already shared language, it intends you to keep all the distinctions of your experience and language that you use to read the text separate and outside the new universe that you create according to the text. First prepare a blank space in the mind, with no thought in it: and begin.
But this is already Chapter 2! There is no way it can be the first distinction, because already in chapter 1 we have distinguished what a distinction is, and produced a pair of axioms.
Of course, it has to be confessed that we are not gods, and that our minds are not blank; so we have already made a distinction in the mind between the ordinary mind full of thoughts and distinctions and the blank space of the mind in which we are going to construct this formal system. We insist on the 'continence' of that distinction, that we will keep out all our everyday thoughts, and we maintain that continence by calling our new distinction 'the first'. And that distinction is mentioned again as the unwritten mark under which all this formality subsists.
What appears on the blankness of the paper, or in the emptiness of the mind is a mark, that is a name, a boundary, and an instruction all at once because those things cannot be distinguished. And all we have to help us is our axioms.
Axiom 1. Philosophy[of science] and philosophy[of religion] are philosophy.
Axiom 2. Philosophy of philosophy is not philosophy.
So in the philosophy of science one asks 'what is science' and tries to answer, and in the philosophy of religion, one asks , what is religion, and tries to answer, but in the philosophy of philosophy, if one asks what is philosophy, one has put into question the process of putting things into question, and silence is the best one can hope for. (you don't have to agree, I'm just giving shape to the way the axioms work with a familiar example.)
"And the evening and the morning was the first post."
A code with just a single undifferentiated signal carries no information. We can conceive of this as a single 1 observed/received an infinite number of times, or an infinite series of 1s. They both convey the same information, which is no information. There is no variance, all observations are identical and thus contentless.
Thus, pure undifferentiated signal, like Hegel's pure immediacy, pure undifferentiated being, collapses into nothing. Being and non-being are opposites and yet pure being collapses into nothing.
But this nothing is not empty, we have all being contained here, just devoid of determinateness. So this is like a sound wave on an oscilloscope. It's not the absence of a wave, but rather a wave of infinite frequency and amplitude. As we approach the limit of frequency, the peaks and troughs get closer and closer together, until eventually they are in the same place, cancelling each other out. This is a silence, but one that is [I]pregnant[/I].
The move to difference is what gives us more. From reading 1, over and over, to the combination of 1 and 0. Or as Hegel has it, being sublates nothing and we get the world of becoming, where being is constantly passing away into nothingness.
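That zero-information claim can be checked against Shannon's measure, for what it's worth (a standard textbook aside, not anything in Spencer-Brown): the entropy of a perfectly repetitive signal is zero bits, and it only becomes positive once a second symbol enters.

```python
from collections import Counter
from math import log2

def entropy(signal):
    """Shannon entropy, in bits per symbol, of an observed sequence."""
    counts = Counter(signal)
    n = len(signal)
    if len(counts) <= 1:
        return 0.0           # one symbol only: no variance, no information
    return -sum((c / n) * log2(c / n) for c in counts.values())

print(entropy("1" * 1000))   # 0.0 -- a pure run of 1s conveys nothing
print(entropy("10" * 500))   # 1.0 -- difference appears: one bit per symbol
```

So the move from undifferentiated 1s to the combination of 1 and 0 is literally the move from no information to information.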
Something I'm stuck on, from a first reading of the first two chapters, is the distinction between letting and calling. I think I have to read "Let" as "Call a function" or something like that. It's naming an instruction rather than naming a distinction.
EDIT: Actually, thinking that through -- calling a function, more generally, a relation, would just be a distinction with a map.
I get a different shape? More like, axiom 1 is about saying a thing (e.g. "Romeo!")... which, if you do it again, is only to reinforce that first statement (or state-naming!).
While, axiom 2 is about changing sides on the issue ("rather, thou art some other, that smell as sweet")... which, if you do it again, is only to undo that first change. And probably end up where it started.
Assertion and negation, basically?
'Let' is a command from on high. This is how it shall be henceforth: 'Let x be the number of angels that can dance on the head of a pin.' 'Let' happens outside the formal system to create it. 'Call' is an action that happens inside the system. You can call the distinction into being by making the distinction, that is, by writing the sign. And if you write it twice in a row, you call and recall.
and the distinction is:
Quoting bongo fury
It's completely abstract. It is cross (the boundary) and cross back, assertion and negation, on and off, 0 and 1, but what is important is that there is but one sign, that is the crossing of the boundary, the boundary itself and the mark of distinction that names the difference between the two sides of the boundary.
The significance of my little example is that it is close to home. This is a philosophy forum, and so we ought to have a very clear idea of what philosophy is, and therefore what it is not. But that turns out to be intractable and interminably controversial. But applying the idea of 'nesting' as 'negation' allows me to say very simply what philosophy is not, and why it is so difficult to be clear about in normal discourse.
The usual problem, and the problem with your picture, is that we tend to give already an equal meaning to the negative and positive. Here, there is no symbol for zero, and the nearest we can get is the something of something, or a blank space.
The other sense that is important in all this is the distinction between the observer and the observed. This gives a sense of the inequality of meaning and meaninglessness that is fundamental. This formal system is all about self-reference, and thus to make a distinction is to put oneself on the map. The mark in this sense is like graffiti on the toilet wall: "Kilroy was here." Significantly, Kilroy never indicates where he was not.
Leon-Konrad---Roots%252C-Shoots%252C-Fruits-%2528paper%2529.pdf
ibid.
[* The original presents here the equations denoted the form of condensation and cancellation respectively that can be found on page 5 of the Laws of Form, that I cannot reproduce here.]
Alright, that helps. So we have our meta-language which we're speaking now, and that differs from the formal system being created with the use of the meta-language.
Reading Chapter 1-2 (for some reason I'm finding them linked as I read this the first time -- like I can't talk about chapter 1 without chapter 2, and vice versa) again I can see the opening of 2 as a re-expression of Chapter 1, like The Form needed to be explicated before talking about forms out of the form, and the form takes as given distinction and indication which it also folds together as complementary to one another.
But then I get stuck right after "Operation" is introduced. "Cross" is a name for an instruction. Instruction, from just a bit before, leads to the form of cancellation. But what is the connection between states and instructions? Reading "Operation" again I'm reminded of the First canon, "what is not allowed is forbidden". The name operates already as an instruction.
And then I get stuck on "continence", even though that was part of the opening. "Continence" is the name of the only relation between crosses, and that relationship is such that the cross contains what's inside, and does not contain what is not inside of it.
But this is where I really got lost entirely: What is going on from "Depth" to "Pervasive space", or are these concepts that, like the first chapter, will become elucidated by reading chapter 3? Like a puzzle unfolding?
This is talking about the
[math]\left. {\overline {\, a \,}}\! \right|[/math]
So s_0 is the blank page surrounded by an unwritten cross. So in this example there are 2 crosses which pervade a, which is then named c. In this case that would be the pervading space.
Quoting unenlightened
So, following along with the axioms in chapter 2...
[][] = []
and axiom 2
[{}] = .
?
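As a sanity check on those two readings, here is a toy reducer of my own (nothing from the book) that applies them as string-rewrite rules on the bracket notation: condensation, [][] -> [], and cancellation, [[]] -> blank.

```python
def reduce_expression(s):
    """Rewrite with axiom 1 (condensation, [][] -> []) and
    axiom 2 (cancellation, [[]] -> nothing) until nothing changes.
    Every variable-free expression ends as '[]' (marked) or '' (unmarked)."""
    while True:
        t = s.replace("[][]", "[]").replace("[[]]", "")
        if t == s:
            return s
        s = t

print(repr(reduce_expression("[][]")))      # '[]' -- calling: still marked
print(repr(reduce_expression("[[]]")))      # ''   -- crossing: unmarked
print(repr(reduce_expression("[[[]][]]")))  # ''
```

The blank string standing for the unmarked state is the awkward part, which seems faithful to the book: there is no symbol for the unmarked state, only absence.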
I tried messing with 's bit of code and looking for tutorials but got lost in the information web. If there's an easy link to figuring out how to embed multiple crosses, @jgill, I'd be happy if you could pass it along, because it does look prettier, and if I can figure out the syntax it's probably not that hard to embed multiple crosses.
Part of me is wondering if we can read Axiom 1 as the line above, and axiom 2 as the crossing of the line above. So when we, while using the form of the meta-language to parse order, draw a segment from left to right that is the law of calling. And when we draw a segment perpendicular to the calling that is a crossing. So if we cross again we negate, but it's easier to see that when we embed the original cross within a series of crosses rather than a series of lines coming off of the original calling.
This makes sense at a purely formal level because they complement one another -- the calling and the crossing are perpendicular but simultaneously need one another in order to be a calling or a crossing. In a sense the perpendicularity of the crossing removes some of the form of space of the meta-language, but not quite because the space of this formal system is defined by the cross rather than by a set of axioms describing space. Perpendicularity can be defined by reference to the cross, rather than the other way about, and from that we can name the space "Cartesian" if we take space to have an infinite series of crosses. (not Euclidean, that would be harder, or at least different, I think) (a bit speculative here.... just trying to think through the ideas towards something familiar) ((Also -- it'd probably have to be two orthogonal and infinite cross-spaces to define the Cartesian plane))
Then Chapter 2 is the use of the axioms to draw a distinction -- a form taken out of the original form of calling-crossing. Which, from chapter 1, is perfect continence.
Is it right to read "construction" as what's happening in the rest of the chapter? That's the impression I get -- if distinction is perfect continence then drawing a distinction will accord with what is given -- distinction and indication. (interestingly, comparing 1 and 2, we can interpret the cross as a kind of circle, but with the space-properties of this formal, rather than a geometric, system)
But what we get is the space cloven by the first distinction is* the form, and that all others are following this form. The space is cleaved by a cross indicating/distinguishing, but distinction is the form by which we can indicate an inside or an outside. In a way we could look at the cross as a mere mark rather than an intent. It would have content but it would not be a* used signal.
The notion of "value" is really interesting to me. The value is marked/unmarked, at the most simple. The name indicates the state, and the state is its value insofar that an expression indicates it. And then with equivalence we are able to compare states through the axioms. At this point I think we can only hold equivalence between the basic axioms, which turns out to have an inside and an outside, and gives a rule for "condensation" and "cancellation". So in a way the value is just what is named at this point, but there's still a distinction to be had between marked and unmarked due to the law of crossing canceling rather than reducing to the original name.
Then the end of the chapter is what follows from everything before. "The end" as I'm reading it starts at "Operation" -- this is where we can now draw a distinction, having constructed everything prior, and it entails some properties about the system being built such as depth, shallowness, and a need to define space in relation to the cross.
*added in as an edit, was confusing upon a re-read
Yes, I deliberately started at chapter 2, because that's the point at which something happens. I could liken it to a new game we have to unpack - fun for all the family, and you're trying to understand the rules while I'm looking at the pieces. It's a construction set of nesting boxes, and the rules set out what you can do and what you can't do. Some of what is going on is making sure that you don't have a box half in and half out of another box, or a box that is inside a box it is also outside. That's continence - like brackets, you can have any number of them inside other brackets, and any number of brackets within brackets, but they mustn't overlap: { [ ] [ ] [ {} ] } is ok, but { [ } ] is incontinent - the inner square bracket is leaking out of the curly bracket.
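Incidentally, that continence rule is exactly the balanced-bracket check every programmer knows. A small illustrative sketch of my own (nothing of GSB's):

```python
def is_continent(expr):
    """True if every box closes inside the box it was opened in.
    Square and curly brackets are the boxes; anything else is ignored."""
    closer = {"[": "]", "{": "}"}
    stack = []
    for ch in expr:
        if ch in closer:
            stack.append(closer[ch])       # remember which lid this box needs
        elif ch in ("]", "}"):
            if not stack or stack.pop() != ch:
                return False               # a box is leaking out of another
    return not stack                       # no box may be left open

print(is_continent("{ [ ] [ ] [ {} ] }"))  # True  -- ok
print(is_continent("{ [ } ]"))             # False -- incontinent
```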
Forget about the 'a' for the moment; that comes later when we do algebra. For now it's computation/arithmetic: we have the mark, which we are also reading as a boundary between a marked and unmarked state, and also calling a cross (c), and interpreting as an instruction to cross the border. It's no more confusing than switching a switch to switch things around. :wink:
Quoting Moliere
Depth is easy: it's how many boxes within boxes we're at. Count the lines you have to cross to get out. Each line is a c for cross, each space is an s. This is just housekeeping - labelling the shelves in the cupboard.
But perhaps the way to understand is to read through first, and then go back and worry at the terms when you have a grasp of the 'idea of the game'. And all this 's' and 'c' is just a way of talking about
The idea of the game, at first, anyway, is that the stop light is on when the train is in the tunnel and off when the train is not in the tunnel. Mark, or no mark. And that game is what comes next.
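The bookkeeping can be made mechanical, too. Here is my own little sketch (using brackets for the crosses, as earlier in the thread) that counts how many lines you would have to cross to get out of the deepest space:

```python
def deepest_space(expr):
    """Depth of the deepest space in a bracket expression:
    crossing '[' takes you one box further in, ']' one box out."""
    depth = max_depth = 0
    for ch in expr:
        if ch == "[":
            depth += 1
            max_depth = max(max_depth, depth)
        elif ch == "]":
            depth -= 1
    return max_depth

print(deepest_space("[]"))                    # 1
print(deepest_space("[[[a] [[b] [c]] []]]"))  # 4 -- inside [b] or [c]
```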
*EDIT: I shouldn't get cocky, I just started.
[math]\left. {\overline {\, \left. {\overline {\, \left. {\overline {\, a+b \,}}\! \right| \,}}\! \right| \,}}\! \right| [/math]
\left. {\overline {\, \left. {\overline {\, \left. {\overline {\, a+b \,}}\! \right| \,}}\! \right| \,}}\! \right|
put in the math boxes on either end, as before.
[math]\left. {\overline {\, a \,}}\! \right|[/math]
[math]\left. {\overline {\, \left. {\overline {\, * \,}}\! \right| \,}}\! \right|[/math]
[s]*Been messing with it to try and figure out how it works, but updating the quote to reflect the code I'm using -- right now I'm uncertain why there's a gap between the top line and the cross-line in the embedded cross[/s] I Think I got it now.[s]I'm going to respond to this post to make it appear more user friendly though.[/s] See the post below for better instructions.
I notice that if I do not put anything but a space where the "*" presently is, I get a negative symbol popping up, and also I'm still uncertain where that gap is. My hope, in the long run, is to offer strings which people can simply copy-paste, with clear delineations for plug-and-play. If I'm running across a limitation rather than just messing up, then perhaps "*" could serve as a blank space? But that kind of ruins the effect too.
Yeah that makes sense, given how chapter 1 didn't even begin to make sense without chapter 2. I'll keep along. I'm still figuring out the accounting, and how to make the crosses pretty.
That would be really useful. I had a little go at getting an empty cross and failed miserably, but that sort of confirms the thesis that the world has fallen in love with symbolising the unmarked state and naming the nameless. 0 And thanks @Jgill for your assistance.
This is the difference between GS-B and Boole: there is no 1 here. Everything takes place in [I]The Hole in the Zero[/I], a largely irrelevant but excellent science fiction story from the same era.
(It inflates my petty smugness a wee bit that the implementation of the crosses utilises a series of brackets in roughly the same way I suggested we might do, but thought better of because the result was unreadable, even if the structure was right.)
Put the following code in between [math] and [/math] tags:
\left. {\overline {\, * \,}}\! \right|
It took me a second to get the syntax, but I read it this way: "\left." opens a sized-delimiter pair with an invisible (null) left delimiter, and "\right|" closes it with a vertical bar, which gives the downward stroke on the right-hand side of the cross. "{\overline {...}}" draws the horizontal line over whatever sits inside its braces, and the "\," on either side is just a thin space padding the expression under the line. The "\!" is a negative thin space that tucks the vertical bar up against the end of the overline.
Reading it from the middle outward, every level of the cross is the same unit, with the inner expression substituted for the *:
\left. {\overline {\, * \,}}\! \right|
And the others, while it's easy to get lost in the syntax as I did in my first attempt, are expansions upon this first function such that we put our single overline function with a right bracket into another version of itself, and on and on. I'll just post the code I used, though, because I think the above probably serves as a good enough users guide for copy-pasting the code.
[math]\left. {\overline {\, * \,}}\! \right|[/math]
The code used within the math brackets:
\left. {\overline {\, * \,}}\! \right|
[math]\left. {\overline {\, \left. {\overline {\, * \,}}\! \right| \,}}\! \right|[/math]
The code used within the math brackets:
\left. {\overline {\, \left. {\overline {\, * \,}}\! \right| \,}}\! \right|
[math]\left. {\overline {\, \left. {\overline {\, \left. {\overline {\, * \,}}\! \right| \,}}\! \right| \,}}\! \right|[/math]
The code used within the math brackets:
\left. {\overline {\, \left. {\overline {\, \left. {\overline {\, * \,}}\! \right| \,}}\! \right| \,}}\! \right|
And to construct the crosses in the Fourth Canon in chapter 3:
[math]\left. {\overline {\, \left. {\overline {\, * \,}}\! \right| \left. {\overline {\, * \,}}\! \right| \,}}\! \right|[/math] [math]\left. {\overline {\, * \,}}\! \right|[/math]
Which I did by copying the first code with a single cross, and then in place of the "*" I put the copy of the original code twice right where the original "*" was in the first code with a single cross. Then I just copied the code again in a separate Math bracket to have it sit alongside
EDIT: Or the code --
\left. {\overline {\, \left. {\overline {\, * \,}}\! \right| \left. {\overline {\, * \,}}\! \right| \,}}\! \right|[/math] [math]\left. {\overline {\, * \,}}\! \right|
There's something similar between this and using nested sets as representatives of numbers, I think. But then the value isn't numerical, but is rather the marked or unmarked state at its simplest. The first theorem of Chapter 4 points out that these initials are a starting point for building more complicated arrangements, and the simple arithmetic of the crosses is what's needed to make sense of the calculus of the crosses.
I'm going to try and work out the proof here by arbitrarily using this arrangement as "a" --
[math]\left. {\overline {\, \left. {\overline {\, \left. {\overline {\, s_d \,}}\! \right| \left. {\overline {\, * \,}}\! \right| \,}}\! \right| \,}}\! \right|[/math]
s_d is contained in a cross.
All the crosses within which s_d lies are empty, apart from the space that s_d itself occupies ("*" counting as the unmarked space).
The arrangement chosen uses both cases --
Case 1 -- there are two empty crosses, equally deep, next to one another underneath a cross, such that s_d could have been in either of them.
Case 2 -- the cross surrounding the two deepest crosses stands alone within another cross.
So using the steps of condensation and cancellation:
[math]\left. {\overline {\, \left. {\overline {\, \left. {\overline {\, s_d \,}}\! \right| \left. {\overline {\, * \,}}\! \right| \,}}\! \right| \,}}\! \right|[/math] --> [math]\left. {\overline {\, \left. {\overline {\, \left. {\overline {\, s_d \,}}\! \right| \,}}\! \right| \,}}\! \right|[/math] Condensation
[math]\left. {\overline {\, \left. {\overline {\, \left. {\overline {\, s_d \,}}\! \right| \,}}\! \right| \,}}\! \right|[/math] --> [math]\left. {\overline {\, s_d \,}}\! \right|[/math] Cancellation
And by the definition of Expression from chapter 1: "Call any arrangement intended as an indicator an expression" we can draw the conclusion that any arrangement of a finite number of crosses can be taken as the form of an expression. (since we're indicating the marked or the unmarked state)
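To convince myself of that conclusion I wrote a tiny brute-force evaluator (my own sketch, nothing from the text): every variable-free arrangement, represented as nested tuples, comes out either marked or unmarked.

```python
def cross_value(contents):
    """A cross is marked exactly when the space inside it is unmarked
    (crossing out of the unmarked state marks; crossing again unmarks)."""
    return not space_value(contents)

def space_value(crosses):
    """A space is marked when any cross standing in it is marked
    (condensation: two marks side by side count as one mark)."""
    return any(cross_value(c) for c in crosses)

print(space_value(((),)))           # True  -- [ ] is the marked state
print(space_value((((),),)))        # False -- [[ ]] cancels to unmarked
# [[[ ][ ]]] condenses to [[[ ]]] and cancels to [ ]: marked
print(space_value(((((), ()),),)))  # True
```

Since the evaluation always bottoms out at empty crosses, every finite arrangement does indicate one of the two states, which is what the theorem claims.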
Wouldn't it be better to spend your time learning a more widely used version of predicate calculus?
The obscure and the strange is one of those things that just nabs my attention. Also I had some notions back when learning baby logic that this book seems to run parallel to. Notions which after writing them down I threw out because they seemed nonsensical, but hey -- there was something interesting about how the calculus managed to deal with the notion of the philosophy of philosophy as an unmarked state rather than a marked state.
Yeah, understood; hence my previous reference to Hegel.
But there are all sorts of issues. The parsing makes it look as if we only need negation, but that's not so. And there's an odd slide in the Appendix from propositions to individuals. And fuck knows what is happening in chapter eleven, where moving out of a plane is equated with bending time... or something.
I think there are good reasons that the book did not catch on.
I haven't read past the introduction, but perhaps this video conveys something of relevance?
Very much a guess.
I'm guessing I'll be skeptical when I get to those passages, but no matter the text it's a good idea to read it with multiple people.
Ah, ok. Like I said, it was very much a guess.
I thought there might be some relevant analogies.
It's been feeling too much like coming home from work to go back to work. I talk to electrical engineers all day long. :joke:
Though that's interesting that the book is close enough to work to actually feel like work.
Of course it would, if application is what you are looking for. My sister used to work for the electricity board on their very early computer that ran their payroll and bill-producing accounts system, as a programmer in machine language. Not much call for that these days. But it's still how the machines operate. This book ends at the point where it links up with all the familiar systems of boolean algebra and predicate logic and set theory. If your philosophy is "shut up and calculate", a perfectly reasonable position, this book and this thread are not for you.
Don't waste your time telling us we're wasting our time with it.
Yes. Somewhere in the introduction/preface he says that this all developed backwards from the way it was written, as a way of trying to understand why what they were already doing in practice worked. It's quite usual in philosophy: you build your castle in the air, and then go back afterwards to grub around for some foundations for it.
So we should kind of do the same: pass lightly for now over chapter 3, and I am going to pass lightly over the first 4 theorems too, as GSB satisfying himself that the rules and notation do what he wants and don't do what he doesn't want.
Again, I find it helpful to think of the left margin as a power source, and the right side as a light that will be either on or off. Thus an empty cross is a switch that is on, and...
[math]\left. {\overline {\, a \,}}\! \right|[/math] is a switch that is on if 'a' is not on, and off if 'a' is on.
T5, T6 and T7 are more housekeeping; necessary but boring.
For T8, I am going to start with...
[math]\left. {\overline {\, a \,}}\! \right| a[/math] which we can think of as two circuits in parallel: on one circuit 'a' operates a switch, and on the other it is the circuit. So if 'a' is on, it turns the switch off and connects via the direct route, and if 'a' is off it connects via the switch.
In T8, this identical arrangement, which is always on, turns the switch it controls off. Light goes off!
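The switch analogy can be mocked up in a few lines (my own sketch, under the assumed boolean reading: 'on' = True, and a cross is a switch turned off by anything live inside it):

```python
def cross(*contents):
    """A switch that is on (True) unless something live inside turns it off."""
    return not any(contents)

assert cross() == True                  # an empty cross: a switch that is on

for a in (True, False):
    assert cross(a) == (not a)          # on if 'a' is not on, off if 'a' is on
    assert (cross(a) or a) == True      # the two parallel paths: always on
    assert cross(cross(a), a) == False  # T8: the always-on pair turns the
                                        # switch it controls off -- light off
```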
T9 is also important. T8 and T9 together form the basis of everything that follows, so I'm going to give T9 a post of its own, later.
[math]\left. {\overline {\, \left. {\overline {\, pr \,}}\! \right|\left. {\overline {\, qr \,}}\! \right| \,}}\! \right| [/math] = [math]\left. {\overline {\, \left. {\overline {\, p \,}}\! \right|\left. {\overline {\, q \,}}\! \right| \,}}\! \right| r[/math]
Using my circuit analogy, on the left, p & r are parallel paths, and so are q & r. So if r = [math]\left. {\overline {\, * \,}}\! \right|[/math] then p & q are redundant, and 'light is on'. On the other hand if r is empty, it can disappear, leaving the expression on the right. So we have the parallel circuits on the right, of the p&q expression and a solitary r to cover both possibilities.
Not as complicated as it looks.
J1. [math]\left. {\overline {\, \left. {\overline {\, p \,}}\! \right|p \,}}\! \right| [/math]= .
J2. [math]\left. {\overline {\, \left. {\overline {\, pr \,}}\! \right|\left. {\overline {\, qr \,}}\! \right| \,}}\! \right| [/math] = [math]\left. {\overline {\, \left. {\overline {\, p \,}}\! \right|\left. {\overline {\, q \,}}\! \right| \,}}\! \right| r[/math]
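Both initials can be checked by brute force over every assignment, assuming the usual two-valued reading (marked = True, a cross negates its contents, juxtaposition is "or" -- my assumption, not the book's own method):

```python
from itertools import product

def cross(*contents):
    # Assumed reading: a cross negates; writing things side by side is "or".
    return not any(contents)

# J1 (position): a variable beside its own cross, enclosed, is unmarked.
for p in (True, False):
    assert cross(cross(p), p) == False

# J2 (transposition): collecting r outside the double cross.
for p, q, r in product((True, False), repeat=3):
    lhs = cross(cross(p, r), cross(q, r))
    rhs = cross(cross(p), cross(q)) or r
    assert lhs == rhs
```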
T8 & T9 now become J1 & J2, the foundations for some new developments, after a bit more housekeeping.
C1. [math]\left. {\overline {\, \left. {\overline {\, a \,}}\! \right| \,}}\! \right| = a[/math]
I had to struggle to follow this one. I found the condensed version clearer, and by going through the steps and noting down the substitutions for each line, I just about got there. Except I don't understand why not use the two substitutions for a, as was done for earlier proofs? Anyone?
I can't help much with the details of the calculus he presents in LoF but have deeply explored its metaphysical implications. His calculus describes the logic of 'non-dualism', hence his quoted references to Lao Tzu. This is a fundamental description of reality and, equivalently, a fundamental theory of sets, and as such it solves Russell's and Cantor's problem of self-reference.
Basically, it states that there is no such thing as the 'set of all sets'. Rather, sets would reduce to the blank sheet of paper on which the Venn diagram is drawn. For an information theory this would be the information space, whether in psychology or cosmology.
In this way the 'Perennial' philosophy solves all metaphysical problems, including the reduction of the many to the one. You could say Brown's book explains the reason why problems of self-reference do not arise for the philosophy of the Upanishads, Buddhism, Taoism and so forth, allowing it to be fundamental without giving rise to paradoxes.
Later in life Brown became a close friend of Wei Wu Wei, the renowned nonduality teacher, and perhaps this indicates that he knew his stuff. In his phone call he stated he was a buddha, and I had no reason to doubt him other than the fact that he mentioned it.
Make up your mind whether you think it is too boring or too interesting.
If you can't keep quiet, get involved! Uninformed and self-contradictory criticism sounds like mere prejudicial insult.
Good luck with LoF. When I was getting started on it I found this essay useful (by the president of the Jungian Society in the USA): Robin Robertson, SOME-THING FROM NO-THING: G. SPENCER-BROWN'S LAWS OF FORM http://www.angelfire.com/super/magicrobin/lof.htm
This extract makes the connection between Brown's approach and philosophy. (I suspect that by 'consciousness' here Robertson means intentional consciousness, since for Brown consciousness is not emergent but is the birthplace of form.)
Anyone who thinks deeply about anything eventually comes to wonder about nothingness, and how something (literally some-thing) ever emerges from nothing (no-thing). A mathematician, G. Spencer-Brown (the G is for George) made a remarkable attempt to deal with this question with the publication of Laws of Form in 1969. He showed how the mere act of making a distinction creates space, then developed two laws that emerge ineluctably from the creation of space. Further, by following the implications of his system to their logical conclusion Spencer-Brown demonstrated how not only space, but time also emerges out of the undifferentiated world that precedes distinctions. I propose that Spencer-Brown's distinctions create the most elementary forms from which anything arises out of the void, most specifically how consciousness emerges.
when we're done with this book, we can maybe look at
http://homepages.math.uic.edu/~kauffman/VarelaCSR.pdf
And perhaps it might start to convince @Banno that we are not a cult.
Quoting unenlightened
Hrm I'm not following the analogy here for T8 very well. How would the analogy work for the worked example of T8:
[math]\left. {\overline {\, \left. {\overline {\, a \,}}\! \right|a \,}}\! \right|[/math]
?
Two circuits in parallel on a single circuit I follow. So "a" is an arrangement of wires between a battery, with a switch on the circuit, such that the lights which are wired in parallel both turn off in the worked example of T8, as you say.
So just visualizing a simple circuit diagram, when 'a' is on it turns the switch off -- does that mean the switch is not connected to the parallel wiring? Where is the switch in the diagram, in parallel with the lightbulbs or on the outer circuit?
Or am I just breaking the analogy in trying to concretize your rendition here?
EDIT: Mostly thinking through the analogy here. No need to reply. The below post serves better as a question since it has a diagram.
[math]\left. {\overline {\, * \,}}\! \right|[/math]
Quoting unenlightened
OK so "r" is the switch on the outer ring -- and if it is marked, or reduces to the marked state in the arithmetic, then the light is on because the switch is closed. And if it is not marked, then the light is off because the switch is open, but the marking of p and q is still there to be the wires or something like that.
I think I'm getting lost on the map between the arithmetic and the circuit diagram. I can stick with the arithmetic so far, though -- in the abstract.
EDIT: Outer/inner ring diagram, with ASCII -- for fun and profit:
___+/-___
r00000000|
!00000000|
------p------
!00000000|
!00000000|
------q------
?
(you'll have to read "0" as empty space, and "r" is that first little squiggly on the upper left hand side -- it's supposed to be a switch in my hypothetical)
Also -- I can just move on with the text itself. I realize this is an analogy.
Regrettably, this is the kind of article that goes over my head. I have to leave the technical details to mathematicians and stick with basic principles.
By the way, can you tell me why I often don't get a 'quote' option and have to copy/paste replies? Occasionally I do but usually not and it seems odd.
"a" is a circuit, that operates the cross (switch) it is under. If "a" is live, it switches the circuit it is under off. This is how a cross under a cross cancels out - the inner switch switches the outer switch off and there is no circuit. That is the situation if "a" = unmarked - we ignore it and are left with a switch that has turned off a switch. So no circuit. But if "a" connects, it switches off the switch it is under in both cases, so both switches are turned off. either way the whole is off.
This is more than just an analogy, it is the application which he was working on when he developed the system. I think it's worth trying to get hold of, particularly when it comes to the really difficult section that introduces time. If you are at all familiar with such things, it is quite commonplace for an electrical switch to be electrically operated, for example by means of an electromagnet physically pulling a lever.
In the formalism, a cross is a switch that might or might not be switched off by a circuit 'inside' it, and might or might not switch off a switch it is 'inside', if it is on. All crosses are on unless something (an inner cross) turns them off.
I know how you feel. It looks pretty daunting. But I'm hoping to at least get a feel for how the formal system can apply to living systems. Maybe...
You should get a quote button whenever you select some text in a post. I don't know why you wouldn't, unless you are on a phone and the button is coming up off-screen somewhere. You can do it the hard way: [ quote=aDude] some text [ /quote] without the space after the open brackets, but it's not as good because it doesn't have the post number that can take the reader to the original post.
Cool. I'm more familiar with the Physics 2 stuff than the practical stuff, and it's been more than a minute since I've studied that. I think I'm tracking better now with your explanation, and I had a gander at this website to get a grasp on the concrete side a bit better.
Aha. Thanks. I get it now.
It's the use of R1 that's confusing me. I understand that having derived an expression which is equivalent to the unmarked state we can substitute the unmarked state for said expression, but when I do so it seems like there should still be an "a" left over.
Or re-reading the use of R2 I'm not following again. It seems we have to
Let p = [math]\left. {\overline {\, a \,}}\! \right|[/math]
And by R2 that means the initial J1 becomes
[math]\left. {\overline {\, \left. {\overline {\, \left. {\overline {\, a \,}}\! \right| \,}}\! \right|\left. {\overline {\, a \,}}\! \right| \,}}\! \right|[/math] = . (2)
Then we start with the conclusion in the next step?
So we start with C1:
[math]\left. {\overline {\, \left. {\overline {\, a \,}}\! \right| \,}}\! \right|[/math] = [math]a[/math]
And substitute the unmarked state from (2) into C1 --
[math]\left. {\overline {\, \left. {\overline {\, \left. {\overline {\, a \,}}\! \right| \,}}\! \right|\left. {\overline {\, a \,}}\! \right| \,}}\! \right|[/math][math]\left. {\overline {\, \left. {\overline {\, a \,}}\! \right| \,}}\! \right|[/math] = [math]a[/math][math]\left. {\overline {\, \left. {\overline {\, \left. {\overline {\, a \,}}\! \right| \,}}\! \right|\left. {\overline {\, a \,}}\! \right| \,}}\! \right|[/math]
And then subsitute [math]\left. {\overline {\, \left. {\overline {\, a \,}}\! \right| \,}}\! \right|[/math] for [math]a[/math] in the next step? (that seems obviously fatal, but I'm not sure how else to do it)
EDIT: I really feel like that can't be it. I mean I get that we're making a logic, but a logic that assumes its own conclusions to demonstrate relationships is usually only done in a reductio or something like that. (though we haven't gotten to negation or truth yet, so...) It just seems kinda squirrely.
EDIT2:
Quoting Moliere
Actually.... then they'd be exactly equal in form too. There is something very confusing about substituting for the unmarked state*. I did it on both sides of the equation, like you'd do for a variable in algebra, but I think maybe Brown did it only on one side of the equation. This relates to another confusion I had put aside, but the notion of the unmarked cross maybe relates?
*Like, if we can do that can we constantly substitute any amount of crosses which equate to the unmarked state into any unmarked part of an expression?
Go to the bottom of page 31, where it says,
We are going to change the left side, [math]\left. {\overline {\, \left. {\overline {\, a \,}}\! \right| \,}}\! \right|[/math] at the top into the right hand side at the bottom via the steps shown, using J1 and J2 and nothing else.
The first step is to put p = [math]\left. {\overline {\, a \,}}\! \right|[/math] into the J1 formula, and stick it in front of [math]\left. {\overline {\, \left. {\overline {\, a \,}}\! \right| \,}}\! \right|[/math], which we are allowed to do because it sums to the unmarked state, and so changes nothing.
We now have something that looks like the right hand side of J2 if we set r =[math]\left. {\overline {\, \left. {\overline {\, a \,}}\! \right| \,}}\! \right|[/math] You should be able to see what the p and q substitutions are, and the result is what is written. (This is the most difficult line to follow)
Step 3 uses J1 again to remove the left side of the expression, leaving just the right hand half, which is:
[math]\left. {\overline {\, \left. {\overline {\, \left. {\overline {\, \left. {\overline {\, a \,}}\! \right| \,}}\! \right|a \,}}\! \right| \,}}\! \right|[/math]
That's half way through the proof. With me so far?
That helps. Thanks. I'm with you up to this point now.
The last step I'm struggling with because it seems like I have to Let p = [math]\left. {\overline {\, \left. {\overline {\, a \,}}\! \right| \,}}\! \right|[/math] -- is it allowed to switch what p equals in the middle of a demonstration?
And then use J2 the right way round this time, to take the second and fourth 'a's outside the whole expression using the r = a substitution.
And the last step uses p = [math]\left. {\overline {\, \left. {\overline {\, a \,}}\! \right| \,}}\! \right|[/math] to eliminate the whole left side, leaving "a". QED.
And actually I had that thought, but then I thought -- well of course we can Let p = whatever we want. It's the form that matters. If I wanted to make sure I was tracking things correctly I could introduce another variable, like s, and let it equal [math]\left. {\overline {\, \left. {\overline {\, a \,}}\! \right| \,}}\! \right|[/math] and the form would still work out.
Thanks for working that with me.
C7, 8, & 9, I'm still struggling with.
[math]\left. {\overline {\, \left. {\overline {\, \left. {\overline {\, a \,}}\! \right|b \,}}\! \right|c \,}}\! \right|[/math]
By C1 -- [math]\left. {\overline {\, \left. {\overline {\, a \,}}\! \right| \,}}\! \right| = a[/math], which we apply to the token [math]\left. {\overline {\, b \,}}\! \right|[/math] to obtain
[math]\left. {\overline {\, \left. {\overline {\, \left. {\overline {\, a \,}}\! \right|\left. {\overline {\, \left. {\overline {\, b \,}}\! \right| \,}}\! \right| \,}}\! \right|c \,}}\! \right|[/math]
Then by J2:
[math]\left. {\overline {\, \left. {\overline {\, pr \,}}\! \right|\left. {\overline {\, qr \,}}\! \right| \,}}\! \right|[/math] = [math]\left. {\overline {\, \left. {\overline {\, p \,}}\! \right|\left. {\overline {\, q \,}}\! \right| \,}}\! \right|r[/math]
Let p = a, q = [math]\left. {\overline {\, b \,}}\! \right|[/math] and r = c, then distribute from the right-hand side to the left-hand side.
[math]\left. {\overline {\, \left. {\overline {\, \left. {\overline {\, ac \,}}\! \right|\left. {\overline {\, \left. {\overline {\, b \,}}\! \right|c \,}}\! \right| \,}}\! \right| \,}}\! \right|[/math]
Then by C1, Let a = [math]\left. {\overline {\, ac \,}}\! \right|\left. {\overline {\, \left. {\overline {\, b \,}}\! \right|c \,}}\! \right|[/math] and reflect from the left-hand side to the right hand side to remove the top two crosses to obtain... well, exactly what I just wrote.
End of demonstration for C7.
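As a cross-check on the demonstration (my own boolean sketch, not part of the book's method: marked = True, a cross negates, juxtaposition is "or"), C7 holds for every assignment:

```python
from itertools import product

def cross(*contents):
    # Assumed reading: a cross negates its contents; side-by-side is "or".
    return not any(contents)

# C7: cross(cross(cross(a) b) c) = cross(a c) cross(cross(b) c)
for a, b, c in product((True, False), repeat=3):
    lhs = cross(cross(cross(a), b), c)
    rhs = cross(a, c) or cross(cross(b), c)
    assert lhs == rhs
```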
[math]\left. {\overline {\, \left. {\overline {\, a \,}}\! \right|\left. {\overline {\, br \,}}\! \right|\left. {\overline {\, cr \,}}\! \right| \,}}\! \right|[/math] (1)
Call C1: [math]\left. {\overline {\, \left. {\overline {\, a \,}}\! \right| \,}}\! \right| = a[/math]
Let C1's a = [math]\left. {\overline {\, br \,}}\! \right|\left. {\overline {\, cr \,}}\! \right|[/math]
Reflect from the right hand side to the left hand side to place two crosses:
[math]\left. {\overline {\, \left. {\overline {\, a \,}}\! \right|\left. {\overline {\, \left. {\overline {\, \left. {\overline {\, br \,}}\! \right|\left. {\overline {\, cr \,}}\! \right| \,}}\! \right| \,}}\! \right| \,}}\! \right|[/math] (2)
Call J2: [math]\left. {\overline {\, \left. {\overline {\, pr \,}}\! \right|\left. {\overline {\, qr \,}}\! \right| \,}}\! \right|[/math] = [math]\left. {\overline {\, \left. {\overline {\, p \,}}\! \right|\left. {\overline {\, q \,}}\! \right| \,}}\! \right|r[/math]
Let J2's:
p = b, q = c, and r = r and collect r from the left hand side of J2 to the right hand side of J2:
[math]\left. {\overline {\, \left. {\overline {\, a \,}}\! \right|\left. {\overline {\, \left. {\overline {\, \left. {\overline {\, b \,}}\! \right|\left. {\overline {\, c \,}}\! \right| \,}}\! \right|r \,}}\! \right| \,}}\! \right|[/math] (3)
Call C7:
[math]\left. {\overline {\, \left. {\overline {\, \left. {\overline {\, a \,}}\! \right|b \,}}\! \right|c \,}}\! \right|[/math] = [math]\left. {\overline {\, ac \,}}\! \right|\left. {\overline {\, \left. {\overline {\, b \,}}\! \right|c \,}}\! \right|[/math]
This one took me several guesses. What helped me was to see that the form of C7 has c on both sides of the two separate crosses on its right hand side, and so C7's c must equal (3)'s [math]\left. {\overline {\, a \,}}\! \right|[/math] since the conclusion has [math]\left. {\overline {\, a \,}}\! \right|[/math] collected into two separate crosses.
Once I saw that then I Let C7's a = [math]\left. {\overline {\, b \,}}\! \right|\left. {\overline {\, c \,}}\! \right|[/math], and I transposed (3)'s [math]\left. {\overline {\, a \,}}\! \right|[/math] to the right hand side so that it fit the form of C7 more apparently. Then plugging it in sure enough I got C8:
[math]\left. {\overline {\, \left. {\overline {\, a \,}}\! \right|\left. {\overline {\, b \,}}\! \right|\left. {\overline {\, c \,}}\! \right| \,}}\! \right|[/math] [math]\left. {\overline {\, \left. {\overline {\, a \,}}\! \right|\left. {\overline {\, r \,}}\! \right| \,}}\! \right|[/math]
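Same cross-check for C8 (again my assumed boolean reading, brute-forced over all sixteen assignments, which is independent of the book's demonstration):

```python
from itertools import product

def cross(*contents):
    # Assumed reading: a cross negates its contents; side-by-side is "or".
    return not any(contents)

# C8: cross(cross(a) cross(br) cross(cr))
#   = cross(cross(a) cross(b) cross(c)) cross(cross(a) cross(r))
for a, b, c, r in product((True, False), repeat=4):
    lhs = cross(cross(a), cross(b, r), cross(c, r))
    rhs = cross(cross(a), cross(b), cross(c)) or cross(cross(a), cross(r))
    assert lhs == rhs
```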
Sometimes it feels like the demonstrations are purposefully harder than need be -- to get you in the habit of switching out variables for one another. After getting this far the substitution rules made more sense upon reading them -- they were formal statements of what we're doing to check Brown's work that were needed to give meaning to "equality".
And I checked out what comes after C9, and can say that I find it confusing. This is the first appearance of "integration" that I could find by checking the names of each transformation from before, and I don't understand what the part with the series of "is changed to" symbols [s]are[/s] arranged means. "The unmarked state is changed to the unmarked state is changed to the unmarked state is equal to the unmarked state is changed to the unmarked state" is the literal translation of the first example, and I don't know what he's getting at with it.
I'll probably at least work out C9 by the time you get back, but probably wait from there. Have a good week!
[math]\left. {\overline {\, \left. {\overline {\, \left. {\overline {\, b \,}}\! \right|\left. {\overline {\, r \,}}\! \right| \,}}\! \right|\left. {\overline {\, \left. {\overline {\, a \,}}\! \right|\left. {\overline {\, r \,}}\! \right| \,}}\! \right|\left. {\overline {\, \left. {\overline {\, x \,}}\! \right|r \,}}\! \right|\left. {\overline {\, \left. {\overline {\, y \,}}\! \right|r \,}}\! \right| \,}}\! \right|[/math] = [math]\left. {\overline {\, \left. {\overline {\, \left. {\overline {\, b \,}}\! \right|\left. {\overline {\, r \,}}\! \right| \,}}\! \right|\left. {\overline {\, \left. {\overline {\, a \,}}\! \right|\left. {\overline {\, r \,}}\! \right| \,}}\! \right|\left. {\overline {\, \left. {\overline {\, xy \,}}\! \right|r \,}}\! \right| \,}}\! \right|[/math]
Just looking for patterns I noticed how the only transformation occurs on the final two depth=2 crosses, so I simplified to looking at those final two crosses alone:
[math]\left. {\overline {\, \left. {\overline {\, x \,}}\! \right|r \,}}\! \right|\left. {\overline {\, \left. {\overline {\, y \,}}\! \right|r \,}}\! \right|[/math]
Which to fit the form of C1, as the text suggests, I set this whole expression = a, and from C1 I add two crosses onto the expression:
[math]\left. {\overline {\, \left. {\overline {\, \left. {\overline {\, \left. {\overline {\, x \,}}\! \right|r \,}}\! \right|\left. {\overline {\, \left. {\overline {\, y \,}}\! \right|r \,}}\! \right| \,}}\! \right| \,}}\! \right|[/math]
Which gets us to something close to J2. From J2 let p = [math]\left. {\overline {\, x \,}}\! \right|[/math] and let q = [math]\left. {\overline {\, y \,}}\! \right|[/math], then the expression resembles the left hand side of J2. Converting to the right hand side of J2, but substituting back into the original expression we obtain:
[math]\left. {\overline {\, \left. {\overline {\, \left. {\overline {\, \left. {\overline {\, x \,}}\! \right| \,}}\! \right|\left. {\overline {\, \left. {\overline {\, y \,}}\! \right| \,}}\! \right| \,}}\! \right|r \,}}\! \right|[/math]
Which from C1, but this time going from the left hand side to the right hand side to remove two crosses above both X and Y, and plugging this expression back into its place from the original expression we get the first step:
[math]\left. {\overline {\, \left. {\overline {\, \left. {\overline {\, b \,}}\! \right|\left. {\overline {\, r \,}}\! \right| \,}}\! \right|\left. {\overline {\, \left. {\overline {\, a \,}}\! \right|\left. {\overline {\, r \,}}\! \right| \,}}\! \right|\left. {\overline {\, \left. {\overline {\, xy \,}}\! \right|r \,}}\! \right| \,}}\! \right|[/math] (C9.1)
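The equality behind this first step can also be verified exhaustively (my boolean sketch again, over all thirty-two assignments; it confirms the step under that reading, nothing more):

```python
from itertools import product

def cross(*contents):
    # Assumed reading: a cross negates its contents; side-by-side is "or".
    return not any(contents)

# The step from the original expression to C9.1: the two depth-2 crosses
# over x and y collapse into one cross over xy.
for a, b, r, x, y in product((True, False), repeat=5):
    lhs = cross(cross(cross(b), cross(r)),
                cross(cross(a), cross(r)),
                cross(cross(x), r),
                cross(cross(y), r))
    rhs = cross(cross(cross(b), cross(r)),
                cross(cross(a), cross(r)),
                cross(cross(x, y), r))
    assert lhs == rhs
```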
EDIT:
About an hour later, some random thoughts I'm having while working through these:
One of the things I'm thinking of is how we're showing, from the rules set out so far, how one expression equals another expression. But even though we're using variables I remain uncertain that we have really marked out the domain of expressions such that these are proofs. And further it seems that we could show some other expression could equal one of the other conclusions, like C1. Or, at least, it's not clear to me that these hold as proofs in the same way that numerical expressions with variables have proofs in them, or that other logical systems have proofs in them, like De Morgan's Laws.
To make a list of what we are able to do so far: substitution, the marked state, the unmarked state, variables, equality, and step-wise transformation from J1/J2 (and all demonstrations from J1/J2). The demonstrations are claiming to be a calculus of the marked or unmarked state, but how to delimit that space such that these demonstrations are proofs, in that they hold for the whole space of all expressions? Are there no other expressions other than the marked/unmarked state, or is there a value in-between marked/unmarked? Or is the law of the excluded middle an assumption of the calculus such that we also can conclude that?
One of the differences I'm seeing between this logic and the other two I listed (algebra, Boolean logic) is the lack of negation. There is no negation in this system. There is marked/unmarked, but no negation of the marked/unmarked, and I wonder whether that somehow ties into making a consistent system of symbol manipulation. It makes me think that we can think of the unmarked state as analogous to 1/1: it can perpetuate anywhere within an expression in the same way that (1/1) can always be added to either side of an algebraic expression.
Still meandery. One thing these exercises are providing me is a vantage from which to see how logical systems work in the abstract, or at least a vantage to reach for that perspective. ALSO, back of the mind thought: if negation never shows up then perhaps Gödel's incompleteness theorem will not apply here. (Back of the mind for so many reasons... but I've noticed that the system may not be powerful enough to represent arithmetic, and that's why I have the thought)
Also interesting to note how the proofs of J1/J2 work by showing all cases: under the assumption that p,q, or r is such and such we show the whole expression is equal in all possible cases. This is important, I think, because it may be the case that at some additional variable point we would be unable to check by the method of all cases (which reminds me of truth-tables' check for validity, actually), and so one wonders if a multitude of variables could lead to undecidable, or multiply decidable expressions such that they could lead to both the marked/unmarked state. I think this is the thing that would have to be secured to count these demonstrations as proofs -- we have the demonstration, but is it possible for the demonstration to turn out the opposite value? Like with C1, is it possible to come up with an expression that reduced a-cross-cross to the unmarked state without a from the transformation rules? It's a niggling thought at the back of my mind, and it would be hard to find such an expression, and I may just be completely off base. But hey, sharing the thoughts in the spirit of the thread.
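The "method of all cases" worry can itself be mechanized for any fixed number of variables; here's a tiny sketch (my boolean reading again, so it only exercises the interpretation I assumed, not the full space of expressions the question is really about):

```python
from itertools import product

def cross(*contents):
    # Assumed reading: a cross negates its contents; side-by-side is "or".
    return not any(contents)

def holds_in_all_cases(identity, nvars):
    """Brute-force check of an equality over every marked/unmarked assignment."""
    return all(identity(*vals) for vals in product((True, False), repeat=nvars))

# C1 (reflexion) and C2 (generation) both survive the method of all cases:
assert holds_in_all_cases(lambda a: cross(cross(a)) == a, 1)
assert holds_in_all_cases(lambda a, b: (cross(a, b) or b) == (cross(a) or b), 2)

# And a non-identity fails, so the checker can actually discriminate:
assert not holds_in_all_cases(lambda a: cross(a) == a, 1)
```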
Noticing that the form closely matches C8... (Note, I thought I had it in typing it out but then noticed I'm making a mistake, so the first part is a failed attempt at demonstrating this step, but after the break I figured out my mistake and demonstrate the step from C9.1 to C9.2)
Let C8's...
a = [math]\left. {\overline {\, \left. {\overline {\, xy \,}}\! \right|r \,}}\! \right|[/math]
Because that's the only expression which appears under both crosses in C8's transformation to the right-hand-side.
b = [math]\left. {\overline {\, b \,}}\! \right|[/math]
c = [math]\left. {\overline {\, a \,}}\! \right|[/math]
r = [math]\left. {\overline {\, r \,}}\! \right|[/math]
Then plugging these values into the right hand side of C8 we get:
[math]\left. {\overline {\, \left. {\overline {\, \left. {\overline {\, \left. {\overline {\, xy \,}}\! \right|r \,}}\! \right| \,}}\! \right|\left. {\overline {\, \left. {\overline {\, b \,}}\! \right| \,}}\! \right|\left. {\overline {\, \left. {\overline {\, a \,}}\! \right| \,}}\! \right| \,}}\! \right|[/math][math]\left. {\overline {\, \left. {\overline {\, \left. {\overline {\, \left. {\overline {\, xy \,}}\! \right|r \,}}\! \right| \,}}\! \right|\left. {\overline {\, \left. {\overline {\, r \,}}\! \right| \,}}\! \right| \,}}\! \right|[/math]
And then we remove the double brackets using C1 and re-arrange the expressions to obtain (though it looks more like 5 times rather than thrice?) (Actually I'm making a mistake here... I'm noticing that there's an added cross in the final conclusion that I'm not accounting for... hrm, hrm, hrm...)
[math]\left. {\overline {\, ba \left. {\overline {\, \left. {\overline {\, xy \,}}\! \right|r \,}}\! \right|\,}}\! \right|[/math][math]\left. {\overline {\, r \left. {\overline {\, \left. {\overline {\, xy \,}}\! \right|r \,}}\! \right| \,}}\! \right|[/math] (C9.2) (but not obtained by the above procedure)
**************
Something I like to demonstrate in my postings here is that I'm constantly changing my mind, or noticing mistakes -- my thought is that the finished product never looks like how you get there, and I think that this forum is at its best when sharing our process of thinking in all of its messiness, in all of its faults. So I'm keeping the mistake up above, while working out the correct demonstration here (and simply adding to the post so as not to clog the front page too much) (Also why I'm fine with repeating myself, or going over old ground again)
And I figured out my mistake. The reason I had extra crosses, and needed to perform C1 more than thrice, is that the solution should be--
Let C8's...
a= [math]\left. {\overline {\, xy \,}}\! \right|r[/math]
b = [math]\left. {\overline {\, b \,}}\! \right|[/math]
c = [math]\left. {\overline {\, a \,}}\! \right|[/math]
r = [math]\left. {\overline {\, r \,}}\! \right|[/math]
Then plugging into the right hand side of C8 we obtain:
[math]\left. {\overline {\, \left. {\overline {\, \left. {\overline {\, xy \,}}\! \right|r \,}}\! \right|\left. {\overline {\, \left. {\overline {\, b \,}}\! \right| \,}}\! \right|\left. {\overline {\, \left. {\overline {\, a \,}}\! \right| \,}}\! \right| \,}}\! \right|\left. {\overline {\, \left. {\overline {\, \left. {\overline {\, xy \,}}\! \right|r \,}}\! \right|\left. {\overline {\, \left. {\overline {\, r \,}}\! \right| \,}}\! \right| \,}}\! \right|[/math]
And then we can remove the double crosses, three times (C1 thrice), to obtain:
[math]\left. {\overline {\, \left. {\overline {\, \left. {\overline {\, xy \,}}\! \right|r \,}}\! \right|ba \,}}\! \right|\left. {\overline {\, \left. {\overline {\, \left. {\overline {\, xy \,}}\! \right|r \,}}\! \right|r \,}}\! \right|[/math] (C9.12)
Which easily re-arranges to:
[math]\left. {\overline {\, ba\left. {\overline {\, \left. {\overline {\, xy \,}}\! \right|r \,}}\! \right| \,}}\! \right|\left. {\overline {\, r\left. {\overline {\, \left. {\overline {\, xy \,}}\! \right|r \,}}\! \right|\,}}\! \right|[/math] (C9.2)
(C9.12) from above shows how C2 can be used to obtain C9.3 more easily.
Let C2's a= unmarked state, and b= r for the right-hand expression in C9.12
[math]\left. {\overline {\, ab \,}}\! \right|b[/math] = [math]\left. {\overline {\, a \,}}\! \right|b[/math] (C2)
Plugging the right hand expression of C9.12 into the right-hand side of C2, while keeping [math]\left. {\overline {\, xy \,}}\! \right|[/math] as is...
[math]\left. {\overline {\, \left. {\overline {\, xy \,}}\! \right|r \,}}\! \right|r[/math]-->[math]\left. {\overline {\, \left. {\overline {\, \left. {\overline {\, xy \,}}\! \right| \,}}\! \right|r \,}}\! \right|[/math] (C9.13)
We remove the double cross with C1 and put back into the original expression to get C9.3:
[math]\left. {\overline {\, ba\left. {\overline {\, \left. {\overline {\, xy \,}}\! \right|r \,}}\! \right| \,}}\! \right|\left. {\overline {\, rxy \,}}\! \right|[/math] (C9.3)
Looking at the differences between C9.3 and C9.4, I thought that only the left-hand cross was involved in the transformation, but then saw that we're using C2, and the only way to get r-x-y-cross underneath that cross is by setting C2's b = r-x-y-cross, with a being everything else underneath the left-hand cross. It's a transformation from the right-hand side of C2 to the left-hand side.
[math]\left. {\overline {\, ba\left. {\overline {\, \left. {\overline {\, xy \,}}\! \right|r \,}}\! \right|\left. {\overline {\, rxy \,}}\! \right| \,}}\! \right|\left. {\overline {\, rxy \,}}\! \right|[/math] (C9.4)
This last step took some guessing on my part. What worked was using C7. I could see that we really just want to get rid of some of the terms underneath the left-hand cross of C9.4, so I pretty much guess-and-checked my way to a solution.
C7, but inverted to show the direction of transformation:
[math]\left. {\overline {\, ac \,}}\! \right|\left. {\overline {\, \left. {\overline {\, b \,}}\! \right|c \,}}\! \right|[/math] = [math]\left. {\overline {\, \left. {\overline {\, \left. {\overline {\, a \,}}\! \right|b \,}}\! \right|c \,}}\! \right|[/math]
Let C7's a = xy, b = xy, and c = r, while keeping a and b from C9.4 as they are (but erasing the cross that all of this is contained in, to make it less bulky):
[math]\left. {\overline {\, ab\left. {\overline {\, xyr \,}}\! \right|\left. {\overline {\, \left. {\overline {\, xy \,}}\! \right|r \,}}\! \right|\,}}\! \right|[/math] --> [math]\left. {\overline {\, \left. {\overline {\, \left. {\overline {\, \left. {\overline {\, xy \,}}\! \right|xy \,}}\! \right|r \,}}\! \right|ab \,}}\! \right|[/math] (C9.5)
Which is close to where we want to get to; we just want to remove the redundant part at depths 3 and 4.
For that I went to C2:
[math]\left. {\overline {\, ab \,}}\! \right|b[/math] = [math]\left. {\overline {\, a \,}}\! \right|b[/math]
Taking just the expression:
[math]\left. {\overline {\, xy \,}}\! \right|xy[/math]
Let C2's a = unmarked state
Let C2's b = xy
It becomes
[math]\left. {\overline {\, * \,}}\! \right|xy[/math] (C9.6)
Then take this expression into C3:
[math]\left. {\overline {\, * \,}}\! \right|a[/math] = [math]\left. {\overline {\, * \,}}\! \right|[/math]
Let C3's a = xy and C9.6 reduces to a single cross.
Plug C9.6 reduced back into C9.5:
[math]\left. {\overline {\, \left. {\overline {\, \left. {\overline {\, \left. {\overline {\, * \,}}\! \right| \,}}\! \right|r \,}}\! \right|ab \,}}\! \right|[/math]
Using C1 we remove the double cross over the unmarked state, plug this back into C9.4, and rearrange to obtain C9's conclusion:
[math]\left. {\overline {\, \left. {\overline {\, r \,}}\! \right|ab \,}}\! \right|\left. {\overline {\, rxy \,}}\! \right|[/math]
And so we have C9:
[math]\left. {\overline {\, \left. {\overline {\, \left. {\overline {\, b \,}}\! \right|\left. {\overline {\, r \,}}\! \right| \,}}\! \right|\left. {\overline {\, \left. {\overline {\, a \,}}\! \right|\left. {\overline {\, r \,}}\! \right| \,}}\! \right|\left. {\overline {\, \left. {\overline {\, x \,}}\! \right|r \,}}\! \right|\left. {\overline {\, \left. {\overline {\, y \,}}\! \right|r \,}}\! \right| \,}}\! \right|[/math] = [math]\left. {\overline {\, \left. {\overline {\, r \,}}\! \right|ab \,}}\! \right|\left. {\overline {\, rxy \,}}\! \right|[/math] (C9)
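Not part of GSB's demonstration, but a sanity check I found reassuring: under the standard Boolean reading of the primary algebra (an interpretation, not the book's own notation -- the unmarked state is false, a cross negates its contents, and juxtaposition in the same space is OR), C9 can be verified by brute force over all 32 assignments. The function names here are mine:

```python
from itertools import product

def cross(*contents):
    """A cross containing zero or more juxtaposed expressions.
    Empty cross = marked state = True; otherwise negate the OR of the contents."""
    return not any(contents)

def c9_lhs(a, b, x, y, r):
    # cross( cross(cross(b) cross(r))  cross(cross(a) cross(r))
    #        cross(cross(x) r)         cross(cross(y) r) )
    return cross(cross(cross(b), cross(r)),
                 cross(cross(a), cross(r)),
                 cross(cross(x), r),
                 cross(cross(y), r))

def c9_rhs(a, b, x, y, r):
    # cross(cross(r) a b)  cross(r x y)
    return cross(cross(r), a, b) or cross(r, x, y)

assert all(c9_lhs(*v) == c9_rhs(*v)
           for v in product([False, True], repeat=5))
print("C9 checks out on all 32 assignments")
```

Of course this only shows the two sides agree under one interpretation; it says nothing about whether my chain of steps was the intended demonstration.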
So, yup, I can get the conclusion, but it certainly didn't involve using C6 -- I tried to do it that way a few times but concluded it couldn't be used, because that form just isn't present in the expression of interest. A bit of guess-and-check, based on what we're trying to obtain, got me through the demonstrations.
There's something odd about letting b = xy and a = xy, but I don't really see a reason within the logic presented so far not to allow two different variables to be assigned the same content at different parts of an expression, so I just went with it. But it does seem kind of funny. (Also noting how Theorem 12 states that I should use T10 in place of J2, and I didn't really use J2... I was just looking for a way to make the equation work. So that derivation may be wrong, or at least wasn't what GSB had in mind -- but it seems to follow, still.)
I have now figured out what he meant by "integration". I missed that C3 was named integration, and so this is him pulling out one of the sub-steps in the demonstration of C3 and calling it "integration".
That leaves the bit where he says "Thus if we consider the equivalence of steps...", which I'm still scratching my head on. I understand that the steps have an order to them, and so expressions reduce depending upon the order of transformations. But his conclusion that "therefore the unmarked state is equivalent to a step" doesn't make sense to me. He demonstrated that taking steps is inconsistent, and so he concludes that the unmarked state is a step, rather than keeping steps out of the calculus -- which is what I'd do, since steps had never been introduced as members of the calculus. We've been working with the marked state, the unmarked state, equivalence, and transformation through substitution. The steps are semi-arbitrary in that we can set any part of an expression equal to some other conclusion as long as the forms match up. Why on earth would I take a step as part of the calculus and set it equal to the unmarked state?
I'm tempted at this point to just grant chapter 7. I checked T10 through the proof by example of J2 and it works out, but since I didn't get to C9 the same way GSB did, and it looks close enough for me, I'm fine with just allowing it and moving on. After working through those examples the text is reading a lot easier, and I can kind of see how it's good enough for me. :D -- it was fun to work through the nitty-gritty puzzles, but I'm starting to think "OK, I'm ready for a point now"
The Seventh Canon is what made me think the above -- it pretty much lays out what we've been doing in the demonstrations, but allows you to use the algebra now.
The bits on time: we get the conclusion I was thinking of, which is interesting to me -- that there are undecidable expressions (now that we have functions that go to infinity).
One thing I'm thinking is you could just posit another space-dimension to accommodate GSB's "cross in a plane", but I'm ok with saying this is space-time instead.
The oscillator function makes me want to take back what I said earlier about negation. It seems like we're close to negation with it, because the relationship between the two spaces inverts, but it's never named.
But I can say I've officially lost the plot at page 62's "Time in finite expression" -- I don't understand Figure 3, or most of the figures after that. The only one I get right now is page 65's rendition of E1.
But also I kind of continued past when I normally would just because I was hooked. I've officially finished the book now, but without as much math ;). Still chewing on the ending.
While I've admitted ignorance to certain parts of GSB's demonstration, I'm not sure about the conclusion here. Not that it's wrong, only that I'm uncertain that it's earned.
Observers and such haven't really shown up until this point. He's asking us to interpret ourselves as an "m" outside of a circle where the circle is the forms around us. But this would be the simple subject, if it can be reduced to an "m"? Or no? It's not clear, because "observer" shows up at the very end.
All the same I think I like "a distinction drawn in any space is a mark distinguishing the space" -- to mark a space one must mark. Even "the unmarked state" has been used so far as equal to variables, and so works, in a sense, as a marked space would (in a different way from the way space pervades expressions)
But I'd say that GSB sees something I don't, at the end. And I suspect it's because he's an idealist. He can see that the first distinction makes the observer interchangeable with the mark because he believes that, at base, this is all mind-stuff and the forms we see are mentally constructed? Or something along those lines.
But all I see is a mark, and a man who wants to be that mark.
Good recommendation @unenlightened, and thanks for the prodding and motivation. I would not have finished the book without your help.
Quoting Moliere
GSB tries the extra space dimension himself, with the idea of a tunnel, but it doesn't quite work, because as soon as the boundary is undermined, it becomes 'incontinent'. He still needs time to keep the distinction clear. But if we go back to switches and circuits, everything is understandable. There is a very simple circuit that works as a buzzer or operates an electric bell, and at the heart of it is a switch that operates itself.
This is not the switch one operates to make the buzzer buzz, or the fire alarm ring, but an internal switch that, as it operates the hammer on the bell, also switches itself off so that the hammer immediately falls back, and switches the circuit on again. The circuit cycles on and off indefinitely. We have electro-mechanical feedback; we have time, because any number of spatial dimensions cannot do the job of the same circuit being on and off -- only time as change resolves the contradiction and maintains the continence of the distinction.
But I'll go back a bit, not to all those theorems, which are just extensions of what we already have, but to this: [quote=p. 42]Indicative space
If S0 is the pervasive space of e, the value of e is its value to S0. If e is the whole expression in S0, S0 takes the value of e and
we can call S0 the indicative space of e.
In evaluating e we imagine ourselves in S0 with e and thus surrounded by the unwritten cross which is the boundary to S-1.[/quote]
A formal system is always imaginary, but normally, one imagines oneself outside the system commanding, evaluating, operating the system from outside, that is from "S-1". But here, that is ruled out, because outside and inside are the form of the first distinction. 'Value' is always relational, and always 'a difference that makes a difference'. To put it another way, there is no absolute value and no absolute outside, one is always in one's world, that one creates in distinguishing.
[quote=G. Spencer-Brown, Laws of Form]Now the physicist himself, who describes all this, is, in his own account, himself constructed of it. He is, in short, made of a conglomeration of the very particulars he describes, no more, no less, bound together by and obeying such general laws as he himself has managed to find and to record.
Thus we cannot escape the fact that the world we know is constructed in order (and thus in such a way as to be able) to see itself.
This is indeed amazing.
Not so much in view of what it sees, although this may appear fantastic enough, but in respect of the fact that it can see at all. But in order to do so, evidently it must first cut itself up into at least one state which sees, and at least one other state which is seen. In this severed and mutilated condition, whatever it sees is only partially itself. We may take it that the world undoubtedly is itself (i.e. is indistinct from itself), but, in any attempt to see itself as an object, it must, equally undoubtedly, act* so as to make itself distinct from, and therefore false to, itself. In this condition it will always partially elude itself.
[/quote]
Quoting Moliere
Yeah but, no but...
I have a problem with putting it like this, because it seems to be making a distinction between what the world is composed of, and what it might have been composed of, or might have been thought to be composed of... But that cannot be. One could at least equally say that the world is decomposed of distinctions. "In the beginning was the Word."
There is a sense in which there cannot be a world unseen, and a sense in which there obviously can and must be before seeing can arise. There must be physics before there can be physicists, but physicists are nothing other than that physics. But the first distinction is made by the first cell, and then the first re-entry of the first distinction into itself by the first language speakers, and then...
[quote=Krishnamurti]The Observer is the observed.[/quote]
[quote=The Grateful Dead]Wake up to find out that you are the eyes of the world.[/quote]
I would not say that the world is composed of eyes, but it has eyes, and we are those eyes.
There's one last bit that I would still like to get a more firm handle on, which is the second half of Ch.11, on memory, counting and imaginary values. The book is incredibly compressed at this stage, and a whole new notation introduced if not more than one. I have a half understanding of it, and my next post will attempt to convey as much as I can of that half.
Interesting vis-a-vis the original thread that sparked this one. Is there a "logic-like reality that exists outside the minds of individuals," etc.
Sometimes I wonder if the discoveries of the early twentieth century should have been taken as a warning against strict bivalence and "truth as objectivity," rather than as an argument for deflating truth (as they generally were).
I.e.:
G.W.F. Hegel, the Phenomenology Sec 74
The idea of imaginary numbers as oscillations is interesting too. I have always seen them described as a number line running orthogonal to the real number line instead. Imaginary numbers are interesting in general because they seem to be a move to admit some sort of paraconsistency into mathematics for pragmatic expedience. I assume they have since been grounded in mathematical logic somehow? I just recall from mathematical histories that they were initially accepted on the grounds that "it works, don't it?" as with zero as well.
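For what it's worth, they were later grounded: Hamilton's 1835 construction defines a complex number as nothing but an ordered pair of reals, with a multiplication rule chosen so that (0, 1) squares to (-1, 0). No paraconsistency needed. A minimal sketch (function names mine):

```python
# A complex number is modeled as an ordered pair (a, b) of reals.
def cadd(p, q):
    return (p[0] + q[0], p[1] + q[1])

def cmul(p, q):
    # Hamilton's rule: (a, b)(c, d) = (ac - bd, ad + bc)
    a, b = p
    c, d = q
    return (a * c - b * d, a * d + b * c)

i = (0.0, 1.0)
assert cmul(i, i) == (-1.0, 0.0)  # "i*i = -1", with no contradiction in sight
```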
Interesting quote from Varela, who expanded Brown's system to include self-reference as a third mark, a move made to make it more usable with biology, where self-reference is central.
Reading Terrence Deacon's Incomplete Nature right now, it's clear this thread has been developed a great deal, but not resolved. Deacon tries to explain how purposefulness emerges by reintroducing Aristotle's formal cause via thermodynamics and an explicitly process-based, as opposed to substance-based, metaphysics.
All very interesting, but damn hard to wrap one's mind around. I do wonder why it is that it has taken so long for the process view to take over. Is it necessarily less intuitive, or is the problem that we drill a sort of naive corpuscularism, a substance metaphysics, into kids for the first 14-18 years of their education? It certainly seems less intuitive. I sort of buy into Donald Hoffman's argument that we evolved to want to focus on concrete objects (thus excluding the "nothing").
Side note: It's interesting that Brown was working on network issues. I've seen some articles on information-theoretic/categorical models of quantum mechanics that attempt to explain physics as a network. This, in turn, allows us to recreate standard QM in a different language, but also explains entanglement in a more intuitive network-based model (or so the author claimed; I did not find anything intuitive about the paper lol). I do find the idea of modeling reality as networks or possibility trees interesting though. But again, it's easier to conceptualize the network as a fundamental thing, rather than that the network simply is a model of process and relation, which seems to be the true basic entity!
Off the top of my head I can't think of many easily observable examples of feedback outside of social interactions, so I think it is pretty natural that people don't tend to have intuitions that are informed by observing feedback systems. (I suppose the notion of karma somewhat suggests the idea of a feedback system.) I don't see it as so much a matter of our educational systems as a matter of our lacking the perceptual and cognitive systems to see the feedback occurring in things around us.
I look into some types of feedback systems routinely, and have intuitions conducive to understanding feedback systems to a greater degree than most, but it takes expensive instrumentation for me to be able to observe the relevant processes. I'd have to think about how better intuitions about feedback systems could be cost effectively instilled during K-12 education.
I didn't mean feedback necessarily, just the view that process might be seen as fundamental, not substance. That what a thing is might not be best defined by "what it is made of." For example, heat is a measure of average motion, not a thing. Fire is a chemical reaction, not its own substance. So these are best understood as processes, not things, but I was always given the view that at any deeper level substances must define reality.
But I suppose the basics of feedback loops are important too. I do feel like I was exposed to that early on. For example, we sweat because we're hot, we get cool from evaporation, and then we stop being hot: negative feedback. (There's positive feedback too.)
There's a flag I want to put on "first cell", but it feels too off topic.
Granting the first cell making a distinction, which I can agree with, it's interesting how the story can be used for a single developing organism -- a story from birth until here we are talking -- as well as the development of organisms. "then the first re-entry of the first distinction into itself by the first language speakers" helped click some of GSB to his wider, philosophical sense that I haven't been grasping (and, truthfully, I'm still feeling around about).
"The world has eyes" is a nice phrase. It feels mystical in that way that tries to make a reflective statement -- where we talk of the world, which is usually not ourselves, but then it fits us within the world as we see our own eyes in our minds-eye -- that is, through language (or at least with a great deal of assistance from language)
When you finish with your next post: What do you make of 's linked summary?
That helped me -- it's nice to have an interpretation that's been worked through by someone else. I didn't realize that GSB's algebra is formally equivalent to Boolean logic (although now that I'm saying that I'm beginning to ask myself, just what *is* formal equivalency? I've sort of just taken that assertion at face value from people more knowledgeable than I). Also I didn't pick up the similarity between self-reference and re-entry.
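On "formal equivalency": one concrete way to cash it out (this is my own sketch, and only the easy half of the claim) is to translate the primary algebra into Boolean terms -- cross = NOT, juxtaposition = OR, unmarked = false -- and check that the two initials, J1 (position) and J2 (transposition), come out as Boolean identities:

```python
from itertools import product

def cross(*xs):
    # Empty cross = marked = True; otherwise negate the OR of the contents.
    return not any(xs)

def j1(p):
    # J1, position: cross(cross(p) p) = the unmarked state (false)
    return cross(cross(p), p) == False

def j2(p, q, r):
    # J2, transposition: cross(cross(pr) cross(qr)) = cross(cross(p) cross(q)) r
    lhs = cross(cross(p, r), cross(q, r))
    rhs = cross(cross(p), cross(q)) or r
    return lhs == rhs

assert all(j1(p) for p in [False, True])
assert all(j2(*v) for v in product([False, True], repeat=3))
```

The full equivalence claim also needs the converse direction (every Boolean identity is derivable in the algebra), which this little check doesn't touch.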
Quoting Count Timothy von Icarus
Yup, I find that part of what's fascinating in the book. Since the logic was developed in tandem with a practice I'm interested to know it from that practical angle more.
The above statement is true as I write, but may well be false as you read. Logic would prefer to be timeless and eternal, and has difficulty dealing with the unpleasantness of times changing.
Six days shalt thou labour and be false, but on the seventh day, thou shalt again be true.
Here's another related piece, fairly short and understandable.
http://homepages.math.uic.edu/~kauffman/TimeParadox.pdf
I agree that classical logic doesn't deal with time very well. That's part of what allowed Kant to distinguish between Logic As Such, and Transcendental Logic. As well as providing a conceptual entry into Hegel's philosophy.
To evaluate the difficulty of logic, on the whole, dealing with time I'd have to do more homework on logic. Just looking over this: https://plato.stanford.edu/entries/logic-temporal/ -- but it could be that GSB's logic would fit in here, and so "difficulty" is what's being dealt with in Laws of Form. In that case there'd be choices to make on which logic, and I'm not sure how I'd make a choice. (More homework necessary on my part, basically) ((EDIT: Though I should note that it's necessary on my part specifically because of what I'm interested in. I don't think because I'm wanting to bridge these things that means much about GSB's book -- it's more a me thing))
Quoting unenlightened
Another good read. It hits a lot of points of interest for me -- the liar's paradox is one of those I keep going back to, and I found the dual-functions which iterate back and forth in a time series really interesting, and it's interesting how Kauffman links all of these things back to GSB.
Anyone else work out this demonstration yet?
Are you looking at the 9th canon, where he constructs an ever-deepening series of nested a's and b's? Page 55 in my version?
If so, you just take the whole right-hand expression of a & b as = r, and use J2 in reverse.
This is where I am:
[quote=p. 61]In effect, when a, b both indicate the unmarked state, it remembers which of
them last indicated the marked state. If a, then f = m. If b, then f = n.[/quote]
This refers back to the recursive expression derived from the expansion on Page 55 :
E2. [math]f=\left. {\overline {\, \left. {\overline {\, fa \,}}\! \right|b \,}}\! \right|[/math]
And also refers back to page 56 right at the bottom:
[math]\left. {\overline {\, \left. {\overline {\, fn \,}}\! \right|n \,}}\! \right| =[/math] m or n
This is extraordinary! A circuit made entirely of switches that has a memory!
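To see the memory in code: under the same Boolean reading as before (an interpretation on my part -- cross = NOT, juxtaposition = OR), E2 iterates as f' = not(not(f or a) or b). Pulsing a or b sets the output, and with both unmarked the expression re-enters its own value and holds it. The function name is mine:

```python
def step(f, a, b):
    # One pass through E2: f = cross(cross(f a) b)
    return not (not (f or a) or b)

f = False
f = step(f, a=True, b=False)   # pulse a: f becomes m (True)
f = step(f, a=False, b=False)  # inputs released: f remembers m
assert f is True
f = step(f, a=False, b=True)   # pulse b: f becomes n (False)
f = step(f, a=False, b=False)  # f remembers n
assert f is False
```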
Wow, if someone implemented something like that we could have computers and an internet!
Sorry, couldn't resist.
Yeah, tempting but stupid. Computer memory is not made of switches. But kudos for bothering to read the thread at all.
Well computer memory is implemented in a variety of ways these days, but any modern computer is going to have some memory elements implemented as flip-flops. A simple schematic of a flip-flop is illustrated below.
Note that the symbols labeled TR1 and TR2 represent transistors, which for practical purposes are switches.
Older (pre-solid-state) computers used electromagnetically controlled switches with contacts that opened and closed. (relays) This allowed a literal bug to crash a computer. Solid-state switches (transistors) are a big improvement.
It doesn't predate computers built from relays, as the 'bug' link I posted shows. I would think Spencer-Brown would have been well aware of this, and wouldn't have believed himself to be presenting anything particularly novel in pointing out the possibility of memory implemented in switches.
As far as explaining... I'm not sure what you are asking me to explain. I haven't been reading along (although I am curious about Spencer-Brown's thoughts on implementing complex math in Boolean logic), so I'm not in a position to explain much about what Spencer-Brown has to say. I could explain the workings of the flip-flop in the image I posted, but I don't have a good sense of how much background knowledge I'd need to provide in order for you to find my explanation comprehensible.
https://lof50.com/?fbclid=IwAR0X6ywUWXj9i2peYoiUXxFxmEdv05n_HZWuxdfWc2rm-xGAdrxexcDZErY
Yup, that's the one! Thanks.
That worked. I already became stuck on the next step. :D -- but I figured it out by going back to the demonstration of C4 and using its steps rather than the demonstrated equality between the expressions.
Then it's pretty easy to see the pattern after that: it's the same pattern as before, only being iterated upon a part of the expression in order to continue the expansion.
I can say I'm stuck with your last reported place that you're at. At least, this morning I am.
Quoting wonderer1
This is part of my interest here -- something I've always struggled with is understanding the connection between circuits and symbols. I'm sure I don't understand how a circuit has a memory, still.
I'll take a stab at trying to convey it without going into too much detail.
Frequently, memories are implemented in subcircuits which have a designed-in bistability. An example of a bistable system would be a coin on a table. Assuming the coin can't be stood on edge, the coin on a table will have a stable state of either showing (outputting) heads or tails, true or false, 1 or 0.
Some sort of work (flipping the coin) will need to be done in order to get the coin/table system to represent the state opposite of what it is currently representing.
The flip-flop circuit shown below is loosely analogous:
Unfortunately, the image creators were a bit sloppy in the way they used text colors (and I'm too lazy to look for a better image), so imagine the text which says "+5 Volts" and "Zero Volts" to be black. (Those parts of the circuit are 'part of the table' and stay constant.) The remaining red and blue text details the two different stable conditions the subcircuit can be in: red state or blue state.
The circuit shown has two inputs S(et) and R(eset) and two outputs Q and ~Q. (Typically only one of the two outputs might be used, since as long as the system has had time to reach stability the ~Q output state is the logical inverse of the Q output state.)
The two three-terminal devices (TR1 and TR2) are transistors. The terminals that exit the transistors horizontally (to the left or right) are the control inputs to the transistors. When a control input is at 0.7 volts or greater that transistor will be on and allow current to flow in the top terminal and out the bottom terminal resulting in the output to which the transistor is connected (Q or ~Q) being pulled towards 0V (captioned as 0.2V).
Two other particularly important elements for having a flip-flop are R2 and R3. R2 and R3 represent resistors. Note that R2 connects the Q output to the input of TR2 while R3 connects the ~Q output to the input of TR1. So each output has some control of the other transistor's input. As long as S and R are not connected to anything the transistor that is turned on will keep the other transistor turned off. Simultaneously a transistor being turned off (in combination with the resistor network) causes the other transistor to be turned on. So like a coin on a table the circuit will just sit in one of the two stable states, red or blue.
The S and R inputs can be momentarily connected to 0 Volts in order to force a change from one state to the other, and after the input which was connected to 0 Volts is disconnected, the flip-flop will stay in the state it was forced into.
I'm going to leave it there for now. Let me know if that helps, or what needs more explanation.
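If it helps to poke at the behavior in code, here's a behavioral sketch of the same principle: two cross-coupled inverting elements holding each other in place. I've modeled it as an idealized active-low NAND latch rather than the transistor-level circuit, so the names and formulation are my simplification, not the schematic's:

```python
def nand(a, b):
    return not (a and b)

def settle(q, nq, s, r):
    """Iterate the cross-coupled pair until it stabilizes.
    s and r are True when left alone (high), False when momentarily grounded."""
    for _ in range(4):  # a few passes is enough to reach the stable state
        q, nq = nand(s, nq), nand(r, q)
    return q, nq

q, nq = settle(False, True, s=False, r=True)   # ground Set: forced to Q=1
assert (q, nq) == (True, False)
q, nq = settle(q, nq, s=True, r=True)          # release: the state is held
assert (q, nq) == (True, False)
q, nq = settle(q, nq, s=True, r=False)         # ground Reset: forced to Q=0
assert (q, nq) == (False, True)
```

Grounding both inputs at once drives both outputs high, and which state you land in on release is a coin flip -- the state best avoided for sound logic.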
Yeah. Your point was recently hammered home for me on another forum by someone who wants to dichotomize everything into either physical things or abstractions, and can't understand physical processes as a category different from either.
I guess for me, feedback is important in making a system interesting, so I'm biased towards focussing on systems with feedback.
Interesting. You would think that a process view would tend to collapse the distinction between abstract and physical. Maybe not.
The story of a hole in a state of flow with an innumerable number of other holes towards ~Q: We start at 5 V and move through R1 to TR1, because the voltage at Q is lower than the voltage at ~Q (assuming we're already in a steady state); then we go through the unmarked resistor on the other side of the transistor, up through R3, and out ~Q. If you touch "Set" to the zero-volts line, then you ground the flow, causing the voltage to switch over to R4-TR2-R2-Q.
Based on the website I linked it looks like Q and ~Q are out of phase with one another. So the memory comes from being able to output an electrical current at inverse phases of one another? How do we get from these circuits to a logic? And the phase shift is perhaps caused by subtle manipulations of the transistor?
To keep the explanation relatively simple, it is easier to mostly ignore current flows and look at voltage levels in various places. However, I think it will help if I go into more detail about the type of transistor depicted in the flip-flop schematic, and the way current flows through the transistor. So to that end, let's look at the left half of this image:
This image also helps introduce the names for the terminals of the transistor which are symbolized with B(ase) E(mitter) and C(ollector). The purple arrows indicate the way current can flow through the transistor with the width of the arrow illustrating that the current flowing into the collector is larger than the current flowing into the base. All of the current must exit out of the emitter. Typically the base current is around one hundredth of the emitter current. However, current can only flow into the collector when there is current flowing into the base. Therefore a small base current acts as an input controlling the larger collector current.
Another factor, having to do with the physics of the semiconductor device that the transistor supervenes on, is the fact that the base voltage needs to get up to ~0.7 volts before current will flow into the base, and therefore before current will be able to flow through the emitter.
So, to get back to simplistically modelling things in terms of the voltage levels on different wires: we can think of the transistor as a device where, when the voltage at the base of the transistor is 0.7 volts or higher, a switch is closed between the collector and the emitter, allowing current to flow through the transistor from collector to emitter.
Getting back to the flip-flop schematic...
The schematic is marked up in accordance with modelling things in terms of static voltage states where we don't need to be concerned with current flows and what happens on a dynamic basis. For now at least, we just want to look at how the circuit acts as a one bit memory. That can be understood by recognizing the fact that when the Set input is grounded to 0 Volts the red markups indicate the voltages on the wires they are near. When the Reset input is grounded the blue markups apply.
There are two other states that are of interest, which are not detailed on that schematic. These two states are the different memory states that the circuit can be in when both Set and Reset are disconnected from ground. What the voltage state of the flip-flop is, when both Set and Reset are disconnected, depends on whether Set or Reset was last tied to ground. In other words, the voltage at Q reflects the flip-flop's memory of whether Set or Reset was last connected to ground.
I'm going to have to leave off there for now. I'll respond to more of what you wrote later.
Also, there is another scenario to consider, which is what happens when both Set and Reset are connected to ground yielding Q=~Q, and how Set and Reset being disconnected from ground simultaneously is like flipping a coin. But as Q=~Q might suggest, that's a state that is best avoided for sound logic.
And that helps me understand how it has a memory -- when you come back to it it'll be in one state or the other, so there are two possible states for the circuit to be in when at equilibrium.
And I can now see how they are switches thanks to your explanation, which was a bit of a mystery to me before.
Very good! :up:
I'll respond to the rest of your previous post later today.
Phase isn't a particularly useful concept for thinking about the relationships between Q and ~Q. In the case where both Set and Reset are grounded, both Q and ~Q will be at 5 Volts rather than one being at 5V and the other being at 0.2 Volts. Also, when considering the transitions from one state to another things get messy for a time and thinking of Q and ~Q as having a phase relationship breaks down.
As for, "How do we get from these circuits to a logic?"...
So, with a flip-flop that we can use as a one-bit memory, we have what we can think of as a logical variable. Additional circuitry can take the Q outputs of multiple flip-flops and perform logical operations. The result of the logical operation can then be stored in another flip-flop, for use at a later time.
At this point it is pragmatic to jump up a level in abstraction and think in terms of logic gates instead of transistor circuits. So we can have an AND gate and brush consideration of transistors, resistors, and power supplies under the rug. We can simply think of an AND gate as a device with two inputs which treats voltages above 2.5 Volts as a logical 1 (true) and voltages below 2.5 Volts as a logical 0 (false), and which outputs the logically appropriate voltage level.
The following image shows schematic symbols for logic gates of various kinds and their truth tables:
Such logic gates can be strung together to yield whatever logical function is needed. For example, a one-bit adder:
A and B could be the outputs of two flip-flops representing the two bits to be summed. Cin represents "carry in" and can be connected to the "carry out" of another adder. S will have an output logic level representing the sum of A and B given the state of Cin. Cout will have an output level which can be connected to the Cin of a different adder.
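The standard gate-level recipe for such a one-bit (full) adder can be written out directly. This is the textbook XOR/AND/OR construction, which may or may not match the exact gate arrangement in the image:

```python
def full_adder(a, b, cin):
    """One-bit full adder. a, b: the bits to sum; cin: carry in.
    Returns (s, cout): the sum bit and the carry-out bit."""
    s = a ^ b ^ cin                    # sum is the XOR of all three inputs
    cout = (a & b) | (cin & (a ^ b))   # carry out
    return s, cout
```

For example, full_adder(1, 1, 0) returns (0, 1): one plus one is binary 10.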
By connecting such logical blocks together we can create something useful. For example we could have three 32 bit registers. (With each register just being a collection of 32 flip-flops.) Two of those registers could have 32 bit binary numbers that we want to add together. The third register could have its flip-flop inputs connected to the S outputs of a 32 bit adder chain and thus we would have the ability to take two stored 32 bit numbers and add them and store the sum in the output register.
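The carry chain just described, with each stage's Cout feeding the next stage's Cin, can be sketched as a ripple-carry adder. (A simplified model of my own; real microprocessor adders often use faster carry schemes.)

```python
def ripple_add(a_bits, b_bits):
    """Chain full adders: each stage's carry out feeds the next stage's
    carry in. Bit lists are least-significant-bit first."""
    carry, s_bits = 0, []
    for a, b in zip(a_bits, b_bits):
        s_bits.append(a ^ b ^ carry)
        carry = (a & b) | (carry & (a ^ b))
    return s_bits, carry

def to_bits(n, width=32):
    """A 32-bit 'register' holding n, as a list of bits."""
    return [(n >> i) & 1 for i in range(width)]

def from_bits(bits):
    return sum(b << i for i, b in enumerate(bits))
```

Then from_bits(ripple_add(to_bits(12345), to_bits(67890))[0]) gives 80235: two stored 32-bit numbers summed, bit by bit, into a third.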
Now so far I've glossed over the dynamics of changing states. That is much too complicated to try to cover in any detail. With digital logic, typically a 'clock' is used in order to be able to ignore the short term dynamic transitions of flip flops and logic gates from one stable state to the next.
The SR flip-flop schematic I showed is about as bare bones as a flip-flop can get. The flip-flops in a microprocessor are typically more complex D flip-flops which have a D(ata) input terminal and a CLOCK input. D flip-flops work by changing their output state (Q) to match the D input state when the clock signal transitions from a logic 0 to a logic 1. So with all of the flip-flops tied to the same clock signal, all of the transitioning can be synchronized. As long as the clock frequency is slow enough, all of the dynamic transitioning that occurs after the last clock edge has time to settle to a stable state before the next clock edge.
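The edge-triggered behavior is easy to model behaviorally (a sketch of my own of the D flip-flop's logic, ignoring all the analog transition dynamics just mentioned):

```python
class DFlipFlop:
    """Rising-edge-triggered D flip-flop: Q takes the value of D only
    at the instant CLOCK transitions from 0 to 1; otherwise Q holds."""
    def __init__(self):
        self.q = 0
        self._prev_clk = 0

    def tick(self, d, clk):
        if self._prev_clk == 0 and clk == 1:  # rising clock edge
            self.q = d
        self._prev_clk = clk
        return self.q
```

Note that changing D while the clock sits high does nothing; only the 0-to-1 edge samples D, which is what lets every flip-flop on the same clock switch in lockstep.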
Quoting wonderer1
Of course, though, this is what I want :D
I think what I'm wanting to settle, for myself, is whether or not the circuits are in turn being interpreted by us, or if they are performing logical operations. What makes Q and ~Q different, other than that one is on the left side and the other on the right? Do we just arbitrarily choose one side to be 0 and the other side to be 1? Or do the logical circuits which have a threshold for counting do it differently?
To my mind the circuit still doesn't really have a logical structure any more than a stop light has the logical structure of Stop/Go without an interpretation to say "red means stop, green means go". So are we saying "Q means 1, and ~Q means 0"?
I'm not clear on what you want clarification of, but let me respond to the rest of your post and then let me know what might still be unaddressed.
Quoting Moliere
The SR flip-flop circuit is symmetrical, so it is somewhat arbitrary which output is chosen to be Q and which ~Q. However, the Set pin is defined as the input that can cause Q to produce a 1 (5V) output. So one could swap Q and ~Q, but to be consistent with the conventions for SR flip-flops one would also need to swap which input is labeled S and which R. So, like the stoplight, it is a matter of convention.
Also, flip-flops themselves don't perform logical operations. They just serve as memories that can be used to provide inputs to logic gates (or combinations thereof), and store outputs from logic gates.
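The symmetry (and the convention) can be seen in a little simulation. Here's a sketch of my own using cross-coupled NOR gates, one common textbook form of the SR latch; the transistor circuit discussed earlier in the thread behaves like the NAND variant, where the "forbidden" input combination drives both outputs high rather than low:

```python
def nor(a, b):
    return 0 if (a or b) else 1

def sr_latch(s, r, q=0, nq=1):
    """Cross-coupled NOR SR latch. Iterate the two gates until the
    outputs settle. Returns (q, nq). With s=r=0 the latch 'remembers'
    whichever state it was left in; s=r=1 is the state to avoid."""
    for _ in range(4):                 # a few passes suffice to settle
        q_new = nor(r, nq)
        nq_new = nor(s, q_new)
        if (q_new, nq_new) == (q, nq):
            break
        q, nq = q_new, nq_new
    return q, nq
```

Swapping which output we call Q and which ~Q just amounts to swapping the S and R labels; the circuit itself doesn't care, exactly as with red/green on the stoplight.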
OK, cool. That was what I was thinking, but realized I didn't know. Given the topic of the book -- a kind of proto-logic prior to logic, or from which logic emerges (with a practical basis in sorting out electrical work and inventing a logic for that) -- it seemed important to me.
Got it. This is a memory, and not an operating circuit. So it holds a 1 or a 0, and it's by convention that a side of the flip-flop is treated as a 1 and the other as 0, and it behooves us to be consistent because then we can start doing cool things like reducing our number system to binary and having circuits perform operations faster than we can.
[quote=wiki]Latching relays require only a single pulse of control power to operate the switch persistently. Another pulse applied to a second set of control terminals, or a pulse with opposite polarity, resets the switch, while repeated pulses of the same kind have no effects. Magnetic latching relays are useful in applications when interrupted power should not affect the circuits that the relay is controlling.[/quote]
Yeah, a latching relay could be used to implement a one bit memory, and may be more helpful in visualizing things discussed in the book.
I'm really just inching along in this chapter. Every page presents a problem. Now I feel I have a handle on how we get to infinite expansions, but I'm stuck on re-entry on pages 56-57. (I realize now that before I was citing the pdf pages rather than the book pages. These are the book pages.)
So I looked back at the rule of dominance, because that's how we're meant to determine the value of infinitely expanding functions for the various values of a or b being either m or n, and because it seemed like basic substitution didn't work. But as I apply the rule of dominance I'm getting different values at each step of E1 in chapter 11.
[math]\left. {\overline {\, \left. {\overline {\, fa \,}}\! \right|b \,}}\! \right|[/math] = [math]f[/math]
[math]\left. {\overline {\, \left. {\overline {\, fm \,}}\! \right|m \,}}\! \right|[/math] = [math]n[/math] (1)
[math]\left. {\overline {\, \left. {\overline {\, fm \,}}\! \right|n \,}}\! \right|[/math] = [math]m[/math] (2)
[math]\left. {\overline {\, \left. {\overline {\, fn \,}}\! \right|m \,}}\! \right|[/math] = [math]n[/math] (3)
[math]\left. {\overline {\, \left. {\overline {\, fn \,}}\! \right|n \,}}\! \right|[/math] = [math]m[/math] or [math]n[/math] (4)
So I've tried three different things in trying to get all the equations to equal what they state here:
1) substituting the marked state for m and the unmarked state for n while expanding each instance of "f" with one more iteration so you'd have, in the case of (1): m-cross-m-cross-m-cross-m-cross. Since you have an even number of crosses all embedded within one another you get n -- the unmarked state. But then if I try this on (2): m-cross-n-cross-m-cross-n-cross: we have an even number of crosses embedded within one another so it should reduce to the unmarked state, but (2) reduces to the marked state.
2). The rule of dominance. You begin at the deepest depth which would be "f" in each case and alternate putting m or n next to the next depth-level. So starting with (1):
[math]\left. {\overline {\, \left. {\overline {\, fm \,}}\! \right|m \,}}\! \right|[/math] (1)
[math]\left. {\overline {\, \left. {\overline {\, fmm \,}}\! \right|m \,}}\! \right|[/math] (1.1)
[math]\left. {\overline {\, \left. {\overline {\, fmmn \,}}\! \right|m \,}}\! \right|[/math] (1.2)
[math]\left. {\overline {\, \left. {\overline {\, fmmn \,}}\! \right|mm \,}}\! \right|[/math] (1.3)
[math]\left. {\overline {\, \left. {\overline {\, fmmn \,}}\! \right|mmn \,}}\! \right|[/math] (1.4)
[math]\left. {\overline {\, \left. {\overline {\, fmmn \,}}\! \right|mmn \,}}\! \right|m[/math] (1.5)
and by the rule of dominance I get m because that's what sits in the pervading space.
But that's not the right way to apply the rule, then, since we must get n from the procedure for (1). Which brings me to:
3) Substitution from the Sixth Canon, where:
[math]\left. {\overline {\, m \,}}\! \right|[/math] = [math]n[/math]
[math]\left. {\overline {\, n \,}}\! \right|[/math] = [math]m[/math]
But then for (3) I get m, because n-cross equals m and m-cross equals n and that leaves m after substituting.
While writing this out I came up with a 4th possibility: just mark the next m or n, as you'd do with the rule of dominance, and take the value in the outer space. But then (5) is equal to m, and not m or n, except in a fancy way of interpreting "or" which I don't think is what's going on.
So, as I said, I'm inching along and every page presents a problem. :D This is as far as I got this morning. (EDIT: Changed the number-names of each step to conform with the thread)
You cannot begin at the deepest level because re-entry makes it infinite. So you simply evaluate each case as it stands, and the f drops out in every case except the last one, where everything else drops out. It is because the f doesn't drop out that there are still the 2 possibilities for its value.
Wait, I think i see what you are doing - treating each line as an equation, and then substituting the right back in for f.
You don't want to do that! Each line is a result for a combination of a and b. There is no working shown, and almost none to do. So for (2):
[math]\left. {\overline {\, fm \,}}\! \right|[/math] =[math]\left. {\overline {\, m \,}}\! \right|[/math]
and the re-entered f can be ignored.
Yup. The first line states what f is and that's how I was treating it.
Quoting unenlightened
[math]\left. {\overline {\, fm \,}}\! \right|[/math] = [math]\left. {\overline {\, m \,}}\! \right|[/math]
Cool. So with this solution the trouble I have is with (4). n-cross-m-cross is three crosses and so should equal m, but (4) equals n.
Right before these lines GSB states:
And the rule of dominance doesn't care about the infinite depth it cares about S-sub-0, the pervading space. For solution 2 I was treating "m" as the marked space and putting an "m" then "n" alternatively as I filled out the expression. For solution 4 I'm ignoring m and n and simply marking S-sub-0 with the next letter that follows. That works for (1) through (4), but (5) it would simply be equal to "m" rather than m or n by this procedure.
But this is me explaining my failed evaluations trying to figure out how to get to a successful one. (I've actually typed out a few of these puzzlers before only to find the solution at the end and delete the puzzler in favor of the solution... but this time I was still stuck at the end of my post)
And if I follow GSB's outline for (5) and apply it to (4) I'd say we have two expressions that evaluate to m or n -- because if f equals m, then you get mn-cross-m-cross, which is m-cross-m-cross, which is an even number of embedded crosses that gives you n, but if f equals n then you get nn-cross-m-cross, which is an odd number of embedded crosses so you get m.
The other three work out that way.
Basically I'm treating all 5 as test cases for understanding how to evaluate an expression's value which has re-entry and each time I try to use one of these solutions one of the equations comes out differently from what's written in the book.
So these are my failed evaluations which I'm sharing because this time by typing it out I haven't figured out how to do it right.
(4). [math]\left. {\overline {\, \left. {\overline {\, fn \,}}\! \right| n \,}}\! \right|[/math]
=[math]\left. {\overline {\, \left. {\overline {\, f \,}}\! \right| \,}}\! \right|[/math] (because n = the blank, i.e. nothing)
= f.
And this means that when a = n and b = n, f can = n or m. And thus we have the shape of the flip flop circuit.
Ahhhhh.... OK I think it clicked now. I figured out my mistake. I was treating "m" in (3) as embedding the crosses to its left, but in fact it's alongside the crosses rather than embedded and so I was just doing the evaluation incorrectly. Then upon finding the wrong answer I tried to come up with various other possible ways to evaluate, which I've already shared at length, and now I see how they simplify.
So due to Theorem 2 I can easily simplify (1) and (3) -- an empty cross next to any combination of expressions simplifies to the empty cross, and m is an empty cross. (within the cross, so cross-cross = the unmarked state)
That leaves (2) and (4).
For (2):
[math]\left. {\overline {\, \left. {\overline {\, fm \,}}\! \right|n \,}}\! \right|[/math]
Let f = m, and mm = m, therefore we are left with the marked state.
Let f = n, and nm = m, and we have the same.
But for (4) when we do this the evaluation comes out m or n.
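For what it's worth, the whole table of cases can be checked mechanically. Here's a tiny model of the primary arithmetic in my own encoding (True for the marked state m, False for the unmarked state n; juxtaposition is marked if any content is marked, and the cross inverts):

```python
M, N = True, False   # marked and unmarked states

def cross(x):
    """The cross inverts: cross(m) = n, cross(n) = m."""
    return not x

def juxt(*xs):
    """Contents side by side in one space: marked if any is marked
    (mm = m, nm = m, nn = n)."""
    return any(xs)

def e1(f, a, b):
    """cross(cross(f a) b) -- the memory expression E1."""
    return cross(juxt(cross(juxt(f, a)), b))

# Cases (1)-(3): the value is independent of the re-entering f.
assert {e1(f, M, M) for f in (M, N)} == {N}   # (1) -> n
assert {e1(f, M, N) for f in (M, N)} == {M}   # (2) -> m
assert {e1(f, N, M) for f in (M, N)} == {N}   # (3) -> n
# Case (4): a = b = n leaves f undetermined -- m or n.
assert {e1(f, N, N) for f in (M, N)} == {M, N}
```

Only in case (4) does the value of f survive into the result, which is exactly the "m or n" of the book's table.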
Thanks for the help. I made a small mistake along the way and it resulted in a lot of confusion.
This morning I find myself going back. In particular as I proceeded I started to pick up on a pattern in the writing: between theorems and conclusions.
Going to Chapter 4: Theorems are general patterns which can be seen through formal considerations of the Initials. Also axioms are used. In going back to get a better feel for the distinctions between these terms I'm also picking up on the fact that Canon is never formally defined -- it's like a Catholic canon in its function. Also I'm picking up on why identity is the 5th theorem -- if the calculus were inconsistent then you could come up with x =/= x. And, going back over, I'm starting to see the significance of theorem 7 -- it's what lets us build a calculus through substitution, while theorems 8 and 9 provide the initials for that calculus in chapter 5.
This is all inspired by the paragraph immediately where I left off:
And reviewing back up to Chapter 5 is about as far as I got this morning. I'm attempting to disentangle the procedures of Chapter 6 from Chapter 8 to give myself a better understanding of what's missing and needed to understand the next bits in Chapter 11.
Halfway down P.65 at "Modulator function" I just stop following. E4 is just too complicated and too big a jump for me, and I cannot recognise its translation into what is looking much like a simplified circuit diagram. I can just about see the translation of the example without re-entry bottom of P.66 and top of P.67. I have been hoping that someone could help me out at this point with the translation of E4, and then its further implementation using imaginary values.
Perhaps a way to put it, and to go back to an earlier distinction: Theorems are statements in the meta-language about what we can do in the object-language, and consequences are demonstrations within the object-language.
And the statement of GSB's I'm trying to understand:
To use arithmetic we have to have a complete knowledge of where we are in the form, it seems, but the calculus manages fine because, well, we are dealing with variables at that point?
Something subtle in there that I'm not fully picking up.
I'm still a little confused about why this doesn't affect Chapter 6, but I think I understand why Chapter 8 is denied at least. And I'm ready to push on as well.
So the flip-flop circuit gives us a reason to posit imaginary values -- at this point I think "imaginary" is a bit of a term of art. So far the mathematics presented has relied upon forms and their inter-substitutability. But here we have a form that, just like imaginary numbers in our everyday algebra, demands another number (or, in this case, form, since numbers aren't really part of the domain of interest). In a way we have to look at "p" in Figure 1 as having both the marked and unmarked state, and so we introduce imaginary values to indicate: "the value of this expression depends upon the value of a function, and the value of the function depends upon time -- it is either the marked or unmarked state, but the calculus (even with the assistance of the arithmetic) cannot give you the answer at this point."
Also I think I understand what GSB is on about with respect to the oscillator function -- in p-cross-p if p is marked then the function is marked, and if p is unmarked then the function is unmarked. So there are some functions which even as they oscillate they are still continuously valued.
.... well, and that's as far as I got this evening. :D
Looking at E4 at this point I think we have to be able to follow along with a given expression's oscillation patterns, sort of like what I did with p-cross-p in the above, such that we can tell, over time, how often it will be marked or how often it will be unmarked or if it will always be one or the other.
I think that's right.
[quote=P 57]We see, in such a case, that the theorems of representation no longer hold, since the arithmetical value of e' is not, in every possible case of a, b, uniquely determined.
[/quote]
Re-entry produces a kind of fractal/recursive infinity. The last case of P.56 that you had problems with produces an indeterminate value, and this means that when there is re-entry, the calculus cannot always reach a determinate answer. It could be flip, or could be flop, for example, or could be oscillating. The algebra still works, but the re-entered expression 'f' cannot be substituted by mark and then by no mark and then 'there is no other case'; now there are other cases.
EDIT: I say that but the next part is opaque to me this evening. Might have to poke around the conference website to see if they have already done work on this chapter. It's not explaining itself as well as the previous chapters have, or at least I feel stupider while reading it. ((Heh, OK, the journal they have set up doesn't have any issues in it. So maybe the conference will be the way to go: "Hey, uh, what's he talking about here and how do you check the oscillating functions?" -- seems they have some videos from the last conference that might be worth checking out if I can't guess through to a possible insight tomorrow morning: http://westdenhaag.nl/exhibitions/19_08_Alphabetum_3))
[math]\left. {\overline {\, p \,}}\! \right|p[/math]
is using the form from before but to represent E3 instead of giving us an expression with a determinate value. The oscillator function is the solution to E3: it's both m and n, and by adding a dimension of time we're able to give a solution to E3 which goes back and forth, as Figure 1 shows. So now I'm seeing the square waves not as switches but as marks in time as E3 goes back and forth between its two values.
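That back-and-forth can be written out on a time axis. A sketch of my own (arbitrary timing): p as a square wave, the cross as inversion, and juxtaposition as "marked if either content is marked":

```python
def p(t):
    """A square wave oscillating between marked (1) and unmarked (0)."""
    return t % 2

def crossed_p(t):
    return 1 - p(t)   # the cross inverts the wave

# cross(p) alongside p: the space is marked if either content is marked,
# so the two out-of-phase waves superimpose to a steady marked state.
wave = [max(crossed_p(t), p(t)) for t in range(8)]
print(wave)   # [1, 1, 1, 1, 1, 1, 1, 1]
```

So p itself goes back and forth in time, while the superimposed expression looks continuously marked, even though it is built from two oscillations.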
Also the bit on steps from Chapter 6 clicked this morning -- we can count steps but they don't cross a boundary so in a sense you can have as many steps as you want, and GSB uses that to generate the infinitely expanding functions of re-entry. But also because we're not crossing boundaries we're sort of in this place where, due to the dimension of time, we can begin to use steps in the process.... something like that. And now it's time for work, but I thought this worth sharing because most of what I've posted has been more confusing than elucidating, and for once I thought I had an elucidating thought.
But this is more or less me saying I think I need more homework to really work that out. My posts would just be guesses in the dark about rules I don't know, which would be even more confusing than the confusion I've already expressed :D
I'll pick through those videos from the previous conference to see if there are more worked out examples there. Else, LoF24 might be the best bet for understanding just how to operate on waves with the calculus.
It has been interesting to read along with this discussion. I get tantalizing hints at what the topic under discussion might be related to, but not enough to be able to say anything helpful for the most part.
I suppose it is a bit like trying to decrypt a foreign language.
There are gaps in my understanding of the book, still. I can say everything up to Chapter 11 mostly makes sense, now, but Chapter 11 is where the calculus suddenly changes -- and he spent 10 chapters making sense of the calculus before changing it all in one quick go.
For me it's mostly the logic that I find fascinating: it's a genuine logic that relies upon neither number nor sentences (nor truth!), and in the notes GSB even goes into connecting his logic to classical Boolean logic, so there's some sense in which we could say this is a "more primitive" logic. Or, at least, so the guess would go -- it'd be interesting if Boolean logic could, in turn, also derive GSB's LoF, giving a kind of "map" between both where you could simply choose which one you want to use.
So even if I don't quite grasp Chapter 11's operations, it's still pretty cool to be familiar with yet another example of a logic (as opposed to there being One True Logic, or some such).
Whereas (I'm guessing here) for the memory circuit, there would be no spring, but instead 2 electromagnets operating a dual switch that turns one on and the other off, and vice versa.
And then we come to imaginary values, and my best guess is that it relates to the p expression you mentioned above.
[quote=P.61]
Suppose we now arrange for all the relevant properties of the point p in Figure 1 to appear in two successive spaces of expression, thus.
P'p
We could do this by arranging similarly undetermined distinctions in each space, supposing the speed of transmission to be constant throughout. In this case the superimposition of the two square waves in the outer space, one of them inverted by the cross, would add up to a continuous representation of the marked state there.[/quote]
I'm too lazy to correct the expression - you know what it is...
Anyway, two undetermined expressions, one of them under a cross, give two square waves exactly out of sync, which gives rise to what looks like a stable 'on' but isn't.
And that is where my understanding ends. How one gets from there to frequency doubling and halving is beyond me.
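Frequency halving at least has a standard story in digital circuits, independent of GSB's notation: a toggle stage flips its output once per rising edge of the input wave, so the output completes one cycle for every two input cycles. A sketch of my own aside, not a reading of E4:

```python
def halve(wave):
    """Given a square wave as a list of 0/1 samples, toggle an output
    bit on every 0 -> 1 transition: the result has half the frequency."""
    out, q, prev = [], 0, 0
    for v in wave:
        if prev == 0 and v == 1:   # rising edge of the input
            q ^= 1
        out.append(q)
        prev = v
    return out

print(halve([0, 1, 0, 1, 0, 1, 0, 1]))  # [0, 1, 1, 0, 0, 1, 1, 0]
```

The input repeats every two samples, the output every four: half the frequency. Whether this is what GSB's modulator is doing, I can't say.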
You mean added? When you mix two frequencies, you get a waveform that is a composite of four different ones: you get A, B, A+B, and A-B. This is superheterodyne theory. It's how AM radios work.
So if A is the radio carrier frequency, we want to piggyback audio frequency B on it by sending A and B through a mixer and then filtering for A+B.
Not sure if that's what you mean or not.
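The sum-and-difference behavior falls out of the product-to-sum identity: cos(At)·cos(Bt) = ½[cos((A−B)t) + cos((A+B)t)]. A quick numeric check, with arbitrary example frequencies:

```python
import math

A, B = 7.0, 2.0   # arbitrary example frequencies

def mixed(t):
    """Mixing (multiplying) two sinusoids of frequencies A and B."""
    return math.cos(A * t) * math.cos(B * t)

def sum_diff(t):
    """The same signal written as components at A-B and A+B."""
    return 0.5 * (math.cos((A - B) * t) + math.cos((A + B) * t))

# The two expressions agree at every instant:
for t in [0.0, 0.1, 0.5, 1.3, 2.7]:
    assert abs(mixed(t) - sum_diff(t)) < 1e-9
```

So the mixer's output really does contain the A+B and A−B components, which the radio then filters between.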
So it could very well be addition! That appears addition-like. But to actually mean addition I'd have to be able to parse E4 better. I can see the input and the output, but I don't really understand how E4 operates on the input to obtain said output.
It wouldn't surprise me if you could relate this to radio waveforms, though one thing that'd be different is that we're dealing with square waves, and my understanding of radio waves is that they are not square waves. (I do understand that electronic circuits sometimes use square waves -- but my understanding is not practical; I've only seen square waves used as examples on websites while trying to make sense of the book.)
And actually, now that you're here, I've started seeing how it might be possible to make counting more explicit -- which relates to the thread on Kripke's skepticism.
Going back to the initial hook, I'd like to understand Chapter 8 a little better because of its relationship to your inference about the philosophy of philosophy being a reflection rather than a content.
What do you mean by "marked?"
A square wave would be used when you want something to blink on and off, like the hazard lights on a car, or the turn signal.
¯¯|__|¯¯|__
and it becomes either:
¯¯¯¯|____
or
____|¯¯¯¯
And E4 is... not easy to render here, but the link to the book is on page 1 of this thread, and E4 is on book page number 66
I think this guy probably took a large amount of LSD before he started writing.
There's something there, but in a way that reminds me of my old calculus professor: he certainly knew what he was talking about, but he found it hard to dumb it down for the rest of us. We managed to make it through, but it wasn't because the professor was good at communicating what he obviously knew.
And here the topic is very abstruse -- we don't even have the familiar things like number to rely upon in thinking through the calculus. But that's exactly what makes it interesting to me.
So perhaps the answer to understanding chapter 11...
I can sort of see how the cross and variables could represent various electrical components. One of the thoughts I had about re-entry: since he's dealing with a very large electrical system, he can get away with treating one part of the system as dependent upon another part in such a way that it's as if it were infinite. Or he can summarize a large network of components which are the same in form, repeated however many times over (I have no idea what even the ballpark estimate would be), through re-entry, rather than having to write out every individual component -- which would make for a technically accurate but difficult-to-use map. With re-entry you can summarize a large chunk of components.
And in Appendix 2, page 117 GSB makes a note of how he believes the marked state summarizes a large chunk of the Principia Mathematica -- so I believe it's correct to read him as trying to compress details into something more user-friendly so he can think through the problems of the network (but then he's a mathematician, so he's also developing a math).
Though Quoting wonderer1
That's not out of the question. And I'd go further and say it wouldn't undermine the text either. One of the stories from science I like to tell is about how the structure of benzene was guessed at by Kekulé -- at least so he tells the story -- when he had a very vivid day-dream of a snake eating its own tail. The moral being that, for a science, the inspiration isn't as important as whether the idea "works" (in benzene's case, unifying a number of observations into a single theory of its structure).
I don't see any reason to think that someone in an altered state of consciousness is thereby unable to think -- I'd prefer to say differently abled. There are people who see things without drugs, after all, though we also cannot substitute the possibly profound experiences people sometimes report having on hallucinogens for rigorous thinking. On this topic I've always found Aldous Huxley's The Doors of Perception to be good.
In the notes, on pages 101-102, GSB states his belief about the relationship between logic and mathematics:
Which I find super interesting. It's kind of going into how math justifies itself, and in a way it seems GSB believes that logic is an applied mathematics, but that at bottom it all comes out of the void.
I am strongly reminded of Pirsig here, when he talks about 'quality'. (Which is a fair candidate for the first distinction.) There's a bit in Zen and the Art of Motorcycle Maintenance about how one judges the quality of an essay first, and forms the criteria for what 'makes a good essay' from the good essays rather than the other way round. And then the English professor refuses to grade the students' work, on the basis that they already know the quality of their own work.
And yes, it was the age of LSD.
armature will return to make contact at A and
If we spell out this cycle onto a causal sequence, we get the following:
If contact is made at A, then the magnet is activated.
If the magnet is activated, then contact at A is broken.
If contact at A is broken, then the magnet is inactivated.
If the magnet is inactivated, then contact is made.
This sequence is perfectly satisfactory provided it is clearly understood that the if . . . then junctures are causal. But the bad pun that would move the ifs and thens over into the world of logic will create havoc:
If the contact is made, then the contact is broken. If P, then not P.
The if . . . then of causality contains time, but the if . . . then of logic is timeless. It follows that logic is an incomplete model of causality.[/quote]
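Bateson's point can be made vivid by letting each causal "if . . . then" take one time step. A toy simulation of my own: the timeless contradiction "if P then not-P" becomes, in time, simply an oscillation.

```python
def buzzer(steps):
    """Bateson's buzzer cycle, one causal implication per time step:
    contact made -> magnet activated -> contact broken -> magnet off -> ..."""
    contact = True
    history = []
    for _ in range(steps):
        history.append(contact)
        magnet = contact          # contact made -> magnet activated
        contact = not magnet      # magnet activated -> contact broken
    return history

print(buzzer(6))  # [True, False, True, False, True, False]
```

Exactly the flip-flop's square wave again: what logic calls a paradox, causality calls an oscillator.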
This, in case anyone wonders, is why my next reading thread is
https://thephilosophyforum.com/discussion/14707/reading-mind-and-nature-a-necessary-unity-by-gregory-bateson