Tuesday, December 29, 2015

The Number One, Part Three

In an earlier post, "What it Means to be Number One", I presented Quine's definition of the number one, which is defined in terms of four basic mathematical constructs: class membership (⌜α ϵ β⌝), universal generalization (⌜(α)ϕ⌝), joint denial (⌜(ϕ ↓ ψ)⌝), and class abstraction (⌜α^ϕ⌝).  It is six pages of dense text.  In another post, "The Number One, Part Two", I claimed that the length of the definition is "the result of giving the number one a precise, complete, logical definition without resorting to using any numbers except zero".  This is not quite true.  I've since realized that the size comes not from precision, completeness, or logicality, but from Quine's definition of negation, which is

⌜∼ϕ⌝ for ⌜(ϕ ↓ ϕ)⌝

This definition is important because it enables his system to define all truth-functional connectives in terms of a single truth-functional connective, thereby minimizing the number of basic constructs in his system.  But it does have the effect of making expanded definitions quite large.  You see, any time a negation is expanded, the expression being negated (ϕ) is duplicated in the resulting expansion.  If that expression also contains negated expressions, then those expressions are quadrupled, and if those negated expressions contain negated expressions, they are octupled, and so on.  The result is an exponential relationship between the size of an expanded definition and the number of layers of negation in the definition.  Those of you who know computer science know that exponential relationships mean huge outputs for all but the smallest inputs.  Hence the six-page definition of one.  If we expand all constructs contained in Quine's definition of one except negation, the result is not so big.  It is

x^∼(y)∼(∼(y ϵ x) ↓ ∼(α^(∼(α ϵ x) ↓ ∼(α ϵ α′^(∼(α′ ϵ α′′^((α′′′)(∼∼(∼(α′′′ ϵ α′′) ↓ α′′′ ϵ y) ↓ ∼∼(∼(α′′′ ϵ y) ↓ α′′′ ϵ α′′))))))) ϵ α^((α′)(∼∼(∼(α′ ϵ α) ↓ α′ ϵ x′^(∼((α′′)(∼∼(∼(α′′ ϵ x′) ↓ α′′ ϵ x′) ↓ ∼∼(∼(α′′ ϵ x′) ↓ α′′ ϵ x′))))) ↓ ∼∼(∼(α′ ϵ x′^(∼((α′′)(∼∼(∼(α′′ ϵ x′) ↓ α′′ ϵ x′) ↓ ∼∼(∼(α′′ ϵ x′) ↓ α′′ ϵ x′))))) ↓ α′ ϵ α)))))

which is not shockingly complicated.  Even if we expand statements of membership in and of class abstractions in this definition (which is something I did not do in "What it Means to be..."), the definition of one is still just

x^∼(y)∼(∼(y ϵ x) ↓ ∼(∼(γ)∼(∼(∼(β)∼(∼((α)(∼∼(∼(α ϵ β) ↓ ∼(γ′)∼(∼(α ϵ γ′) ↓ ∼(α′)∼(∼(α′ ϵ γ′) ↓ (∼(α′ ϵ x) ↓ ∼(∼(γ′′)∼(∼(α′ ϵ γ′′) ↓ ∼(α′′)∼(∼(α′′ ϵ γ′′) ↓ ∼(∼(γ′′′)∼(∼(α′′ ϵ γ′′′) ↓ ∼(α′′′)∼(∼(α′′′ ϵ γ′′′) ↓ (α′′′′)(∼∼(∼(α′′′′ ϵ α′′′) ↓ α′′′′ ϵ y) ↓ ∼∼(∼(α′′′′ ϵ y) ↓ α′′′′ ϵ α′′′)))))))))))) ↓ ∼∼(∼(∼(γ′)∼(∼(α ϵ γ′) ↓ ∼(α′)∼(∼(α′ ϵ γ′) ↓ (∼(α′ ϵ x) ↓ ∼(∼(γ′′)∼(∼(α′ ϵ γ′′) ↓ ∼(α′′)∼(∼(α′′ ϵ γ′′) ↓ ∼(∼(γ′′′)∼(∼(α′′ ϵ γ′′′) ↓ ∼(α′′′)∼(∼(α′′′ ϵ γ′′′) ↓ (α′′′′)(∼∼(∼(α′′′′ ϵ α′′′) ↓ α′′′′ ϵ y) ↓ ∼∼(∼(α′′′′ ϵ y) ↓ α′′′′ ϵ α′′′)))))))))))) ↓ α ϵ β))) ↓ ∼(β ϵ γ))) ↓ ∼(α)∼(∼(α ϵ γ) ↓ (α′)(∼∼(∼(α′ ϵ α) ↓ ∼(γ′)∼(∼(α′ ϵ γ′) ↓ ∼(x′)∼(∼(x′ ϵ γ′) ↓ ∼((α′′)(∼∼(∼(α′′ ϵ x′) ↓ α′′ ϵ x′) ↓ ∼∼(∼(α′′ ϵ x′) ↓ α′′ ϵ x′)))))) ↓ ∼∼(∼(∼(γ′)∼(∼(α′ ϵ γ′) ↓ ∼(x′)∼(∼(x′ ϵ γ′) ↓ ∼((α′′)(∼∼(∼(α′′ ϵ x′) ↓ α′′ ϵ x′) ↓ ∼∼(∼(α′′ ϵ x′) ↓ α′′ ϵ x′)))))) ↓ α′ ϵ α))))))

which is more than anyone would want to try to write out or memorize, but still not embarrassingly long.  If we don't expand any of the truth-functional connectives or existential quantification, the result is something that is almost readable by a person who is familiar with symbolic logic:

x^(∃y)(y ϵ x . (∃γ)((∃β)((α)(α ϵ β ≡ (∃γ′)(α ϵ γ′ . (α′)(α′ ϵ γ′ ⊃ (α′ ϵ x . (∃γ′′)(α′ ϵ γ′′ . (α′′)(α′′ ϵ γ′′ ⊃ ∼((∃γ′′′)(α′′ ϵ γ′′′ . (α′′′)(α′′′ ϵ γ′′′ ⊃ (α′′′′)(α′′′′ ϵ α′′′ ≡ α′′′′ ϵ y)))))))))) . β ϵ γ) . (α)(α ϵ γ ⊃ (α′)(α′ ϵ α ≡ (∃γ′)(α′ ϵ γ′ . (x′)(x′ ϵ γ′ ⊃ ∼((α′′)(α′′ ϵ x′ ≡ α′′ ϵ x′))))))))
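The blow-up described above is easy to demonstrate.  Here is a minimal sketch in Python (not Quine's system, just his rewriting rule ⌜∼ϕ⌝ → ⌜(ϕ ↓ ϕ)⌝ applied to nested negations; the function name expand is mine):

```python
# Sketch of the blow-up: since ~phi is defined as (phi ↓ phi),
# expanding a negation duplicates its operand, so each added layer
# of negation roughly doubles the length of the expanded formula.

def expand(depth: int) -> str:
    """Fully expand depth-many nested negations of 'p' into joint denial."""
    if depth == 0:
        return "p"
    inner = expand(depth - 1)
    return f"({inner} ↓ {inner})"   # ~phi becomes (phi ↓ phi)

for d in range(6):
    print(d, len(expand(d)))
```

The lengths come out 1, 7, 19, 43, 91, 187 for zero through five layers: roughly doubling each time, which is exactly the exponential relationship at work in the six-page definition.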

Saturday, July 11, 2015

Binary Operators in somerby.net/mack/logic

Binary Operators in somerby.net/mack/logic now have different precedences.  See here.

Sunday, June 21, 2015

Alternative Hexadecimal Digits

I've been collaborating with Valdis Vītoliņš on hexadecimal digits.  The result is a new set of digits:


They follow a design where the horizontal strokes represent 1, 2 and 4 in the binary composition of the number which each digit is supposed to represent.  The rules for constructing the digits are:
  • 0 is represented by a digit that looks like an 'o' or a '6'.
  • 8 is represented by a digit that looks like a minuscule rho or a 'P'.
  • Numbers 1-7 and 9-15 are represented by digits whose shape follows this plan:
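The rules above can be sketched as a small decoder (hypothetical code of my own; the names strokes and base_shape are not part of the font, and the actual glyph shapes live in the images):

```python
# Hypothetical decoder for the digit design described above:
# the base shape distinguishes 0-7 ('o'-like) from 8-15 (rho/'P'-like),
# and each horizontal stroke contributes a binary weight of 1, 2 or 4.

def strokes(n: int) -> list[int]:
    """Which of the weights 1, 2, 4 appear as strokes in the digit for n."""
    if not 0 <= n <= 15:
        raise ValueError("hex digits only")
    return [w for w in (1, 2, 4) if n & w]

def base_shape(n: int) -> str:
    """'o'-like base for 0-7, rho/'P'-like base for 8-15."""
    return "o" if n < 8 else "P"

for n in (0, 5, 8, 13):
    print(n, base_shape(n), strokes(n))
```

So, for example, 13 = 8 + 4 + 1 is a 'P'-like base with the 1-stroke and the 4-stroke, and its value can be read straight back off the glyph.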
We considered several possible sets of digits before settling on this one.  We chose this new set of digits because
  1. We find it is the easiest to encode and decode.
  2. We find that pairs of these digits can be combined into readable ligatures.
Valdis has created fonts for the digits and ligatures, which I have incorporated into a branch of the Hex Editor plugin for Notepad++.  It has all the features of the mainline Hex Editor plugin, but also offers the option of viewing hexadecimal data with the new characters in place of the traditional 0-9A-F.  If you'd like to use it, download this zip file and run the setup executable contained therein:
The fonts look like this:

Sunday, May 24, 2015

Hexadecimal Digits AND Ligatures!


A few years ago, I posted a set of hexadecimal digits that I invented that could serve as an alternative to the customary 0-9 and A-F.  My idea was to make a set of digits that could be shown on a standard numeric LCD display, where each digit somehow represented a binary version of the number it stands for.  See here.  Now someone has gone one better.  In the blog post "We think about yotabytes, but can't handle just one byte", Valdis Vītoliņš describes a hexadecimal notation of his own invention based on the same idea, with one innovation: he combines pairs of digits into ligatures, thereby creating a system of 256 symbols that can represent the numbers 0 through 255.  That's one symbol for each possible byte value, and the numeric value of each symbol can be read out of its shape with a simple algorithm.  If you're a programmer (ahem, software engineer) like me, who looks at hex dumps often, this is an intriguing idea.  I might actually try to use this.

Sunday, May 03, 2015

Counterexample Feature Completed for somerby.net/logic

The "Counterexample" button now works for modal statements as well as non-modal statements in somerby.net/mack/logic.

Tuesday, April 14, 2015

Additions to somerby.net/mack/logic

I added operators for strict implication and definite description to somerby.net/mack/logic.  I also added the beginnings of a feature that finds counterexamples for statements that are not necessarily true.  It only works for non-modal statements, but I think I can expand it to work for modal statements as well.

Saturday, March 28, 2015

Free Variables Are Back (partially)

I've added some support for free variables back into somerby.net/mack/logic.  They are now treated as constants denoting actual objects.  At the moment, they are not allowed in statements which contain modal operators, since I have not yet found a satisfactory way to deal with constants in modal statements.  Or at least a way that I'm willing to commit to.  I'm working on it.  I've been reading Possible Worlds, "Actualism" and "SQML" and other such things for instruction and insight.  I knew that there was more than one system of modal logic, but this reading has made me appreciate the need for me to define exactly what system of modal logic somerby.net/mack/logic implements.  So I'm working on that, too.

Friday, February 13, 2015

No More Free Variables (for now)

Formerly, somerby.net/mack/logic would accept statements with free variables.  It would bind them all with existential quantifiers.  That seemed to work; it yielded the results I expected for all of the statements I had tried - up until a few days ago, when I found it to fail for some statements.  One such statement is "a=b".  I would expect this to be a contingent statement: true or false depending on whether or not a and b denote the same object.  But the application decided it was necessarily true.  It's not hard to see why if we expand the statement, as the application does, to "3b,3a,a=b".  There is always an a and a b which are identical in any nonempty universe; they are identical when they are the same object.
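The failure can be reproduced in a few lines of Python (a model-theoretic sketch of my own, not the application's code; the function name exists_equal is mine):

```python
from itertools import product

# Sketch of the problem: once free variables are bound existentially,
# "a=b" holds in every nonempty domain, because we may always choose
# a and b to denote the same object.

def exists_equal(domain) -> bool:
    """3b,3a,a=b over the given domain."""
    return any(a == b for a, b in product(domain, repeat=2))

print(exists_equal({"obj1"}))           # True
print(exists_equal({"obj1", "obj2"}))   # True again, though a=b should be contingent
```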

So, for now, the application rejects statements that contain free variables.  I'm working on finding a way to decide them correctly.  As soon as I've found one (and I've assured myself that it really is correct), I'll incorporate it into the application and allow free variables once again.

Saturday, January 24, 2015

Another Square of Opposition

In my previous post, I showed Terence Parsons' theory of Aristotle's Square of Opposition in symbolic form.  I also noted that the Square of Opposition holds up under the modern interpretation of the four forms of Term Logic if it is assumed a priori that the subject term of the forms is non-empty (I am not the first to note this; see section 2.2.2 of iLogic).  So there are two interpretations of the four forms which affirm the Square of Opposition.  But are there others?

I found another interpretation of the four forms of Term Logic that affirms the Square of Opposition.  It's a parallel to Parsons' Square.  Parsons constructs his square by taking the modern interpretation of the four forms, bestowing existential import upon Form A and denying existential import to the form on the opposite corner - Form O.  The interpretation I found is constructed by taking the modern interpretation of the four forms, bestowing existential import upon Form E, and denying existential import to Form I on the opposite corner.  Here is a statement of it in symbolic form:

// Another Square of Opposition

// "All S are P", with no existential import
A <=> (x,Sx->Px)

// "No S are P", with existential import
E <=> ((x,Sx->~Px)&(3x,Sx))

// "Some S are P", with no existential import
// "If there are any S, some of them are P" might be a better way to state it.
I <=> ((3x,Sx)->(3x,Sx&Px))

// "Some S are not P" under the modern interpretation with existential import
// Since it has existential import, there's no need to state it as "Not all S are P".
O <=> ~(x,Sx->Px)

->

// Contraries
~(A&E)

// Contradictories
A ^ O
I ^ E

// Subcontraries
I | O

// Subalterns
A -> I
E -> O
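As a sanity check, these relationships can be verified by brute force (a sketch in Python, my own code rather than anything from somerby.net/mack/logic, quantifying over every extension of S and P in a two-element domain, empty S included):

```python
from itertools import product

# Brute-force check of the square above over all extensions of S and P
# in a two-element domain.

domain = [0, 1]
ok = True
for S_ext, P_ext in product(product([False, True], repeat=2), repeat=2):
    S = lambda x: S_ext[x]
    P = lambda x: P_ext[x]
    A = all(P(x) for x in domain if S(x))                              # no import
    E = all(not P(x) for x in domain if S(x)) and any(map(S, domain))  # with import
    I = (not any(map(S, domain))) or any(S(x) and P(x) for x in domain)
    O = not A                                                          # ~(x,Sx->Px)
    ok &= not (A and E)                       # contraries
    ok &= (A != O) and (I != E)               # contradictories
    ok &= I or O                              # subcontraries
    ok &= ((not A) or I) and ((not E) or O)   # subalterns
print(ok)  # True
```

Since the relations are valid, the check passes in every one of the sixteen models, including those where S is empty.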

I'd like to know if anyone else has thought of it before.

This interpretation, like Parsons' interpretation and the modern interpretation combined with a non-empty subject term, affirms the Logical Hexagon:

// The Logical Hexagon:

// "All S are P", with no existential import
A <=> (x,Sx->Px)

// "No S are P", with existential import
E <=> ((x,Sx->~Px)&(3x,Sx))

// "Some S are P", with no existential import
// "Some S are P, if any S exist" is a better way to state it.
I <=> ((3x,Sx)->(3x,Sx&Px))

// "Some S are not P", existential import
// Since it has existential import, there's no need to state it as "Not all S are P".
O <=> ~(x,Sx->Px)

// The statement U may be interpreted as "Either all S are P or all S are not P."
U <=> ((x,Sx->Px)|(x,Sx->~Px))

// The statement Y may be interpreted as "Some S is P and some S is not P"
Y <=> ((3x,Sx&Px)&(3x,Sx&~Px))

->

// Subalterns: AI, AU, EU, EO, YI, YE
A->I
A->U
E->U
E->O
Y->I
Y->O

// Contraries: AE, EY, YA
~(A&E)
~(E&Y)
~(Y&A)

// Subcontraries: IU, UO, OI
I|U
U|O
O|I

// Contradictories: AO, UY, EI
A^O
U^Y
E^I

Thursday, January 01, 2015

A Second Theory of Term Logic

I added Term Logic to somerby.net/mack/logic for fun. While doing the necessary research, I discovered the logical Square of Opposition, which is kind of cool. Terence Parsons wrote an illuminating article on the Square. In it, he argues convincingly for an interpretation of 2-term propositions that affirms the Square of Opposition, and also convincingly that this interpretation is Aristotle's intended interpretation. I like the article so much that I've chosen to use this interpretation in my application, defining the four forms of propositions (SaP, SeP, SiP, SoP) just as he does. Even so, I doubt that this is the only coherent theory of Term Logic held by premodern logicians. Here I shall explain why. You can click on any of the symbolic statements in this post to test them in somerby.net/mack/logic.

When explaining why the interpretation of the O-form as "Some S is not P" did not cause problems for premodern logicians, Parsons dismisses the possibility that they assumed that the S-term was not empty, stating "Explicitly rejecting empty terms was never a mainstream option, even in the nineteenth century". But I'm not so sure. First of all, just because they did not explicitly reject empty terms does not mean they did not reject them implicitly. Second, they did not have to reject empty terms altogether to make this interpretation of the O-form compatible with the traditional Square of Opposition. They only needed to assume a priori (and perhaps unconsciously) that the S-term was not empty whenever they were making an argument. This isn't a very rigorous thing to do, but it's a natural thing to do. Usually, if we are making assertions about some kind of thing, it's because some such thing exists and we want to say something meaningful about it. Reasoning about unicorns may have its uses, but they are not obvious.

Suppose that some pre-modern philosophers, like Boethius and Peter of Spain, did not interpret the propositional forms as Aristotle intended. Instead, they assumed a priori that the S-term was nonempty, and let the O-form have existential import, just as Boethius seemed to be doing when he translated it as "Some S is not P". Then, instead of the Square of Opposition being this:

// Aristotle's Square of Opposition

A <=> ((x,Sx->Px) & (3x,Sx))
E <=> (x,Sx->~Px)
I <=> 3x,Sx&Px
O <=> ((3x,Sx&~Px)|(~3x,Sx))

->

// Contraries
~(A&E)

// Contradictories
A ^ O
I ^ E

// Subcontraries
I | O

// Subalterns
A -> I
E -> O

they believed the Square of Opposition was this:

// A Hypothetical Alternative to
// Aristotle's Square of Opposition

3x,Sx // Assume a priori that S is not empty.

A <=> (x,Sx->Px)  // (Existential import here would be redundant.)
E <=> (x,Sx->~Px)
I <=> (3x,Sx&Px)
O <=> (3x,Sx&~Px) // Assume O has existential import.

->

// Contraries
~(A&E)

// Contradictories
A ^ O
I ^ E

// Subcontraries
I | O

// Subalterns
A -> I
E -> O
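This hypothetical square can also be checked by brute force (a Python sketch of my own, quantifying over all extensions of S and P in a two-element domain and keeping only the models where the a priori assumption 3x,Sx holds):

```python
from itertools import product

# Check the hypothetical square: restrict attention to models where
# S is nonempty, then test all four relationships.

domain = [0, 1]
ok = True
for S_ext, P_ext in product(product([False, True], repeat=2), repeat=2):
    S = lambda x: S_ext[x]
    P = lambda x: P_ext[x]
    if not any(map(S, domain)):
        continue  # assume a priori that S is not empty
    A = all(P(x) for x in domain if S(x))          # (x,Sx->Px)
    E = all(not P(x) for x in domain if S(x))      # (x,Sx->~Px)
    I = any(S(x) and P(x) for x in domain)         # (3x,Sx&Px)
    O = any(S(x) and not P(x) for x in domain)     # (3x,Sx&~Px)
    ok &= not (A and E)                       # contraries
    ok &= (A != O) and (I != E)               # contradictories
    ok &= I or O                              # subcontraries
    ok &= ((not A) or I) and ((not E) or O)   # subalterns
print(ok)  # True
```

Drop the nonemptiness filter and the check fails (an empty S makes both A and E true while I and O are false), which is exactly why the a priori assumption is doing the work here.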

The relationships of the Square hold in this interpretation as well as in Aristotle's.

And then there is the matter of the Principle of Obversion and the Principle of Contraposition. Parsons says that some medieval logicians advocated these principles, though they are both fallacious under Aristotle's interpretation of the four forms. The following is not necessarily true:

// The Principle of Conversion by Contraposition,
// with Aristotle's interpretation
// of the A-form and the O-form
((x,Sx->Px) & (3x,Sx)) <=> ((x,~Px->~Sx) & (3x,~Px))
((3x,Sx&~Px)|(~3x,Sx)) <=> ((3x,~Px&~~Sx)|(~3x,~Px))

This is not necessarily true, either:

// The Principle of Obversion,
// with Aristotle's interpretation
// of the A-form and the O-form

// Every S is P = No S is non-P (SaP <=> Se~P)
((x,Sx->Px) & (3x,Sx)) <=> (x,Sx->~~Px)

// No S is P = Every S is non-P (SeP <=> Sa~P)
(x,Sx->~Px) <=> ((x,Sx->~Px) & (3x,Sx))

// Some S is P = Some S is not non-P (SiP <=> So~P)
(3x,Sx&Px) <=> ((3x,Sx&~~Px)|(~3x,Sx))

//Some S is not P = Some S is non-P (SoP <=> Si~P)
((3x,Sx&~Px)|(~3x,Sx)) <=> (3x,Sx&~Px)

Why did some logicians make these mistakes? And why did other logicians like Peter of Spain endorse them? Maybe to them, they weren't mistakes. Under what we call the modern interpretations of the four forms, these principles are necessarily true.

// The Principle of Conversion by Contraposition,
// with the modern interpretations of the forms:
(x,Sx->Px) <=> (x,~Px->~Sx)
(3x,Sx&~Px) <=> (3x,~Px&~~Sx)

// The Principle of Obversion,
// with the modern interpretations of the forms:

// Every S is P = No S is non-P (SaP <=> Se~P)
(x,Sx->Px) <=> (x,Sx->~~Px)

// No S is P = Every S is non-P (SeP <=> Sa~P)
(x,Sx->~Px) <=> (x,Sx->~Px)

// Some S is P = Some S is not non-P (SiP <=> So~P)
(3x,Sx&Px) <=> (3x,Sx&~~Px)

//Some S is not P = Some S is non-P (SoP <=> Si~P)
(3x,Sx&~Px) <=> (3x,Sx&~Px)

Being necessarily true, they will still, of course, be true under an a priori assumption that the S-term is nonempty. So maybe there was a theory of term logic floating around Medieval Europe that looked like this:

3x,Sx

A <=> (x,Sx->Px)
E <=> (x,Sx->~Px)
I <=> (3x,Sx&Px)
O <=> (3x,Sx&~Px)

->

// Contraries
~(A&E)

// Contradictories
A ^ O
I ^ E

// Subcontraries
I | O

// Subalterns
A -> I
E -> O

If so, then they really did have a coherent theory of Term Logic which affirmed the Principle of Conversion by Contraposition and the Principle of Obversion. I can't be sure, since I haven't looked for evidence to the contrary, e.g. Peter of Spain discussing empty terms in Summulae Logicales Magistri Petri Hispani, but as far as I know, it makes sense. I guess I'll have to read some Medieval logic to find out. It's too bad I don't know Latin.