From Combinatory Logic to Proof Logic


By Jacques Bailhache ( jacques.bailhache@gmail.com )

Combinatory Logic is a theory which formalizes the foundations of information processing. It considers a function not as a set of pairs (argument, result) but as a process which makes it possible to obtain the result from the argument.
We can restrict ourselves to functions with one argument, since a function with two arguments may be considered as a function which, when applied to the first argument, gives another function which, when applied to the second argument, gives the result, and so on.
The application of a function f to an argument a is written "f a", with the convention "f a b = (f a) b".
There are three possible cases for a function with one argument : the result may be the argument itself, the result may not depend on the argument, or the result may be the application of one expression to another. We can define three primitive functions called combinators corresponding to these three cases by :
 I a = a
 K a b = a
 S a b c = a c (b c)
These are the rules defining the basic combinators, to which we must add the rules of equality : reflexivity, symmetry, transitivity, and the rules concerning equalities of applications, to get what is called the Equational Calculus for Combinatory Logic (CL) ( see for example http://plato.stanford.edu/entries/logic-combinatory/ )
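To fix ideas, here is a minimal Haskell sketch (the names CL, Ap and step are mine, not part of the systems studied below) of CL terms together with one step of reduction by these rules :

 data CL = I | K | S | Ap CL CL
   deriving (Eq, Show)

 -- one step of reduction by the rules I a = a, K a b = a, S a b c = a c (b c)
 step :: CL -> CL
 step (Ap I a)               = a
 step (Ap (Ap K a) _)        = a
 step (Ap (Ap (Ap S a) b) c) = Ap (Ap a c) (Ap b c)
 step (Ap a b)               = Ap (step a) (step b)
 step x                      = x

For example, step (Ap (Ap (Ap S K) K) I) gives Ap (Ap K I) (Ap K I), and one more step gives I.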
We will start with a formalization of Equational Calculus for CL in which there are two kinds of propositions : term(x), meaning that x is a term, and x = y, meaning that the terms x and y are equal.
The semantic equality "=" must be distinguished from the syntactic (literal) equality which will be represented by "==".

The Equational Calculus for CL theory can be formalized by these axioms ( |- ... ) and rules ( ... |- ... ) :

The axioms concerning I are not necessary since the combinator I can be defined from combinators K and S by I = S K K : indeed, S K K a = K a (K a) = a.
This system may be extended by adding symbols Z1 ... Zn and associated axioms which formalize one particular theory. This gives what we will call Symbolic Combinatory Logic (SCL). So the first formalization of SCL is :
SCL terms are defined by these rules :

a1, ... ap, b1, ... bp are terms which depend on the theory we want to formalize ; " |- a1 = b1 " ... " |- ap = bp " are the specific axioms formalizing a particular theory.
This set of axioms and rules can be simplified.
The two rules " a = b |- a c = b c " and " a = b |- c a = c b " may be merged into one rule " a = b, c = d |- a c = b d ". This gives the following system :


Instead of having two kinds of propositions term(x) and x = y, we may represent the proposition term(x) by x = x. With this convention, we automatically get reflexivity ("a = a |- a = a"). The rule "term(a), term(b) |- term(a b)" becomes a particular case of the rule "a = b, c = d |- a c = b d" : "a = a, b = b |- a b = a b".
The new system becomes :

Concerning the rule "a = a |- I a = a", the restriction to premises with identical left and right sides is not necessary, since the rule may be generalized to "a = a' |- I a = a'", and similarly for the rules defining K and S. So we get the following system ( see also here ) :

Symmetry and transitivity of equality may be merged into one rule, either left transitivity-symmetry : " a = b, a = c |- b = c ", or right transitivity-symmetry : " a = c, b = c |- a = b ". From this rule, using reflexivity, symmetry can easily be deduced : " a = b, a = a |- b = a ", and from symmetry and transitivity-symmetry, transitivity can easily be deduced. For example, with left transitivity-symmetry : from a = b we deduce b = a by symmetry, and then the premises b = a and b = c give a = c. Conversely, transitivity and symmetry obviously imply left and right transitivity-symmetry.
The new axiomatization of SCL becomes (for example with left transitivity-symmetry) :

We will now give a name to each axiom and rule of this system :

A proof can be represented by a combination of axioms and rules, and it proves an equality between two terms. For example, the proof DFI TK proves I K = K, and the proof APL TK TI proves K I = K I.
More generally, any term can be associated with a corresponding proof which proves the equality of this term to itself, by the correspondence : I -> TI, K -> TK, S -> TS, Z1 -> TZ1, ... Zn -> TZn, application -> APL.
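This correspondence can be pictured by a small Haskell sketch (only an illustration ; here Z n and TZ n stand for Zn and TZn, and the proof constructors corresponding to the rules are omitted) :

 data Term = I | K | S | Z Int | Ap Term Term

 data Pf = TI | TK | TS | TZ Int | APL Pf Pf
         -- plus the constructors corresponding to the rules (DFI, LTS, ...)

 termToProof :: Term -> Pf
 termToProof I        = TI
 termToProof K        = TK
 termToProof S        = TS
 termToProof (Z n)    = TZ n
 termToProof (Ap a b) = APL (termToProof a) (termToProof b)

With this correspondence, termToProof t proves t = t.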
The next step consists in identifying terms with corresponding proofs : I = TI, K = TK, S = TS, Z1 = TZ1, ... Zn = TZn, and application of a to b is identified with APL a b.
With this convention, each term x which is also a proof proves the equality between two terms a and b which we will call respectively left and right of x. So "x proves a = b" is equivalent to "left(x) = a and right(x) = b".
We can represent this system by a set of rules in the form " XXX : x1 = y1, ..., xq = yq |- x = y " which means that if z1 proves x1 = y1, ..., zq proves xq = yq then XXX z1 ... zq proves x = y.

We must also define left and right of LTS x y when left(x) differs from left(y). We may for example say that in this case LTS x y proves LTS x y = LTS x y.
When we implement such a system, it appears to be slow, because of the inefficiency of CL. Lambda calculus is more efficient, but has the inconvenience of arbitrary variable naming. For example, lambda x . x and lambda y . y represent the same function but are syntactically different. De Bruijn indices have both advantages, efficiency and the absence of the variable naming problem, so we will use them. For readability, Lambda Calculus may be used as an external notation for De Bruijn expressions.
The principle of De Bruijn indices is that lambda expressions are not associated with variable names ; variables are numbered with an index representing the nesting level of the corresponding lambda expression. For example, "lambda x . x" is represented by "lambda . V0", and "lambda x . lambda y . x" by "lambda . lambda . V1".
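For example, with a small Haskell rendering of De Bruijn expressions (the constructor names V, Lam and App are only for this illustration, distinct from those used further below), the identity and the combinators K and S become :

 data DB = V Int | Lam DB | App DB DB
   deriving (Eq, Show)

 identity = Lam (V 0)                  -- lambda x . x
 k        = Lam (Lam (V 1))            -- lambda x . lambda y . x
 s        = Lam (Lam (Lam (App (App (V 2) (V 0)) (App (V 1) (V 0)))))
                                       -- lambda x . lambda y . lambda z . x z (y z)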
With this notation, K and S disappear as primitive terms, and new primitive terms are introduced for variables and lambda expressions. A new proof constructor must also be introduced to substitute variables when a lambda expression is applied to a term.
This gives this new system, which we will call Symbolic Lambda Calculus (SLC) (see also here) :

Variables might be represented by one primitive variable DB0 = DBV 0 and a constructor DBS to define the other variables (DBV 1 = DBS DB0, DBV 2 = DBS (DBS DB0), ...).
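For instance, a small helper (hypothetical name dbv, assuming the Proof type of the Haskell code given further below) could build these variables :

 -- dbv n builds the variable DBV n from DB0 and DBS
 dbv :: Int -> Proof
 dbv 0 = DB0
 dbv n = DBS (dbv (n - 1))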
This gives the following system :

Any theory can be reformulated with one symbol and one axiom. For example, if we define three symbols Z1, Z2, Z3 appropriately in terms of a single symbol, the three axioms "|- a1 = b1", "|- a2 = b2", "|- a3 = b3" are equivalent to the single axiom "|- DBL (APL (APL (APL DB0 a1) a2) a3) = DBL (APL (APL (APL DB0 b1) b2) b3)".
With one symbol and one axiom, we get the following system :

Proofs can be made shorter by defining a reduction function "red", which repeatedly replaces APL (DBL x) y by the result of substituting y, in x, for the variables whose index equals their nesting level in DBL terms, and by introducing a constructor LTR defined by : It appears that with this constructor, the LTS and SUB constructors are not needed.
This gives the Simplified Symbolic Lambda Calculus system (see also here) :

We must now give a precise definition of the "red" function.
One idea is to define a function "red1" which does one step of reduction, and to repeat the application of this function until no more reduction is possible, i.e. until we get an x such that red1(x) == x.
The problem with this definition is that there exist terms (such as the fixed point of the identity function) for which iterated reduction steps fall into a loop. To avoid this problem, we can define a "red" function which stops when a previously obtained result appears again.
But there also exist terms for which iterated reduction steps give a succession of terms of increasing size. This can be detected when, at some point, we get a term which contains a previously obtained term. We can stop the reduction at that point to avoid looping.
Since in some cases reduction could be continued but must be stopped to avoid looping, the consequence is that if x = y, we do not necessarily have red(x) == red(y). This limits the conditions of application of the LTR constructor. These conditions can be generalized by modifying the definition of this constructor : The complete system becomes :
There is something missing with this system : if red(a) == b, we would like to be able to prove a = b.
This can be done by modifying the definition of LTR : Then LTR applied to a (which proves a = a) and to a again, i.e. LTR a a, proves a = b.
The previous LTR x y can be obtained with LTR (LTR x x) (LTR x y). With this modification, we get the following system :

Here is a Haskell implementation of Simplified Symbolic Lambda Calculus, with an example theory whose specific axiom is "SMB = APL SMB SMB", and some proofs derived in this system :
Sslc> test

The proof
  LTR (LTR AXM SMB) (APL SMB AXM) 
proves
  SMB
= APL SMB (APL SMB SMB)

The proof
  LTR (LTR AXM SMB) (APL AXM SMB) 
proves
  SMB
= APL (APL SMB SMB) SMB

The proof
  LTR (LTR AXM SMB) (APL AXM AXM) 
proves
  SMB
= APL (APL SMB SMB) (APL SMB SMB)

  APL (APL (DBL DB0) (DBL DB0)) (APL (DBL DB0) SMB)
reduces to
  SMB

  APL (DBL (APL DB0 DB0)) (DBL (APL (DBL DB0) (APL (DBL (APL DB0 DB0)) DB0)))
reduces to
  APL (DBL DB0) (APL (DBL (APL DB0 DB0)) (DBL (APL (DBL DB0) (APL (DBL (APL DB0 DB0)) DB0))))

  APL (DBL (APL DB0 DB0)) (DBL (APL (DBL (APL SMB DB0)) (APL (DBL (APL DB0 DB0)) DB0)))
reduces to
  APL (DBL (APL SMB DB0)) (APL (DBL (APL DB0 DB0)) (DBL (APL (DBL (APL SMB DB0)) (APL (DBL (APL DB0 DB0)) DB0))))

Sslc> 

Here is the Haskell source code :
module Sslc where
 
 import Data.List

 data Proof = AXM
            | SMB 
            | DB0 
            | DBS Proof
            | DBL Proof
            | APL Proof Proof
            | LTR Proof Proof
	deriving (Eq, Show)

 data Side = LeftSide | RightSide
	deriving (Eq, Show)

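 -- shift u x : adjust the variable indices of x relative to u (used by subst on the substituted term when it goes under a DBL)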
 shift :: Proof -> Proof -> Proof
 shift _ AXM = AXM
 shift _ SMB = SMB
 shift _ DB0 = DB0
 shift u (DBS x) = if u == DBS x then DBS u else DBS (shift u x)
 shift u (DBL x) = DBL (shift (DBS u) x)
 shift u (APL x y) = APL (shift u x) (shift u y)
 shift u (LTR x y) = LTR (shift u x) (shift u y)

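 -- subst u x b : substitute b for the variable u in x, adjusting indices (the substitution used for reduction)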
 subst :: Proof -> Proof -> Proof -> Proof
 subst1 :: Proof -> Proof -> Proof -> Proof
 subst u a b = if u == a then b else (if DBS u == a then u else subst1 u a b)
 subst1 _ AXM b = AXM
 subst1 _ SMB _ = SMB
 subst1 _ DB0 _ = DB0
 subst1 u (DBS x) b = DBS (subst u x b)
 subst1 u (DBL x) b = DBL (subst (DBS u) x (shift DB0 b))
 subst1 u (APL x y) b = APL (subst u x b) (subst u y b)
 subst1 u (LTR x y) b = LTR (subst u x b) (subst u y b)

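 -- cont x y : True if y occurs as a subterm of x (used to detect growing reductions)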
 cont :: Proof -> Proof -> Bool
 cont1 :: Proof -> Proof -> Bool
 cont x y = if x == y then True else cont1 x y
 cont1 AXM _ = False
 cont1 SMB _ = False
 cont1 DB0 _ = False
 cont1 (DBS x) y = cont x y
 cont1 (DBL x) y = cont x y
 cont1 (APL x y) z = (cont x z) || (cont y z)
 cont1 (LTR x y) z = (cont x z) || (cont y z)

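 -- red1 : one step of reduction (reduces redexes of the form APL (DBL x) y)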
 red1 :: Proof -> Proof
 red1 AXM = AXM
 red1 SMB = SMB
 red1 DB0 = DB0
 red1 (DBS x) = DBS (red1 x)
 red1 (DBL x) = DBL (red1 x)
 red1 (APL (DBL x) y) = subst DB0 x y
 red1 (APL x y) = APL (red1 x) (red1 y)
 red1 (LTR x y) = LTR (red1 x) (red1 y)

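 -- red2 : push one more reduction step onto the history list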
 red2 :: [Proof] -> [Proof]
 red2 [] = []
 red2 (x : l) = (red1 x) : (x : l)

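 -- red3 : iterate reduction, stopping when the newest term contains a previously obtained term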
 red3 :: [Proof] -> [Proof]
 red3 [] = []
 -- red3 (x : l) = if elem x l then (x : l) else red3 (red2 (x : l))
 red3 (x : l) = case find (\y -> cont x y) l of
	Nothing -> red3 (red2 (x : l))
	Just _ -> x : l

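 -- red : reduce a term, with the loop/growth detection described above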
 red :: Proof -> Proof
 red x = case red3 (x : []) of
		[] -> x
		y : m -> y
 -- red x = if red1 x == x then x else red (red1 x)

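 -- side s a b p : left or right side of the conclusion of the proof p, where a and b are the two sides of the axiom AXM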
 side :: Side -> Proof -> Proof -> Proof -> Proof
 side LeftSide a b AXM = a
 side RightSide a b AXM = b
 side _ _ _ SMB = SMB
 side _ _ _ DB0 = DB0
 side s a b (DBS x) = DBS (side s a b x)
 side s a b (DBL x) = DBL (side s a b x)
 side s a b (APL x y) = APL (side s a b x) (side s a b y)
 side s a b (LTR x y) = 
	let lx = side LeftSide a b x
	    ly = side LeftSide a b y
	in let rlx = red lx
	       rly = red ly
	   in if (lx == ly) || (lx == rly) || (rlx == ly) || (rlx == rly) 
	      -- then red (side RightSide a b (if s == LeftSide then x else y))
          then (if s == LeftSide then (side RightSide a b x) else red (side RightSide a b y))
	      else LTR x y
 -- if red (side LeftSide a b x) == red (side LeftSide a b y) then (side RightSide a b (if s == LeftSide then x else y)) else LTR x y

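 -- the specific axiom of the example theory : SMB = APL SMB SMB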
 axl = SMB
 axr = APL SMB SMB

 left = side LeftSide axl axr
 right = side RightSide axl axr

 proves x = do
  putStr ("\nThe proof\n  " ++ show x ++ " \nproves\n  " ++ show (left x) ++ "\n= " ++ show (right x) ++ "\n")

 reducesTo x = do
  putStr ("\n  " ++ show x ++ "\nreduces to\n  " ++ show (red x) ++ "\n")

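 -- some usual lambda terms in De Bruijn notation : identity, self-application, composition, the fixed point construction ; ias is the fixed point of DBL (APL SMB DB0)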
 ident = DBL DB0
 auto = DBL (APL DB0 DB0)
 comp f g = DBL (APL f (APL g DB0))
 fix f = APL auto (comp f auto)
 ias = fix (DBL (APL SMB DB0))

 test = do
  proves (LTR (LTR AXM SMB) (APL SMB AXM))
  proves (LTR (LTR AXM SMB) (APL AXM SMB))
  proves (LTR (LTR AXM SMB) (APL AXM AXM))
  reducesTo (APL (APL (DBL DB0) (DBL DB0)) (APL (DBL DB0) SMB))
  reducesTo (fix ident)
  reducesTo (fix (DBL (APL SMB DB0)))  

Simplified Symbolic Lambda Calculus is theoretically satisfying, but with only one symbol and one axiom it is not easy to use in practice. So for practical reasons I reintroduced the possibility of having several symbols and several axioms, using a constructor EQU defined by : EQU x y proves x = y.

Another useful constructor is RED which reduces its argument before computing its conclusion.
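As a rough illustration (only a sketch ; the actual definitions used in Proof Logic may differ), these two constructors could be grafted onto the Haskell code above, by adding them to the Proof type and adding the corresponding clauses to the side function :

            | EQU Proof Proof   -- EQU x y proves x = y
            | RED Proof         -- the conclusion of RED x is the conclusion of red x

 side LeftSide  _ _ (EQU x y) = x
 side RightSide _ _ (EQU x y) = y
 side s a b (RED x) = side s a b (red x)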

I also modified the definition of LTR :

is replaced by :

I also introduced a new syntax :

This theory seemed to me sufficiently different from the original Lambda Calculus to deserve a name that does not refer to it, and since all expressions are proofs, I called it Proof Logic.

I introduced new constructors which are not necessary but can make proofs easier : < x , y > which is the symmetric of { x , y } :

/x which does one step of reduction on the left side of the conclusion :

\x which does one step of reduction on the right side of the conclusion :

We also may want to display some intermediate result. To do this, we introduce the operator ? x y, whose value is y, but when we calculate the left part of its conclusion, it displays "Left of x is : " followed by this left part, and the same for the right part.

The constructor $ reduces the left and right sides of the conclusion :

The constructors { x , y } and < x , y > can be merged into one constructor x ; y, also including transitivity :

Here are the complete rules of Proof Logic :

The left and right sides of the conclusion of @x do not depend on the left and right sides of x, but on the left and right sides of the reduction of x :