This is a great introduction to that most ancient and elegant model of computation. But I was a little sad to read:
> One can represent natural numbers in a simple way, as follows:
> Definition (ordered tuples, natural numbers). The ordered tuple ⟨a₀, …, aₙ⟩ of λ-terms is defined as λx [x a₀ … aₙ]. One then defines the λ-term ⌜n⌝ corresponding to the natural number n as: ⌜0⌝ = I and, for every k, ⌜k + 1⌝ = ⟨F, ⌜k⌝⟩.
which is a rather convoluted and impractical way to represent natural numbers compared to the Church numerals Cₙ, which iterate a given function n times on a given argument.
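For comparison, here is a minimal sketch of Church numerals as JavaScript arrow functions (the toInt decoder is only there for inspection, and the names are mine, not the article's):

const zero = f => x => x;               // C0: apply f zero times
const succ = n => f => x => f(n(f)(x)); // Cn+1: one more application of f
const toInt = n => n(k => k + 1)(0);    // decode by iterating +1 starting from 0
toInt(succ(succ(zero)));                // 2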
There are cases where it makes sense to use tuples to represent natural numbers, namely to simplify the definition of predecessor, subtraction, and comparison. But that uses 1-tuples rather than 2-tuples [1].
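To see why tuples help there, here is a sketch of the quoted 2-tuple scheme in JavaScript (pred of zero is left as garbage, as is usual for these encodings):

const I = x => x;                    // ⌜0⌝ = I
const T = a => b => a;               // selector for the first component
const F = a => b => b;               // selector for the second component
const pair = a => b => z => z(a)(b); // ⟨a, b⟩ = λz [z a b]
const succ = n => pair(F)(n);        // ⌜k+1⌝ = ⟨F, ⌜k⌝⟩
const pred = n => n(F);              // ⟨F, ⌜k⌝⟩ F reduces to ⌜k⌝
const isZero = n => n(T);            // ⌜0⌝ T = T; ⌜k+1⌝ T = F

A numeral ⟨F, ⌜k⌝⟩ is just a function waiting for a selector, so pred hands it the second-component selector and gets ⌜k⌝ back in one step, where the Church version needs the familiar pairing detour.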
Replacing Church numerals by tuple numerals allowed a googologically VERY large number (based on the so-called Bashicu Matrix System) to be expressed in only 350 bits rather than 404 bits. [2]
[1] https://github.com/tromp/AIT/blob/master/numerals/tuple_nume...
[2] https://github.com/tromp/AIT/blob/master/fast_growing_and_co...
I've gotten interested in the formalization of mathematics recently and found the book Type Theory and Formal Proof by Nederpelt and Geuvers to be a really nice one, taking you from the untyped lambda calculus and gradually extending the type system until you get full dependent type theory and the Calculus of Constructions that underlies, e.g., the Lean language/proof checker.
Peter Smith (logic God) also recommends it in his logic study guide:
https://www.logicmatters.net/tyl/
I just came to the comments to recommend that book myself! It has been a while since I really took a deep dive into the lambda calculus, and I was pleasantly surprised by both how readable this book has been so far (still working through it) and just how much I still had to learn about type systems.
I also had previously thought I wasn't that interested in formal proofs, because I was never super interested in traditional mathematical proofs, but once I realized formal proofs can be viewed as an extension of type checking, it got me a lot more interested!
Cool! Until recently I also wouldn't have been that attracted by this title, but since learning more about proof theory and logic on one hand, and computation on the other, and beginning to grasp the depth of the connection/isomorphism between them via the Curry-Howard correspondence, it turns out this book is pretty much exactly along the right path for me now. Next up will be getting a better grasp of how it all relates to category theory via the so-called Curry-Howard-Lambek correspondence...
A few years ago I wrote up a brief introduction to Lambda Calculus with Python¹, leaning on programming ideas while remaining light on mathematical notation.
¹ https://matthodges.com/posts/2022-07-17-exploring-lambda-cal...
I really like combinatory logic and lambda calculus, and I think it should be used to define common semantics for various languages and computer systems.
Lately, I was wondering whether it's possible to represent any logic by choosing a suitable base of combinators, with the sentences of the logic corresponding to beta-normal terms expressed in those combinators. What brought me to the idea is that we can represent certain substructural logics using a subset of B, C, K, W. Also, one can represent classical logic by using a different base than S and K, which allows for call/cc.
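For reference, here are the combinators mentioned, written as JavaScript arrow functions (a sketch; the structural-rule glosses in the comments are the standard ones):

const S = f => g => x => f(x)(g(x)); // duplicates x across two positions
const K = x => y => x;               // discards an argument (weakening)
const B = f => g => x => f(g(x));    // composition, no copying or discarding
const C = f => x => y => f(y)(x);    // swaps arguments (exchange)
const W = f => x => f(x)(x);         // uses an argument twice (contraction)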
What would be intriguing is a choice of combinator base that could express terms in Church's typed lambda calculus, such that a composition of combinators would fail to normalize iff the corresponding composed term in the typed lambda calculus failed to typecheck.
I think this could lead to somewhat simpler foundations of mathematics: instead of having types and various axioms, we would just say "here's a base set of combinators" and have fun combining them.
I'd like to boost this: https://blog.zdsmith.com/series/combinatory-programming.html
The very first sentence:
> The λ calculus is, at heart, a simple notation for functions and application.
I'm already confused! "... and application". Does that even make sense? It didn't get much better when I started on the first chapter. It seemed like an exercise in making simple algebra much more complicated, for philosophical reasons, or something.
I've no doubt at all that lambda calculus (whatever it is) is extremely important and valuable, and that if I put in the work and broke through the barriers I'd be embarrassed about my last paragraph.
But still, is there no way to introduce the subject in a slightly less obtuse and confusing way?
With "application" they mean "function application" (aka. calling a function).
But I feel you. I'm sure there is a less academic way to explain this stuff.
Do you know Lisp or Clojure? Then you already understand lambda calculus.
Do you know JavaScript? Then replace all those lambdas with arrow functions:
(λ x . x + 2) 1

becomes:

(x => x + 2)(1)

So lambda calculus "programs" are just a bunch of these nested immediately invoked anonymous functions. You could run that by just pasting it into the web console / NodeJS REPL.

Like Turing machines, this is a general model for computation. But it's conceptually much simpler and easier to reason about mathematically.
Each computation step comes down to rewriting the program. That's just like computing algebraic expressions on paper:
(1 + 2)x - 2x
= 3x - 2x
= x

sum up something there, multiply out here, ...

In the lambda calculus you can do the same but you have "applying a function" as a step that you can do:
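For example, taking the term from above and substituting the argument for x (a β-reduction step):

(λ x . x + 2) 1
= 1 + 2
= 3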
If you're confused by a mathematical or technical text, it can help to simply accept that and read on for a bit. (I've heard from professional mathematicians that this is commonplace in their reading.)
In this case, we find after the abstract and the contents list that the idea is that a function is a general expression like x^2+1, with a variable x. Applying this function to an argument, e.g. 2, results in the value 5.
Edit: So that should make sense of the idea of a function, on one hand, and the application of the function, on the other.
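In the arrow-function rendering used elsewhere in the thread, that example would read (a trivial sketch; f is my name for it):

const f = x => x**2 + 1; // the general expression x^2+1, with a variable x
f(2);                    // applying it to the argument 2 gives 5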
The idea behind lambda calculus was to come up with a primitive set of operations precise enough that they could be defined mechanistically (and this was before the days when computers existed that could carry them out), but also rich enough that they could be used in combination to compute any function hitherto devised.
I wouldn't be surprised if there were a much clearer and more concise way to introduce it, but on the other hand, the topic doesn't seem all that useful to someone without exposure to some fundamentals, so it seems fine to me if it has a bunch of jargon. Lambda calculus programs that actually do something useful are extremely weird and hard to read and write, even very simple ones, so that's not where its value comes from. It's probably a better thing to learn in connection with interpreter and compiler design, when it isn't weird to say "function application" and expect you to know what that is.