So, over the last few weeks Tom Crinson and I read another paper, this time “The essence of functional programming” (available here) by Philip Wadler, one of the principal designers of Haskell. Tom’s post is here.
The paper
Don’t let yourself be fooled by the title: this is not an introduction to functional programming or the lambda calculus. Instead it talks about monads, a topic one reads a lot about and that is often made out to be a lot more complicated than it really is.
The whole paper is 23 pages long, so I won’t try to summarize all of it, but rather give you an idea of whether it might be an interesting read for you. Section 1 lays out some basics and explains the difference between pure and impure code. In section 2 the reader finally encounters the monad, defined as the usual triple (M, unitM, bindM). After that we see several versions of a simple call-by-value interpreter, demonstrating various well-known monads (identity, state, error, list, etc.), as well as a rewrite of the interpreter to implement call-by-name semantics. For some more theory, Wadler lays down the monad laws and gives an alternative definition of monads in terms of mapM and joinM. He also mentions that list comprehension notation generalizes to arbitrary monads, which I found quite interesting. Section 3 introduces continuation-passing style (CPS) and its relation to monads, before section 4 talks about the influence of monads on Haskell, including some cool tidbits about early Haskell versions that seem to lack do-notation and the bind operator (the paper is from 1992, after all). Last but not least there’s the wrap-up, and the work is put into the context of other research.
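Since the laws come up again below, here they are, roughly, in the paper’s unitM/bindM vocabulary (in modern Haskell these correspond to return and >>=):

-- Left unit:     unitM a `bindM` k  ==  k a
-- Right unit:    m `bindM` unitM    ==  m
-- Associativity: (m `bindM` k) `bindM` h  ==  m `bindM` (\a → k a `bindM` h)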
Takeaway
Monads are not rocket science; they are just another tool in a programmer’s toolbox, especially for a programmer in a pure functional language. In the end, a monad really isn’t much more than a formalized computation, following certain rules like “I put in an A and get out a B, so whatever comes next has to take a B”. OK, maybe that was a bit oversimplified, but it’s definitely not magic. In fact I wrote up an explanation for Tom, trying to get the basic idea across:
If you imagine a monad as some sort of box, unitM is what puts an item into the box. bindM takes such a boxed value (M a), processes it with a function (a → M b) and returns another item in a box (M b). Why is that cool? Because you can’t leave the box! If you think about it – and that’s an oversimplification – it’s a bit like aligning your types so you can chain method calls (just that unitM and bindM have to follow the monad laws). Imagine in Ruby you have an intermediate result, but instead of returning a plain value OR an array, you always make sure to return [*values]. This way you can chain anything that expects an array, no matter how many intermediate values there were. That’s a bit like having an item wrapped up in some sort of array monad (once again, we are in oversimplification land here, but maybe the analogy helps).
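To make the box analogy concrete, here is a minimal sketch in Haskell – the Box type and its operation names are made up for illustration, mirroring the paper’s (M, unitM, bindM) triple:

-- A hypothetical one-item box, standing in for the paper's M.
newtype Box a = Box a deriving Show

-- unitM: put a plain value into the box.
unitBox :: a → Box a
unitBox = Box

-- bindM: open the box and feed the contents to a function that
-- produces a new boxed value. The result is boxed again – you
-- never leave the box.
bindBox :: Box a → (a → Box b) → Box b
bindBox (Box a) k = k a

-- Chaining: box 5, add 1, then double it.
example :: Box Int
example = unitBox 5 `bindBox` (\x → unitBox (x + 1))
                    `bindBox` (\y → unitBox (y * 2))
-- example == Box 12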
Difficulty of the paper
The author states that he assumes no previous knowledge of category theory or Haskell (the language used in the examples), but judging by Tom’s reaction to the paper, I’m not convinced. For somebody who has seen Haskell before and/or already has a basic understanding of monads, though, this is a pretty good and easy-to-follow read.
Q&A
Since Tom had some trouble with the paper – but bravely fought his way through it – we exchanged quite a few emails about the actual content. This is cool, because it’s exactly the reason we are doing this! First we needed to get some Haskell syntax out of the way:
Let’s take it slow with actual Haskell (this is a silly function, though – you could just write add = (+)):
add :: Int → Int → Int
add x y = x + y

This means that “add” is a function that takes two Ints and returns an Int. The value after the last → is the actual return type of the function. If you wonder about the intermediate arrows, that’s because Haskell functions technically only take one argument and return a new function which takes the next argument, and so on.
So what happens when you call
add 5 6
is that “add” gets applied to the argument 5, which returns another function that looks roughly like (add y = 5 + y); that function is then applied to 6 and returns the number 11. This is a result of Haskell being based on the lambda calculus, where the equivalent expression would be \x.\y.x+y. For practical purposes it’s enough to remember the following, though: whatever comes before the last → are the arguments to the function, and the thing after the last → is the return type.
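You can see this currying in action by supplying only the first argument (addFive is just a made-up name for this example):

-- Partial application: add 5 is itself a function of one argument.
addFive :: Int → Int
addFive = add 5

-- addFive 6 == 11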
What about polymorphic functions? Easy:
add :: (Num a) => a → a → a
add x y = x + y

Here we just say that we take two arguments of type “a” and return another “a”, with the constraint that “a” has to be numeric.
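With the constraint in place, the same function now works at any numeric type:

add 2 3        -- 5   (Int)
add 1.5 2.5    -- 4.0 (Double)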
Since it also is covered in the paper, here’s an example of a higher order function, good old map (that’s the same as Ruby’s map):
map :: (a → b) → [a] → [b]
map f [] = []
map f (x:xs) = f x : map f xs

This basically tells us that “map” takes two arguments: the first is itself a one-argument function ((a → b)), the second a list of values ([a]), and we return another list of values ([b]). Note that “a” and “b” can be the same type, but don’t have to be.
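Two quick uses, one where “a” and “b” are the same type and one where they differ:

map (+1) [1, 2, 3]    -- [2, 3, 4]         (a and b are both Int)
map show [1, 2, 3]    -- ["1", "2", "3"]   (a is Int, b is String)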
Could you please, as simply as possible, define a Monad?
I have to quote Brian Beckman on that: “Monads are a theory for interacting things”. Or to paraphrase, monads are an abstraction that allows you to define how the various stages of a computation interact. It’s a sequence of operations, put into context.
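One concrete instance of “operations put into context”: in the Maybe monad the context is possible failure, and bind handles the interaction between stages. A minimal sketch (safeDiv and safeSqrt are made-up helper names):

-- The context here is "this step may fail".
safeDiv :: Double → Double → Maybe Double
safeDiv _ 0 = Nothing
safeDiv x y = Just (x / y)

safeSqrt :: Double → Maybe Double
safeSqrt x
  | x < 0     = Nothing
  | otherwise = Just (sqrt x)

-- bind (>>=) sequences the stages; a Nothing anywhere
-- short-circuits the whole computation.
calc :: Double → Double → Maybe Double
calc x y = safeDiv x y >>= safeSqrt

-- calc 8 2 == Just 2.0
-- calc 8 0 == Nothing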
What would you use a Monad for?
As seen in the paper, monads are quite a flexible abstraction and can be used for many different things (take, for example, the MapReduce monad). Personally I’ve never used one outside of my limited Haskell experience, but .NET developers may find it interesting to know that LINQ is a monad (more on that here).
Can I use them in Ruby?
You can – in fact, there’s a link listed below that shows simple monads implemented in Ruby. It’s more a question of how useful they actually are in a mostly object-oriented language that allows side effects.
If I wanted to learn more about this sort of thing where should I go?
Brian Beckman recorded an awesome 67-minute video on the topic, which is probably the best introduction I’ve ever seen.
Links