Monads

I originally wrote a whole essay to explain monads myself (because they are very tricky to explain), but then I realized that Computerphile has a video on them, and you can just watch that.

Then again, though, the comments on that video aren't all positive. Some people still seem to be confused by it, so I'm going to try to explain it myself as well, with an abridged version of my essay.

Within the context of functional programming, a monad allows you to wrap a value up, optionally with some extra data. You can unwrap the value before passing it to the next part of a computation (a process called "bind" or ">>="). That next part goes through the same process and reports a new, wrapped-up value. In addition, you can wrap up a value that you've computed without a monad (a process called "unit" or, in this case, "return"), to use or return it.

You can call the next parts of the computation multiple times with different values, or not at all, and you can then report whatever you want. You can do this for every step of the computation. For example, you could pass all of the items in a list through the next parts of the computation, or you can pass certain values only if certain conditions are true.
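
To make this concrete, here's a sketch of both behaviours in Haskell (the names pairs, half, and quarter are my own). The list monad passes every element to the rest of the computation, while Maybe passes the value on only if there is one:

```haskell
-- The list monad calls the rest of the computation once per element.
pairs :: [(Int, Int)]
pairs = [1, 2] >>= \x ->
        [10, 20] >>= \y ->
        return (x, y)
-- pairs is [(1,10),(1,20),(2,10),(2,20)]

-- The Maybe monad calls it once, or not at all.
half :: Int -> Maybe Int
half n = if even n then Just (n `div` 2) else Nothing

quarter :: Int -> Maybe Int
quarter n = half n >>= half
-- quarter 12 is Just 3; quarter 6 is Nothing (3 is odd, so halving stops)
```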

Here is my project that implements monads. It showcases a "Maybe" monad (for a "Maybe" type, although for simplicity's sake it doesn't take a type parameter), and a "State" monad (which, as far as I know, is useless in Snap! because it supports imperative programming anyway, but it was interesting to program, and you can experiment with it).

https://cloud.snap.berkeley.edu/site/project.html?user=rdococ&project=Monads

If we're going to explain monads, I think it's worth explaining functors and applicatives as well.

Computational trinitarianism draws parallels between type theory and category theory. Under this view, types are objects and functions are morphisms. With this idea, Haskell programmers speak of an informal category called Hask, although Hask fails the category laws in the presence of seq, so strictly speaking it isn't a category.

In category theory, a functor is a mapping between categories, and an endomorphism is a mapping from something to itself. In Haskell, a "functor" is an endofunctor from Hask to Hask. A type constructor of kind * -> * can have a Functor instance, where the type constructor maps objects to objects (types to types) and fmap maps morphisms to morphisms (functions to functions). It's helpful to read fmap :: (a -> b) -> f a -> f b as fmap :: (a -> b) -> (f a -> f b).
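
For instance (liftedShow is a hypothetical name of my own), partially applying fmap really does turn a function on values into a function on wrapped values:

```haskell
-- fmap show :: Maybe Int -> Maybe String, i.e. show lifted into Maybe.
liftedShow :: Maybe Int -> Maybe String
liftedShow = fmap show
-- liftedShow (Just 3) is Just "3"; liftedShow Nothing is Nothing
```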

An applicative is a functor with extra structure, preserving products. We can curry functions; a -> b -> c is isomorphic to (a, b) -> c, give or take a few bottoms (I'm too lazy to try to work this out exactly). With (<*>) :: f (a -> b) -> f a -> f b and pure :: a -> f a, we can get f () and f a -> f b -> f (a, b), which are similar to the identity element of multiplication and the product operation for types.

unit :: (Applicative f) => f ()
unit = pure ()

(**) :: (Applicative f) => f a -> f b -> f (a, b)
a ** b = (,) <$> a <*> b
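
As a quick sanity check with Maybe (this sketch repeats the definitions and hides Prelude's unrelated (**) exponentiation operator, which shares the name):

```haskell
import Prelude hiding ((**))

unit :: Applicative f => f ()
unit = pure ()

(**) :: Applicative f => f a -> f b -> f (a, b)
a ** b = (,) <$> a <*> b

-- (**) pairs up two successes, and fails if either side fails:
both :: Maybe (Int, String)
both = Just 1 ** Just "x"      -- Just (1, "x")

neither :: Maybe (Int, String)
neither = Just 1 ** Nothing    -- Nothing
```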

A monad adds the ability for the function to influence the internal computation via (>>=) :: m a -> (a -> m b) -> m b. Notice that the passed function outputs an m b, not a b. You can see this as a "lifted" version of (flipped) function application (a -> (a -> b) -> b).

Perhaps a monad is more understandable if implemented in terms of Kleisli composition instead: (<=<) :: (b -> m c) -> (a -> m b) -> a -> m c. You can see this is a "lifted" version of function composition ((b -> c) -> (a -> b) -> a -> c). pure from the Applicative typeclass is already like a "lifted" version of the identity function (a -> a). So, as categories give each object an identity morphism and let you compose morphisms associatively, you can see that a monad instance gives a "Kleisli category" where the morphisms are the a -> m b stuff instead of the a -> b stuff.
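
Here's a sketch of that correspondence, defining Kleisli composition from (>>=) and chaining two failure-prone functions (safeHead and safeRecip are made-up names):

```haskell
-- Kleisli composition: compose a -> m b functions like ordinary functions.
(<=<) :: Monad m => (b -> m c) -> (a -> m b) -> a -> m c
g <=< f = \a -> f a >>= g

safeHead :: [a] -> Maybe a
safeHead []      = Nothing
safeHead (x : _) = Just x

safeRecip :: Double -> Maybe Double
safeRecip 0 = Nothing
safeRecip x = Just (1 / x)

-- pure is the identity of this composition, just as id is for (.).
recipOfHead :: [Double] -> Maybe Double
recipOfHead = safeRecip <=< safeHead
-- recipOfHead [2, 3] is Just 0.5; recipOfHead [] and recipOfHead [0] are Nothing
```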

Another way of defining a monad is in terms of join :: m (m a) -> m a. The essence of a monad is the ability to "flatten." If you know about list functions, you can think of join as flatten and (>>=) as flat-map.
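
A sketch showing the two formulations are interdefinable (primed names to avoid clashing with the real join in Control.Monad):

```haskell
-- join flattens one layer of wrapping; (>>=) is fmap followed by join.
join' :: Monad m => m (m a) -> m a
join' mma = mma >>= id

bind' :: Monad m => m a -> (a -> m b) -> m b
bind' ma f = join' (fmap f ma)
-- join' [[1, 2], [3]] is [1, 2, 3]; bind' behaves exactly like (>>=)
```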

Okay, you guys are making fun of us, right?

Do you really have to study category theory to program in Haskell?

No, you don't. I'm merely curious about the mathematical background behind all these ideas.

Simply put, a monad is an interface.

class Functor f where
    fmap :: (a -> b) -> f a -> f b

class (Functor f) => Applicative f where
    pure :: a -> f a
    (<*>) :: f (a -> b) -> f a -> f b

class (Applicative m) => Monad m where
    return :: a -> m a
    (>>=) :: m a -> (a -> m b) -> m b

You can implement the interface. Here is the simplest example:

data Identity a = Identity a

instance Functor Identity where
    fmap f (Identity a) = Identity (f a)

instance Applicative Identity where
    pure = Identity
    (Identity f) <*> (Identity a) = Identity (f a)

instance Monad Identity where
    return = pure
    (Identity a) >>= f = f a

Monads in Haskell are over-glorified; they're not a big deal. They are an abstract concept (and have nothing to do with state). IO merely has a Monad instance (just like Identity in the example above). IMO you don't need to understand the abstract idea of a monad to do IO, the same way that you don't need to know what a ring is to use numbers.

Here is another analogy: The same way that ring-like structures generalize addition and multiplication, functors and monads generalize mapping and flatmapping.

I didn't intend to "make fun" of anyone or put anyone down. I feel excited that now I am able to understand some of the category theory background behind typeclasses. (A few months ago, I would not have understood this background.) However, I now see that I was acting like a show-off, and I would like to apologize.

Ah, no, it's okay. It's just that this thread started as a supposed explanation of monads, and ended up way meta. (I'm afraid I still don't understand monads, but then I confess I've never made it through a Haskell manual.) I do understand map and flatmap, but am having trouble finding those in your example code.

The List type has Functor and Monad instances. For a List, fmap is mapping and (>>=) is flatmapping. You can think of functors and monads as generalizing the operations of mapping and flatmapping, respectively.
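
Concretely (mapped and flatMapped are my own names for the results):

```haskell
mapped :: [Int]
mapped = fmap (* 2) [1, 2, 3]            -- [2, 4, 6], same as map (* 2)

flatMapped :: [Int]
flatMapped = [1, 2, 3] >>= \x -> [x, x]  -- [1, 1, 2, 2, 3, 3]
-- For lists, (>>=) is concatMap with its arguments flipped.
```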

Another example of a monad instance, the one from the OP, is Maybe.

data Maybe a = Just a | Nothing

fmap :: (a -> b) -> Maybe a -> Maybe b
fmap f (Just a) = Just (f a)
fmap _ Nothing = Nothing

(>>=) :: Maybe a -> (a -> Maybe b) -> Maybe b
(Just a) >>= f = f a
Nothing >>= _ = Nothing

Then, instead of doing:

case stuff of
    Just x -> f x
    Nothing -> Nothing

you can just do:

stuff >>= f

hiding the pattern match behind the flatmap.
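
For example (the association lists and names here are made up), chained lookups short-circuit on the first Nothing without any explicit pattern matching:

```haskell
managers :: [(String, String)]
managers = [("dev", "alice"), ("ops", "bob")]

emails :: [(String, String)]
emails = [("alice", "alice@example.com")]

-- Look up the department's manager, then that manager's email.
emailOfManager :: String -> Maybe String
emailOfManager dept = lookup dept managers >>= \name -> lookup name emails
-- emailOfManager "dev" is Just "alice@example.com"
-- emailOfManager "ops" is Nothing (bob has no email on file)
```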

Maybe we're using the word "flatmap" differently? The one I know is

(define (flatmap f lst)
  (flatten (map f lst)))
(define (flatten lst)
  (apply append lst))

I'm not seeing flattening in your code. Does "Just" mean something special?

Okay, let me explain.

First, Haskell lists are similar to Lisp lists. A Haskell list is either [], the analogue of nil, or first:rest, the equivalent of (cons first rest).

Where Lispers use (if (null? ls) handle-nil-case handle-cons-case), Haskellers use pattern matching instead:

case ls of
    [] -> handleNilCase
    first:rest -> handleConsCase -- first and rest are variables bound to the parts of the list

We can say that fmap has type (a -> b) -> [a] -> [b]. A list of a is written [a].

fmap :: (a -> b) -> [a] -> [b]
fmap _ [] = [] -- Handle the nil case
fmap f (x:xs) = (f x):(fmap f xs) -- Handle the cons case

However, lists aren't the only thing that can be fmapped. For example, maybes can be fmapped as well: fmap :: (a -> b) -> Maybe a -> Maybe b

So can IO actions: fmap :: (a -> b) -> IO a -> IO b

We can generalize fmap :: (a -> b) -> f a -> f b.

The functor typeclass allows types to provide their own definitions of fmap:

class Functor f where
    fmap :: (a -> b) -> f a -> f b

instance Functor [] where
    fmap _ [] = []
    fmap f (x:xs) = (f x):(fmap f xs)

Monad is similar. The type of (>>=) might be [a] -> (a -> [b]) -> [b]. That is, it takes a list of a and a function from a to a list of b, and returns a list of b.

However, lists aren't the only thing that can be flatmapped. A more general type would be m a -> (a -> m b) -> m b for some monadic type m.
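
Here's how the list version might be written out directly (named bindList so it doesn't clash with the real instance):

```haskell
bindList :: [a] -> (a -> [b]) -> [b]
bindList []       _ = []                    -- Handle the nil case
bindList (x : xs) f = f x ++ bindList xs f  -- Map, then splice in the results
-- bindList [1, 2] (\x -> [x, x * 10]) is [1, 10, 2, 20]
```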

I just realized that seeing fmap might be confusing. fmap means map, not flatmap! (>>=) is flatmap! I believe that the "f" in fmap stands for "functor." In a design decision now widely seen as poor, Haskell also has a map function that is the same thing as fmap, but only for lists.

Ah. It took me a while to see why that should be the type signature of flatmap but I think I understand. It's still headache-inducing, though.