Every now and then I come across a post that explains a concept so clearly that it is inspiring. I'd like to thank Albert Lai for just such a post, from the thread "Re: [Haskell-cafe] Monad laws".
Deokhwan Kim writes:

> What is the practical meaning of monad laws?
>
>   1. (return x) >>= f  ==  f x
>   2. m >>= return  ==  m
>   3. (m >>= f) >>= g  ==  m >>= (\x -> f x >>= g)

I offer to re-write the laws in do-notation. (Please view with a fixed-width (non-proportional) font.)

    1.  do { x' <- return x
           ; f x'
           }
        ==
        do { f x
           }

    2.  do { x <- m
           ; return x
           }
        ==
        do { m
           }

    3.  do { y <- do { x <- m
                     ; f x
                     }
           ; g y
           }
        ==
        do { x <- m
           ; do { y <- f x
                ; g y
                }
           }
        ==    -- using 3.14 (the do-expression translation rules of the Haskell Report)
        do { x <- m
           ; y <- f x
           ; g y
           }

I think in this notation everyone sees the laws as plain common sense. If you do write a monad that doesn't follow some common sense, the dire consequence (practical or theoretical) is obvious. Just in case it is still not obvious to somebody: when we see a program written in a form on the LHS, we expect it to do the same thing as the corresponding RHS, and vice versa. And in practice, people do write the lengthier LHS once in a while.

First example: beginners tend to write

    skip_and_get = do { unused <- getLine
                      ; line <- getLine
                      ; return line
                      }

and it would really throw off both beginners and veterans if that did not act like (by law #2)

    skip_and_get = do { unused <- getLine
                      ; getLine
                      }

Second example: next, you go ahead and use skip_and_get:

    main = do { answer <- skip_and_get
              ; putStrLn answer
              }

The most popular way of comprehending this program is by inlining (whether the compiler does so or not is an orthogonal issue):

    main = do { answer <- do { unused <- getLine
                             ; getLine
                             }
              ; putStrLn answer
              }

and applying law #3 so you can pretend it is

    main = do { unused <- getLine
              ; answer <- getLine
              ; putStrLn answer
              }

Law #3 is amazingly pervasive: you have always assumed it, and you have never noticed it. (To put it into perspective, you hardly notice yourself breathing, but this only makes the practical meaning of breathing more profound, not less.) Whether compilers exploit the laws or not, you still want the laws for your own sake, just so you can avoid pulling your hair out over counter-intuitive program behaviour that depends brittlely on how many redundant "return"s you insert or how you nest your do-blocks.
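(An aside that is not part of Albert Lai's post: the three laws are also easy to check mechanically for a concrete monad. The sketch below tests them for Maybe; the helper names law1, law2, law3 and the sample inputs are invented for illustration only.)

    -- A minimal sketch, assuming only the standard Maybe monad from the Prelude.
    -- law1/law2/law3 and the test values are made up for this example.
    law1 :: Int -> Bool
    law1 x = (return x >>= f) == f x
      where f n = if n > 0 then Just (n * 2) else Nothing

    law2 :: Maybe Int -> Bool
    law2 m = (m >>= return) == m

    law3 :: Maybe Int -> Bool
    law3 m = ((m >>= f) >>= g) == (m >>= \x -> f x >>= g)
      where f n = if n > 0 then Just (n * 2) else Nothing
            g n = if even n then Just (n + 1) else Nothing

    main :: IO ()
    main = print ( all law1 [-1, 0, 1, 42]
                 , all law2 [Nothing, Just 0, Just 7]
                 , all law3 [Nothing, Just 0, Just 7] )

Running it prints (True,True,True); an instance that violated one of the laws would make the corresponding check fail for some input.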
apfelmus' followup, quoted below, elaborates further on the intuition behind the monad laws.
Deokhwan Kim wrote:

> But what practical problems can failing to satisfy them cause? In other words,
> I wonder if declaring an instance of the Monad class but not checking it
> for monad laws may cause any problems, except for not being qualified as
> a theoretical monad?

This question is likely to be the result of an unlucky introduction to monads, one where they are presented top down: "Hear ye, a monad, this is some mystic thing obeying the spiritual laws 1., 2. and 3.", isn't it? It is this way that monads get the attribute "theoretical".

Asking what the practical meaning of the monad laws might be is like asking what the practical meaning of the laws of natural-number addition could be: what does

    i)   a + (b + c)  ==  (a + b) + c

mean? How can I understand

    ii)  a + 0  ==  a ?

What does

    iii) a + b  ==  b + a

signify? These questions are unlikely to arise, because you have an intuition of what a natural number is: a number of bullets in a sack, coins in your pocket, people on the mailing list, etc. With this knowledge, you will most likely have no problem explaining the laws i), ii), iii) to somebody else, and most likely you will have no doubt about *why* they must be true.

For monads, my intuition is as follows: a value of type (M a) is an action, something producing a value of type a and (or by) executing a side-effect like drawing on the screen or screwing up the hard drive.

With the operator >>=, I can execute such actions in a specific sequence. For the sequence, it is of course unimportant how I group my actions: I can group actions act1 and act2 first and then append act3, or I can group act2 and act3 first and then prepend act1 to them. To simplify writing down a formula corresponding to this fact, we introduce the operator >>, defined by

    act1 >> act2 = act1 >>= \x -> act2

which sequences actions but for simplicity discards the computed value x of type a; it is only the side-effect of act1 we are interested in. Now, the thought about grouping, written down as a formula, is just

    (act1 >> act2) >> act3  ==  act1 >> (act2 >> act3)

and this is the simplified version of law 3. Of course, we know this property as "associativity". The actual law 3 is just a formulation for >>= that takes proper care of the intermediate calculation result x.

With return x, we can create an action which computes the value x but has absolutely no side-effects. This, too, can be stated in formulas, as Mr. "return" explains:

1. "If I am prepended to guys doing side-effects, I give them the value x but do not take any responsibility for the side-effects happening":

       (return x) >>= (\y -> f y)  ==  f x

2. "If I am appended to an action which computes a value x, I don't do any additional side-effects but just return the value I have been given":

       m >>= (\x -> return x)  ==  m

   which is of course equivalent to

       m >>= return  ==  m

So, to answer your question:

> In other words, I wonder if declaring an instance of the Monad class
> but not checking it for monad laws may cause any problems, except for not
> being qualified as a theoretical monad?

A thing you declare to be an instance of the Monad class, but one that does not fulfill the three laws above, simply does not match the intuition behind a monad. That is, your definitions of (>>=) and (return) are most likely void of the intended meaning.
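(To make that last point concrete, here is a made-up example that is not from apfelmus' message: a Writer-like monad, here called Tick, whose (>>=) sneaks in an extra tick and thereby breaks law 2. The type and all names are invented for illustration.)

    -- A newtype carrying a value plus a tick counter.  The extra "+ 1" in
    -- (>>=) is the deliberate bug that violates the monad laws.
    newtype Tick a = Tick (a, Int) deriving (Eq, Show)

    instance Functor Tick where
      fmap f (Tick (a, n)) = Tick (f a, n)

    instance Applicative Tick where
      pure x = Tick (x, 0)
      Tick (f, m) <*> Tick (x, n) = Tick (f x, m + n)

    instance Monad Tick where
      Tick (x, n) >>= f = let Tick (y, m) = f x
                          in Tick (y, n + m + 1)   -- the "+ 1" breaks law 2

    main :: IO ()
    main = do
      let act = Tick ('a', 0)
      print act                -- Tick ('a',0)
      print (act >>= return)   -- Tick ('a',1): m >>= return /= m

Exactly as described above: a redundant return, or a re-nested do-block, now changes the observable result, so these definitions of (>>=) and return no longer match the intuition of "sequence the side-effects and pass the value along".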