Last fall I went to Strange Loop in St Louis and gave a talk about Scalaz. The video is available on InfoQ:

InfoQ: Scalaz: Functional Programming in Scala.


In the previous post, we looked at the `ST` monad and how we can use it to encapsulate in-place mutation as first-class referentially transparent expressions. `ST` gives us mutable references and arrays, with the following guarantees:

- Mutations of an object are guaranteed to run in sequence.
- The holder of a mutable object is guaranteed to hold the only reference to it.
- Once a sequence of mutations of an object terminates, that object is never mutated again.

These invariants are enforced by the type system. I think that’s pretty cool, and sort of stands as a testament to the power of Scala’s type system.

This is much more than “delay side-effects until the last possible moment”. Remember, it is *always safe* to call `runST` on an `ST` action, anywhere in your program. As long as the call typechecks, it will have no observable side-effects with regard to `STRef`s and `STArray`s.

OK, so far we only have these guarantees for references and arrays, but that’s really not bad at all. We can eliminate an enormous class of bugs that have to do with shared mutable state, by guaranteeing that mutable state is never shared. Of course, you can still mutate other objects to your heart’s content (ones that are neither `STRef`s nor `STArray`s). But imagine for a second if Scala were modified in such a way that the `var` keyword actually constructed an `STRef`, and the only arrays provided by the library were of type `STArray`. Wouldn’t that be something? Wouldn’t that basically make Scala a purely functional language?

Well, no. There’s I/O. While ST gives us guarantees that mutable memory is never shared, it says nothing about reading/writing files, throwing exceptions, opening network sockets, database connections, etc.

We’re going to represent I/O actions as state transition functions, just like `ST` actions. Remember that `ST` is essentially a type like this:

```scala
type ST[S, A] = World[S] => (World[S], A)
```

The `IO` data type is very similar, except that we fix the world-state to be of a specific type:

```scala
type IO[A] = ST[RealWorld, A]
```

`RealWorld` is totally abstract. It’s an uninhabited type (there are no values of type `RealWorld`). And we will understand a value of type `World[RealWorld]` to represent the current state of the entire universe. Sequencing is guaranteed, just like with `ST`, since the `IO` monad has to pass the world state in order to execute the next action.

In Scalaz, it’s not possible (without cheating) to create a value of type `World[RealWorld]`. There are no values of this type. So how do you run an `IO` action? Well, the `IO[A]` data type has the following method defined:

```scala
def unsafePerformIO: A = this(null)
```

This is actually cheating a little bit, because we’re faking the value of type `RealWorld` by passing nothing at all. We’re about to potentially destroy the universe anyway, so this is OK just once. Besides, there’s a reason why this method has “unsafe” in the name. You only want to ever call this method *once*. The idea is that you construct your entire program with as much a purely functional core as possible, and an outer shell written in the `IO` monad. Then at the “end of the universe”, you call `unsafePerformIO`:

```scala
import scalaz._
import Scalaz._
import scalaz.effects._

def main(args: Array[String]): Unit =
  myProgram(ImmutableArray.fromArray(args)).unsafePerformIO

def myProgram(args: ImmutableArray[String]): IO[Unit] =
  error("Your IO program goes here.")
```

Again, imagine if Scala were modified in such a way that instead of looking for `def main(args: Array[String]): Unit`, it would look for `def main(args: ImmutableArray[String]): IO[Unit]`.

Because the `World` type argument is fixed in the definition of `IO`, we don’t have the same guarantees that we did with `ST`. Basically we can’t guarantee that the world state will never escape from `unsafePerformIO`. But we do have some other nice benefits.

For example, sequencing is still guaranteed, so no part of an action that depends on another will ever run before its dependency. This can be a problem if `IO[A]` is modeled as simply `() => A`. Also, `IO` actions are first-class objects, so they are freely composable and re-usable.

Scalaz includes a bunch of `IO` combinators for manipulating standard input and output, throwing/catching errors, mutating variables, etc. For example, here are some combinators for standard I/O:

```scala
def getChar: IO[Char]
def putChar(c: Char): IO[Unit]
def putStr(s: String): IO[Unit]
def putStrLn(s: String): IO[Unit]
def readLn: IO[String]
def putOut[A](a: A): IO[Unit]
```

Composing these into programs is done monadically. So we can use for-comprehensions. Here’s a program that reads a line of input and prints it out again:

```scala
def program: IO[Unit] =
  for {
    line <- readLn
    _    <- putStrLn(line)
  } yield ()
```

Or equivalently:

```scala
def program: IO[Unit] = readLn flatMap putStrLn
```

And if we wanted to write another program that re-uses our existing program, we can. Here’s a program that runs our previous program forever:

```scala
def program2: IO[Unit] = program |+| program2
```

`IO[Unit]` is an instance of `Monoid`, so we can re-use the monoid addition function `|+|`. Because everything is pure, we can concatenate programs just as easily as we concatenate Strings.

It’s also important to note that we’ve gained type safety. If you try to do this, you will get a type error:

```scala
scala> (readLn flatMap putStrLn) |+| System.exit(0)
<console>:17: error: type mismatch;
 found   : Unit
 required: scalaz.effects.IO[Unit]
```

We can gain a lot of static safety by separating values that produce I/O effects from values that have no effects, differentiating them via the type system. We also gain modularity by treating I/O actions as pure, compositional, first-class values that we can freely reuse in a completely deterministic way. Instead of running I/O effects everywhere in our code, we build programs through the `IO` DSL, compose them like ordinary values, and then run them with `unsafePerformIO` as part of our `main`.

**This post is from 2011, but has seen a lot of traffic lately and drawn some comments that the solution given here is incomplete or “doesn’t work”. This post is definitely incomplete, but the solution does work. For the most up-to-date code, please see my paper Stackless Scala with Free Monads as well as the source code for scalaz.Free.**

It’s about time I made my contribution to the zoo of monad tutorials. After explaining the concept of monads to somebody who is not a programmer or a mathematician, I was told that what I had explained sounded exactly like a game of dominoes.

However, instead of explaining monads in terms of a cute metaphor about dominoes, I’m just going to throw you into the deep end. Deep breath!

*Definition*: A category is given by objects and arrows between these objects. Each arrow f has a unique source object A and a unique target object B, written f : A → B. For every three objects A, B, and C, and arrows f : A → B and g : B → C, there is a composite arrow g ∘ f : A → C, such that:

**Composition is associative**: if f : A → B, g : B → C, and h : C → D, then h ∘ (g ∘ f) = (h ∘ g) ∘ f.

**There exists a unique identity arrow for each object**: for every object A, there exists an arrow 1_A : A → A such that for every arrow f : A → B, we have f ∘ 1_A = f = 1_B ∘ f.

**A category for dominoes**:

The objects of the category are the natural numbers. An arrow m → n in the category is a domino piece with two numbers on it: [m|n]. A composition of arrows is two domino pieces placed together end-to-end, by the following equation: [n|p] ∘ [m|n] = [m|p].

The identity arrow for each object n is a domino piece with the same number on both ends: 1_n = [n|n].

Associativity: [p|q] ∘ ([n|p] ∘ [m|n]) = [p|q] ∘ [m|p] = [m|q] = [n|q] ∘ [m|n] = ([p|q] ∘ [n|p]) ∘ [m|n].

Identity: [m|n] ∘ [m|m] = [m|n] = [n|n] ∘ [m|n].
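These laws are easy to check mechanically. A small sketch (the names are ours, not from any library), where composition is only defined when the matching ends line up:

```scala
// A domino [m|n] is an arrow from m to n.
final case class Domino(src: Int, tgt: Int)

// g ∘ f is defined only when f's target matches g's source.
def compose(g: Domino, f: Domino): Option[Domino] =
  if (f.tgt == g.src) Some(Domino(f.src, g.tgt)) else None

// The identity arrow: a domino with the same number on both ends.
def id(n: Int): Domino = Domino(n, n)
```

Returning `Option` models the fact that in an actual game you simply can’t place two dominoes together when the ends don’t match.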

**A monad for dominoes**

*Definition*: A monad is given by a functor T (from a category C to itself, i.e. an endofunctor in C), equipped with natural transformations:

η : 1_C → T

μ : T × T → T

where 1_C is the identity functor and × is functor composition. This must satisfy the axioms that μ ∘ Tη and μ ∘ ηT are identity transformations, and μ ∘ Tμ = μ ∘ μT (i.e. μ is associative).

The dominoes need not be annotated with numbers, but might be annotated with any set of objects. Therefore, there exists an endofunctor T on the category of sets, such that for all sets A, we can construct the set T(A) of dominoes [a|b] with a, b ∈ A.

For every object a in A, we can construct a domino [a|a]. This is η for our monad. Then, for the set of dominoes T(A), we can construct T(T(A)), which is the set of dominoes annotated with other dominoes. For any pair of dominoes in T(A), composed end-to-end, we can see it as a single domino in T(T(A)). The μ operation in our monad is then the fact that for any such pair of dominoes [[a|b]|[b|c]], it behaves as the domino [a|c] with regard to composition with other dominoes in T(A).

Proof:

Associativity: μ([μ([[a|b]|[b|c]]) | [c|d]]) = μ([[a|c]|[c|d]]) = [a|d] = μ([[a|b]|[b|d]]) = μ([[a|b] | μ([[b|c]|[c|d]])]).

Identity: μ([[a|a]|[a|b]]) = [a|b] = μ([[a|b]|[b|b]]).
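The same monad can be sketched on types rather than sets (our own encoding, not from any library): `T[A]` is a pair of annotations, `unit` builds the η-style domino, and `join` collapses a domino of dominoes to its outer ends.

```scala
// Sketch: the domino endofunctor on types, T[A] = (A, A).
type Domino[A] = (A, A)

def map[A, B](d: Domino[A])(f: A => B): Domino[B] = (f(d._1), f(d._2))

// η: annotate a single value on both ends.
def unit[A](a: A): Domino[A] = (a, a)

// μ: a domino of dominoes behaves as the domino made of its outer ends.
def join[A](dd: Domino[Domino[A]]): Domino[A] = {
  val ((a, _), (_, d)) = dd
  (a, d)
}
```

The monad laws then become concrete equalities on nested pairs, which can be checked directly.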

Exercises left to the reader:

1. What does a monoid in the category of dominoes look like?

2. How would the game of dominoes change if it allowed products and coproducts?

3. Can you come up with a coherent theory describing the behavior of “spinners” in the game of dominoes?

Video of a talk given at Boston Area Scala Enthusiasts’ Jan 2011 meeting at Google Cambridge. Covers what FP actually means in unambiguous terms, and how to perform various programming tasks while staying pure.

Slides are here:

Code for the IO data structure mentioned but not shown at the end of the talk is given here:

```scala
sealed trait IO[A]
case object ReadLn extends IO[String]
case class WriteLn(s: String) extends IO[Unit]
case class Const[A](a: A) extends IO[A]
case class Compose[A, B](a: IO[A], f: A => IO[B]) extends IO[B]

// An example program which reads a line and writes it out again
def readThenWrite: IO[Unit] =
  Compose(ReadLn, WriteLn)

// An implementation for IO programs which turns them into side-effects
def run[A](program: IO[A]): A = program match {
  case ReadLn => readLine
  case WriteLn(s) => println(s)
  case Const(a) => a
  case Compose(a, f) => run(f(run(a)))
}

// An implementation for IO programs which turns them into list transitions
def run[A](program: IO[A], in: List[String], out: List[String]): (A, List[String], List[String]) =
  program match {
    case ReadLn => (in.head, in.tail, out)
    case WriteLn(x) => ((), in, x :: out)
    case Const(a) => (a, in, out)
    case Compose(ioa, f) =>
      val (a, tmpi, tmpo) = run(ioa, in, out)
      run(f(a), tmpi, tmpo)
  }
```

One use of KList and the ‘transform’ and ‘down’ methods from 8b is to implement methods like ‘zipWith’ for arbitrary tuple lengths. To start with, the signature of zipWith for a pair of Streams, operating on a fixed arity of 2, looks like:

```scala
def zipWith2[A, B, C](t2: (Stream[A], Stream[B]))(f: (A, B) => C): Stream[C] =
  t2 match {
    case (ha #:: ta, hb #:: tb) => Stream.cons(f(ha, hb), zipWith2((ta, tb))(f))
    case _ => Stream.empty
  }
```

Example usage:

```scala
val nats = Stream.from(1)
val random = Stream.continually(math.random)
val seq = zipWith2((nats, random)) { (n, r) => if (r > 0.3) n else -n }

scala> seq.take(10).toList
res0: List[Int] = List(-1, 2, 3, 4, 5, 6, -7, -8, 9, 10)
```

For the implementation of zipWith2, if either Stream in the pair is empty, the resulting Stream is empty. Otherwise, there is a head element for each stream in the pair. We apply the provided function to these elements and make the result the head of a new Stream. The tail of this new Stream will be the result of recursing on the tails of the input pair.
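Since the random stream above makes the output nondeterministic, here is the same function exercised with deterministic inputs (a self-contained restatement, using Scala 2’s `Stream`):

```scala
// zipWith2 as defined above: zip two streams with a binary function,
// stopping as soon as either stream runs out.
def zipWith2[A, B, C](t2: (Stream[A], Stream[B]))(f: (A, B) => C): Stream[C] =
  t2 match {
    case (ha #:: ta, hb #:: tb) => Stream.cons(f(ha, hb), zipWith2((ta, tb))(f))
    case _ => Stream.empty
  }

// An infinite stream zipped with a finite one yields a finite stream.
val sums = zipWith2((Stream.from(1), Stream(10, 20, 30)))(_ + _)
```

Note that the first stream is infinite; laziness plus the empty-case cutoff is what makes the result finite.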

To generalize this to arbitrary arity, we will operate on a KList of Streams. Because we want to abstract over arity, we use a heterogeneous list. We use KList instead of HList because we want to constrain each element in the tuple to be a Stream and we don’t care what the specific types of Streams the elements are, but we do want to preserve those types. When we take the head element of each Stream, the resulting list is the underlying HList type of the input KList. For example, given an input of type `KList[Stream, A :: B :: HNil]`, when we take the head of each Stream in the KList we will get an HList of type `A :: B :: HNil`. This is like going from `(Stream[A], Stream[B])` to `(A, B)`.

So, if we end up with the underlying HList type, the function we will apply to the input KList must be a function from that HList type to some other type. In the example above, the function type would be `A :: B :: HNil => T` for some type T, which will be the type of the output Stream. With this, we have our signature for a generalized zipWith:

```scala
def zipWith[HL <: HList, T](kl: KList[Stream, HL])(f: HL => T): Stream[T]
```

To implement this function, we again break the problem into two parts. If any Stream is empty, the resulting stream is empty. Otherwise, we get all the head elements of the Streams as an HList, apply the input function to it, and make this the new head. For the tail, we get all of the tails of the Streams and recurse. To get the head elements, we use ‘down’ because we want `KList[Stream, HL] => HL`. For the tails, we use ‘transform’ because we need a mapping `KList[Stream, HL] => KList[Stream, HL]`. The implementation looks like:

```scala
def zipWith[HL <: HList, T](kl: KList[Stream, HL])(f: HL => T): Stream[T] =
  if (anyEmpty(kl)) Stream.empty
  else Stream.cons(f(kl down heads), zipWith(kl transform tails)(f))

def anyEmpty(kl: KList[Stream, _]): Boolean = kl.toList.exists(_.isEmpty)

val heads = new (Stream ~> Id) { def apply[T](s: Stream[T]): T = s.head }
val tails = new (Stream ~> Stream) { def apply[T](s: Stream[T]): Stream[T] = s.tail }
```

The toList function on KList has type `KList[M, HL] => List[M[_]]` and has a trivial implementation. Since List is homogeneous, we can’t preserve each individual cell’s type, but we can at least use the common type constructor. In ‘zipWith’, this means we can call the ‘isEmpty’ method on the elements of the list but we would not get a very specific type if we called ‘head’, for example. ‘heads’ and ‘tails’ are natural transformations that map a Stream[T] to its head of type T and its tail of type Stream[T], respectively.

The original example translated to use the generalized zipWith looks like:

```scala
val nats = Stream.from(1)
val random = Stream.continually(math.random)
val seq = zipWith(nats :^: random :^: KNil) {
  case n :: r :: HNil => if (r > 0.3) n else -n
}

scala> seq.take(10).toList
res0: List[Int] = List(-1, 2, -3, -4, -5, 6, 7, 8, 9, 10)
```

We can implement the related ‘zipped’ function in terms of ‘zipWith’.

```scala
def zipped[HL <: HList](kl: KList[Stream, HL]): Stream[HL] = zipWith(kl)(x => x)
```

Or, we could have implemented zipWith in terms of zipped. In any case, we can implement several other functions using zipped:

```scala
def foreach[HL <: HList, T](kl: KList[Stream, HL])(f: HL => T): Unit =
  zipped(kl).foreach(f)

// keep only the defined results
def collect[HL <: HList, T](kl: KList[Stream, HL])(f: HL => Option[T]): Stream[T] =
  zipped(kl).flatMap(hl => f(hl).toStream)

def flatMap[HL <: HList, T](kl: KList[Stream, HL])(f: HL => Stream[T]): Stream[T] =
  zipped(kl).flatMap(f)

def forall[HL <: HList](kl: KList[Stream, HL])(f: HL => Boolean): Boolean =
  zipped(kl).forall(f)

def exists[HL <: HList](kl: KList[Stream, HL])(f: HL => Boolean): Boolean =
  zipped(kl).exists(f)
```

An example using ‘foreach’:

```scala
val a = Stream(1, 2, 5, 3, 9, 10, 101)
val b = Stream("one", "two", "three", "four")
val c = Stream(true, false, false, true, true)

zipped(a :^: b :^: c :^: KNil) foreach {
  case n :: s :: b :: HNil => println(s * (if (b) 1 else n))
}

one
twotwo
threethreethreethreethree
four
```

In the absence of rank-2 types, it can be useful to have a higher-kinded heterogeneous list, which I’ll call KList here. A KList defines a type constructor M[_] that is used to construct the type for all cells in the list. The parameter passed to this type constructor can be different for each cell, which is the heterogeneous part. One use of a KList is to define a generic zipWith function. KLists are also used in the implementation of the new task engine in sbt 0.9. Each of these applications will be described in subsequent posts.

We’ll start with the basic definition of KList, which looks like:

```scala
sealed trait KList[+M[_], HL <: HList]

final case class KCons[H, T <: HList, +M[_]](head: M[H], tail: KList[M, T])
    extends KList[M, H :+: T] {
  // prepend
  def :^: [N[X] >: M[X], G](g: N[G]) = KCons(g, this)
}

sealed class KNil extends KList[Nothing, HNil] {
  def :^: [M[_], H](h: M[H]) = KCons(h, this)
}
object KNil extends KNil

object KList {
  // nicer alias for pattern matching
  val :^: = KCons
}
```

It looks similar to HList with the exception of the type constructor M. We keep the type of ‘head’ in KCons in two pieces: the type constructor M common to all cells in the KList and the type parameter to M that is specific to this cell. The full type of ‘head’ is then M[H].

An example construction:

```scala
val m = List(1, 2, 3, 4) :^: List("str1", "str2") :^: KNil
```

This has type:

```scala
KCons[Int,java.lang.String :: HNil,List]
```

Note that we can mix type constructors that are compatible:

```scala
val m = Seq(1, 2, 3, 4) :^: List("str1", "str2") :^: KNil
m: KCons[Int,java.lang.String :: HNil,Seq]
```

Ones that are not compatible crash the compiler:

```scala
val m = Seq(1, 2, 3, 4) :^: Option("str1", "str2") :^: KNil
scala.tools.nsc.symtab.Types$NoCommonType: lub/glb of incompatible types:
  [X]Option[X] and Seq
```

It is not possible to have types inferred in several cases, such as when the type constructor is `Id`, where `type Id[X] = X`:

```scala
// does not compile
val p = 1 :^: true :^: KNil
```

A key use of a KList is to apply a natural transformation to its contents. We have kept the type constructor separate from the type parameters, which means we can apply a natural transformation M ~> N to each element and preserve the underlying type parameters. As an example, consider our heterogeneous list of Lists:

```scala
val m = List(1, 2, 3, 4) :^: List("str1", "str2") :^: KNil
```

and a natural transformation that takes a List and calls headOption on it:

```scala
val headOption = new (List ~> Option) {
  def apply[T](list: List[T]): Option[T] = list.headOption
}
```

Then, apply headOption to m:

```scala
val heads = m transform headOption
heads: KCons[Int,(java.lang.String :: HNil),Option] = KCons(Some(1),KCons(Some(str1),KNil))
```

We get a KList of Options, preserving the knowledge that the first element has type Option[Int] and the second has type Option[String].

The ‘transform’ method on KList is straightforward to implement:

```scala
sealed trait KList[+M[_], HL <: HList] {
  ...
  def transform[N[_]](f: M ~> N): KList[N, HL]
}

final case class KCons[H, T <: HList, +M[_]](head: M[H], tail: KList[M, T])
    extends KList[M, H :+: T] {
  ...
  def transform[N[_]](f: M ~> N) = KCons(f(head), tail transform f)
}

sealed class KNil extends KList[Nothing, HNil] {
  ...
  def transform[N[_]](f: Nothing ~> N) = KNil
}
```

We can add another method that down-converts a KList to its underlying HList. For example, we might reduce each List in our KList above to its head element:

```scala
val head = new (List ~> Id) { def apply[T](list: List[T]): T = list.head }

val heads = m down head
heads: Int :: java.lang.String :: HNil = 1 :: str1 :: HNil
```

The definition of ‘down’ looks like:

```scala
sealed trait KList[+M[_], HL <: HList] {
  ...
  // For converting KList to an HList
  def down(f: M ~> Id): HL
}

final case class KCons[H, T <: HList, +M[_]](head: M[H], tail: KList[M, T])
    extends KList[M, H :+: T] {
  ...
  def down(f: M ~> Id) = HCons(f(head), tail down f)
}

sealed class KNil extends KList[Nothing, HNil] {
  ...
  def down(f: Nothing ~> Id) = HNil
}
```

We will use ‘down’ and ‘transform’ in the next section to implement zipWith for arbitrary arity.