Some Notes in Hostility Toward Subtyping

What follows is a session of thinking out loud, so it’s not to be taken too seriously.

The more I think about it, the more I convince myself that the idea of subtyping and class hierarchies is a mistake. “Inheritance” is a good way to lock down a design so that it becomes rigid and brittle. If you remember your (now scorched, I trust) GoF book, its main cargo was “composition over inheritance”. So why do we want this mechanism at all? Because we want type substitution. We say that if X is a subtype of Y, then X can be substituted wherever Y is expected. Informally, we want to say that an X is a Y. But what does this mean? It means that the set of all values in X is a subset of all the values in Y. So if you have a value x in X, then x is also in Y. In other words, X implies Y, or (X => Y). Hang on. This is just the type signature for a function.

When we observe this, we realise that we can substitute functions for subtyping. Moreover, we can make those functions implicit (as in Scala) and then type substitution will work just as if we were using subtypes (hopefully without loss of type inference). As a bonus, our implicit type substitutions have scope, while subtyping is a global declaration.

For example, take a look at the function type (A => B), recalling that this is conceptually the same as saying “A implies B”, or even “A is a subtype of B”. Using subtyping, we could have types Foo and Bar, and say “Bar extends Foo”. But instead of subtyping, in Scala we could have an implicit function:

implicit def fooBar(b: Bar): Foo = b.foo

The easiest way to implement this is to have Bar wrap a value of type Foo by accepting it in its constructor. Then the Bar.foo method simply returns this value. But it can really be implemented any way we want. For example, if both Foo and Bar take a parameter of type Int, then we can extract it from one when constructing the other.
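
To make that concrete, here’s a minimal sketch of the wrapping approach (the members are made up for illustration):

class Foo(val n: Int) {
  def methodOnFoo: String = "Foo(" + n + ")" // hypothetical method on Foo
}

// Bar takes a Foo in its constructor; its foo accessor simply returns it.
class Bar(val foo: Foo)

implicit def fooBar(b: Bar): Foo = b.foo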

There’s a slight problem with implicit functions like this. Somebody else may have defined a function with the same type already, with a totally different intent. To remedy that, we can create a type for a specific kind of conversion to Foo; one that implies the relation we want to express. Something like…

trait IsAFoo[A] {
  def apply(a: A): Foo
}

This is now unambiguous. This kind of construct is a typeclass. IsAFoo classifies all types that imply type Foo. We can supply an implicit instance of this typeclass for Bar:

implicit val bar_is_a_foo = new IsAFoo[Bar] {
  def apply(b: Bar) = b.foo
}

And wherever we want to accept something that “is a” Foo, we accept an implicit parameter as evidence that it is indeed a Foo.

def doSomethingWithFoo[A](a: A)(implicit foo: IsAFoo[A]) = foo(a).methodOnFoo

We can call this method with a value of type Bar, because of the existence of the implicit instance bar_is_a_foo. In fact, Scala has an even nicer syntax for this, using “view bounds”. I leave it to you to check that out. What I want to impress on you is how flexible typeclasses are. We’re not constrained to using this mechanism to substitute for subtyping. We can use it to do the converse, i.e. supertyping an existing type. Or we can have the conversion go both ways to express isomorphisms between types. The pattern here is that we want to be able to state any kind of relation between types.
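
As a rough sketch (reusing Foo, IsAFoo, and the hypothetical methodOnFoo from above), the two shapes look something like this:

// View bound (old syntax): requires an implicit conversion A => Foo in scope.
def doSomethingWithViewBound[A <% Foo](a: A) = a.methodOnFoo

// Context bound: requires an implicit IsAFoo[A] instance in scope.
def doSomethingWithContextBound[A : IsAFoo](a: A) = implicitly[IsAFoo[A]].apply(a).methodOnFoo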

If Scala had the ability to state functional dependencies, the typeclass mechanism truly could obviate subtyping, with the added bonus that we could state any kind of type-level relation that we want, rather than just type order. I could talk about the downside of this, but… I could go on forever.

Variance of Functors

So, speaking of type order, there’s a tie-in here with variance. This is one of the things that trips people up when thinking about class hierarchies. I know it trips me up. But variance is much easier to reason about if we think of the subtype/supertype relation as just a function.

To compare, here’s what Wikipedia has to say about variance in class hierarchies:

If class B is a subtype of class A, then all member functions of B must return the same or narrower set of types as A; the return type is said to be covariant. On the other hand, the member functions of B must take the same or broader set of arguments compared with the member functions of A; the argument type is said to be contravariant. The problem for instances of B is how to be perfectly substitutable for instances of A. The only way to guarantee type safety and substitutability is to be equally or more liberal than A on inputs, and to be equally or more strict than A on outputs.

OK, but why is it the case that this is the only way to guarantee substitutability and type-safety? To understand that, it helps to throw away the notion of “subtype”, and simply think only in terms of functions, instead of the mixed notions of “subtype” and “member function”. Try translating that snippet from Wikipedia so that it’s worded in terms of functions. It gets pretty convoluted.

Proceeding from the premise that functions and subtype relations are interchangeable, we can derive a definition of co- and contravariance simply from the definition of substitutability. We start with this:

If B is a subtype of A, then every subtype of B is also a subtype of A, and every supertype of A is also a supertype of B.

Now, if we take (A <: B) to mean “A is a subtype of B”, and (A => B) to mean “function from A to B”, or equivalently “A implies B”, we can write this property as:

(B <: A) <==> ((C <: B) => (C <: A)) and ((A <: C) => (B <: C))

Remember that you can substitute functions for subtyping and vice versa, so any (=>) sign above can be replaced with the (<:) sign, or the other way around, and the meaning stays intact. So let’s restate this property purely in terms of functions: if there exists a function from B to A, then for every function from C to B there exists a function from C to A, and for every function from A to C there exists a function from B to C.

(B => A) <==> ((C => B) => (C => A)) and ((A => C) => (B => C))

Taking the left-to-right direction of the <==> and splitting the conjunction, we can infer two properties of (B => A):

(B => A) => (C => B) => (C => A)
and
(B => A) => (A => C) => (B => C)

Recalling that function application is logical implication, both of these properties evaluate to true, for all A, B, and C. Let’s use Wolfram Alpha to confirm this for us.
See here and here.

Let’s now say that C is a fixed type. Remembering what we did above with typeclasses, this gives rise to two such typeclasses, representing (C => A) and (A => C), respectively, for all A:

trait FromC[A] {
  def fromC(c: C): A
}
trait ToC[A] {
  def toC(a: A): C
}

To restate the properties above:

(B => A) => FromC[B] => FromC[A]
(B => A) => ToC[A] => ToC[B]

In other words, mapping a function (B => A) over FromC[B] results in FromC[A]. So the implication is preserved across the mapping. Mapping a function (B => A) over ToC[A] results in ToC[B]. So the implication is reversed across the mapping. This means that FromC is a covariant functor and ToC is a contravariant functor, by the definition of co- and contravariance. So now we have anchored the notion of substitutability to the variance of functors. For reference, here’s my off-the-cuff definition of co- and contravariance, along with some preliminaries:

Definition of Higher-Order Function
A higher-order function (HOF) is a function that takes another function as its argument.

Definition of covariance
A unary type constructor T[_] is a covariant functor if and only if there exists a HOF with the following type, for all A and B:
(A => B) => T[A] => T[B].

Definition of contravariance
A type constructor T[_] is a contravariant cofunctor if and only if there exists a HOF with the following type, for all A and B:
(B => A) => T[A] => T[B].

Note that these last two are exactly the properties above that we got for FromC and ToC, which we derived directly from the definition of subtype and supertype.
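
To make the two properties concrete, here’s a small sketch of the corresponding higher-order functions, with C standing in for some fixed type and the FromC and ToC traits repeated for reference:

class C

trait FromC[A] { def fromC(c: C): A }
trait ToC[A]   { def toC(a: A): C }

// Covariant: (B => A) => (FromC[B] => FromC[A])
def mapFromC[A, B](f: B => A)(fb: FromC[B]): FromC[A] =
  new FromC[A] { def fromC(c: C): A = f(fb.fromC(c)) }

// Contravariant: (B => A) => (ToC[A] => ToC[B])
def contramapToC[A, B](f: B => A)(ta: ToC[A]): ToC[B] =
  new ToC[B] { def toC(b: B): C = ta.toC(f(b)) }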

Conclusion

The benefits of subtyping, namely type substitution, can be expressed more naturally with a mechanism for classifying types.

Structural Pattern Matching in Java

One of the great features of modern programming languages is structural pattern matching on algebraic data types. Once you’ve used this feature, you don’t ever want to program without it. You will find this in languages like Haskell and Scala.

In Scala, algebraic types are provided by case classes. For example:

sealed trait Tree
case object Empty extends Tree
case class Leaf(n: Int) extends Tree
case class Node(l: Tree, r: Tree) extends Tree

To define operations over this algebraic data type, we use pattern matching on its structure:

def depth(t: Tree): Int = t match {
  case Empty => 0
  case Leaf(n) => 1
  case Node(l, r) => 1 + max(depth(l), depth(r))
}

When I go back to a programming language like, say, Java, I find myself wanting this feature. Unfortunately, algebraic data types aren’t provided in Java. However, a great many hacks have been invented over the years to emulate it, knowingly or not.

The Ugly: Interpreter and Visitor

What I have used most throughout my career to emulate pattern matching in languages that lack it are a couple of hoary old hacks. These venerable and well-respected practices are a pair of design patterns from the GoF book: Interpreter and Visitor.

The Interpreter pattern really does describe an algebraic structure, and it provides a method of reducing (interpreting) the structure. However, there are a couple of problems with it. The interpretation is coupled to the structure, with a “context” passed from term to term, and each term must know how to mutate the context appropriately. That’s minus one point for tight coupling, and minus one for relying on mutation.

The Visitor pattern addresses the former of these concerns. Given an algebraic structure, we can define an interface with one “visit” method per type of term, and have each term accept a visitor object that implements this interface, passing it along to the subterms. This decouples the interpretation from the structure, but still relies on mutation. Minus one point for mutation, and minus one for the fact that Visitor is incredibly crufty. For example, to get the depth of our tree structure above, we have to implement a TreeDepthVisitor. A good IDE that generates boilerplate for you is definitely recommended if you take this approach.
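
To make the cruft tangible, here’s a rough sketch of what a TreeDepthVisitor might look like for the Tree type, in the classic mutation-based GoF style (the visitor interface and the accept methods on the tree classes are assumptions, not shown above):

// A sketch only; the names and the accept(TreeVisitor) method are assumptions.
interface TreeVisitor {
  void visitEmpty(Empty e);
  void visitLeaf(Leaf l);
  void visitNode(Node n);
}

class TreeDepthVisitor implements TreeVisitor {
  int depth = 0; // the result is accumulated by mutation

  public void visitEmpty(Empty e) {}
  public void visitLeaf(Leaf l) { depth = 1; }
  public void visitNode(Node n) {
    TreeDepthVisitor left = new TreeDepthVisitor();
    TreeDepthVisitor right = new TreeDepthVisitor();
    n.left.accept(left);
    n.right.accept(right);
    depth = 1 + Math.max(left.depth, right.depth);
  }
}

// Usage: TreeDepthVisitor v = new TreeDepthVisitor(); tree.accept(v); int d = v.depth;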

On the plus side, both of these patterns provide some enforcement of the exhaustiveness of the pattern match. For example, if you add a new term type, the Interpreter pattern will enforce that you implement the interpretation method. For Visitor, as long as you remember to add a visitation method for the new term type to the visitor interface, you will be forced to update your implementations accordingly.

The Bad: Instanceof

An obvious approach that’s often sneered at is runtime type discovery. A quick and dirty way to match on types is to simply check for the type at runtime and cast:

public static int depth(Tree t) {
  if (t instanceof Empty)
    return 0;
  if (t instanceof Leaf)
    return 1;
  if (t instanceof Node)
    return 1 + max(depth(((Node) t).left), depth(((Node) t).right));
  throw new RuntimeException("Inexhaustive pattern match on Tree.");
}

There are some obvious problems with this approach. For one thing, it bypasses the type system, so you lose any static guarantees that it’s correct. And there’s no enforcement of the exhaustiveness of the matching. But on the plus side, it’s both fast and terse.

The Good: Functional Style

There are at least two approaches that we can take to approximate pattern matching in Java more closely than the above methods. Both involve utilising parametric polymorphism and functional style. Let’s consider them in order of increasing preference, i.e. less preferred method first.

Safe and Terse – Disjoint Union Types

The first approach is based on the insight that algebraic data types represent a disjoint union of types. Now, if you’ve done any amount of programming in Java with generics, you will have come across (or invented) the simple pair type, which is a conjunction of two types:

public abstract class P2<A, B> {
  public abstract A _1();
  public abstract B _2();
}

A value of this type can only be created if you have both a value of type A and a value of type B. So (conceptually, at least) it has a single constructor that takes two values. The disjunction of two types is a similar idea, except that a value of type Either<A, B> can be constructed with either a value of type A or a value of type B:

public final class Either<A, B> {
  ...
  public static <A, B> Either<A, B> left(A a) { ... }
  public static <A, B> Either<A, B> right(B b) { ... }
  ...
}

Encoded as a disjoint union type, then, our Tree data type above is: Either<Empty, Either<Leaf, Node>>

Let’s see that in context. Here’s the code.

public abstract class Tree {
  // Constructor private so the type is sealed.
  private Tree() {}

  public abstract Either<Empty, Either<Leaf, Node>> toEither();

  public static final class Empty extends Tree {
    public Either<Empty, Either<Leaf, Node>> toEither() {
      return left(this);
    }

    public Empty() {}
  }

  public static final class Leaf extends Tree {
    public final int n;

    public Either<Empty, Either<Leaf, Node>> toEither() {
      return right(Either.<Leaf, Node>left(this));
    }

    public Leaf(int n) { this.n = n; }
  }

  public static final class Node extends Tree {
    public final Tree left;
    public final Tree right;    

    public Either<Empty, Either<Leaf, Node>> toEither() {
      return right(Either.<Leaf, Node>right(this));
    }

    public Node(Tree left, Tree right) {
      this.left = left; this.right = right;
    }
  }
}

The neat thing is that Either<A, B> can be made to return both Iterable<A> and Iterable<B> in methods left() and right(), respectively. One of them will be empty and the other will have exactly one element. So our pattern matching function will look like this:

public int depth(Tree t) {
  Either<Empty, Either<Leaf, Node>> eln = t.toEither();
  for (Empty e : eln.left())
    return 0;
  for (Either<Leaf, Node> ln : eln.right()) {
    for (Leaf leaf : ln.left())
      return 1;
    for (Node node : ln.right())
      return 1 + max(depth(node.left), depth(node.right));
  }
  throw new RuntimeException("Inexhaustive pattern match on Tree.");
}

That’s terse and readable, as well as type-safe. The only issue with this is that the exhaustiveness of the patterns is not enforced, so we’re still only discovering that error at runtime. So it’s not all that much of an improvement over the instanceof approach.

Safe and Exhaustive: Church Encoding

Alonzo Church was a pretty cool guy. Having invented the lambda calculus, he discovered that you could encode data in it. We’ve all heard that every data type can be defined in terms of the kinds of operations that it supports. Well, what Church discovered is much more profound than that. A data type IS a function. In other words, an algebraic data type is not just a structure together with an algebra that collapses the structure. The algebra IS the structure.

Consider the boolean type. It is a disjoint union of True and False. What kinds of operations does this support? Well, you might want to do one thing if it’s True, and another if it’s False. Just like with our Tree, where we wanted to do one thing if it’s a Leaf, and another thing if it’s a Node, etc.

But it turns out that the boolean type IS the condition function. Consider the Church encoding of booleans:

true  = λa.λb.a
false = λa.λb.b
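
Before we get to Tree, here’s a rough sketch of what this encoding might look like in Java (the Bool interface and its instances are made up for illustration):

// A Church-encoded boolean: given two alternatives, it picks one.
interface Bool {
  <T> T match(T ifTrue, T ifFalse);
}

class Bools {
  static final Bool TRUE  = new Bool() { public <T> T match(T a, T b) { return a; } };
  static final Bool FALSE = new Bool() { public <T> T match(T a, T b) { return b; } };
}

// Usage: Bools.TRUE.match("yes", "no") yields "yes".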

So a boolean is actually a binary function. Given two terms, a boolean will yield the former term if it’s true, and the latter term if it’s false. What does this mean for our Tree type? It too is a function:

empty = λa.λb.λc.a
leaf  = λa.λb.λc.λx.b x
node  = λa.λb.λc.λl.λr.c l r

You can see that this gives you pattern matching for free. The Tree type is a function that takes three arguments:

  1. A value to yield if the tree is empty.
  2. A unary function to apply to an integer if it’s a leaf.
  3. A binary function to apply to the left and right subtrees if it’s a node.

The type of such a function looks like this (Scala notation):

T => (Int => T) => (Tree => Tree => T) => T

Or equivalently:

(Empty => T) => (Leaf => T) => (Node => T) => T

Translated to Java, we need this method on Tree:

public abstract <T> T match(F<Empty, T> a, F<Leaf, T> b, F<Node, T> c);

The F interface is a first-class function from Functional Java. If you haven’t seen that before, here it is:

public interface F<A, B> { public B f(A a); }

Now our Tree code looks like this:

public abstract class Tree {
  // Constructor private so the type is sealed.
  private Tree() {}

  public abstract <T> T match(F<Empty, T> a, F<Leaf, T> b, F<Node, T> c);

  public static final class Empty extends Tree {
    public <T> T match(F<Empty, T> a, F<Leaf, T> b, F<Node, T> c) {
      return a.f(this);
    }

    public Empty() {}
  }

  public static final class Leaf extends Tree {
    public final int n;

    public <T> T match(F<Empty, T> a, F<Leaf, T> b, F<Node, T> c) {
      return b.f(this);
    }

    public Leaf(int n) { this.n = n; }
  }

  public static final class Node extends Tree {
    public final Tree left;
    public final Tree right;    

    public <T> T match(F<Empty, T> a, F<Leaf, T> b, F<Node, T> c) {
      return c.f(this);
    }

    public Node(Tree left, Tree right) {
      this.left = left; this.right = right;
    }
  }
}

And we can do our pattern matching on the calling side:

public int depth(Tree t) {
  return t.match(constant(0), constant(1), new F<Node, Integer>(){
    public Integer f(Node n) {
      return 1 + max(depth(n.left), depth(n.right));
    }
  });
}

This is almost as terse as the Scala code, and very easy to understand. Everything is checked by the type system, and we are guaranteed that our patterns are exhaustive. This is an ideal solution.

By the way, if you’re wondering what constant(0) means, it’s a method that returns a function F<A, Integer> that always returns 0, ignoring the argument.
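
A sketch of such a method might look like the following (Functional Java provides an equivalent; this version is only for illustration):

public static <A, B> F<A, B> constant(final B b) {
  return new F<A, B>() {
    public B f(final A ignored) { return b; } // the argument is ignored
  };
}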

Conclusion

With some slightly clever use of generics and a little help from our friends Church and Curry, we can indeed emulate structural pattern matching over algebraic data types in Java, to the point where it’s almost as nice as a built-in language feature.

So throw away your Visitors and set fire to your GoF book.

Heterogeneous Lists and the Limits of the Java Type System

Today, we’re going on a journey. It is a sojourn to the outer limits of the expressiveness of the Java type system, and to the edge of what can be considered sane programming. This is definitely one for the power users. You will need a firm grasp of the Java language, and an iron constitution for type annotations. But the reward will be something far greater than any treasure: understanding, entertainment, and perhaps even enlightenment. Remember that we choose to do these things in Java, not because they are easy, but because they are hard. Now then, to the ships.

A Most Versatile Vessel

In Java, we can create a list that contains values of type A, by constructing a value of type List<A>. The type system will enforce that each element in the list is in fact of type A. But sometimes we want lists of values that aren’t necessarily of the same type. Normally, for such a purpose, we would use a heterogeneous list, which in Java is just List<?>, List<Object>, or the raw List type. Since every class in Java is a subclass of Object (and now that we have autoboxing), such a list can contain any Java value. There are many kinds of situation where this would be necessary. For example, a row of database results will comprise values that are not all of the same type.

However, there’s a problem with the raw list approach. In using the List<?> type, we are dispensing with the type system. When you get a value from the list, how do you know what it is? How do you know which operations it supports? Well, you will have to defer that discovery until runtime, and use explicit type casting. Most will shrug at this and say: “So what?” After all, this is what we did anyway, before generics. Ah, but what if we don’t have to? Can we create generic heterogeneous collections that are type-safe? Yes, we can. Sort of.

Products of Types

What we would like to see is if it’s possible to declare some constraints on the types of a heterogeneous collection, to achieve essential type-safety while maintaining the extensibility of a list. Of course, it’s easy to create types that are the product of two or more types:

public abstract class P2<A, B> {
  public abstract A _1();
  public abstract B _2();
}

But the length of this kind of product is as fixed as the length of a string in Pascal. It isn’t extensible, so it’s more like a type-safe heterogeneous array than a list. If you want products of different lengths, you will need to declare separate classes for P3<A, B, C>, P4<A, B, C, D>, etc. What we’re trying to achieve is a product of arbitrary length, whose length might even vary at runtime. There’s no reason we couldn’t create products of products in a chain, like P2<A, P2<B, P2<C, D>>>, and this is more or less the approach that we will take.

Introducing HList

To achieve our goal, we’re going to implement linked lists in the type system. Let’s remind ourselves what a linked list looks like. A List<T> is essentially either the empty list or a value of type T paired with a List<T>. In Java, using the List<A> type from Functional Java, an unsafe heterogeneous list might be constructed in a manner like the following:

List<?> x = cons("One", cons(2, cons(false, nil())));

The cons method constructs a list, and the nil method returns the empty list. With just these two methods, we can create any homogeneous list. A list has two methods to access its members, head() which returns the first element, and tail() which returns the rest of the list. Getting the head or tail of the empty list is an error at runtime.

Let’s now take a step up into the type system, and say that a list of types is either the empty list or a type paired with a list of types. This gives rise to our heterogeneous list type:

public abstract class HList<A extends HList<A>> {
 private HList() {}

 private static final HNil nil = new HNil();

 public static HNil nil() {
   return nil;
 }

 public static <E, L extends HList<L>> HCons<E, L> cons(final E e, final L l) {
   return new HCons<E, L>(e, l);
 }

 public static final class HNil extends HList<HNil> {
   private HNil() {}
 }

 public static final class HCons<E, L extends HList<L>> extends HList<HCons<E, L>> {
   private E e;
   private L l;

   private HCons(final E e, final L l) {
     this.e = e;
     this.l = l;
   }

   public E head() {
     return e;
   }

   public L tail() {
     return l;
   }
 }
}

That’s not a lot of code, and it’s all relatively straightforward Java. The HList class is parameterised with a parameterised subclass of itself. There are only two concrete subclasses of HList that can possibly occupy that slot: the type HNil and the type constructor HCons. These represent the empty list and the list constructor, respectively. HCons takes two type parameters, the first representing the first element of the list, and the second being another HList, allowing us to form a chain of them. HNil does not take type parameters, so it terminates the chain.

As with regular old lists, you can access the head() and tail() of the list. Note, however, that the fact that you cannot get the head or tail of the empty list is now enforced by the type system. There’s a nil method to get the empty list, and a cons method to construct a nonempty list, just like with regular lists.

Here’s an example of how we would construct a heterogeneous list using this new type:

HCons<String, HCons<Integer, HCons<Boolean, HNil>>> x = cons("One", cons(2, cons(false, nil())));

This is more verbose than the unsafe version before, but not by much. Obviously, the HList example assumes a static import of HList.cons and the List<?> example assumes a static import of List.cons. Using the type-safe version is, however, much nicer. Compare these two contrived examples:

if (x.tail().tail().head()) {
  return x.head().length() == x.tail().head();
}

if ((Boolean) x.index(2)) {
  return ((String) x.head()).length() == (Integer) x.index(1);
}

The latter, of course, offers no static guarantees and may throw ClassCastExceptions, or we might inadvertently get the head or tail of the empty list at runtime. The former will always work as long as it compiles, guaranteed.

Concatenating HLists

Now let’s do something more interesting with these lists. Notice that the cons methods for both type-safe and unsafe lists prepend an element to a list rather than appending. Sometimes we want to append a list to the end of another. This is unsurprisingly uncomplicated for unsafe lists:

List<?> c = a.append(b);

Behind the scenes, we can think of append as reversing the first list and consing each element to the second list in reverse order. Doing that for HList is a little more involved. We have to construct a chain of types in exactly the right way, at compile-time.

Appending an HList to another is a function that takes two HList-valued arguments and returns an HList. Using first-class functions from Functional Java, the append operation for HLists of specific types L and R, would be a function of the following type:

F2<L extends HList<L>, R extends HList<R>, LR extends HList<LR>>

Where LR is the type of the concatenated HList. Now, since we necessarily have the two arguments, we know the specific types of L and R. Since Java doesn’t have type inference, it cannot automatically figure out the specific type of LR. We will have to supply it as type annotation. Not to worry. Even though Java doesn’t infer types, it can be coerced into doing some type arithmetic. All we have to do is a little inductive reasoning.

Types as Formulae

According to the Curry-Howard isomorphism, a program is a proof, and the hypothesis that it proves is a type for the program. In this sense, Java’s type system is a kind of crude theorem prover. Put another way, a type is a predicate, and values of that type represent the terms for which the predicate holds. The function type above therefore asserts that for any two HLists, L and R, there exists some program to derive the HList LR. The function type by itself does not put any constraints on LR, however. It can be derived by any function, not just the concatenation function. We will remedy that presently. We need a formula that states that the two types L and R imply a third type LR which is the HList concatenation of L and R, given some concatenation function. Here is the type that represents that formula:

public static final class HAppend<L, R, LR> {
    private final F2<L, R, LR> append;

    private HAppend(final F2<L, R, LR> f) {
      append = f;
    }

    public LR append(final L l, final R r) {
      return append.f(l, r);
    }
}

At this point, HAppend is still just a hypothesis without evidence. Remember that a value of a type is proof of the formula that it represents. So we will need to supply two proofs in the form of constructors for values of this type; one for the base case of appending to the empty list HNil, and another for the case of appending to an HCons. The base case is easy. Appending anything to the empty list should result in that same thing. So the HAppend constructor for appending to the empty list looks like this:

    public static <L extends HList<L>> HAppend<HNil, L, L> append() {
      return new HAppend<HNil, L, L>(new F2<HNil, L, L>() {
        public L f(final HNil hNil, final L l) {
          return l;
        }
      });
    }

The case for the nonempty list is not quite as easy. Consider its type:

    public static <X, A extends HList<A>, B, C extends HList<C>, H extends HAppend<A, B, C>>
                         HAppend<HCons<X, A>, B, HCons<X, C>> append(final H h)

Read the return type first. This returns an HAppend that appends some B to an HCons<X, A>. The type of the head of the first list (X) becomes the type of the head of the concatenated list. The tail of the concatenated list is C. The type constraints state that C must be an HList, and that there must exist some way to append B (the second list) to A (the tail of the first list) so that they make C. We must supply proof that this last constraint holds, and you’ll see that such a proof is in fact supplied as an argument (in the form of the value h).
What this is saying is that, given the premise that A and B can be concatenated, the concatenation of HCons<X, A> and B can be inferred. A value of type HAppend<A, B, C> is precisely proof of the hypothesis that A and B can be concatenated, since there are only these two cases and we’ve supplied a proof for both. In other words, if we can append to the empty list, then that’s proof enough that we can append to a list of one element, which proves that we can append to a list of two elements, and so on. Given this, we can construct a chain of proofs. This concatenated proof, then, is a function that concatenates lists of the corresponding types.

OK, so how do we use this? Well, here’s an example program that appends one list to another:

public class HList_append {
  public static void main(final String[] args) {
    // The two lists
    final HCons<String, HCons<Integer, HCons<Boolean, HNil>>> a =
      cons("Foo", cons(3, cons(true, nil())));
    final HCons<Double, HCons<String, HCons<Integer[], HNil>>> b =
      cons(4.0, cons("Bar", cons(new Integer[]{1, 2}, nil())));

    // A lot of type annotation
    final HAppend<HNil, HCons<Double, HCons<String, HCons<Integer[], HNil>>>,
      HCons<Double, HCons<String, HCons<Integer[], HNil>>>> zero = append();
    final HAppend<HCons<Boolean, HNil>, HCons<Double, HCons<String, HCons<Integer[], HNil>>>,
      HCons<Boolean, HCons<Double, HCons<String, HCons<Integer[], HNil>>>>> one = append(zero);
    final HAppend<HCons<Integer, HCons<Boolean, HNil>>, HCons<Double, HCons<String, HCons<Integer[], HNil>>>,
      HCons<Integer, HCons<Boolean, HCons<Double, HCons<String, HCons<Integer[], HNil>>>>>> two = append(one);
    final HAppend<HCons<String, HCons<Integer, HCons<Boolean, HNil>>>,
      HCons<Double, HCons<String, HCons<Integer[], HNil>>>,
      HCons<String, HCons<Integer, HCons<Boolean, HCons<Double, HCons<String, HCons<Integer[], HNil>>>>>>>
      three = append(two);

    // And all of that lets us append one list to the other.
    final HCons<String, HCons<Integer, HCons<Boolean, HCons<Double, HCons<String, HCons<Integer[], HNil>>>>>>
      x = three.append(a, b);

    // And we can access the components of the concatenated list in a type-safe manner
    if (x.tail().tail().head())
      System.out.println(x.tail().tail().tail().tail().tail().head()[1] * 2); // 4
  }
}

Holy pointy brackets, Batman! Do we really need all of that? Well, look at what it’s doing. It’s constructing a concatenation function of the appropriate type, by supplying the premise at each step. If this seems mechanical, then that’s because it is. There is only one possible implementation for the HAppend we need, but Java does not have any mechanism for figuring this out, nor does it provide a facility for the programmer to tell it how.
Contrast that to Scala. The above is a clear example of where Scala’s implicit arguments come in handy. If we port this to Scala, we can make both of the append functions implicit, and we can further make the H argument to the append function for nonempty lists implicit as well. There can be only one possible implementation of each function, so it can be declared once and used implicitly wherever proofs of the corresponding types are required. Jesper Nordenberg has implemented an HList library for Scala that demonstrates this well. With implicits and Scala, the whole middle section of our program is condensed from 12 lines of type annotations to just:

 val x = a.append(b)
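
To make that concrete, here’s a rough sketch (with hypothetical names, not Jesper’s actual library) of how the two append proofs might be declared as implicits in Scala:

sealed trait HList
final case class HCons[H, T <: HList](head: H, tail: T) extends HList
sealed trait HNil extends HList
case object HNil extends HNil

// Evidence that appending R to L yields Out.
trait HAppend[L <: HList, R <: HList, Out <: HList] {
  def apply(l: L, r: R): Out
}

object HAppend {
  // Base case: appending anything to the empty list gives it back.
  implicit def nilAppend[R <: HList]: HAppend[HNil, R, R] =
    new HAppend[HNil, R, R] { def apply(l: HNil, r: R) = r }

  // Inductive case: if we can append R to T, we can append R to HCons[H, T].
  implicit def consAppend[H, T <: HList, R <: HList, Out <: HList](
      implicit h: HAppend[T, R, Out]): HAppend[HCons[H, T], R, HCons[H, Out]] =
    new HAppend[HCons[H, T], R, HCons[H, Out]] {
      def apply(l: HCons[H, T], r: R) = HCons(l.head, h(l.tail, r))
    }
}

// The compiler assembles the chain of proofs for us:
def append[L <: HList, R <: HList, Out <: HList](l: L, r: R)(implicit h: HAppend[L, R, Out]): Out =
  h(l, r)

val ab = append(HCons("Foo", HCons(3, HNil)), HCons(4.0, HCons("Bar", HNil)))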

Now, if you’re really into this Java stuff, you’re probably thinking: “implicits are just dependency injection”. Well, in a sense, you would be right. Both dependency injection and inheritance are degenerate forms of implicits. However, there is currently no dependency injection framework for Java that can abstract over type constructors such that it provides injection of parameterised types with injection of type parameters also. If you can prove me wrong, by all means send me evidence in the form of working code.

Conclusion

Clearly, Java is not very useful for this kind of type-safe programming. I was actually quite surprised that you can do this in Java at all, but we’ve definitely hit the outer boundary of what can be considered reasonably expressible.

The code you’ve seen in this article uses the new HList package that was released with Functional Java 2.16, and is based on the Haskell HList library by Oleg Kiselyov.

Higher-Order Java Parallelism, Part 4: A Better Future

This is the fourth installment in a series of posts about making highly concurrent software easier to write in Java. Previous entries are available here: part 1, part 2, part 3. However, I aim to make it possible to follow along even if you haven’t read the previous posts.

I Have Seen the Future…

If you have used the Java 5 concurrency API at all, you will have come across the Future class. For example, when you submit a Callable<Integer> to an ExecutorService, what you get back is a Future<Integer> which represents a computation, running concurrently, that will (hopefully) result in an integer at some time in the future. Once you have the Future<Integer> fi, you can later get the integer out of it by calling fi.get().

That’s all fine and dandy, but let’s say you want to do something like add two future integers. You could do something like this:

int sum = x.get() + y.get();

This will block the current thread until both of those integers are available, then add them together. But why wait for that? If you have an ExecutorService, you can create a new Future that computes the sum:


Future<Integer> sum = executorService.submit(new Callable<Integer>() {
  public Integer call() throws Exception {
    return x.get() + y.get();
  }
});

Now the current thread can continue, but we’ve started a new thread that does nothing until the values of x and y have both been calculated by yet another thread.

We’re beginning to see a problem here. We want to be able to compose Futures together to form new Futures, but find that the number of threads required to compose n Future values is on the order of O(n). If we have a fixed-size thread pool, we’ll run into starvation. If we have an unbounded thread pool, then we might start more threads than the operating system can handle, most of which will be doing nothing at all but wait for other threads.

This should all sound very familiar. Threads are a space resource. What kind of processes are O(n) in their space requirement? If you said “linearly recursive processes”, go to the head of the class. Intuitively, for the same reason that we can find iterative versions of any recursive algorithm, it seems that we should be able to find an algorithm to accomplish the same thing with O(1) threads.

…and it is a Monad

In the above example, it’s like we’re giving separate instructions, waiting for the results of each in between. Imagine if we were working in an office with Bob and Alice, and we needed work on something from both of them. We might go to Bob and say: “Bob, process this and give me the result”. Then we’d take the result to Alice and say: “Alice, here’s a result from Bob.” It would be much better if we could just go to Bob and say: “Bob, process this and give the result to Alice.” This is the essential difference between recursive and iterative processes.

But wait! We say that kind of thing all the time, in Java:


public Work bob(Work w) { ... }
public Work alice(Work w) { ... }

public Work bobThenAlice(Work w) {
  Work b = bob(w);
  return alice(b);
}

Here, we’re instructing a single thread to do some work, then use the result of that work to do more work. What’s really sneaky here is the meaning of the semicolon. In this context, what the former semicolon means is “take the stored value b from the previous statement and bind it to the free variable b in the next statement”. You can think of the second semicolon as binding a blank statement over the result of the preceding statement.

Using first-class functions from Functional Java, and using the Callables monad from the first part of this series, you could implement that same behaviour using something like this:

F<Work, Callable<Work>> bob = new F<Work, Callable<Work>>() {
  public Callable<Work> f(final Work w) {
    return new Callable<Work>() {
      public Work call() { ... }
    };
  }
};
F<Work, Callable<Work>> alice = new F<Work, Callable<Work>>() { ... };

public Callable<Work> bobThenAlice(Work w) {
  return Callables.bind(bob.f(w), alice);
}

That’s pretty neat. Now we have a single Callable that we can run concurrently in a new thread, turning it into a Future. But wouldn’t it be cool if we could bind Futures? That would let us take already running computations and combine them in exactly this way. We want a Future monad.

The problem with combining Futures is in the nature of the future. This is a deliberate pun on “future”. Think about time for a second. What does it mean to get a value that’s in the future? By the very fact that causality is sequential, it’s a violation of the nature of reality to have something that doesn’t yet exist. It’s the future; you’re not supposed to get stuff out. But, we can put stuff in, can’t we? Yes we can. You know those corny time-capsule things where people put their mountain bikes and Nintendo games for future generations to enjoy later? We can do that with data values. And not just values, but computations.

Here’s One I Made Earlier

The Future class in the standard Java libraries doesn’t come with any methods for projecting computations into the future. But Functional Java comes with a class called Promise<A> which does have that feature. It makes use of light-weight concurrent processes (actors), and parallel strategies, as described in the previous post, to implement the ability to combine concurrent computations into larger (concurrently executing) structures.

Since it is implemented as a monad, the methods it provides are all the usual suspects: unit, bind, fmap, join, etc. Here’s a quick overview of what they do and why they’re useful. Grasping them doesn’t just help you understand the Promise class, but any monad you may come across in the (ahem) future.

The unit function, the constructor of Promises, is just called promise. It has a few overloaded forms, but here is the simplest one.

public static <A> Promise<A> promise(Strategy<A> s, P1<A> p);

The P1 class is just a simple closure with no arguments, provided by the Functional Java library. P1<A> consists of one abstract method: A _1(). Strategy represents a method of evaluating P1s concurrently. I also talk about Strategies in the previous post, but the long and the short of it is that it has methods to evaluate the P1 value according to some parallelisation strategy, like with a thread pool for instance.

Calling the promise method starts a concurrent computation, in a manner according to the given strategy, that evaluates p. The resulting Promise value is a handle on the running computation, and can be used to retrieve the value later. Promise.claim() will block the current thread until the value is available, exactly like Future.get(), but this is generally not what you want to do. Instead, you want to bind.
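
For instance, a minimal sketch of starting a computation with promise (and, for illustration only, claiming it at the end; expensiveComputation is a stand-in for real work):

ExecutorService pool = Executors.newFixedThreadPool(4);
Strategy<Integer> s = Strategy.executorStrategy(pool);

// promise() starts the computation straight away on the pool.
Promise<Integer> p = Promise.promise(s, new P1<Integer>() {
  public Integer _1() { return expensiveComputation(); } // hypothetical method
});

// ... the current thread is free to do other work here ...

int result = p.claim(); // blocks; shown only for illustration, prefer bind, fmap, or to()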

The essence of the monad pattern is the binding function. If you don’t think you already know what a monad is, but understand this method, then you know more than you think:

public Promise<B> bind(F<A, Promise<B>> f);

This method means that if you have a Promise of an A, and a function from an A to a Promise of a B, you can get a Promise of a B. I.e. if somebody promises you an A, and I can promise you a B for every A, it’s the same thing as being promised a B in the first place.

The mapping function:

public Promise<B> fmap(F<A, B> f);

This method means that if you have a Promise of an A, and a function from A to B, you can get a Promise of a B. In other words, you can map any function over a Promise, and fmap will return you a Promise of the result. Behind the scenes, fmap is implemented by calling the bind and promise methods. The difference between this method and the bind method is subtle but important. Calling p.bind(f) is exactly equivalent to calling Promise.join(p.fmap(f)).

The join function:

public static <A> Promise<A> join(Promise<Promise<A>> a);

Join is a lot more useful than it looks. If you have a promised Promise, it’s the same as just having a Promise. In practise, that means that if you can start a concurrent task that starts a concurrent task, you can combine those into one concurrent task. You can think of it as the semantic equivalent of Thread.join(), except that our method returns the joined Promise immediately.
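
As a quick sketch, here’s the sum-of-two-future-integers example from the beginning, rewritten with Promise (assuming x and y are Promise<Integer> values that are already running):

// No thread blocks here; the addition runs once both values have arrived.
Promise<Integer> sum = x.bind(new F<Integer, Promise<Integer>>() {
  public Promise<Integer> f(final Integer a) {
    return y.fmap(new F<Integer, Integer>() {
      public Integer f(final Integer b) { return a + b; }
    });
  }
});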

Coming back to Bob and Alice for a second, we can implement bob and alice from the Callables example above, using Promise instead of Callable. Both bob and alice will construct Promises using the promise method, putting whatever work they do inside a P1. That way, when you call bob, he’s already doing his work by the time you mention Alice’s name:

final Strategy<Work> s = Strategy.simpleThreadStrategy();
F<Work, Promise<Work>> bob = new F<Work, Promise<Work>>() {
  public Promise<Work> f(final Work w) {
    return promise(s, new P1<Work>() {
      public Work _1() { ... }
    });
  }
};
F<Work, Promise<Work>> alice = new F<Work, Promise<Work>>() { ... };

public Promise<Work> bobThenAlice(Work w) {
  return bob.f(w).bind(alice);
}

So now that we can build arbitrarily complex concurrent processes from already-running processes, how do we get the final promised value out? Again, you could call Promise.claim(), but that blocks the current thread as we know. Instead, Promise comes equipped with a method to(Actor<A>) which promises to send the value to the given Actor as soon as it’s ready. Control is returned to the current thread immediately, and the whole computation continues in the background, including the action to take on the final result. Actors were discussed in the previous post.

A Fully Functional Example

I think an example is in order. The following program calculates Fibonacci numbers using a naive recursive algorithm. This is an algorithm that benefits particularly well from parallelisation (barring any other kind of optimisation). If we were just using plain old Future instead of Promise, the number of Threads required to calculate the nth Fibonacci number is O(fib(n)). But since we’re using Promise, we can use a fixed number of actual Java threads.

package concurrent;

import static fj.Bottom.error;
import fj.Effect;
import fj.F;
import fj.F2;
import fj.Function;
import fj.P;
import fj.P1;
import fj.P2;
import fj.Unit;
import fj.data.List;
import fj.control.parallel.Actor;
import fj.control.parallel.Promise;
import fj.control.parallel.Strategy;
import static fj.data.List.range;
import static fj.function.Integers.add;
import static fj.control.parallel.Promise.join;
import static fj.control.parallel.Promise.promise;
import static fj.control.parallel.Actor.actor;

import java.text.MessageFormat;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class Fibs {

  private static final int CUTOFF = 35;

  public static void main(final String[] args) throws Exception {
    if (args.length < 1)
      throw error("This program takes an argument: number_of_threads");

    final int threads = Integer.parseInt(args[0]);
    final ExecutorService pool = Executors.newFixedThreadPool(threads);
    final Strategy<Unit> su = Strategy.executorStrategy(pool);
    final Strategy<Promise<Integer>> spi = Strategy.executorStrategy(pool);

    // This actor performs output and detects the termination condition.
    final Actor<List<Integer>> out = actor(su, new Effect<List<Integer>>() {
      public void e(final List<Integer> fs) {
        for (P2<Integer, Integer> p : fs.zipIndex()) {
          System.out.println(MessageFormat.format("n={0} => {1}", p._2(), p._1()));
        }
        pool.shutdown();
      }
    });

    // A parallel recursive Fibonacci function
    final F<Integer, Promise<Integer>> fib = new F<Integer, Promise<Integer>>() {
      public Promise<Integer> f(final Integer n) {
        return n < CUTOFF ? promise(su, P.p(seqFib(n))) : f(n - 1).bind(f(n - 2), add);
      }
    };

    System.out.println("Calculating Fibonacci sequence in parallel...");
    join(su, spi.parMap(fib, range(0, 46)).map(Promise.sequence(su))).to(out);
  }

  // The sequential version of the recursive Fibonacci function
  public static int seqFib(final int n) {
    return n < 2 ? n : seqFib(n - 1) + seqFib(n - 2);
  }
}

For all you Scala fans out there, the Functional Java library comes with convenient bindings for Scala as well. Here’s the same thing written in Scala. Note that this does not use the Actor library from the standard Scala libraries, but the same lighter weight Java implementation that the Java example above uses.

package concurrent

import fj.control.parallel.{Actor, Promise}
import fj.Function.curry
import fj.control.parallel.Strategy.executorStrategy
import fjs.control.parallel.Strategy.parMap
import fjs.control.parallel.Promise._
import fjs.control.parallel.Actor._
import Integer.parseInt
import List.range
import java.util.concurrent.Executors.newFixedThreadPool
import fjs.F._
import fjs.F2._
import fjs.P1._
import fjs.P2._
import fjs.data.List._
import fjs.control.parallel.Strategy.ListPar

object Fibs {
val CUTOFF = 35;

  def main(args: Array[String]) = {
    if (args.length < 1)
      error("This program takes an argument: number_of_threads")

    val threads = parseInt(args(0))
    val pool = newFixedThreadPool(threads)
    implicit def s[A] = executorStrategy[A](pool)

    // This actor performs output and detects the termination condition.
    val out: Actor[List[Int]] = actor{ ns =>
      for ((n, i) <- ns.zipWithIndex) printf("n=%d => %d\n", i, n)
      pool.shutdown()
    }

    // A parallel recursive Fibonacci function
    def fib(n: Int): Promise[Int] = {
      if (n < CUTOFF) promise(() => seqFib(n))
      else fib(n - 1).bind(fib(n - 2), curry((_: Int) + (_: Int)))
    }

    println("Calculating Fibonacci sequence in parallel...")
    out ! sequence(parMap[Int, Promise[Int], List](fib, range(0, 46)));
  }

  // The sequential version of the recursive Fibonacci function
  def seqFib(n: Int): Int = if (n < 2) n else seqFib(n - 1) + seqFib(n - 2);
}

Here’s an example run of this program using a pool of 10 threads. It runs about 7 times faster that way than with just 1 thread on my 8-way machine. The Scala version is also very slightly faster for some reason.

$ scala -classpath .:../../../build/classes/src concurrent.Fibs 10
Calculating Fibonacci sequence in parallel…
n=0 => 0
n=1 => 1
n=2 => 1
n=3 => 2
n=4 => 3
n=5 => 5
n=6 => 8
n=7 => 13
n=8 => 21
n=9 => 34
n=10 => 55
n=11 => 89
n=12 => 144
n=13 => 233
n=14 => 377
n=15 => 610
n=16 => 987
n=17 => 1597
n=18 => 2584
n=19 => 4181
n=20 => 6765
n=21 => 10946
n=22 => 17711
n=23 => 28657
n=24 => 46368
n=25 => 75025
n=26 => 121393
n=27 => 196418
n=28 => 317811
n=29 => 514229
n=30 => 832040
n=31 => 1346269
n=32 => 2178309
n=33 => 3524578
n=34 => 5702887
n=35 => 9227465
n=36 => 14930352
n=37 => 24157817
n=38 => 39088169
n=39 => 63245986
n=40 => 102334155
n=41 => 165580141
n=42 => 267914296
n=43 => 433494437
n=44 => 701408733
n=45 => 1134903170

Massive win! If we had been using Future instead of Promise, we would have needed at least 55 threads (since we’re using a cutoff at 35 and 45 - 35 = 10 and fib(10) = 55). Heck, we could even remove the threshold value altogether and calculate all 45 parallel fibs, in parallel. That would require 1,134,903,170 threads in the absence of non-blocking concurrency abstractions like Promise and Actor. We can run that in just one thread if we’d like.

Higher-Order Java Parallelism, Part 3: Threadless Concurrency With Actors

Multi-core processing is changing the way we write programs. Modern computers come with multi-core or multiple processors, and modern applications no longer do just one thing at a time. But all of this has left Java a little bit behind. While languages like Erlang and Haskell, even other JVM languages like Scala, have powerful concurrency abstractions, concurrent programming in Java is still mostly based around the threading practices of way back in the time of applets.

Threading in Java has long been the domain of the brave or foolish, weaving a brittle and intricate web of manual synchronization between threads, or threads waiting on other threads to feed them work. We all know the examples from old Java books where there might be a thread running in a loop, sleeping for a second, and waking up to check if work needs to be done. This kind of implementation is still being taught to newcomers to the Java language. And when they start writing their own applications in Java, they find that this approach does not scale. Not along the axis of program complexity, nor the axis of the number of threads. In a complex application or one with many threads, you may end up with a program that stops doing anything at all for long periods of time, or worse, hangs forever, while still consuming all the operating system resources that go with those threads.

Blocking vs. Nonblocking Concurrency

The new concurrency library in Java 5 helps this situation a great deal. At least some advancement has been made towards better concurrency abstractions. But there are still pitfalls. For instance, as we’ve seen, the Future and Callable interfaces are really rather jolly useful, and they can take us a long way towards the kinds of things possible in those other languages. But at the end of the day, something has to call Future.get(), and that something will block unless the value is already available, until such time that it is. This can result in deadlocks or starvation, as you may end up in a situation where all threads are blocking on Futures and none are available to advance your program forward. This would be bad. In fact, we could say that in a highly concurrent system, blocking on shared state is a catastrophic event.

The Java libraries are a veritable minefield of methods that, ultimately, block on shared memory monitors. There’s Thread.join(), Future.get(), BlockingQueue.take(), and the list goes on. But what if we could solve this problem with a simple abstraction? Could we solve the blocking problem once and re-use our solution for different situations? Let’s see.

Instead of sleeping or blocking, what we really want is the ability for our code to say: “When X happens, wake up a thread to execute me.” That is, in some sense we want threads to go on doing useful work, and have them be notified when an event happens that requires more work to be done. For example, when Bob comes into my office and asks me for this month’s TPS reports, he doesn’t stand around and wait for them, nor does he check periodically to see if I have them done, and he certainly doesn’t go to sleep to periodically wake up and poll me for them. He will continue with his work (or whatever it is he does all day), since I have instructions to send him those TPS reports as soon as they’re ready. We’re independent and concurrent actors, so we don’t wait on each other.

There is a model of computation, called the Actor Model, that works very much the same way. This is the model employed in the design of Erlang, and it was a major motivator in the design of Scheme (having itself drawn on ideas from Simula). An implementation of Actors comes with the standard Scala library, and as it happens an implementation of it for Java has just been released as part of the 2.8 version of the Functional Java library. You can read more about the actor model on your own, but I will explain in some detail how it works in Functional Java.

Algorithm + Strategy = Parallelism

The first thing to explain is Parallel Strategies. I’ve talked about them before, but I’ll do so again here since they’re important to what I’m trying to demonstrate. The idea of the Parallel Strategy is that we can capture a threading pattern, or some method of concurrency in general, in a separate abstraction called a Strategy, and write our concurrent programs independently of the particular threading pattern.

In Java terms, a Strategy<A> is a way of turning any Java expression, whose evaluated type is A, into a value of type A. How this happens is implementation-specific, but we’d like it to be concurrent. First, we need a way of capturing the idea of an unevaluated A. It’s a bit like the idea behind Runnable, only that we can get a value out of it. Java 5 has the Callable interface, which is really close to representing “any expression”, since we can create Callables anonymously, but its call() method throws Exceptions, creating a kind of barrier for the programmer. Instead, we’ll use fj.P1<A>. It’s an abstract class with one abstract method: A _1()

The Strategy class has an instance method called par(P1<A>). This will evaluate the given P1 concurrently, immediately returning another P1 that can get the value once it’s ready. Behind the scenes, it actually creates a Callable, turns it into a Future, then returns a P1 that calls Future.get(), but all of that is implementation detail. Strategy comes with some static construction methods that implement basic threading patterns, and you can implement your own by passing a Future-valued function to the strategy(F<P1<A>, Future<A>>) method.
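
For example, a rough sketch of a hand-rolled Strategy backed by an ExecutorService might look like this (Functional Java’s executorStrategy does essentially this for you):

final ExecutorService pool = Executors.newFixedThreadPool(8);

final Strategy<Integer> s = Strategy.strategy(new F<P1<Integer>, Future<Integer>>() {
  public Future<Integer> f(final P1<Integer> p) {
    // Submit the unevaluated expression to the pool and hand back its Future.
    return pool.submit(new Callable<Integer>() {
      public Integer call() { return p._1(); }
    });
  }
});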

You might be concerned that P1 can’t throw any checked exceptions. This is by design. You’ll find that once you start working with concurrent effects, the value of checked exceptions goes out the window. Besides, if our P1 did need to throw errors, we would use Lazy Error Handling and declare this in the P1's type.

Strategy + Effect = Actor

So now we have concurrent evaluation and parallel transformations. But we’re still ending up with objects that require us to call a blocking method to get their value. The only way around that is to not return any values at all. Instead, we’re going to let our P1s have side-effects. So instead of returning a P1 that lets us wait for the value, our P1s are now going to update a variable or send the value somewhere.

To model a P1 that doesn’t return a useful value, I’m going to use the Unit type. It’s another one of those trivial but useful classes. The Unit type is part of Functional Java, and it’s simply a type that only has one possible value. It’s a lot like the standard library’s Void type, except Void is even further impoverished. When you use this type, you’re saying that the actual value is immaterial. So to describe a P1 that doesn’t return anything useful, we use the type P1<Unit>. We can also describe a transformation from some type T to Unit, with F<T, Unit>.

Functional Java comes with a more honest version of F<A, Unit>, which is Effect<A>. It’s more honest in the sense that it doesn’t pretend to be a transformation, which properly should have no side-effects. Effect<A> is explicit about having side-effects, which is what we intend for Actors. It has one method that doesn’t return a value at all: void e(A a).

We now have what we need to instantiate the Actor class. Here’s an example of how to create and use a simple actor:


  Strategy<Unit> s = Strategy.simpleThreadStrategy();

  Actor<String> a = Actor.actor(s, new Effect<String>()
    {public void e(String s)
      {System.out.println(s);}});

  a.act("Hello, actors!");


The actor receives “messages” on its act method, and the Strategy serves as a kind of mailbox for it. You will note that there’s no dependency at all on any particular threading strategy or any part of the concurrency library by the Actor class. The strategy could be sending our Effects to be evaluated by Threads, remote server farms, by ForkJoin tasks, or even by a Mechanical Turk.

An Actor With its Own Mailbox

The Actor class by itself doesn’t yet quite achieve the kind of actors model you see implemented in Erlang or Scala. The important difference is that the Actor above can process multiple “messages” simultaneously. This solution, then, is more general, although if an actor such as the above mutates some state, we’re likely to run into race conditions. Not to worry, it’s easy to construct the more specific case. We just add a queue and ensure that only one message is processed at a time. This construct is available as part of Functional Java, and it’s called QueueActor.

Of course, the “one thread at a time” requirement is not implemented using any blocking or synchronization. Instead, The QueueActor has two possible states–“suspended” or “running”–and, behind the scenes, this is enforced with an AtomicBoolean to keep it consistent in the face of concurrency. If the actor is suspended when it receives a message, it becomes running and its Effect is immediately handed off to its Strategy. If it’s already running, then callers will leave a message on its queue. The QueueActor's Effect is a concurrent, threadless recursion (i.e. it uses a Strategy rather than a Thread) that completely empties the queue, then puts the QueueActor's state back to “suspended”.

QueueActor puts some sanity into managing locally mutable state within an actor’s Effect, since it’s ensured that the state can only be mutated by one thread at a time. It is guaranteed to act on its messages in some unspecified order, but is otherwise semantically equivalent to Actor.
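
Here’s a small sketch of what that buys you: a logging actor that can safely keep a local counter, because only one message is ever being processed at a time:

final Strategy<Unit> s = Strategy.simpleThreadStrategy();

final Actor<String> logger = QueueActor.queueActor(s, new Effect<String>() {
  private int count = 0; // safe to mutate: messages are handled one at a time
  public void e(final String msg) {
    count++;
    System.out.println(count + ": " + msg);
  }
}).asActor();

logger.act("Hello, QueueActor!");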

The Obligatory Example

We now have a light-weight implementation of the actor model, in Java. Don’t believe it? Have a look at this implementation of the canonical Ping-Pong example (imports omitted), and compare it to similar examples for Erlang and Scala.

First, a Ping actor that sends a number of pings to a given Pong actor, waiting for a pong reply each time before sending the next ping.


  public class Ping
    {private final Pong pong;
     private final Actor<Pong> ping;
     private final Actor<Integer> cb;
     private volatile int n;

     public Ping(final Strategy<Unit> s, final int i, final Pong pong, final int id, final Actor<Integer> callback)
       {n = i;
        this.pong = pong;
        cb = callback;
        ping = actor
          (s, new Effect<Pong>() {public void e(final Pong pong)
            {n--;
             if (n > 0)
                pong.act(Ping.this);
             else // Done. Notify caller.
                cb.act(id);}});}

    // Commence pinging
    public P1<Unit> start()
      {return pong.act(this);}

    // Receive a pong
    public P1<Unit> act(final Pong p)
      {return ping.act(p);}}

The Pong actor simply receives ping messages and responds.


  public class Pong
    {private final Actor<Ping> p;

    public Pong(final Strategy<Unit> s)
      {p = actor(s, new Effect<Ping>()
        {public void e(final Ping m)
          {m.act(Pong.this);}});}

    // Receive a ping
    public P1<Unit> act(final Ping ping)
      {return p.act(ping);}}

And here’s the main program that uses the Ping and Pong actors. There’s only one Pong actor that responds to any number of Ping actors pinging it concurrently. There’s also a further QueueActor that is contacted by each Ping actor once that Ping actor has done its work. The example uses a thread pool to back its Strategy. When all the Ping actors have sent all their pings and received all their pongs, the program is terminated by shutting down the thread pool.


  public class PingPong
    {private final int actors;
     private final int pings;
     private final Strategy<Unit> s;
     private final Actor<Integer> callback;
     private volatile int done;

     public PingPong(final ExecutorService pool, final int actors, final int pings)
       {this.actors = actors;
        this.pings = pings;
        s = Strategy.executorStrategy(pool);

        // This actor gives feedback to the user that work is being done
        // and also terminates the program when all work has been completed.
        callback = QueueActor.queueActor
          (s, new Effect<Integer>() {public void e(final Integer i)
            {done++;
             if (done >= actors)
               {System.out.println("All done.");
                pool.shutdown();}
             else if (actors < 10 || done % (actors / 10) == 0)
                     System.out.println(MessageFormat.format
                       ("{0} actors done ({1} total pongs).", done, pings * done));}})
          .asActor();}

    public static void main(final String[] args)
      {final int actors  = Integer.parseInt(args[0]);
       final int pings   = Integer.parseInt(args[1]);
       final int threads = Integer.parseInt(args[2]);
       new PingPong(Executors.newFixedThreadPool(threads), actors, pings).start();}

    public void start()
      {// We will use one Pong actor...
       final Pong pong = new Pong(s);
       // ...and an awful lot of Ping actors.
       for (int i = 1; i <= actors; i++)
           {new Ping(s, pings, pong, i, callback).start();
            if (actors < 10 || i % (actors / 10) == 0)
               System.out.println(MessageFormat.format("{0} actors started.", i));}}}

What follows is an example run of this Java program, with a million concurrent Ping actors pinging 7 times each. Each actor takes about 300 bytes of memory, so we need a sizable heap for one million of them, but 19 real Java Threads handle this quite nicely on my 8-core machine.

$ time java -Xmx600m -cp ../../../.build/classes/src:. concurrent.PingPong 1000000 7 19
100,000 actors started.
200,000 actors started.
300,000 actors started.
400,000 actors started.
500,000 actors started.
600,000 actors started.
700,000 actors started.
800,000 actors started.
900,000 actors started.
1,000,000 actors started.
100,000 actors done (700,000 total pongs).
200,000 actors done (1,400,000 total pongs).
300,000 actors done (2,100,000 total pongs).
400,000 actors done (2,800,000 total pongs).
500,000 actors done (3,500,000 total pongs).
600,000 actors done (4,200,000 total pongs).
700,000 actors done (4,900,000 total pongs).
800,000 actors done (5,600,000 total pongs).
900,000 actors done (6,300,000 total pongs).
All done.

real    1m16.376s
user    3m53.612s
sys     0m10.924s

As you see, these simple tools, built on basic components of the Java 5 concurrency library, paired with powerful abstractions from programming in functional style, make it seem like we have millions of tasks running concurrently. We get a virtually unbounded degree of concurrency while never seeing any locks nor performing any blocking calls. The number of concurrent tasks is limited only by the size of your heap.

Again, everything you see here, and more, has been released just recently as part of the Functional Java library. So head over to their download page and put those extra cores to use with Java, today!