Higher-Order Java Parallelism, Part 4: A Better Future

This is the fourth installment in a series of posts about making highly concurrent software easier to write in Java. Previous entries are available here: part 1, part 2, part 3. However, I aim to make it possible to follow along even if you haven’t read the previous posts.

I Have Seen the Future…

If you have used the Java 5 concurrency API at all, you will have come across the Future interface. For example, when you submit a Callable<Integer> to an ExecutorService, what you get back is a Future<Integer>, which represents a computation, running concurrently, that will (hopefully) yield an integer at some time in the future. Once you have the Future<Integer> fi, you can get the integer out of it later by calling fi.get().
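For example, here's a minimal sketch (expensiveComputation() is a hypothetical method, and the surrounding method is assumed to declare throws Exception, since get() can throw):

ExecutorService executorService = Executors.newFixedThreadPool(4);
Future<Integer> fi = executorService.submit(new Callable<Integer>() {
  public Integer call() {
    return expensiveComputation(); // runs on one of the pool's threads
  }
});
// ... do other things ...
int result = fi.get(); // blocks until the result is available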

That’s all fine and dandy, but let’s say you want to do something like add two future integers. You could do something like this:

int sum = x.get() + y.get();

This will block the current thread until both of those integers are available, then add them together. But why wait for that? If you have an ExecutorService, you can create a new Future that computes the sum:


Future<Integer> sum = executorService.submit(new Callable<Integer>() {
  public Integer call() throws Exception {
    return x.get() + y.get(); // blocks this new thread until both values are ready
  }
});

Now the current thread can continue, but we’ve started a new thread that does nothing until the values of x and y have both been calculated by yet another thread.

We’re beginning to see a problem here. We want to be able to compose Futures together to form new Futures, but we find that composing n Future values requires O(n) threads. If we have a fixed-size thread pool, we’ll run into starvation. If we have an unbounded thread pool, we might start more threads than the operating system can handle, most of which will be doing nothing at all but waiting for other threads.

This should all sound very familiar. Threads are a space resource. What kind of processes are O(n) in their space requirement? If you said “linearly recursive processes”, go to the head of the class. Intuitively, for the same reason that we can find iterative versions of any recursive algorithm, it seems that we should be able to find an algorithm to accomplish the same thing with O(1) threads.

…and it is a Monad

In the above example, it’s as if we’re giving separate instructions, waiting for the results of each in between. Imagine we were working in an office with Bob and Alice, and we needed both of them to work on something. We might go to Bob and say: “Bob, process this and give me the result.” Then we’d take the result to Alice and say: “Alice, here’s a result from Bob.” It would be much better if we could just go to Bob and say: “Bob, process this and give the result to Alice.” This is the essential difference between recursive and iterative processes.

But wait! We say that kind of thing all the time, in Java:


public Work bob(Work w) { ... }
public Work alice(Work w) { ... }

public Work bobThenAlice(Work w) {
  Work b = bob(w);
  return alice(b);
}

Here, we’re instructing a single thread to do some work, then use the result of that work to do more work. What’s really sneaky here is the meaning of the semicolon. In this context, the first semicolon means “take the value stored by the previous statement and bind it to the free variable b in the next statement”. You can think of the second semicolon as binding a blank statement over the result of the preceding statement.

Using first-class functions from Functional Java, and the Callables monad from the first part of this series, you could implement that same behaviour with something like this:

F<Work, Callable<Work>> bob = new F<Work, Callable<Work>>() {
  public Callable<Work> f(final Work w) {
    return new Callable<Work>() {
      public Work call() { ... }
    };
  }
};
F<Work, Callable<Work>> alice = new F<Work, Callable<Work>>() { ... };

public Callable<Work> bobThenAlice(Work w) {
  return Callables.bind(bob.f(w), alice);
}
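As a quick hedged sketch (assuming the executorService from earlier), the composed Callable can then be handed off for concurrent execution like so:

Future<Work> fw = executorService.submit(bobThenAlice(w));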

That’s pretty neat. Now we have a single Callable that we can run concurrently in a new thread, turning it into a Future. But wouldn’t it be cool if we could bind Futures? That would let us take already running computations and combine them in exactly this way. We want a Future monad.

The problem with combining Futures is in the nature of the future. This is a deliberate pun on “future”. Think about time for a second. What does it mean to get a value that’s in the future? By the very fact that causality is sequential, it’s a violation of the nature of reality to have something that doesn’t yet exist. It’s the future; you’re not supposed to get stuff out. But, we can put stuff in, can’t we? Yes we can. You know those corny time-capsule things where people put their mountain bikes and Nintendo games for future generations to enjoy later? We can do that with data values. And not just values, but computations.

Here’s One I Made Earlier

The Future class in the standard Java libraries doesn’t come with any methods for projecting computations into the future. But Functional Java comes with a class called Promise<A> which does have that feature. It makes use of light-weight concurrent processes (actors), and parallel strategies, as described in the previous post, to implement the ability to combine concurrent computations into larger (concurrently executing) structures.

Since it is implemented as a monad, the methods it provides are all the usual suspects: unit, bind, fmap, join, etc. Here’s a quick overview of what they do and why they’re useful. Grasping them doesn’t just help you understand the Promise class, but any monad you may come across in the (ahem) future.

The unit function, the constructor of Promises, is just called promise. It has a few overloaded forms, but here is the simplest one.

public static <A> Promise<A> promise(Strategy<A> s, P1<A> p);

The P1 class is just a simple closure with no arguments, provided by the Functional Java library. P1<A> consists of one abstract method: A _1(). Strategy represents a method of evaluating P1s concurrently. I also talk about Strategies in the previous post, but the long and the short of it is that a Strategy has methods to evaluate the P1 value according to some parallelisation strategy, such as with a thread pool.

Calling the promise method starts a concurrent computation, in the manner prescribed by the given Strategy, that evaluates p. The resulting Promise value is a handle on the running computation, and can be used to retrieve the value later. Promise.claim() will block the current thread until the value is available, exactly like Future.get(), but that’s generally not what you want to do. Instead, you want to bind.
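Here's a rough sketch of creating a Promise (Work and doWork() are stand-ins, consistent with the examples below):

Strategy<Work> s = Strategy.simpleThreadStrategy();
Promise<Work> pw = promise(s, new P1<Work>() {
  public Work _1() { return doWork(); } // hypothetical unit of work
});
// pw is a handle on the already-running computation;
// pw.claim() would block for the result, but binding (below) is better.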

The essence of the monad pattern is the binding function. If you don’t think you already know what a monad is, but understand this method, then you know more than you think:

public Promise<B> bind(F<A, Promise<B>> f);

This method means that if you have a Promise of an A, and a function from an A to a Promise of a B, you can get a Promise of a B. I.e. if somebody promises you an A, and I can promise you a B for every A, it’s the same thing as being promised a B in the first place.
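As a hedged sketch (assuming a final Strategy<Integer> s, a Promise<Integer> partial already in hand, and a hypothetical restOfSum() method), chaining a second concurrent step looks like this:

Promise<Integer> total = partial.bind(new F<Integer, Promise<Integer>>() {
  public Promise<Integer> f(final Integer n) {
    return promise(s, P.p(n + restOfSum())); // kicks off a further concurrent step
  }
});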

The mapping function:

public Promise<B> fmap(F<A, B> f);

This method means that if you have a Promise of an A, and a function from A to B, you can get a Promise of a B. In other words, you can map any function over a Promise, and fmap will return you a Promise of the result. Behind the scenes, fmap is implemented by calling the bind and promise methods. The difference between this method and bind is subtle but important: calling p.bind(f) is exactly equivalent to calling Promise.join(p.fmap(f)).

The join function:

public static <A> Promise<A> join(Promise<Promise<A>> a);

Join is a lot more useful than it looks. If you have a promised Promise, it’s the same as just having a Promise. In practice, that means that if you can start a concurrent task that starts a concurrent task, you can combine those into one concurrent task. You can think of it as the semantic equivalent of Thread.join(), except that our method returns the joined Promise immediately.
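To see how bind, fmap, and join fit together, here is a sketch (assuming a final Strategy<Integer> s) of the equivalence mentioned above:

F<Integer, Promise<Integer>> doubled = new F<Integer, Promise<Integer>>() {
  public Promise<Integer> f(final Integer n) { return promise(s, P.p(n * 2)); }
};
Promise<Integer> p = promise(s, P.p(21));
Promise<Integer> viaBind = p.bind(doubled);               // these two promises...
Promise<Integer> viaJoin = Promise.join(p.fmap(doubled)); // ...yield the same value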

Coming back to Bob and Alice for a second, we can implement bob and alice from the Callables example above, using Promise instead of Callable. Both bob and alice will construct Promises using the promise method, putting whatever work they do inside a P1. That way, when you call bob, he’s already doing his work by the time you mention Alice’s name:

final Strategy<Work> s = Strategy.simpleThreadStrategy();
F<Work, Promise<Work>> bob = new F<Work, Promise<Work>>() {
  public Promise<Work> f(final Work w) {
    return promise(s, new P1<Work>() {
      public Work _1() { ... }
    });
  }
};
F<Work, Promise<Work>> alice = new F<Work, Promise<Work>>() { ... };

public Promise<Work> bobThenAlice(Work w) {
  return bob.f(w).bind(alice);
}

So now that we can build arbitrarily complex concurrent processes from already-running processes, how do we get the final promised value out? Again, you could call Promise.claim(), but that blocks the current thread as we know. Instead, Promise comes equipped with a method to(Actor<A>) which promises to send the value to the given Actor as soon as it’s ready. Control is returned to the current thread immediately, and the whole computation continues in the background, including the action to take on the final result. Actors were discussed in the previous post.
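As a sketch, reusing bobThenAlice from above (and assuming a Strategy<Unit> u for the actor):

bobThenAlice(w).to(Actor.actor(u, new Effect<Work>() {
  public void e(final Work result) {
    System.out.println("All done: " + result); // runs once the promise is fulfilled
  }
}));
// Control returns here immediately.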

A Fully Functional Example

I think an example is in order. The following program calculates Fibonacci numbers using a naive recursive algorithm, which benefits particularly from parallelisation (barring any other kind of optimisation). If we were using plain old Future instead of Promise, the number of threads required to calculate the nth Fibonacci number would be O(fib(n)). But since we’re using Promise, we can use a fixed number of actual Java threads.

package concurrent;

import static fj.Bottom.error;
import fj.Effect;
import fj.F;
import fj.F2;
import fj.Function;
import fj.P;
import fj.P1;
import fj.P2;
import fj.Unit;
import fj.data.List;
import fj.control.parallel.Actor;
import fj.control.parallel.Promise;
import fj.control.parallel.Strategy;
import static fj.data.List.range;
import static fj.function.Integers.add;
import static fj.control.parallel.Promise.join;
import static fj.control.parallel.Promise.promise;
import static fj.control.parallel.Actor.actor;

import java.text.MessageFormat;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class Fibs {

  private static final int CUTOFF = 35;

  public static void main(final String[] args) throws Exception {
    if (args.length < 1)
      throw error("This program takes an argument: number_of_threads");

    final int threads = Integer.parseInt(args[0]);
    final ExecutorService pool = Executors.newFixedThreadPool(threads);
    final Strategy<Unit> su = Strategy.executorStrategy(pool);
    final Strategy<Promise<Integer>> spi = Strategy.executorStrategy(pool);

    // This actor performs output and detects the termination condition.
    final Actor<List<Integer>> out = actor(su, new Effect<List<Integer>>() {
      public void e(final List<Integer> fs) {
        for (final P2<Integer, Integer> p : fs.zipIndex()) {
          System.out.println(MessageFormat.format("n={0} => {1}", p._2(), p._1()));
        }
        pool.shutdown();
      }
    });

    // A parallel recursive Fibonacci function
    final F<Integer, Promise<Integer>> fib = new F<Integer, Promise<Integer>>() {
      public Promise<Integer> f(final Integer n) {
        return n < CUTOFF ? promise(su, P.p(seqFib(n)))
                          : f(n - 1).bind(f(n - 2), add);
      }
    };

    System.out.println("Calculating Fibonacci sequence in parallel...");
    join(su, spi.parMap(fib, range(0, 46)).map(Promise.<Integer>sequence(su))).to(out);
  }

  // The sequential version of the recursive Fibonacci function
  public static int seqFib(final int n) {
    return n < 2 ? n : seqFib(n - 1) + seqFib(n - 2);
  }
}

For all you Scala fans out there, the Functional Java library comes with convenient bindings for Scala as well. Here’s the same thing written in Scala. Note that this does not use the Actor library from the standard Scala libraries, but the same lighter-weight Java implementation that the Java example above uses.

package concurrent

import fj.control.parallel.{Actor, Promise}
import fj.Function.curry
import fj.control.parallel.Strategy.executorStrategy
import fjs.control.parallel.Strategy.parMap
import fjs.control.parallel.Promise._
import fjs.control.parallel.Actor._
import Integer.parseInt
import List.range
import java.util.concurrent.Executors.newFixedThreadPool
import fjs.F._
import fjs.F2._
import fjs.P1._
import fjs.P2._
import fjs.data.List._
import fjs.control.parallel.Strategy.ListPar

object Fibs {
  val CUTOFF = 35

  def main(args: Array[String]) = {
    if (args.length < 1)
      error("This program takes an argument: number_of_threads")

    val threads = parseInt(args(0))
    val pool = newFixedThreadPool(threads)
    implicit def s[A] = executorStrategy[A](pool)

    // This actor performs output and detects the termination condition.
    val out: Actor[List[Int]] = actor { ns =>
      for ((n, i) <- ns.zipWithIndex) printf("n=%d => %d\n", i, n)
      pool.shutdown()
    }

    // A parallel recursive Fibonacci function
    def fib(n: Int): Promise[Int] =
      if (n < CUTOFF) promise(() => seqFib(n))
      else fib(n - 1).bind(fib(n - 2), curry((_: Int) + (_: Int)))

    println("Calculating Fibonacci sequence in parallel...")
    out ! sequence(parMap[Int, Promise[Int], List](fib, range(0, 46)))
  }

  // The sequential version of the recursive Fibonacci function
  def seqFib(n: Int): Int = if (n < 2) n else seqFib(n - 1) + seqFib(n - 2)
}

Here’s an example run of this program using a pool of 10 threads. It runs about 7 times faster that way than with just 1 thread on my 8-way machine. The Scala version is also very slightly faster, for some reason.

$ scala -classpath .:../../../build/classes/src concurrent.Fibs 10
Calculating Fibonacci sequence in parallel...
n=0 => 0
n=1 => 1
n=2 => 1
n=3 => 2
n=4 => 3
n=5 => 5
n=6 => 8
n=7 => 13
n=8 => 21
n=9 => 34
n=10 => 55
n=11 => 89
n=12 => 144
n=13 => 233
n=14 => 377
n=15 => 610
n=16 => 987
n=17 => 1597
n=18 => 2584
n=19 => 4181
n=20 => 6765
n=21 => 10946
n=22 => 17711
n=23 => 28657
n=24 => 46368
n=25 => 75025
n=26 => 121393
n=27 => 196418
n=28 => 317811
n=29 => 514229
n=30 => 832040
n=31 => 1346269
n=32 => 2178309
n=33 => 3524578
n=34 => 5702887
n=35 => 9227465
n=36 => 14930352
n=37 => 24157817
n=38 => 39088169
n=39 => 63245986
n=40 => 102334155
n=41 => 165580141
n=42 => 267914296
n=43 => 433494437
n=44 => 701408733
n=45 => 1134903170

Massive win! If we had been using Future instead of Promise, we would have needed at least 55 threads (since we’re using a cutoff at 35, and 45 - 35 = 10, and fib(10) = 55). Heck, we could even remove the threshold value altogether and calculate all 45 parallel fibs, in parallel. That would require 1,134,903,170 threads in the absence of non-blocking concurrency abstractions like Promise and Actor. We can run that in just one thread if we’d like.

Higher-Order Java Parallelism, Part 3: Threadless Concurrency With Actors

Multi-core processing is changing the way we write programs. Modern computers come with multi-core processors, or several of them, and modern applications no longer do just one thing at a time. But all of this has left Java a little behind. While languages like Erlang and Haskell, and even other JVM languages like Scala, have powerful concurrency abstractions, concurrent programming in Java is still mostly based on the threading practices of way back in the time of applets.

Threading in Java has long been the domain of the brave or foolish, weaving a brittle and intricate web of manual synchronization between threads, or of threads waiting on other threads to feed them work. We all know the examples from old Java books where a thread runs in a loop, sleeping for a second, and waking up to check whether work needs to be done. This kind of implementation is still being taught to newcomers to the Java language. And when they start writing their own applications in Java, they find that this approach does not scale, either with program complexity or with the number of threads. In a complex application, or one with many threads, you may end up with a program that stops doing anything at all for long periods of time, or worse, hangs forever, while still consuming all the operating system resources that go with those threads.

Blocking vs. Nonblocking Concurrency

The new concurrency library in Java 5 helps this situation a great deal; at least some advancement has been made towards better concurrency abstractions. But there are still pitfalls. For instance, as we’ve seen, the Future and Callable interfaces are really rather jolly useful, and they can take us a long way towards the kinds of things possible in those other languages. But at the end of the day, something has to call Future.get(), and that something will block unless the value is already available, until such time as it is. This can result in deadlocks or starvation, as you may end up in a situation where all threads are blocking on Futures and none are available to advance your program. This would be bad. In fact, we could say that in a highly concurrent system, blocking on shared state is a catastrophic event.

The Java libraries are a veritable minefield of methods that, ultimately, block on shared memory monitors. There’s Thread.join(), Future.get(), BlockingQueue.take(), and the list goes on. But what if we could solve this problem with a simple abstraction? Could we solve the blocking problem once and re-use our solution for different situations? Let’s see.

Instead of sleeping or blocking, what we really want is the ability for our code to say: “When X happens, wake up a thread to execute me.” That is, in some sense we want threads to go on doing useful work, and have them be notified when an event happens that requires more work to be done. For example, when Bob comes into my office and asks me for this month’s TPS reports, he doesn’t stand around and wait for them, nor does he check periodically to see if I have them done, and he certainly doesn’t go to sleep to periodically wake up and poll me for them. He will continue with his work (or whatever it is he does all day), since I have instructions to send him those TPS reports as soon as they’re ready. We’re independent and concurrent actors, so we don’t wait on each other.

There is a model of computation, called the Actor Model, that works very much the same way. This is the model employed in the design of Erlang, and it was a major motivator for languages like Simula and Scheme. An implementation of Actors comes with the standard Scala library, and as it happens an implementation of it for Java has just been released as part of the 2.8 version of the Functional Java library. You can read more about the actor model on your own, but I will explain in some detail how it works in Functional Java.

Algorithm + Strategy = Parallelism

The first thing to explain is Parallel Strategies. I’ve talked about them before, but I’ll do so again here since they’re important to what I’m trying to demonstrate. The idea of the Parallel Strategy is that we can capture a threading pattern, or some method of concurrency in general, in a separate abstraction called a Strategy, and write our concurrent programs independently of the particular threading pattern.

In Java terms, a Strategy<A> is a way of turning any Java expression, whose evaluated type is A, into a value of type A. How this happens is implementation-specific, but we’d like it to be concurrent. First, we need a way of capturing the idea of an unevaluated A. It’s a bit like the idea behind Runnable, except that we can get a value out of it. Java 5 has the Callable interface, which comes really close to representing “any expression”, since we can create Callables anonymously, but its call() method throws Exceptions, creating a kind of barrier for the programmer. Instead, we’ll use fj.P1<A>. It’s an abstract class with one abstract method: A _1().

The Strategy class has an instance method called par(P1<A>). This will evaluate the given P1 concurrently, immediately returning another P1 that can get the value once it’s ready. Behind the scenes, it actually creates a Callable, turns it into a Future, then returns a P1 that calls Future.get(), but all of that is implementation detail. Strategy comes with some static construction methods that implement basic threading patterns, and you can implement your own by passing a Future-valued function to the strategy(F<P1<A>, Future<A>>) method.
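Here's a rough sketch of using a Strategy directly (expensiveComputation() is a hypothetical method):

  final Strategy<Integer> s = Strategy.simpleThreadStrategy();
  final P1<Integer> later = s.par(new P1<Integer>() {
    public Integer _1() { return expensiveComputation(); } // starts evaluating concurrently
  });
  // ... do other useful work on this thread ...
  final Integer result = later._1(); // obtains the value once it's ready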

You might be concerned that P1 can’t throw any checked exceptions. This is by design. You’ll find that once you start working with concurrent effects, the value of checked exceptions goes out the window. Besides, if our P1 did need to throw errors, we would use Lazy Error Handling and declare this in the P1's type.

Strategy + Effect = Actor

So now we have concurrent evaluation and parallel transformations. But we’re still ending up with objects that require us to call a blocking method to get their value. The only way around that is to not return any values at all. Instead, we’re going to let our P1s have side-effects. So instead of returning a P1 that lets us wait for the value, our P1s are now going to update a variable or send the value somewhere.

To model a P1 that doesn’t return a useful value, I’m going to use the Unit type. It’s another one of those trivial but useful classes. The Unit type is part of Functional Java, and it’s simply a type that only has one possible value. It’s a lot like the standard library’s Void type, except Void is even further impoverished. When you use this type, you’re saying that the actual value is immaterial. So to describe a P1 that doesn’t return anything useful, we use the type P1<Unit>. We can also describe a transformation from some type T to Unit, with F<T, Unit>.

Functional Java comes with a more honest version of F<A, Unit>, which is Effect<A>. It’s more honest in the sense that it doesn’t pretend to be a transformation, which properly should have no side-effects. Effect<A> is explicit about having side-effects, which is what we intend for Actors. It has one method that doesn’t return a value at all: void e(A a).

We now have what we need to instantiate the Actor class. Here’s an example of how to create and use a simple actor:


  Strategy<Unit> s = Strategy.simpleThreadStrategy();

  Actor<String> a = Actor.actor(s, new Effect<String>()
    {public void e(String s)
      {System.out.println(s);}});

  a.act("Hello, actors!");


The actor receives “messages” on its act method, and the Strategy serves as a kind of mailbox for it. You will note that the Actor class itself has no dependency on any particular threading strategy, nor on any part of the concurrency library. The strategy could be sending our Effects to be evaluated by Threads, by remote server farms, by ForkJoin tasks, or even by a Mechanical Turk.

An Actor With its Own Mailbox

The Actor class by itself doesn’t yet quite achieve the kind of actors model you see implemented in Erlang or Scala. The important difference is that the Actor above can process multiple “messages” simultaneously. This solution, then, is more general, although if an actor such as the above mutates some state, we’re likely to run into race conditions. Not to worry, it’s easy to construct the more specific case. We just add a queue and ensure that only one message is processed at a time. This construct is available as part of Functional Java, and it’s called QueueActor.

Of course, the “one thread at a time” requirement is not implemented using any blocking or synchronization. Instead, the QueueActor has two possible states, “suspended” and “running”, and behind the scenes this is enforced with an AtomicBoolean to keep it consistent in the face of concurrency. If the actor is suspended when it receives a message, it becomes running and its Effect is immediately handed off to its Strategy. If it’s already running, then callers will leave a message on its queue. The QueueActor's Effect is a concurrent, threadless recursion (i.e. it uses a Strategy rather than a Thread) that completely empties the queue, then puts the QueueActor's state back to “suspended”.

QueueActor puts some sanity into managing locally mutable state within an actor’s Effect, since it’s ensured that the state can only be mutated by one thread at a time. It is guaranteed to act on its messages in some unspecified order, but is otherwise semantically equivalent to Actor.
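For example, here's a hedged sketch of a QueueActor accumulating a running total; the mutable counter is safe because messages are processed one at a time:

  final Strategy<Unit> s = Strategy.simpleThreadStrategy();
  final Actor<Integer> adder = QueueActor.queueActor(s, new Effect<Integer>() {
    private int total = 0; // only ever touched by one thread at a time
    public void e(final Integer n) {
      total += n;
      System.out.println("total = " + total);
    }
  }).asActor();

  adder.act(1);
  adder.act(2); // queued if the actor is already running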

The Obligatory Example

We now have a light-weight implementation of the actor model, in Java. Don’t believe it? Have a look at this implementation of the canonical Ping-Pong example (imports omitted), and compare it to similar examples for Erlang and Scala.

First, a Ping actor that sends a number of pings to a given Pong actor, waiting for a pong reply each time before sending the next ping.


  public class Ping
    {private final Pong pong;
     private final Actor<Pong> ping;
     private final Actor<Integer> cb;
     private volatile int n;

     public Ping(final Strategy<Unit> s, final int i, final Pong pong, final int id, final Actor<Integer> callback)
       {n = i;
        this.pong = pong;
        cb = callback;
        ping = actor
          (s, new Effect<Pong>() {public void e(final Pong pong)
            {n--;
             if (n > 0)
                pong.act(Ping.this);
             else // Done. Notify caller.
                cb.act(id);}});}

    // Commence pinging
    public P1<Unit> start()
      {return pong.act(this);}

    // Receive a pong
    public P1<Unit> act(final Pong p)
      {return ping.act(p);}}

The Pong actor simply receives ping messages and responds.


  public class Pong
    {private final Actor<Ping> p;

    public Pong(final Strategy<Unit> s)
      {p = actor(s, new Effect<Ping>()
        {public void e(final Ping m)
          {m.act(Pong.this);}});}

    // Receive a ping
    public P1<Unit> act(final Ping ping)
      {return p.act(ping);}}

And here’s the main program that uses the Ping and Pong actors. There’s only one Pong actor that responds to any number of Ping actors pinging it concurrently. There’s also a further QueueActor that is contacted by each Ping actor once that Ping actor has done its work. The example uses a thread pool to back its Strategy. When all the Ping actors have sent all their pings and received all their pongs, the program is terminated by shutting down the thread pool.


  public class PingPong
    {private final int actors;
     private final int pings;
     private final Strategy<Unit> s;
     private final Actor<Integer> callback;
     private volatile int done;

     public PingPong(final ExecutorService pool, final int actors, final int pings)
       {this.actors = actors;
        this.pings = pings;
        s = Strategy.executorStrategy(pool);

        // This actor gives feedback to the user that work is being done
        // and also terminates the program when all work has been completed.
        callback = QueueActor.queueActor
          (s, new Effect<Integer>() {public void e(final Integer i)
            {done++;
             if (done >= actors)
               {System.out.println("All done.");
                pool.shutdown();}
             else if (actors < 10 || done % (actors / 10) == 0)
                     System.out.println(MessageFormat.format
                       ("{0} actors done ({1} total pongs).", done, pings * done));}})
          .asActor();}

    public static void main(final String[] args)
      {final int actors  = Integer.parseInt(args[0]);
       final int pings   = Integer.parseInt(args[1]);
       final int threads = Integer.parseInt(args[2]);
       new PingPong(Executors.newFixedThreadPool(threads), actors, pings).start();}

    public void start()
      {// We will use one Pong actor...
       final Pong pong = new Pong(s);
       // ...and an awful lot of Ping actors.
       for (int i = 1; i <= actors; i++)
           {new Ping(s, pings, pong, i, callback).start();
            if (actors < 10 || i % (actors / 10) == 0)
               System.out.println(MessageFormat.format("{0} actors started.", i));}}}

What follows is an example run of this Java program, with a million concurrent Ping actors pinging 7 times each. Each actor takes about 300 bytes of memory, so we need a sizable heap for one million of them, but 19 real Java Threads handle this quite nicely on my 8-core machine.

$ time java -Xmx600m -cp ../../../.build/classes/src:. concurrent.PingPong 1000000 7 19
100,000 actors started.
200,000 actors started.
300,000 actors started.
400,000 actors started.
500,000 actors started.
600,000 actors started.
700,000 actors started.
800,000 actors started.
900,000 actors started.
1,000,000 actors started.
100,000 actors done (700,000 total pongs).
200,000 actors done (1,400,000 total pongs).
300,000 actors done (2,100,000 total pongs).
400,000 actors done (2,800,000 total pongs).
500,000 actors done (3,500,000 total pongs).
600,000 actors done (4,200,000 total pongs).
700,000 actors done (4,900,000 total pongs).
800,000 actors done (5,600,000 total pongs).
900,000 actors done (6,300,000 total pongs).
All done.

real    1m16.376s
user    3m53.612s
sys     0m10.924s

As you can see, these simple tools, built on basic components of the Java 5 concurrency library and paired with powerful abstractions from functional programming, make it seem as if we have millions of tasks running concurrently. We get a virtually unbounded degree of concurrency while never taking a lock or making a blocking call. The number of concurrent tasks is limited only by the size of your heap.

Again, everything you see here, and more, has been released just recently as part of the Functional Java library. So head over to their download page and put those extra cores to use with Java, today!

Higher-Order Java Parallelism, Part 2: Parallel List Transformations

With a Callable monad and Parallel Strategies in hand, we’re ready to construct some general-purpose parallel functions of a higher order.

Something we’ll want to do quite often in parallel programming is run a function over a whole list of values, in parallel. So often, in fact, that it would be very tedious and repetitive to write a loop every time we want to do this, sparking off threads for each function call, and collecting the results. Instead, what we’re going to develop next is a kind of parallel list functor. That is, a higher-order function that will turn any function into a function that operates on every element of a list simultaneously.

First, let’s get some terminology out of the way, since I’ve been accused of not defining my terms.

A higher-order function is simply a function that takes a function as an argument.

A functor is any type T for which there exists a higher-order function, call it fmap, that transforms a function of type F<A,B> into a function F<T<A>,T<B>>. This fmap function must also obey the laws of identity and composition such that the following expressions return true for all x, p, and q:

  fmap(identity()).f(x) == x
  fmap(compose(p, q)).f(x) == fmap(p).f(fmap(q).f(x))

Here, identity is a first-class function of type F<A,A> that just returns its argument, and compose is function composition. Composition is defined to mean that compose(p, q).f(x) is the same as p.f(q.f(x)).

For example, Functional Java’s List class is a functor, because it comes equipped with a method List<B> map(F<A,B> f) which obeys the laws above. The implementation of List’s map method is trivial: It iterates over the list, calling the given function for each element, and returns the list of the results.
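For instance, here's a quick sketch of mapping over a Functional Java List:

  final List<Integer> xs = List.list(1, 2, 3);
  final List<String> ys = xs.map(new F<Integer, String>() {
    public String f(final Integer i) { return "#" + i; }
  });
  // ys is the list <"#1", "#2", "#3">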

There’s a lot of information about functors and monads on the Internet, and I’ve written about them before, with examples in Java. Tony Morris also wrote a great post where he explains functors as “function application in an environment”. That’s a good way of looking at things.

I also want to make it clear, if you’re just now tuning in, that I’m using classes from Functional Java in the code examples, in addition to classes from the J2SE concurrency library. Basically only Callable and Future are from the J2SE library. Strategy, List, and the Callables utility class are from Functional Java. The code used here is from the latest trunk head revision of that library and may differ from the current official release.

Now, let’s get to the money.

Mapping Over a List in Parallel

We’re going to make a very similar kind of thing to a list functor, except it’s going to map a function over a list in parallel. Our parallel list transformation will take the form of a method on Strategy called parMap. It will take any existing function and turn it into a parallel function on lists. That is, if we have already written a function that takes an Integer and returns a String, we can pass that function to parMap, and it will return us a function that takes a list of Integers and returns a list of Strings. This means that nothing has to be rewritten or refactored to support the mapping of a function over a list in parallel. The parMap function will convert our existing code to parallel code, at runtime.

Here’s parMap:

  public <B> F<List<B>, Callable<List<A>>> parMap(final F<B, A> f) {
    return new F<List<B>, Callable<List<A>>>() {
      public Callable<List<A>> f(final List<B> as) {
        return sequence(as.map(compose(par(), callable(f))));
      }
    };
  }

That’s a lot of power for so little code. Let’s read through what it does. It takes a function f and returns a new function that takes a list called as. This new function transforms f with Callables.callable to wrap its return type in a callable (or: callable(f) applies the Kleisli arrow for Callables to f), and composes that with a function called par(). It then maps the resulting composition over the list as, which will result in a list of Callables, which we turn into a Callable of a List with the sequence function. The only new thing here is par(), which is a very simple instance method on Strategy:

  public F<Callable<A>, Callable<A>> par() {
    return compose(Strategy.<A>obtain(), f());
  }

So par() is just the Strategy’s function composed with the obtain() method that we created in Part 1. This turns the Future back into a Callable so that we can manipulate it in a lazy manner. This little guy is the meat of parMap. That is, parMap is basically just mapping par() over a list.

The result of parMap(myFun).f(myList), then, is a Callable of a list that, when called, will give us the results of calling myFun on every element of myList in parallel. What’s more, it will work for any kind of parallelisation Strategy, and it will return immediately (remember, Callable is lazy), ready for us to make further calculations on the results even while those results are being calculated.
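As a hedged sketch of parMap in action (using the simple thread Strategy, inside a method that can throw Exception):

  final Strategy<String> s = Strategy.simpleThreadStrategy();
  final F<Integer, String> show = new F<Integer, String>() {
    public String f(final Integer i) { return Integer.toBinaryString(i); }
  };
  // Returns immediately; the mapped computations are already under way.
  final Callable<List<String>> result = s.parMap(show).f(List.list(6, 7, 8));
  final List<String> binaries = result.call(); // collects the results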

I think that’s pretty cool.

Nested Parallelism

Sometimes the problem at hand is not quite so flat that it can be solved with just a map over a list. You might have a need for nested traversal of lists, where for every element in a list you traverse another list. We can model that behaviour with a higher-order function very similar to parMap, except that it takes a function that generates a list. This higher-order function is parFlatMap:

  public static <A, B> Callable<List<B>> parFlatMap(final Strategy<List<B>> s,
                                                    final F<A, List<B>> f,
                                                    final List<A> as) {
    return fmap(List.<B>join_()).f(s.parMap(f).f(as));
  }

In this definition, fmap is from the Callable monad, as described in Part 1. Notice that parFlatMap provides an example use of parMap, using it to turn the given function into a parallel function. The call to parMap with that function will actually yield a Callable<List<List<B>>> (can you see why?), so we use a final join lifted into the Callable environment with fmap, to turn the List of Lists into just a List.

The parFlatMap function is analogous to a parallel list monad, and it works very much like a parallel version of nested loops. For example, given a list of Points pts, a distance function dist that takes two Points and returns the distance between them, and a Strategy<Double> s, we can use parFlatMap to calculate the distance between every Point and every other, in parallel:

  Callable<List<Double>> distances = parFlatMap(s, new F<Point, List<Double>>() {
    public List<Double> f(final Point p) {
      return pts.map(dist.f(p));
    }
  }, pts);

The inner “loop” is represented by List.map, so for each point (handled in parallel by the outer traversal) we compute its distance to every other point in sequence. I.e. the outer loop is parallel and the inner loop is serial. It’s quite possible to replace map with parMap, and I’ll leave that as an exercise for the reader (hint: you will need both fmap and join). Note that the call to parFlatMap will return immediately, and the computation will be carried out in the background.

Position-Wise Parallelism and Concurrent Folding

Another function I want to talk about is the parZipWith function. By contrast to parFlatMap, it is like a parallel version of a single loop over two lists. Values from each list are taken at matching positions and the pairs are fed through the argument function all at the same time:

  public static <A, B, C> F2<List<A>, List<B>, Callable<List<C>>> parZipWith(final Strategy<C> s,
                                                                             final F2<A, B, C> f) {
    return new F2<List<A>, List<B>, Callable<List<C>>>() {
      public Callable<List<C>> f(final List<A> as, final List<B> bs) {
        return sequence(as.zipWith(bs, compose(Callables.<B, C>arrow(), curry(f))).map(s.par()));
      }
    };
  }

You will note that parZipWith simply uses zipWith from Functional Java’s List class. There’s some manoeuvring required to turn the argument function into the right form. The curry method turns the F2<A,B,C> into the equivalent F<A,F<B,C>> (sometimes called a curried function). The arrow function is the first-class Kleisli arrow for Callables. That just means that composing it with f yields a version of f that returns a Callable.

Let’s take an example of using parZipWith to do useful work. If we had a list of Points representing a path, and we wanted to get the total length of the path, we could zip the list and its tail (every element except the first) with the dist function and take the sum of the results:

  Double length = parZipWith(s, dist).f(pts, pts.tail()).call().foldLeft(curry(sum), 0.0);

For brevity’s sake, I’m pretending that there’s already a function called sum, which is just an F2<Double, Double, Double> that returns the sum of its two arguments. If foldLeft is something new to you, then have a look at one implementation of it. It’s a more general abstraction of a for-loop than map() is. Folding a list {1,2,3} with the sum function (and 0) can be thought of as replacing all the commas with plusses to yield [0+1+2+3]. We could have just as well iterated over the list, adding each value to a total, instead of using a foldLeft function. But we didn’t, for a reason that will become apparent presently.
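For the record, here's a sketch of that sum function and the fold by itself (assuming Function.curry is statically imported, as above):

  final F2<Double, Double, Double> sum = new F2<Double, Double, Double>() {
    public Double f(final Double a, final Double b) { return a + b; }
  };
  // Folding the list {1.0, 2.0, 3.0} with sum and 0.0 yields 6.0.
  final Double six = List.list(1.0, 2.0, 3.0).foldLeft(curry(sum), 0.0);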

There’s something about the above example that’s dissatisfying. We’re getting the distances in parallel, but then call() sticks out like a sore thumb. The computation will block until the list of distances is ready, then it will fold the list with the sum function. What’s worse, call() might throw an Exception. There’s a way we can avoid all that by traversing the list of distances while they’re being calculated. What we need to do is somehow lift the foldLeft function into the Callable environment. The call to foldLeft above turns a List<Double> into a Double. What we want is to avoid the call to call(), and turn our Callable<List<Double>> into a Callable<Double> directly.

This is really easy to do, and it’s a good thing we’re using a folding function rather than a for-loop. Here’s what we do:

  Callable<List<Double>> edges = parZipWith(s, dist).f(pts, pts.tail());
  Callable<Double> length = par(fmap(List.<Double, Double>foldLeft().f(curry(sum)).f(0.0)).f(edges));

There. No exceptions, and no waiting for results. This code will return immediately and perform the whole computation in the background. The part that calculates the edge lengths is the same as before, and we’re keeping that in a variable edges for clarity. The other line may require some explanation. We’re getting a first-class foldLeft function from the List class, partially applying it to yield a function that folds a List<Double> into a Double. Then we’re promoting it with fmap into the Callable monad. The promoted function then gets applied to edges, and finally the whole fold is evaluated concurrently by passing it to par.

Hopefully you can see what immensely powerful tools for parallel programming functional style and higher-order functions are, even in Java. We were able to develop a couple of one-liners above that are terser, clearer, more maintainable, more re-usable (and more type-safe) than anything we could write in imperative object-oriented style using producer/consumer, factories, inversion of control, etc. There’s no secret sauce, no special syntax, no functional fairy dust. It’s just plain old Java with a handful of very useful interfaces.

In the third installment of this series, we will develop a tiny light-weight library for threadless Actors that can juggle millions of simultaneous computations in only a few threads. Stay close.

Note: The code herein is adapted from a Haskell library by Phil Trinder, Hans-Wolfgang Loidl, Kevin Hammond et al.