Higher-Order Java Parallelism, Part 3: Threadless Concurrency With Actors

Multi-core processing is changing the way we write programs. Modern computers come with multiple cores or processors, and modern applications no longer do just one thing at a time. But all of this has left Java a little behind. While languages like Erlang and Haskell, and even other JVM languages like Scala, have powerful concurrency abstractions, concurrent programming in Java is still mostly based on threading practices from way back in the time of applets.

Threading in Java has long been the domain of the brave or foolish, weaving a brittle and intricate web of manual synchronization between threads, or of threads waiting on other threads to feed them work. We all know the examples from old Java books where a thread runs in a loop, sleeping for a second, then waking up to check whether work needs to be done. This kind of implementation is still being taught to newcomers to the Java language. And when they start writing their own applications in Java, they find that this approach does not scale, either with program complexity or with the number of threads. In a complex application, or one with many threads, you may end up with a program that stops doing anything at all for long periods of time, or worse, hangs forever, while still consuming all the operating system resources that go with those threads.

Blocking vs. Nonblocking Concurrency

The new concurrency library in Java 5 helps this situation a great deal; at least some advancement has been made towards better concurrency abstractions. But there are still pitfalls. For instance, as we’ve seen, the Future and Callable interfaces are really rather jolly useful, and they can take us a long way towards the kinds of things possible in those other languages. But at the end of the day, something has to call Future.get(), and that something will block until the value is available, unless it already is. This can result in deadlock or starvation: you may end up in a situation where all threads are blocking on Futures and none are available to advance your program forward. This would be bad. In fact, we could say that in a highly concurrent system, blocking on shared state is a catastrophic event.

The Java libraries are a veritable minefield of methods that, ultimately, block on shared memory monitors. There’s Thread.join(), Future.get(), BlockingQueue.take(), and the list goes on. But what if we could solve this problem with a simple abstraction? Could we solve the blocking problem once and re-use our solution for different situations? Let’s see.
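To make the hazard concrete, here is a minimal java.util.concurrent sketch (the class name and pool setup are mine, chosen for illustration) showing how Future.get() parks the calling thread:

```java
import java.util.concurrent.*;

// Future.get() blocks the calling thread until the value arrives.
// In a pool where every thread is parked in get(), nothing is left
// to run the tasks those Futures are waiting on -- the starvation
// scenario described above.
public class BlockingGet {
  public static String demo() throws Exception {
    ExecutorService pool = Executors.newSingleThreadExecutor();
    Future<String> f = pool.submit(new Callable<String>() {
      public String call() throws Exception {
        Thread.sleep(100);  // simulate slow work
        return "report";
      }});
    String r = f.get();     // parks here until call() completes
    pool.shutdown();
    return r;
  }

  public static void main(String[] args) throws Exception {
    System.out.println(demo());
  }
}
```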

Instead of sleeping or blocking, what we really want is the ability for our code to say: “When X happens, wake up a thread to execute me.” That is, in some sense we want threads to go on doing useful work, and have them be notified when an event happens that requires more work to be done. For example, when Bob comes into my office and asks me for this month’s TPS reports, he doesn’t stand around and wait for them, nor does he check periodically to see if I have them done, and he certainly doesn’t go to sleep to periodically wake up and poll me for them. He will continue with his work (or whatever it is he does all day), since I have instructions to send him those TPS reports as soon as they’re ready. We’re independent and concurrent actors, so we don’t wait on each other.

There is a model of computation, called the Actor model, that works in very much the same way. It is the model on which Erlang’s concurrency is built; it drew on ideas from languages like Simula and in turn influenced Scheme. An implementation of actors comes with the standard Scala library, and, as it happens, an implementation for Java has just been released as part of version 2.8 of the Functional Java library. You can read more about the actor model on your own, but I will explain in some detail how it works in Functional Java.

Algorithm + Strategy = Parallelism

The first thing to explain is Parallel Strategies. I’ve talked about them before, but I’ll do so again here since they’re important to what I’m trying to demonstrate. The idea of the Parallel Strategy is that we can capture a threading pattern, or some method of concurrency in general, in a separate abstraction called a Strategy, and write our concurrent programs independently of the particular threading pattern.

In Java terms, a Strategy<A> is a way of turning any Java expression, whose evaluated type is A, into a value of type A. How this happens is implementation-specific, but we’d like it to be concurrent. First, we need a way of capturing the idea of an unevaluated A. It’s a bit like the idea behind Runnable, except that we can get a value out of it. Java 5 has the Callable interface, which comes really close to representing “any expression”, since we can create Callables anonymously, but its call() method is declared to throw Exception, creating a kind of barrier for the programmer. Instead, we’ll use fj.P1<A>. It’s an abstract class with a single abstract method: A _1().
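If you don’t have the library on hand, the core of P1 is small enough to sketch as a stand-in (the nested class here is mine, not fj’s; the real P1 carries many more combinators):

```java
// A stand-in for the core of fj.P1<A>: an unevaluated expression of type A.
public class P1Demo {
  static abstract class P1<A> { public abstract A _1(); }

  public static int demo() {
    // Capture an expression without evaluating it...
    P1<Integer> expr = new P1<Integer>() {
      public Integer _1() { return 6 * 7; }
    };
    // ...and evaluate it on demand. Note: _1() declares no checked exceptions.
    return expr._1();
  }

  public static void main(String[] args) {
    System.out.println(demo());
  }
}
```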

The Strategy class has an instance method called par(P1<A>). This will evaluate the given P1 concurrently, immediately returning another P1 that can get the value once it’s ready. Behind the scenes, it actually creates a Callable, turns it into a Future, then returns a P1 that calls Future.get(), but all of that is implementation detail. Strategy comes with some static construction methods that implement basic threading patterns, and you can implement your own by passing a Future-valued function to the strategy(F<P1<A>, Future<A>>) method.
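A rough sketch of what par does behind the scenes, with stand-in types so it runs on its own (the real Strategy class is richer than this, and its par is an instance method rather than a static one):

```java
import java.util.concurrent.*;

// Stand-in sketch of Strategy.par: evaluate a P1 concurrently and
// return another P1 whose _1() collects the value from a Future.
public class StrategyDemo {
  static abstract class P1<A> { public abstract A _1(); }

  static <A> P1<A> par(final ExecutorService pool, final P1<A> p) {
    // Evaluation starts immediately on the pool...
    final Future<A> f = pool.submit(new Callable<A>() {
      public A call() { return p._1(); }});
    // ...and the returned P1 merely fetches the result when forced.
    return new P1<A>() {
      public A _1() {
        try { return f.get(); }
        catch (Exception e) { throw new RuntimeException(e); }
      }};
  }

  public static int demo() throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(2);
    P1<Integer> running = par(pool, new P1<Integer>() {
      public Integer _1() { return 21 + 21; }});
    int r = running._1();  // evaluation already started; this just collects it
    pool.shutdown();
    return r;
  }

  public static void main(String[] args) throws Exception {
    System.out.println(demo());
  }
}
```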

You might be concerned that P1 can’t throw any checked exceptions. This is by design. You’ll find that once you start working with concurrent effects, the value of checked exceptions goes out the window. Besides, if our P1 did need to throw errors, we would use Lazy Error Handling and declare this in the P1's type.

Strategy + Effect = Actor

So now we have concurrent evaluation and parallel transformations. But we’re still ending up with objects that require us to call a blocking method to get their value. The only way around that is to not return any values at all. Instead, we’re going to let our P1s have side-effects. So instead of returning a P1 that lets us wait for the value, our P1s are now going to update a variable or send the value somewhere.

To model a P1 that doesn’t return a useful value, I’m going to use the Unit type. It’s another one of those trivial but useful classes. The Unit type is part of Functional Java, and it’s simply a type that only has one possible value. It’s a lot like the standard library’s Void type, except Void is even further impoverished. When you use this type, you’re saying that the actual value is immaterial. So to describe a P1 that doesn’t return anything useful, we use the type P1<Unit>. We can also describe a transformation from some type T to Unit, with F<T, Unit>.
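A stand-in sketch (these nested types are mine for illustration; Functional Java ships its own Unit and F) of what “only one possible value” buys you:

```java
// A stand-in Unit: a type with exactly one value carries no information,
// so returning it says "the value is immaterial".
public class UnitDemo {
  static final class Unit {
    static final Unit unit = new Unit();
    private Unit() {}  // no other instances can exist
  }
  interface F<A, B> { B f(A a); }

  public static boolean demo() {
    // An F<String, Unit> is a "transformation" that can only signal completion.
    F<String, Unit> print = new F<String, Unit>() {
      public Unit f(String s) { System.out.println(s); return Unit.unit; }};
    // Every call necessarily yields the same sole value.
    return print.f("hello") == Unit.unit;
  }

  public static void main(String[] args) {
    System.out.println(demo());
  }
}
```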

Functional Java comes with a more honest version of F<A, Unit>, which is Effect<A>. It’s more honest in the sense that it doesn’t pretend to be a transformation, which properly should have no side-effects. Effect<A> is explicit about having side-effects, which is what we intend for Actors. It has one method that doesn’t return a value at all: void e(A a).

We now have what we need to instantiate the Actor class. Here’s an example of how to create and use a simple actor:


  Strategy<Unit> s = Strategy.simpleThreadStrategy();

  Actor<String> a = Actor.actor(s, new Effect<String>()
    {public void e(String msg)
      {System.out.println(msg);}});

  a.act("Hello, actors!");


The actor receives “messages” on its act method, and the Strategy serves as a kind of mailbox for it. Note that the Actor class itself has no dependency on any particular threading strategy, nor on any part of the concurrency library. The strategy could be handing our Effects off to Threads, to ForkJoin tasks, to a remote server farm, or even to a Mechanical Turk.
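To see how little the actor cares about its execution venue, here is a stand-in sketch (these interfaces are mine, not Functional Java’s API) where the whole pattern is just “Strategy + Effect”, and swapping the strategy changes only one constructor argument:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Stand-in sketch: an actor is an Effect paired with a Strategy that
// decides where and when the effect runs.
public class ActorDemo {
  interface Effect<A> { void e(A a); }
  interface Strategy { void run(Runnable r); }

  static class Actor<A> {
    private final Strategy s; private final Effect<A> ef;
    Actor(Strategy s, Effect<A> ef) { this.s = s; this.ef = ef; }
    void act(final A msg) {
      s.run(new Runnable() { public void run() { ef.e(msg); }});
    }
  }

  public static int demo() {
    final AtomicInteger sum = new AtomicInteger();
    // A trivial same-thread strategy: handy for tests and sequential runs.
    // A pooled strategy would differ only in this one object.
    Strategy direct = new Strategy() {
      public void run(Runnable r) { r.run(); }};
    Actor<Integer> adder = new Actor<Integer>(direct, new Effect<Integer>() {
      public void e(Integer i) { sum.addAndGet(i); }});
    adder.act(40);
    adder.act(2);
    return sum.get();
  }

  public static void main(String[] args) {
    System.out.println(demo());
  }
}
```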

An Actor With its Own Mailbox

The Actor class by itself doesn’t quite achieve the kind of actor model you see implemented in Erlang or Scala. The important difference is that the Actor above can process multiple “messages” simultaneously. This solution, then, is more general, but if an actor such as the one above mutates some state, we’re likely to run into race conditions. Not to worry: it’s easy to construct the more specific case. We just add a queue and ensure that only one message is processed at a time. This construct is available as part of Functional Java, and it’s called QueueActor.

Of course, the “one thread at a time” requirement is not implemented using any blocking or synchronization. Instead, the QueueActor has two possible states, “suspended” and “running”, and behind the scenes an AtomicBoolean keeps this state consistent in the face of concurrency. If the actor is suspended when it receives a message, it becomes running and its Effect is immediately handed off to its Strategy. If it’s already running, callers will leave a message on its queue. The QueueActor’s Effect is a concurrent, threadless recursion (i.e. it uses a Strategy rather than a Thread) that completely empties the queue, then puts the QueueActor’s state back to “suspended”.
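The suspended/running gate can be sketched in plain Java 5 primitives. This is my own reconstruction of the idea, not Functional Java’s actual implementation, and it uses an ExecutorService directly where the real QueueActor uses a Strategy:

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.*;

// Stand-in sketch of the QueueActor idea: an AtomicBoolean gate plus a
// concurrent queue ensure at most one drain runs at a time, without locks.
public class QueueActorDemo {
  interface Effect<A> { void e(A a); }

  static class QueueActor<A> {
    private final ExecutorService s;
    private final Effect<A> ef;
    private final ConcurrentLinkedQueue<A> mbox = new ConcurrentLinkedQueue<A>();
    private final AtomicBoolean running = new AtomicBoolean(false);

    QueueActor(ExecutorService s, Effect<A> ef) { this.s = s; this.ef = ef; }

    void act(A msg) {
      mbox.offer(msg);
      trySchedule();
    }

    private void trySchedule() {
      // Only the caller that flips suspended -> running hands work to the pool.
      if (running.compareAndSet(false, true))
        s.execute(new Runnable() { public void run() { drain(); }});
    }

    private void drain() {
      A m;
      while ((m = mbox.poll()) != null) ef.e(m);  // one message at a time
      running.set(false);                          // back to "suspended"
      // A message may have raced in after the last poll; re-check.
      if (!mbox.isEmpty()) trySchedule();
    }
  }

  public static int demo() throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(4);
    final AtomicInteger sum = new AtomicInteger();
    final CountDownLatch done = new CountDownLatch(100);
    QueueActor<Integer> qa = new QueueActor<Integer>(pool, new Effect<Integer>() {
      public void e(Integer i) { sum.addAndGet(i); done.countDown(); }});
    for (int i = 1; i <= 100; i++) qa.act(i);
    done.await();
    pool.shutdown();
    return sum.get();  // 1 + 2 + ... + 100
  }

  public static void main(String[] args) throws Exception {
    System.out.println(demo());
  }
}
```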

QueueActor puts some sanity into managing locally mutable state within an actor’s Effect, since the state is guaranteed to be mutated by only one thread at a time. It will act on its messages in some unspecified order, but is otherwise semantically equivalent to Actor.

The Obligatory Example

We now have a lightweight implementation of the actor model in Java. Don’t believe it? Have a look at this implementation of the canonical Ping-Pong example (imports omitted), and compare it to similar examples for Erlang and Scala.

First, a Ping actor that sends a number of pings to a given Pong actor, waiting for a pong reply each time before sending the next ping.


  public class Ping
    {private final Pong pong;
     private final Actor<Pong> ping;
     private final Actor<Integer> cb;
     private volatile int n;

     public Ping(final Strategy<Unit> s, final int i, final Pong pong, final int id, final Actor<Integer> callback)
       {n = i;
        this.pong = pong;
        cb = callback;
        ping = actor
          (s, new Effect<Pong>() {public void e(final Pong pong)
            {n--;
             if (n > 0)
                pong.act(Ping.this);
             else // Done. Notify caller.
                cb.act(id);}});}

    // Commence pinging
    public P1<Unit> start()
      {return pong.act(this);}

    // Receive a pong
    public P1<Unit> act(final Pong p)
      {return ping.act(p);}}

The Pong actor simply receives ping messages and responds.


  public class Pong
    {private final Actor<Ping> p;

    public Pong(final Strategy<Unit> s)
      {p = actor(s, new Effect<Ping>()
        {public void e(final Ping m)
          {m.act(Pong.this);}});}

    // Receive a ping
    public P1<Unit> act(final Ping ping)
      {return p.act(ping);}}

And here’s the main program that uses the Ping and Pong actors. There’s only one Pong actor that responds to any number of Ping actors pinging it concurrently. There’s also a further QueueActor that is contacted by each Ping actor once that Ping actor has done its work. The example uses a thread pool to back its Strategy. When all the Ping actors have sent all their pings and received all their pongs, the program is terminated by shutting down the thread pool.


  public class PingPong
    {private final int actors;
     private final int pings;
     private final Strategy<Unit> s;
     private final Actor<Integer> callback;
     private volatile int done;

     public PingPong(final ExecutorService pool, final int actors, final int pings)
       {this.actors = actors;
        this.pings = pings;
        s = Strategy.executorStrategy(pool);

        // This actor gives feedback to the user that work is being done
        // and also terminates the program when all work has been completed.
        callback = QueueActor.queueActor
          (s, new Effect<Integer>() {public void e(final Integer i)
            {done++;
             if (done >= actors)
               {System.out.println("All done.");
                pool.shutdown();}
             else if (actors < 10 || done % (actors / 10) == 0)
                     System.out.println(MessageFormat.format
                       ("{0} actors done ({1} total pongs).", done, pings * done));}})
          .asActor();}

    public static void main(final String[] args)
      {final int actors  = Integer.parseInt(args[0]);
       final int pings   = Integer.parseInt(args[1]);
       final int threads = Integer.parseInt(args[2]);
       new PingPong(Executors.newFixedThreadPool(threads), actors, pings).start();}

    public void start()
      {// We will use one Pong actor...
       final Pong pong = new Pong(s);
       // ...and an awful lot of Ping actors.
       for (int i = 1; i <= actors; i++)
           {new Ping(s, pings, pong, i, callback).start();
            if (actors < 10 || i % (actors / 10) == 0)
               System.out.println(MessageFormat.format("{0} actors started.", i));}}}

What follows is an example run of this Java program, with a million concurrent Ping actors pinging 7 times each. Each actor takes about 300 bytes of memory, so we need a sizable heap for one million of them, but 19 real Java Threads handle this quite nicely on my 8-core machine.

$ time java -Xmx600m -cp ../../../.build/classes/src:. concurrent.PingPong 1000000 7 19
100,000 actors started.
200,000 actors started.
300,000 actors started.
400,000 actors started.
500,000 actors started.
600,000 actors started.
700,000 actors started.
800,000 actors started.
900,000 actors started.
1,000,000 actors started.
100,000 actors done (700,000 total pongs).
200,000 actors done (1,400,000 total pongs).
300,000 actors done (2,100,000 total pongs).
400,000 actors done (2,800,000 total pongs).
500,000 actors done (3,500,000 total pongs).
600,000 actors done (4,200,000 total pongs).
700,000 actors done (4,900,000 total pongs).
800,000 actors done (5,600,000 total pongs).
900,000 actors done (6,300,000 total pongs).
All done.

real    1m16.376s
user    3m53.612s
sys     0m10.924s

As you can see, these simple tools, built on basic components of the Java 5 concurrency library and paired with powerful abstractions from functional programming, make it seem like we have millions of tasks running concurrently. We get a virtually unbounded degree of concurrency while never taking a lock or making a blocking call. The number of concurrent tasks is limited only by the size of your heap.

Again, everything you see here, and more, has been released just recently as part of the Functional Java library. So head over to their download page and put those extra cores to use with Java, today!

28 thoughts on “Higher-Order Java Parallelism, Part 3: Threadless Concurrency With Actors”

  1. Presumably you still have some way of blocking for, say, running automated tests. Or have you a way of having those run concurrently?

    I’d be interested to see if you could build something on top of this to make it look like you have continuations running pure functions, rather than side effects.

  2. Ricky: You can have a sequential strategy that simply makes the blocking call instead of running in a thread.

    As for making it look like we have functions rather than effects, the Promise class is the beginning of an implementation for that kind of thing.

  3. Holger: The first example shows you how to do blocking I/O. I’d rather see you do nonblocking I/O, though ;)

  4. Ahem ;) I meant blocking I/O and freeing up the executing Thread, continuation-style, for the duration of the syscall.
    It seems to me that without continuations (which require a custom compiler or bytecode-fu to maintain the actor state, like Kilim/Rife etc. do) this seems little more than a thread pool with pseudo-functional sugarcoating on top. Admittedly very tasty sugarcoating. :)
    Another question: if the thread pool is bounded, isn’t there an inherent danger of deadlocking with the current QueueActor? Not in the n-ping:1-pong example, but for example when the actors would form interacting groups. As far as I understand that is one of the reasons why Erlang’s message-send is always nonblocking.
    My interest in all this is something that is guaranteed to be deadlock-free (which means noblocking) yet with a bounded number of OS threads to prevent stack runaway/overallocation/overloading and better deterministic behaviour.

  5. Holger: It seems what you want is nio. To your second question, I don’t see a danger of deadlocking with QueueActor since it’s totally non-blocking. I guess you could still design interacting actors (QueueActors or otherwise) that wait for messages from each other indefinitely.

  6. We need to stop looking at the world through our object shades and start looking out an actor window. I have recently started looking at DataRush which introduced me to the flow based paradigm which seems very similar to the actor paradigm. Where you build a graph based program where the parallelism implicit. Where nodes(actors) are the basic computational unit and messages flow between them. I would recommend checking out their free flow based java library at http://www.pervasivedatarush.com/downloads.

  7. Great article!

    I agree with your approach. In fact, I’ve implemented a similar library in Java. You can find a Ping Pong Example with Jetlang on Gooogle Code.

    The jetlang library is a java port of Retlang which is a .NET library for message based concurrency. Retlang was originally influenced by Erlang.

    In jetlang, a ProcessFiber is roughly equivalent to your QueueActor. Channels allow messages to be exchanged between fibers.

    Take a look if you get a chance. Feedback is appreciated.

    Good luck on your project. I will continue to monitor your blog for updates.

    Mike

  8. Pingback: Higher-Order Java Parallelism, Part 4: A Better Future « Apocalisp

  9. Let’s look at this your Bob example once again. In the original post, apocalisp wrote:

    “For example, when Bob comes into my office and asks me for this month’s TPS reports, he doesn’t stand around and wait for them, nor does he check periodically to see if I have them done, and he certainly doesn’t go to sleep to periodically wake up and poll me for them. He will continue with his work (or whatever it is he does all day), since I have instructions to send him those TPS reports as soon as they’re ready. We’re independent and concurrent actors, so we don’t wait on each other.”

    Now do i get it correctly, that (in the sense of the “classical” actor model as in erlang, scala actors, etc.) this behaviour of Bob not waiting for me getting those reports ready is achieved through Bob continuing his work on the SAME THREAD, but DIFFERENT actor.. right? ONE Bob’s actor will be in-active (or you could say, “blocked”) and waiting on a “receive” statement for a message with TPS reports, but as multiple actors run in one thread and can be switched, this doesn’t yet block the whole thread.

    So it would somehow be easier to think about it as Bob being a *thread* (and continuing his work by activating other actors on this same thread while waiting for the report). And one of the *actors* on Bob’s thread is *still* waiting for the report (as opposed to the sentence “We’re independent and concurrent actors, so we don’t wait on each other.”).

    Am i… am i more or less right in this? :S

  10. Besides, if i’d like to refer to this article [writing my bachelor thesis], by what name should i do it? Well I can’t cite “apocalisp” in my university work, can i:D maybe you could send drop me a line about how you would like to be referred to my mail sander -dot sonajalg -at mail -dot ee

  11. Sander,

    The actor model doesn’t say very much about threads. It’s perfectly reasonable for each actor to run in its own thread, or for all actors to run in the same thread. It’s not only implementation-specific, but can vary at run-time. The idea with the “TPS report” example is that the Bob actor can continue doing work while the report is being generated. Instead of his waiting on my work, I will interrupt his work when my work is done.

    It’s easiest to think of actors in terms of work queues. Bob puts a request for TPS reports on my work queue and continues processing whatever is in his work queue. When I’m ready to pop my queue (when I complete whatever I’m doing right now), I start working on the reports. When I’m done, I put the reports in his work queue and he’ll receive them when he retrieves them from his queue.

  12. I mean.. if you for example would consider the following Scala Actor framework (pseudo-)code [i know i made a thousand mistakes cause i’ve never actually programmed scala]. It tries to implement this me-Bob (called Alice-Bob, in this case) scenario.

    // Messages classes

    case class AskTPSReport(a: Actor)
    case class TPSRequiringTaskBlocking()
    case class TPSRequiringTaskNonBlocking()
    case class VeryTimeConsumingTask()
    case class TPSReportReceived(report: TPSReport)

    class TPSReport {
    // TODO
    }

    // Actors

    class Alice extends Actor {
    override def run(): unit = startNewTask()

    def startNewTask(): unit = {
    Console.println("[Alice] ready for new work! What's my next task?")
    receive {

    case AskTPSReport(asker)
    Console.println("[Alice] creating TPS report..")
    // spend some time writing it..
    var report = TPSReport("contents_of_report")
    asker ! TPSReportReceived(report)
    startNewTask()

    // other work..
    case _ =>
    Console.println("[Alice] doing some other work..")
    // spend some time
    startNewTask()

    }
    }
    }

    class Bob extends Actor {
    override def run(): unit = startNewTask()

    def startNewTask(): unit = {
    Console.println("[Bob] ready for new work!")
    receive {

    // a BLOCKING version of this task.. actor will be blocked until receives the TPSReport
    case TPSRequiringTaskBlocking() =>
    Console.println("[Bob] somebody asked for a TPS report from *ME*.. redirecting to Alice")

    // TODO reference ‘alice’ missing!
    alice ! AskTPSReportBlocking(this)

    // BLOCK: Bob will “hang around and wait” until Alice gets the report ready
    receive {
    case TPSReport(data) =>
    // complete the rest of this task..
    parseTPSReport(report)
    startNewTask()
    }

    startNewTask()

    // a NON-BLOCKING version.. actor will ask for report and skip the rest of the task for now
    case TPSRequiringTaskNonBlocking() =>
    Console.println("[Bob] somebody asked for a TPS report from *ME*.. redirecting to Alice")

    // TODO reference ‘alice’ missing!
    alice ! AskTPSReportNonBlocking(this)

    // SKIP the rest of this task until alice has sent back the report.. continue with other work
    startNewTask()

    // .. continue with the task that needed TPS record when it has arrived
    case TPSReportReceived(report) =>
    parseTPSReport(report)
    startNewTask()

    case VeryTimeConsumingTask() =>
    Console.println("[Bob] got new task.. i'm gonna be busy on this for next 2 days")
    // spend 2 days
    startNewTask()

    // other work..
    case _ =>
    Console.println("[Bob] doing some other work..")
    // spend some time
    startNewTask()

    }
    }
    }

    1) the *blocking* version of this method (TPSRequiringTaskBlocking) doesn’t definately fulfill the requirements you set by your example, as bob virtually hangs around or stands on Alice’s door until she gets the report finished

    2) the *NON*-blocking version of this method (TPSRequiringTaskNonBlocking) almost fulfills your requirements, until you assume that Bob runs startNewTask reasonably often. But what about the case when it is busy executing some task that takes really-really-really long time..? (VeryTimeConsumingTask in our example). Then it DOESN’T get “interrupted”, as you required, because it doesn’t run the startNewTask cycle again until it has finished it’s current long task. So basically, i don’t see how this framework does INTERRUPT Bob that it

    (This IS a very realistic scenario: imagine that i have a 4-core processor and i create 4 parallel actors on separate threads to perform a very long search task (let’s say: finding a discrete logarithm of some long integer modulo something). When one actor actually FINDS a solution, it would need to INTERRUPT the others and say “hey guys, i already found the solution: you can stop working now!” But in our case, as the other actors are being BUSY on their previous work and not checking their mailboxes, they wouldn’t get INTERRUPTED unless they explicitly go and check their mailboxes (after some timeout for example).

    It feels to me that the only way to implement this interrupting-thing would be to somehow *clone* bob on the same thread: Bob1 would be waiting for the TPS report, while Bob2 would be running the VeryTimeConsumingTask. When the message arrives, control on the thread is instantly transformed to Bob1.. i think?

    The thing is that i’m not 100% sure that scala actor framework would actually work this way (would it..? anybody..?), and i don’t have the *slightest* idea how the Functional Java would work here, as i haven’t had time to study it yet. How would it..?

  13. And here is the text commenting this code, that the moderation didn’t let through before (because of the code, i hope)

    Well, if you consider the example (behind the link in previous post) Scala Actor framework (pseudo-)code [i know i made a thousand mistakes cause i’ve never actually programmed scala]. It tries to implement this me-Bob (called Alice-Bob, in this case) scenario.

    1) the *blocking* version of this method (TPSRequiringTaskBlocking) doesn’t definately fulfill the requirements you set by your example, as bob virtually hangs around or stands on Alice’s door until she gets the report finished

    2) the *NON*-blocking version of this method (TPSRequiringTaskNonBlocking) almost fulfills your requirements, until you assume that Bob runs startNewTask reasonably often. But what about the case when it is busy executing some task that takes really-really-really long time..? (VeryTimeConsumingTask in our example). Then it DOESN’T get “interrupted”, as you required, because it doesn’t run the startNewTask cycle again until it has finished it’s current long task. So basically, i don’t see how this framework does INTERRUPT Bob that it

    (This IS a very realistic scenario: imagine that i have a 4-core processor and i create 4 parallel actors on separate threads to perform a very long search task (let’s say: finding a discrete logarithm of some long integer modulo something). When one actor actually FINDS a solution, it would need to INTERRUPT the others and say “hey guys, i already found the solution: you can stop working now!” But in our case, as the other actors are being BUSY on their previous work and not checking their mailboxes, they wouldn’t get INTERRUPTED unless they explicitly go and check their mailboxes (after some timeout for example).

    It feels to me that the only way to implement this interrupting-thing would be to somehow *clone* bob on the same thread: Bob1 would be waiting for the TPS report, while Bob2 would be running the VeryTimeConsumingTask. When the message arrives, control on the thread is instantly transformed to Bob1.. i think?

    The thing is that i’m not 100% sure that scala actor framework would actually work this way (would it..? anybody..?), and i don’t have the *slightest* idea how the Functional Java would work here, as i haven’t had time to study it yet. How would it..?

  14. Yes, now i’m pretty sure at least about how Scala Actor Framework would handle this:

    1) in the general case, you would assume that Bob does not receive a task that takes 2 days to execute, e.g. it is sufficient to just use the TPSRequiringTaskNonBlocking style implementation and, after the TPS reoport has arrived, Bob will get it (okay, you would have to implement a priority message stuff also somehow, to make sure that Bob immediately handles the TPSReportReceived message, and not anything else waiting in his mailbox *before* this desired message)

    2) in the case where Bob would be re-acting to some message for a long-long time (message VeryTimeConsumingTask, in our case), the only thing basically that would help would be to create a NEW Bob that is blocked on the “receive” statement waiting for the TPSReportReceived message. Then the *old* Bob could still go execute VeryTimeConsumingTask, while the new one being ready to react to the report the very moment it arrives. Right..? I think so..

  15. Sander,

    You should make sure you understand the difference between a thread blocking and an actor blocking. In this context, threads are a limited operating system resource and the system can only start so many. When a thread is blocked, it is waiting for another thread to modify some shared state. When all threads are blocked, the entire system halts, since no execution can take place to advance it to the next state. This is a resource starvation problem.

    When an actor is “blocked”, it is simply waiting for a message. The only OS resource expended on a blocked actor is memory on the heap. The system will allocate threads to carry other actors forward and the expected message will eventually arrive as long as there are threads executing any actors. It’s still possible to arrive at a situation where all actors are waiting for messages, but this is not a resource starvation problem. This is the halting problem.

    In your example, with an actor executing a task requiring two days, ask yourself if anyone works this way in real life. Let’s say Bob’s task was “read Atlas Shrugged”. Weighing in at over 1000 pages, this book would take a while to read. But in reality, we would break this task down into smaller chunks. The Bob actor would have the option of stopping at each chapter and putting the remainder of the task back on his own work queue.

    Spawning a second instance of Bob will not necessarily be of any help in your example, since the system could be running in a single thread, in which case the TPS reports would not be processed until the system’s only thread had run Bob’s current task. If you want other actors to interrupt Bob while he’s doing his work, you have to design him and his work to be interruptible.

    I highly recommend familiarising yourself with the Actor model proper. There are many good resources linked from the Wikipedia article, but I find the following paper best:

    http://dspace.mit.edu/bitstream/handle/1721.1/6952/AITR-844.pdf?sequence=2

  16. Thanks-a-lot.. your posts clear these issues a bit for me:)

    Of course, breaking things up into smaller chunks definately makes sense, and as it seems, is even inevitable here.

    Anyway, about your extreme case of running *all* the actors on the same thread.. would doing this actually *ever* make sense? As I understand from P.Haller’s Scala Actors articles, at least Scala’s send/receive semantics work in such way that if everything would run on the same thread, even the *execution order* of things would be 100% deterministic (and of course, there wouldn’t any longer be any actual parallelism of separate actors doing something at the very same time). Is this then even “concurrent programming” any more? :p

  17. Sander,

    It absolutely makes sense to run a system with thousands of actors and only one thread, for example if you have only one processor. It’s the equivalent of running a web browser and an email client at the same time on a machine with only one CPU. At the microprocessor level, only one program is running at a given instant, since the program counter can only point to a single instruction. But in the abstract the programs are running concurrently because the operating system switches the processor between them, constantly advancing the state of each program.

    Remember, time is relative. When you say that single-threaded system doesn’t do two things “at the very same time”, you’re speaking relative to you, an external observer. But what is time? Time is simply the fact that the universe is causal and that causality is sequential.

    So you see, relative to a given actor, time literally stops while its sequence of instructions is not being executed. It suspends at t(n), and receives its next message at t(n+1). Relative to us, eons could pass, but to the actor it is simply the next instant. The important thing is that the actor cannot know the difference, and does not need to know whether it’s running on a machine with one core or a billion of them. This means that we don’t have to design our systems differently depending on the capabilities and architecture of the machine, and that’s a large benefit.

  18. > Relative to us, eons could pass

    That reminds me of one of my favorite slides from my actors talk. In trying to explain the difference between synch (call with continuation/callback) vs. async (fire and forget) I get to a place where deadlock occurs. I ask people what they expect happens next and then follow up with “well, I don’t know if it’s the _next_ thing to happen, but at some point, the sun explodes” and put up a nice picture of a supernova. Gets a laugh.

    Interesting discussion. In actor work some time ago, I had the long computation issue where another actor may have made the computation of the current actor moot. Periodically the actor would send itself a null message and return, allowing other messages to be processed, including updates about solved problems; upon receiving the null message, it could check whether the computation was still needed and return if not. A kind of polling, which I didn’t really like, but I think it is inevitable in that kind of problem decomposition. Dramatis (ruby/python actor library) has an even nicer interface where you can just call “yield”, which does the message/response under the covers and makes the application code a bit cleaner looking, but in the end, it’s the same communication.

  19. great post.

    Is there any way to stop individual actors without shutting down the executor service and stopping all actors?
    Something like sending ‘poison pill’ to actors so they terminate themselves?

    I tested the Ping Pong example with some modification.
    When using multiple threads, the actors processed messages concurrently, but when I used a single thread to run multiple Ping actors, the console output of the ping actors curiously seemed sequential rather than concurrent.

    I gave each actor unique ID and outputted all pings to the console with actor ID and timestamp.
    The actors were outputting messages to the console one after another instead of outputting at the same time. The second actor’s output didn’t start until the first actor had finished outputting its ping messages, and the subsequent actors behaved the same way.

    I was expecting all actors to process message at the same time concurrently.

    Did I miss something about actors entirely?

    Is the concurrency directly dependent on the number of threads in ExecutorService?

    • Adrian,

      I’m not sure what it would mean to “stop” an actor in general. If you stop sending messages to it, it will stop. If it’s already processing a message, then you would have to terminate the thread it’s running in if you wanted to stop it. A “poison pill” is specific to your application, I think. You could, for example, have the Ping actor accept an Either, where Kill instructs the actor to ignore further Pongs and notify its callback that it has finished.
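      A hedged sketch of that idea: Functional Java’s Either would let the message type be Either<Kill, Ping>, but to keep this self-contained a sentinel String stands in for the Kill case. Note that unlike the threadless actors of the post, this sketch dedicates a thread to the mailbox, which makes the self-termination easy to see.

      ```java
      import java.util.concurrent.BlockingQueue;
      import java.util.concurrent.ExecutorService;
      import java.util.concurrent.Executors;
      import java.util.concurrent.LinkedBlockingQueue;
      import java.util.concurrent.TimeUnit;
      import java.util.concurrent.atomic.AtomicInteger;

      public class PoisonPillActor {
        static final String KILL = "KILL"; // the poison pill sentinel

        static int runActor(String... messages) throws InterruptedException {
          final BlockingQueue<String> mailbox = new LinkedBlockingQueue<String>();
          final AtomicInteger handled = new AtomicInteger();
          ExecutorService pool = Executors.newSingleThreadExecutor();
          pool.submit(new Runnable() {
            public void run() {
              try {
                String msg;
                while (!(msg = mailbox.take()).equals(KILL))
                  handled.incrementAndGet(); // handle an ordinary message
                // KILL received: the actor terminates itself; other actors
                // sharing the executor would be unaffected
              } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
              }
            }
          });
          for (String m : messages) mailbox.put(m);
          pool.shutdown();
          pool.awaitTermination(5, TimeUnit.SECONDS);
          return handled.get();
        }

        public static void main(String[] args) throws InterruptedException {
          System.out.println(runActor("ping", "ping", KILL)); // prints 2
        }
      }
      ```

      The pill terminates only the actor it is sent to, which is the behaviour Adrian asked about: no need to shut down the executor service.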

      The effect you’re seeing with sequential output when using a single thread ExecutorService is probably just an effect of the way the ExecutorService works. I don’t know, but I’d suspect that when the single thread parks, it will pick up tasks submitted by itself first, due to thread-local memory.
