This post has been moved to
http://blog.higher-order.com/blog/2009/04/27/a-critique-of-impure-reason/
Today, we’re going on a journey. It is a sojourn to the outer limits of the expressiveness of the Java type system, and to the edge of what can be considered sane programming. This is definitely one for the power users. You will need a firm grasp of the Java language, and an iron constitution for type annotations. But the reward will be something far greater than any treasure: understanding, entertainment, and perhaps even enlightenment. Remember that we choose to do these things in Java, not because they are easy, but because they are hard. Now then, to the ships.
A Most Versatile Vessel
In Java, we can create a list that contains values of type A
, by constructing a value of type List<A>
. The type system will enforce that each element in the list is in fact of type A
. But sometimes we want lists of values that aren’t necessarily of the same type. Normally, for such a purpose, we would use a heterogeneous list, which in Java is just the raw list type List<?>
or List<Object>
. Since every class in Java is a subclass of Object (and now that we have autoboxing), such a list can contain any Java value. There are many kinds of situation where this would be necessary. For example, a row of database results will comprise values that are not all of the same type.
However, there’s a problem with the raw list approach. In using the List<?>
type, we are dispensing with the type system. When you get a value from the list, how do you know what it is? How do you know which operations it supports? Well, you will have to defer that discovery until runtime, and use explicit type casting. Most will shrug at this and say: “So what?” After all, this is what we did anyway, before generics. Ah, but what if we don’t have to? Can we create generic heterogeneous collections that are type-safe? Yes, we can. Sort of.
Products of Types
What we would like to see is if it’s possible to declare some constraints on the types of a heterogeneous collection, to achieve essential type-safety while maintaining the extensibility of a list. Of course, it’s easy to create types that are the product of two or more types:
public abstract class P2<A, B> {
  public abstract A _1();
  public abstract B _2();
}
But the length of this kind of product is as fixed as the length of a string in Pascal. It isn’t extensible, so it’s more like a type-safe heterogeneous array than a list. If you want products of different lengths, you will need to declare separate classes for P3<A, B, C>
, P4<A, B, C, D>
, etc. What we’re trying to achieve is a product of arbitrary length, whose length might even vary at runtime. There’s no reason we couldn’t create products of products in a chain, like P2<A, P2<B, P2<C, D>>>
, and this is more or less the approach that we will take.
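To make that nesting idea concrete, here is a minimal sketch (my own completed version of the P2 sketch above, with a convenience constructor; the names are illustrative, not a library API):

```java
// A minimal product of two types, plus a static constructor. Nesting P2s
// gives a "list-like" product of three values: P2<String, P2<Integer, Boolean>>.
public class NestedProducts {
    public static final class P2<A, B> {
        public final A _1;
        public final B _2;
        public P2(A a, B b) { _1 = a; _2 = b; }
    }

    public static <A, B> P2<A, B> p(A a, B b) { return new P2<A, B>(a, b); }

    public static void main(String[] args) {
        P2<String, P2<Integer, Boolean>> x = p("One", p(2, false));
        System.out.println(x._1);     // the head of the chain
        System.out.println(x._2._1);  // the second element
        System.out.println(x._2._2);  // the third element
    }
}
```

Note that the full type of the chain spells out the type of every element, which is exactly the property the HList below exploits.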
Introducing HList
To achieve our goal, we’re going to implement linked lists in the type system. Let’s remind ourselves what a linked list looks like. A List<T>
is essentially either the empty list or a value of type T
paired with a List<T>
. In Java, using the List<A>
type from Functional Java, an unsafe heterogeneous list might be constructed in a manner like the following:
List<?> x = cons("One", cons(2, cons(false, nil())));
The cons
method constructs a list, and the nil
method returns the empty list. With just these two methods, we can create any homogeneous list. A list has two methods to access its members, head()
which returns the first element, and tail()
which returns the rest of the list. Getting the head or tail of the empty list is an error at runtime.
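That structure can be sketched directly. The following is a toy homogeneous cons list of my own, not Functional Java's List, showing the two constructors and the two accessors described above:

```java
import java.util.NoSuchElementException;

// A toy immutable singly linked list: either the empty list, or a head
// value paired with a tail list. Not Functional Java's List.
public class ConsList<T> {
    private final T head;
    private final ConsList<T> tail;
    private final boolean empty;

    private ConsList(T head, ConsList<T> tail, boolean empty) {
        this.head = head;
        this.tail = tail;
        this.empty = empty;
    }

    public static <T> ConsList<T> nil() {
        return new ConsList<T>(null, null, true);
    }

    public static <T> ConsList<T> cons(T head, ConsList<T> tail) {
        return new ConsList<T>(head, tail, false);
    }

    // Getting the head or tail of the empty list is a runtime error,
    // exactly as described above.
    public T head() {
        if (empty) throw new NoSuchElementException("head of empty list");
        return head;
    }

    public ConsList<T> tail() {
        if (empty) throw new NoSuchElementException("tail of empty list");
        return tail;
    }
}
```

Construction then mirrors the style used in the article: cons(1, cons(2, nil())).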
Let’s now take a step up into the type system, and say that a list of types is either the empty list or a type paired with a list of types. This gives rise to our heterogeneous list type:
public abstract class HList<A extends HList<A>> {
  private HList() {}

  private static final HNil nil = new HNil();

  public static HNil nil() {
    return nil;
  }

  public static <E, L extends HList<L>> HCons<E, L> cons(final E e, final L l) {
    return new HCons<E, L>(e, l);
  }

  public static final class HNil extends HList<HNil> {
    private HNil() {}
  }

  public static final class HCons<E, L extends HList<L>> extends HList<HCons<E, L>> {
    private E e;
    private L l;

    private HCons(final E e, final L l) {
      this.e = e;
      this.l = l;
    }

    public E head() {
      return e;
    }

    public L tail() {
      return l;
    }
  }
}
That’s not a lot of code, and it’s all relatively straightforward Java. The HList
class is parameterised with a parameterised subclass of itself. There are only two concrete subclasses of HList
that can possibly occupy that slot: the type HNil
and the type constructor HCons
. These represent the empty list and the list constructor, respectively. HCons
takes two type parameters, the first representing the first element of the list, and the second being another HList
, allowing us to form a chain of them. HNil
does not take type parameters, so it terminates the chain.
As with regular old lists, you can access the head()
and tail()
of the list. Note, however, that the fact that you cannot get the head or tail of the empty list is now enforced by the type system. There’s a nil
method to get the empty list, and a cons
method to construct a nonempty list, just like with regular lists.
Here’s an example of how we would construct a heterogeneous list using this new type:
HCons<String, HCons<Integer, HCons<Boolean, HNil>>> x = cons("One", cons(2, cons(false, nil())));
This is more verbose than the unsafe version before, but not by much. Obviously, the HList
example assumes a static import of HList.cons
and the List<?>
example assumes a static import of List.cons
. Using the type-safe version is, however, much nicer. Compare these two contrived examples:
if (x.tail().tail().head()) {
  return x.head().length() == x.tail().head();
}

if ((Boolean) x.index(2)) {
  return ((String) x.head()).length() == (Integer) x.index(1);
}
The latter, of course, offers no static guarantees and may throw ClassCastExceptions
, or we might inadvertently get the head or tail of the empty list at runtime. The former will always work as long as it compiles, guaranteed.
Concatenating HLists
Now let’s do something more interesting with these lists. Notice that the cons methods for both type-safe and unsafe lists prepend an element to a list rather than appending. Sometimes we want to append a list to the end of another. This is unsurprisingly uncomplicated for unsafe lists:
List<?> c = a.append(b);
Behind the scenes, we can think of append as reversing the first list and consing each element to the second list in reverse order. Doing that for HList
is a little more involved. We have to construct a chain of types in exactly the right way, at compile-time.
Appending an HList
to another is a function that takes two HList
-valued arguments and returns an HList
. Using first-class functions from Functional Java, the append operation for HLists
of specific types L
and R
would be a function of the following type:
F2<L extends HList<L>, R extends HList<R>, LR extends HList<LR>>
Where LR
is the type of the concatenated HList
. Now, since we necessarily have the two arguments, we know the specific types of L
and R
. Since Java doesn’t have type inference, it cannot automatically figure out the specific type of LR
. We will have to supply it as a type annotation. Not to worry. Even though Java doesn’t infer types, it can be coerced into doing some type arithmetic. All we have to do is a little inductive reasoning.
Types as Formulae
According to the Curry-Howard isomorphism, a program is a proof, and the hypothesis that it proves is a type for the program. In this sense, Java’s type system is a kind of crude theorem prover. Put another way, a type is a predicate, and values of that type represent the terms for which the predicate holds. The function type above therefore asserts that for any two HLists
, L
and R
, there exists some program to derive the HList LR
. The function type by itself does not put any constraints on LR
, however. It can be derived by any function, not just the concatenation function. We will remedy that presently. We need a formula that states that the two types L
and R
imply a third type LR
which is the HList
concatenation of L
and R
, given some concatenation function. Here is the type that represents that formula:
public static final class HAppend<L, R, LR> {
  private final F2<L, R, LR> append;

  private HAppend(final F2<L, R, LR> f) {
    append = f;
  }

  public LR append(final L l, final R r) {
    return append.f(l, r);
  }
}
At this point, HAppend
is still just a hypothesis without evidence. Remember that a value of a type is proof of the formula that it represents. So we will need to supply two proofs in the form of constructors for values of this type; one for the base case of appending to the empty list HNil
, and another for the case of appending to an HCons
. The base case is easy. Appending anything to the empty list should result in that same thing. So the HAppend
constructor for appending to the empty list looks like this:
public static <L extends HList<L>> HAppend<HNil, L, L> append() {
  return new HAppend<HNil, L, L>(new F2<HNil, L, L>() {
    public L f(final HNil hNil, final L l) {
      return l;
    }
  });
}
The case for the nonempty list is not quite as easy. Consider its type:
public static <X, A extends HList<A>, B, C extends HList<C>, H extends HAppend<A, B, C>> HAppend<HCons<X, A>, B, HCons<X, C>> append(final H h)
Read the return type first. This returns an HAppend
that appends some B
to an HCons<X, A>
. The type of the head of the first list (X
) becomes the type of the head of the concatenated list. The tail of the concatenated list is C
. The type constraints state that C
must be an HList
, and that there must exist some way to append B
(the second list) to A
(the tail of the first list) so that they make C
. We must supply proof that this last constraint holds, and you’ll see that such a proof is in fact supplied as an argument (in the form of the value h
).
What this is saying is that, given the premise that A
and B
can be concatenated, the concatenation of HCons<X, A>
and B
can be inferred. A value of type HAppend<A, B, C>
is precisely proof of the hypothesis that A
and B
can be concatenated, since there are only these two cases and we’ve supplied a proof for both. In other words, if we can append to the empty list, then that’s proof enough that we can append to a list of one element, which proves that we can append to a list of two elements, and so on. Given this, we can construct a chain of proofs. This concatenated proof, then, is a function that concatenates lists of the corresponding types.
OK, so how do we use this? Well, here’s an example program that appends one list to another:
public class HList_append {
  public static void main(final String[] args) {
    // The two lists
    final HCons<String, HCons<Integer, HCons<Boolean, HNil>>> a =
      cons("Foo", cons(3, cons(true, nil())));
    final HCons<Double, HCons<String, HCons<Integer[], HNil>>> b =
      cons(4.0, cons("Bar", cons(new Integer[]{1, 2}, nil())));

    // A lot of type annotation
    final HAppend<HNil,
      HCons<Double, HCons<String, HCons<Integer[], HNil>>>,
      HCons<Double, HCons<String, HCons<Integer[], HNil>>>> zero = append();
    final HAppend<HCons<Boolean, HNil>,
      HCons<Double, HCons<String, HCons<Integer[], HNil>>>,
      HCons<Boolean, HCons<Double, HCons<String, HCons<Integer[], HNil>>>>> one = append(zero);
    final HAppend<HCons<Integer, HCons<Boolean, HNil>>,
      HCons<Double, HCons<String, HCons<Integer[], HNil>>>,
      HCons<Integer, HCons<Boolean, HCons<Double, HCons<String, HCons<Integer[], HNil>>>>>> two = append(one);
    final HAppend<HCons<String, HCons<Integer, HCons<Boolean, HNil>>>,
      HCons<Double, HCons<String, HCons<Integer[], HNil>>>,
      HCons<String, HCons<Integer, HCons<Boolean, HCons<Double, HCons<String, HCons<Integer[], HNil>>>>>>> three = append(two);

    // And all of that lets us append one list to the other.
    final HCons<String, HCons<Integer, HCons<Boolean, HCons<Double, HCons<String, HCons<Integer[], HNil>>>>>> x =
      three.append(a, b);

    // And we can access the components of the concatenated list in a type-safe manner
    if (x.tail().tail().head())
      System.out.println(x.tail().tail().tail().tail().tail().head()[1] * 2); // 4
  }
}
Holy pointy brackets, Batman! Do we really need all of that? Well, look at what it’s doing. It’s constructing a concatenation function of the appropriate type, by supplying the premise at each step. If this seems mechanical, then that’s because it is. There is only one possible implementation for the HAppend
we need, but Java does not have any mechanism for figuring this out, nor does it provide a facility for the programmer to tell it how.
Contrast that to Scala. The above is a clear example of where Scala’s implicit arguments come in handy. If we import this to Scala, we can make both of the append
functions implicit, and we can further make the H
argument to the append
function for nonempty lists implicit as well. There can be only one possible implementation of each function, so it can be declared once and used implicitly wherever proofs of the corresponding types are required. Jesper Nordenberg has implemented an HList library for Scala that demonstrates this well. With implicits and Scala, the whole middle section of our program is condensed from 12 lines of type annotations to just:
val x = a.append(b)
Now, if you’re really into this Java stuff, you’re probably thinking: “implicits are just dependency injection”. Well, in a sense, you would be right. Both dependency injection and inheritance are degenerate forms of implicits. However, there is currently no dependency injection framework for Java that can abstract over type constructors such that it provides injection of parameterised types with injection of type parameters also. If you can prove me wrong, by all means send me evidence in the form of working code.
Conclusion
Clearly, Java is not very useful for this kind of type-safe programming. I was actually quite surprised that you can do this in Java at all, but we’ve definitely hit the outer boundary of what can be considered reasonably expressible.
The code you’ve seen in this article uses the new HList
package that was released with Functional Java 2.16, and is based on the Haskell HList library by Oleg Kiselyov.
This is the fourth installment in a series of posts about making highly concurrent software easier to write in Java. Previous entries are available here: part 1, part 2, part 3. However, I aim to make it possible to follow along even if you haven’t read the previous posts.
I Have Seen the Future…
If you have used the Java 5 concurrency API at all, you will have come across the Future
class. For example, when you submit a Callable<Integer>
to an ExecutorService
, what you get back is a Future<Integer>
which represents a computation, running concurrently, that will (hopefully) result in an integer at some time in the future. Once you have the Future<Integer> fi
, you can later get the integer out of it by calling fi.get().
That’s all fine and dandy, but let’s say you want to do something like add two future integers. You could do something like this:
int sum = x.get() + y.get();
This will block the current thread until both of those integers are available, then add them together. But why wait for that? If you have an ExecutorService
, you can create a new Future
that computes the sum:
Future<Integer> sum = executorService.submit(new Callable<Integer>() {
  public Integer call() throws Exception {
    return x.get() + y.get();
  }
});
Now the current thread can continue, but we’ve started a new thread that does nothing until the values of x
and y
have both been calculated by yet another thread.
We’re beginning to see a problem here. We want to be able to compose Futures
together to form new Futures
, but find that the number of threads required to compose n Future
values is O(n). If we have a fixed-size thread pool, we’ll run into starvation. If we have an unbounded thread pool, then we might start more threads than the operating system can handle, most of them doing nothing at all but waiting for other threads.
This should all sound very familiar. Threads are a space resource. What kind of processes are O(n) in their space requirement? If you said “linearly recursive processes”, go to the head of the class. Intuitively, for the same reason that we can find iterative versions of any recursive algorithm, it seems that we should be able to find an algorithm to accomplish the same thing with O(1) threads.
…and it is a Monad
In the above example, it’s like we’re giving separate instructions, waiting for the results of each in between. Imagine if we were working in an office with Bob and Alice, and we needed work on something from both of them. We might go to Bob and say: “Bob, process this and give me the result”. Then we’d take the result to Alice and say: “Alice, here’s a result from Bob.” It would be much better if we could just go to Bob and say: “Bob, process this and give the result to Alice.” This is the essential difference between recursive and iterative processes.
But wait! We say that kind of thing all the time, in Java:
public Work bob(Work w) { ... } public Work alice(Work w) { ... } public Work bobThenAlice(Work w) { Work b = bob(w); return alice(b); }
Here, we’re instructing a single thread to do some work, then use the result of that work to do more work. What’s really sneaky here is the meaning of the semicolon. In this context, what the former semicolon means is “take the stored value b
from the previous statement and bind it to the free variable b
in the next statement”. You can think of the second semicolon as binding a blank statement over the result of the preceding statement.
Using first-class functions from Functional Java, and using the Callables monad from the first part of this series, you could implement that same behaviour using something like this:
F<Work, Callable<Work>> bob = new F<Work, Callable<Work>>() {
  public Callable<Work> f(final Work w) {
    return new Callable<Work>() {
      public Work call() { ... }
    };
  }
};

F<Work, Callable<Work>> alice = new F<Work, Callable<Work>>() { ... };

public Callable<Work> bobThenAlice(Work w) {
  return Callables.bind(bob.f(w), alice);
}
That’s pretty neat. Now we have a single Callable
that we can run concurrently in a new thread, turning it into a Future
. But wouldn’t it be cool if we could bind Futures?
That would let us take already running computations and combine them in exactly this way. We want a Future
monad.
The problem with combining Futures
is in the nature of the future. This is a deliberate pun on “future”. Think about time for a second. What does it mean to get a value that’s in the future? By the very fact that causality is sequential, it’s a violation of the nature of reality to have something that doesn’t yet exist. It’s the future; you’re not supposed to get stuff out. But, we can put stuff in, can’t we? Yes we can. You know those corny time-capsule things where people put their mountain bikes and Nintendo games for future generations to enjoy later? We can do that with data values. And not just values, but computations.
Here’s One I Made Earlier
The Future
class in the standard Java libraries doesn’t come with any methods for projecting computations into the future. But Functional Java comes with a class called Promise<A> which does have that feature. It makes use of light-weight concurrent processes (actors), and parallel strategies, as described in the previous post, to implement the ability to combine concurrent computations into larger (concurrently executing) structures.
Since it is implemented as a monad, the methods it provides are all the usual suspects: unit, bind, fmap, join, etc. Here’s a quick overview of what they do and why they’re useful. Grasping them doesn’t just help you understand the Promise
class, but any monad you may come across in the (ahem) future.
The unit function, the constructor of Promises
, is just called promise
. It has a few overloaded forms, but here is the simplest one.
public static <A> Promise<A> promise(Strategy<Unit> s, P1<A> p);
The P1
class is just a simple closure with no arguments, provided by the Functional Java library. P1<A>
consists of one abstract method: A _1(). Strategy
represents a method of evaluating P1s
concurrently. I also talk about Strategies
in the previous post, but the long and the short of it is that it has methods to evaluate the P1
value according to some parallelisation strategy, like with a thread pool for instance.
Calling the promise
method starts a concurrent computation, in a manner according to the given strategy, that evaluates p
. The resulting Promise
value is a handle on the running computation, and can be used to retrieve the value later. Promise.claim()
will block the current thread until the value is available, exactly like Future.get()
, but this is generally not what you want to do. Instead, you want to bind.
The essence of the monad pattern is the binding function. If you don’t think you already know what a monad is, but understand this method, then you know more than you think:
public Promise<B> bind(F<A, Promise<B>> f);
This method means that if you have a Promise
of an A
, and a function from an A
to a Promise
of a B
, you can get a Promise
of a B
. I.e. if somebody promises you an A
, and I can promise you a B
for every A
, it’s the same thing as being promised a B
in the first place.
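As a point of comparison, the same binding shape exists on the JDK’s own CompletableFuture (a later addition to the standard library), where thenCompose plays the role of bind. This is an analogy sketch, not the Functional Java API:

```java
import java.util.concurrent.CompletableFuture;

// thenCompose has exactly the bind shape: given a running computation of A
// and a function A -> CompletableFuture<B>, it yields a computation of B
// without blocking the current thread.
public class BindAnalogy {
    public static void main(String[] args) {
        CompletableFuture<Integer> promisedA = CompletableFuture.supplyAsync(() -> 21);

        CompletableFuture<Integer> promisedB =
            promisedA.thenCompose(a -> CompletableFuture.supplyAsync(() -> a * 2));

        // join() blocks, like Promise.claim(); it is used here only to
        // inspect the final result.
        System.out.println(promisedB.join()); // 42
    }
}
```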
The mapping function:
public Promise<B> fmap(F<A, B> f);
This method means that if you have a Promise
of an A
, and a function from A
to B
, you can get a Promise
of a B
. In other words, you can map any function over a Promise
, and fmap
will return you a Promise
of the result. Behind the scenes, fmap
is implemented by calling the bind
and promise
methods. The difference between this method and the bind
method is subtle but important. Calling p.bind(f)
is exactly equivalent to calling Promise.join(p.fmap(f)).
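That equivalence can be checked on any monad. Here is a sketch using the JDK’s Optional as a stand-in: flatMap is bind, map is fmap, and flatMap(identity) is join (the join helper is mine, not Promise code):

```java
import java.util.Optional;
import java.util.function.Function;

// Checks p.bind(f) == join(p.fmap(f)) using Optional as the monad:
// flatMap plays bind, map plays fmap, and flatMap(identity) plays join.
public class BindJoinLaw {
    public static <A> Optional<A> join(Optional<Optional<A>> a) {
        return a.flatMap(Function.identity());
    }

    public static void main(String[] args) {
        Optional<Integer> p = Optional.of(20);
        Function<Integer, Optional<Integer>> f = n -> Optional.of(n + 1);

        Optional<Integer> viaBind = p.flatMap(f);           // bind directly
        Optional<Integer> viaJoinFmap = join(p.map(f));     // join after fmap

        System.out.println(viaBind.equals(viaJoinFmap)); // true
        System.out.println(viaBind.get());               // 21
    }
}
```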
The join function:
public static <A> Promise<A> join(Promise<Promise<A>> a);
Join
is a lot more useful than it looks. If you have a promised Promise,
it’s the same as just having a Promise
. In practise, that means that if you can start a concurrent task that starts a concurrent task, you can combine those into one concurrent task. You can think of it as the semantic equivalent of Thread.join()
, except that our method returns the joined Promise
immediately.
Coming back to Bob and Alice for a second, we can implement bob
and alice
from the Callables
example above, using Promise
instead of Callable
. Both bob
and alice
will construct Promises
using the promise
method, putting whatever work they do inside a P1
. That way, when you call bob
, he’s already doing his work by the time you mention Alice’s name:
final Strategy<Unit> s = Strategy.simpleThreadStrategy();

F<Work, Promise<Work>> bob = new F<Work, Promise<Work>>() {
  public Promise<Work> f(final Work w) {
    return promise(s, new P1<Work>() {
      public Work _1() { ... }
    });
  }
};

F<Work, Promise<Work>> alice = new F<Work, Promise<Work>>() { ... };

public Promise<Work> bobThenAlice(Work w) {
  return bob.f(w).bind(alice);
}
So now that we can build arbitrarily complex concurrent processes from already-running processes, how do we get the final promised value out? Again, you could call Promise.claim()
, but that blocks the current thread as we know. Instead, Promise
comes equipped with a method to(Actor<A>)
which promises to send the value to the given Actor
as soon as it’s ready. Control is returned to the current thread immediately, and the whole computation continues in the background, including the action to take on the final result. Actors
were discussed in the previous post.
A Fully Functional Example
I think an example is in order. The following program calculates Fibonacci numbers using a naive recursive algorithm. This is an algorithm that benefits particularly from parallelisation (barring any other kind of optimisation). If we were just using plain old Future
instead of Promise
, the number of Threads
required to calculate the nth Fibonacci number is O(fib(n)). But since we’re using Promise
, we can use a fixed number of actual Java threads.
package concurrent;
import static fj.Bottom.error;
import fj.Effect;
import fj.F;
import fj.F2;
import fj.Function;
import fj.P;
import fj.P1;
import fj.P2;
import fj.Unit;
import fj.data.List;
import fj.control.parallel.Actor;
import fj.control.parallel.Promise;
import fj.control.parallel.Strategy;
import static fj.data.List.range;
import static fj.function.Integers.add;
import static fj.control.parallel.Promise.join;
import static fj.control.parallel.Promise.promise;
import static fj.control.parallel.Actor.actor;
import java.text.MessageFormat;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
public class Fibs {
private static final int CUTOFF = 35;
public static void main(final String[] args) throws Exception {
if (args.length < 1)
throw error("This program takes an argument: number_of_threads");
final int threads = Integer.parseInt(args[0]);
final ExecutorService pool = Executors.newFixedThreadPool(threads);
final Strategy<Unit> su = Strategy.executorStrategy(pool);
final Strategy<Promise<Integer>> spi = Strategy.executorStrategy(pool);
// This actor performs output and detects the termination condition.
final Actor<List<Integer>> out = actor(su, new Effect<List<Integer>>() {
  public void e(final List<Integer> ns) {
    for (final P2<Integer, Integer> p : ns.zipIndex()) {
      System.out.println(MessageFormat.format("n={0} => {1}", p._2(), p._1()));
    }
    pool.shutdown();
  }
});
// A parallel recursive Fibonacci function
final F<Integer, Promise<Integer>> fib = new F<Integer, Promise<Integer>>() {
  public Promise<Integer> f(final Integer n) {
    return n < CUTOFF ?
        promise(su, P.p(seqFib(n))) :
        f(n - 1).bind(f(n - 2), add);
  }
};
System.out.println("Calculating Fibonacci sequence in parallel...");
join(su, spi.parMap(fib, range(0, 46)).map(Promise.<Integer>sequence(su))).to(out);
}
// The sequential version of the recursive Fibonacci function
public static int seqFib(final int n) {
return n < 2 ? n : seqFib(n - 1) + seqFib(n - 2);
}
}
For all you Scala fans out there, the Functional Java library comes with convenient bindings for Scala as well. Here’s the same thing written in Scala. Note that this does not use the Actor library from the standard Scala libraries, but the same lighter weight Java implementation that the Java example above uses.
package concurrent
import fj.control.parallel.{Actor, Promise}
import fj.Function.curry
import fj.control.parallel.Strategy.executorStrategy
import fjs.control.parallel.Strategy.parMap
import fjs.control.parallel.Promise._
import fjs.control.parallel.Actor._
import Integer.parseInt
import List.range
import java.util.concurrent.Executors.newFixedThreadPool
import fjs.F._
import fjs.F2._
import fjs.P1._
import fjs.P2._
import fjs.data.List._
import fjs.control.parallel.Strategy.ListPar
object Fibs {
val CUTOFF = 35;
def main(args: Array[String]) = {
if (args.length < 1)
error("This program takes an argument: number_of_threads")
val threads = parseInt(args(0))
val pool = newFixedThreadPool(threads)
implicit def s[A] = executorStrategy[A](pool)
// This actor performs output and detects the termination condition.
val out: Actor[List[Int]] = actor{
ns =>
for ((n, i) <- ns.zipWithIndex) printf("n=%d => %d\n", i, n)
pool.shutdown()
}
// A parallel recursive Fibonacci function
def fib(n: Int): Promise[Int] = {
if (n < CUTOFF) promise(() => seqFib(n))
else fib(n - 1).bind(fib(n - 2), curry((_: Int) + (_: Int)))
}
println("Calculating Fibonacci sequence in parallel...")
out ! sequence(parMap[Int, Promise[Int], List](fib, range(0, 46)));
}
// The sequential version of the recursive Fibonacci function
def seqFib(n: Int): Int = if (n < 2) n else seqFib(n - 1) + seqFib(n - 2);
}
Here's an example run of this program using a pool of 10 threads. It runs about 7 times faster that way than with just 1 thread on my 8-way machine. The Scala version is also very slightly faster for some reason.
$ scala -classpath .:../../../build/classes/src concurrent.Fibs 10
Calculating Fibonacci sequence in parallel...
n=0 => 0
n=1 => 1
n=2 => 1
n=3 => 2
n=4 => 3
n=5 => 5
n=6 => 8
n=7 => 13
n=8 => 21
n=9 => 34
n=10 => 55
n=11 => 89
n=12 => 144
n=13 => 233
n=14 => 377
n=15 => 610
n=16 => 987
n=17 => 1597
n=18 => 2584
n=19 => 4181
n=20 => 6765
n=21 => 10946
n=22 => 17711
n=23 => 28657
n=24 => 46368
n=25 => 75025
n=26 => 121393
n=27 => 196418
n=28 => 317811
n=29 => 514229
n=30 => 832040
n=31 => 1346269
n=32 => 2178309
n=33 => 3524578
n=34 => 5702887
n=35 => 9227465
n=36 => 14930352
n=37 => 24157817
n=38 => 39088169
n=39 => 63245986
n=40 => 102334155
n=41 => 165580141
n=42 => 267914296
n=43 => 433494437
n=44 => 701408733
n=45 => 1134903170
Massive win! If we had been using Future
instead of Promise
, we would have needed at least 55 threads (since we’re using a cutoff at 35 and 45 - 35 = 10 and fib(10) = 55). Heck, we could even remove the threshold value altogether and calculate all 45 parallel fibs, in parallel. That would require 1,134,903,170 threads in the absence of non-blocking concurrency abstractions like Promise
and Actor
. We can run that in just one thread if we’d like.