While working on a presentation for JSR 315 – servlet specification 3.0, I realized that a key aspect to understanding asynchronous servlets was to understand how asynchronous processing worked in Java in the first place.

One thing led to another, and soon I was neck deep in executors and executor services – the key building blocks of asynchronous processing in Java.

In this blog post, I summarize what I learned about this topic.


A task is defined as a small independent activity that represents some unit of work that starts at some point, requires some activity or computation, and then terminates. In a web server, each individual incoming request meets this definition. In Java, these are represented by instances of Runnable or Callable.

A thread can be considered to be a running instance of a task. If a task represents some unit of work that needs to be done, then a thread represents the actual performance of that task. In Java, these are represented by instances of Thread.
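To make the distinction concrete, here is a minimal sketch (the class and thread names are mine) that defines a task as a Runnable and runs it on a dedicated Thread:

```java
public class TaskVsThread {
    // Runs the given task on a dedicated thread and waits for it to terminate.
    static String runOnWorker(Runnable task) throws InterruptedException {
        Thread worker = new Thread(task, "worker-1"); // the running instance of the task
        worker.start();
        worker.join(); // block until the task has terminated
        return worker.getName();
    }

    public static void main(String[] args) throws InterruptedException {
        // The task: a small, independent unit of work
        Runnable task = new Runnable() {
            public void run() {
                System.out.println("working in " + Thread.currentThread().getName());
            }
        };
        System.out.println("ran on " + runOnWorker(task));
    }
}
```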

Synchronous processing occurs when a task must be done in the main thread of execution. In other words, the main program must wait until the current task is done, before it can continue on with its processing.

Asynchronous processing is when the main thread delegates the processing of a task to a separate independent thread. That thread is then responsible for the processing associated with the task, while the main thread returns to doing whatever main programs do.

A thread pool represents one or more threads sitting around waiting for work to be assigned to them. A pool of threads brings a number of advantages to the party. First, it limits the cost of setting up and tearing down threads, since threads in the pool are reused rather than created from scratch each time. Second, it can serve to limit the total number of active threads in the system, which reduces the memory and computing burdens on the server. Finally, it lets you delegate the problem of managing threads to the pool, simplifying your application.

At this point, it is important to note that there are three critical mechanisms at work here – there’s the arrival of tasks to be processed (someone is requesting some units of work to be done), there is the submission of tasks to some holding tank, and then there’s the actual processing of each task. The Executor framework in Java separates the latter two mechanisms – submission and processing.

The arrival of requests is generally out of the control of the program – and may be driven by requests from clients. The submission of a request is typically made by requesting that the task be added to a queue of incoming tasks, while the processing is implemented using a pool of threads that sit idle waiting to be assigned an incoming task to process.

Java 5.0 and Thread Pools

Java 5.0 comes with its own thread pool framework, built around the Executor and ExecutorService interfaces. This makes it easier for you to use thread pools within your own programs.

An Executor provides application programs with a convenient abstraction for thinking about tasks. Rather than thinking in terms of threads, an application now deals simply with instances of Runnable, which it then passes to an Executor to process.

The ExecutorService interface extends the simple Executor interface by adding lifecycle methods to manage the threads in the pool. For instance, you can shut down the threads in the pool.

In addition, while Executor lets you submit a single task for execution by a thread in the pool, ExecutorService lets you submit a whole collection of tasks for execution, and obtain a Future object that you can use to track the progress of each task.
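A small sketch of both styles of submission – submit() for a single task, and invokeAll() for a collection – might look like this (the class and method names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SubmitDemo {
    // Submits a batch of Callables via invokeAll() and sums their results.
    static int sumOfSquares(ExecutorService pool, int... values) throws Exception {
        List<Callable<Integer>> tasks = new ArrayList<Callable<Integer>>();
        for (final int v : values) {
            tasks.add(new Callable<Integer>() {
                public Integer call() { return v * v; }
            });
        }
        int sum = 0;
        // invokeAll() submits the whole collection and returns one Future per task
        for (Future<Integer> f : pool.invokeAll(tasks)) {
            sum += f.get(); // get() blocks until that task has completed
        }
        return sum;
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);

        // submit() returns a Future that tracks a single task's progress
        Future<Integer> answer = pool.submit(new Callable<Integer>() {
            public Integer call() { return 6 * 7; }
        });
        System.out.println("single task: " + answer.get());
        System.out.println("batch: " + sumOfSquares(pool, 1, 2, 3)); // 1 + 4 + 9

        pool.shutdown();
    }
}
```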

Runnable and Callable

The Executor framework represents tasks using instances of either Runnable or Callable. Runnable's run() method is limiting in that it can neither return a value nor throw a checked exception. Callable is a more capable version, and defines a call() method that can return a computed value, and even throw an exception if necessary.
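The difference can be seen side by side (the class and field names here are just for illustration):

```java
import java.util.concurrent.Callable;

public class Tasks {
    // A Runnable cannot return a value or throw a checked exception from run()
    static final Runnable printer = new Runnable() {
        public void run() {
            System.out.println("runnable done");
        }
    };

    // A Callable returns a value, and call() may throw a checked exception
    static final Callable<Integer> parser = new Callable<Integer>() {
        public Integer call() throws Exception {
            return Integer.parseInt("42");
        }
    };

    public static void main(String[] args) throws Exception {
        printer.run();
        System.out.println("callable returned " + parser.call());
    }
}
```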

Controlling your Tasks

You can get detailed information about your tasks using the FutureTask class, an instance of which can wrap either a Callable or a Runnable. You can get an instance of this as the return value of the submit() method of an ExecutorService, or you can manually wrap your task in a FutureTask before calling the execute() method.

The FutureTask instance, which implements the Future interface, gives you the ability to monitor a running task, cancel it, and retrieve its result (the return value of the Callable's call() method).
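Here is a minimal sketch of wrapping a Callable in a FutureTask manually and handing it to execute() (the class names are mine):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.FutureTask;

public class FutureTaskDemo {
    // Wraps a Callable in a FutureTask so it can be handed to execute()
    static FutureTask<String> wrap(Callable<String> work) {
        return new FutureTask<String>(work);
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();

        FutureTask<String> task = wrap(new Callable<String>() {
            public String call() {
                return "computed";
            }
        });

        pool.execute(task);             // execute() accepts the FutureTask as a Runnable
        System.out.println(task.get()); // get() blocks until the result is ready
        System.out.println("done? " + task.isDone());

        pool.shutdown();
    }
}
```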


ThreadPoolExecutor

The most common implementation of ExecutorService that we will encounter is the ThreadPoolExecutor.

Tasks are submitted to a ThreadPoolExecutor as instances of Runnable. The executor is then responsible for the actual processing, and your application no longer needs to care about what happens behind that abstraction.

This executor is defined in terms of:

  1. a pool of threads (with configured minimum and maximum sizes),
  2. a work queue,
    this queue holds the submitted tasks that are still to be assigned a thread from the pool. There are two main types of queue – bounded and unbounded. Adding tasks to an unbounded queue always succeeds. A bounded queue (such as a LinkedBlockingQueue with a fixed capacity) rejects tasks once the number of pending tasks reaches its maximum capacity.
  3. a handler that defines how rejections should be handled (the saturation policy).
    When a task cannot be added to the queue, the thread pool calls its registered rejection handler to determine what should happen. The default rejection policy is to simply throw a RejectedExecutionException runtime exception, and it is up to the program to catch the exception and process it. Other policies exist, such as DiscardPolicy, which silently discards the task without any notification.
  4. a thread factory.
    By default, new threads constructed by the executor will have certain properties – such as a priority of Thread.NORM_PRIORITY, and a thread name that is based on the pool number and thread number within the pool. You can use a custom thread factory to override these defaults.
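Putting the four pieces together, a fully customized ThreadPoolExecutor might be assembled like this (the pool sizes, thread-name prefix, and class name are illustrative choices, not recommendations):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class CustomPool {
    // A custom thread factory that overrides the default thread names
    static ThreadFactory namedDaemonFactory(final String prefix) {
        return new ThreadFactory() {
            private final AtomicInteger count = new AtomicInteger();
            public Thread newThread(Runnable r) {
                Thread t = new Thread(r, prefix + count.incrementAndGet());
                t.setDaemon(true); // don't keep the JVM alive for idle pool threads
                return t;
            }
        };
    }

    public static void main(String[] args) throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4,                                  // minimum (core) and maximum pool sizes
                60L, TimeUnit.SECONDS,                 // idle timeout for threads above the core size
                new LinkedBlockingQueue<Runnable>(10), // bounded work queue
                namedDaemonFactory("app-worker-"),     // custom thread factory
                new ThreadPoolExecutor.DiscardPolicy()); // saturation policy: drop silently

        pool.execute(new Runnable() {
            public void run() {
                System.out.println("running in " + Thread.currentThread().getName());
            }
        });

        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```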

Algorithm for using an Executor

1.  Create an Executor

You first create an instance of an Executor or ExecutorService in some global context (such as the application context for a servlet container).

The Executors class has a number of convenient static factory methods that create an ExecutorService. For instance, newFixedThreadPool() returns a ThreadPoolExecutor instance initialized with an unbounded queue and a fixed number of threads; while newCachedThreadPool() returns a ThreadPoolExecutor instance initialized with an unbounded queue and an unbounded number of threads. In the latter case, existing threads are reused if available, and if no free thread is available, a new one is created and added to the pool. Threads that have been idle for longer than a timeout period will be removed from the pool.

private static final Executor executor = Executors.newFixedThreadPool(10);

Rather than use these convenience methods, you might find it more appropriate to instantiate your own fully customized version of ThreadPoolExecutor – using one of its many constructors.

private static final Executor executor = new ThreadPoolExecutor(10, 10, 50000L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<Runnable>(100));

This creates a bounded queue of size 100, with a thread pool of fixed size 10.

2. Create one or more tasks

You need one or more tasks to be performed, as instances of either Runnable or Callable.

3. Submit the task to the Executor

Once you have an ExecutorService, you can submit a task to it using either the submit() or execute() methods, and a free thread from the pool will automatically dequeue the task and execute it.

4. Execute the task

The Executor is then responsible for managing the task’s execution as well as the thread pool and queue. Exactly what happens here depends on the thread pool size limits, the number of idle threads, and the bounds of the queue.

In general, if the pool has fewer threads than its configured minimum (core) size, a new thread will be created for each submitted task until that limit is reached.

If the number of threads is at or above the configured minimum, then the pool is reluctant to start any more. Instead, the task is queued until a thread frees up to process it. Only if the queue is full will a new thread be started to handle it.

If the number of threads is at the maximum, the pool is unable to start new threads, and hence the task will either be added to the queue, or will be rejected if the queue is full.

The threads in the pool continually monitor the queue for tasks to run. Threads above the configured minimum become ripe for termination if they have been idle for longer than the configured timeout period.
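This saturation behaviour can be demonstrated deterministically with a one-thread pool and a one-slot queue (the class name is mine; the latch just keeps the single thread busy so the outcome does not depend on timing):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class SaturationDemo {
    // Fills a one-thread, one-slot pool, then shows the third task being rejected.
    static boolean thirdTaskIsRejected() throws InterruptedException {
        final CountDownLatch release = new CountDownLatch(1);
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<Runnable>(1)); // max one thread, one queued task

        Runnable blocker = new Runnable() {
            public void run() {
                try {
                    release.await(); // hold the thread until we let go
                } catch (InterruptedException ignored) {
                }
            }
        };

        boolean rejected = false;
        pool.execute(blocker); // occupies the pool's single thread
        pool.execute(blocker); // fills the queue's only slot
        try {
            pool.execute(blocker); // no free thread, queue full -> saturation
        } catch (RejectedExecutionException e) {
            rejected = true;       // the default AbortPolicy throws
        }

        release.countDown();
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return rejected;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("third task rejected? " + thirdTaskIsRejected());
    }
}
```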

5. Shutdown the Executor

At application shutdown, you terminate the executor by invoking its shutdown() method. You can choose to terminate it gracefully, or abruptly with shutdownNow().
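One possible sketch of a graceful-first shutdown, falling back to shutdownNow() if tasks overrun a timeout (the timeout and class name are my choices):

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ShutdownDemo {
    // Graceful first: stop accepting new tasks and let queued ones finish.
    // If they take too long, fall back to an abrupt shutdown.
    static boolean shutdownGracefully(ExecutorService pool, long timeoutSeconds)
            throws InterruptedException {
        pool.shutdown(); // no new tasks accepted; queued tasks still run
        if (pool.awaitTermination(timeoutSeconds, TimeUnit.SECONDS)) {
            return true;
        }
        // Abrupt: interrupt running tasks and drain the ones never started
        List<Runnable> neverStarted = pool.shutdownNow();
        System.out.println(neverStarted.size() + " queued tasks never ran");
        return pool.awaitTermination(timeoutSeconds, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        pool.execute(new Runnable() {
            public void run() {
                System.out.println("quick task");
            }
        });
        System.out.println("terminated cleanly? " + shutdownGracefully(pool, 5));
    }
}
```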

That's the end!


References

Java Concurrency in Practice, Goetz et al.

Java Threads, Oaks and Wong.