
1  Introduction

Concurrency is a property of systems in which several computations are executing “simultaneously” and potentially interacting with each other. Concurrency does not require hardware parallelism; it only means that the computations (which we call “threads” in this text) are “in progress” at the same time and evolve independently. Of course, threads also need to synchronize and exchange data.

OCaml [15] is a functional language of the ML family. It supports both the imperative and object paradigms and can be compiled to native code or to bytecode executed by a virtual machine. Its standard library includes a Thread module that provides concurrent execution of threads using either operating-system support for threads or support from the virtual machine (the latter being available only for bytecode executables).
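
A minimal sketch of what using this module looks like is given below; the worker function is ours, and the compilation command in the comment is only indicative (the exact invocation depends on the toolchain and OCaml version).

    (* Spawn a few threads and wait for them; [worker] is an illustrative
       function, not part of the library.  Link against the threads
       library, for instance:
         ocamlfind ocamlopt -package threads.posix -linkpkg example.ml *)

    let worker id =
      for step = 1 to 3 do
        Printf.printf "thread %d: step %d\n%!" id step;
        Thread.yield ()                 (* let other threads run *)
      done

    let () =
      let threads = List.init 4 (Thread.create worker) in
      List.iter Thread.join threads     (* wait for all threads to finish *)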

Besides potentially making it easier to exploit hardware parallelism,1 threads allow overlapping I/O with computation (while one thread is blocked on an I/O operation, others may proceed) and support a concurrent programming style. Many applications can be expressed more cleanly with concurrency.

Operating systems provide concurrency by time-sharing the CPU between different processes or threads. Scheduling of such threads is preemptive, i.e. the system decides when to suspend a thread to allow another to run. This operation, called a context switch, is relatively costly [16]. Since threads can be switched at any time, synchronisation tools such as locks must generally be used to define atomic operations. Locks are low-level objects with ugly properties such as breaking compositionality [22]. The same applies to OCaml VM threads.
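
As an illustration, here is a sketch of protecting a shared counter with the standard Mutex module under such preemptive threads; counter, lock and incr_counter are illustrative names, not part of any library.

    (* A shared counter protected by a lock: without the mutex, the
       read-modify-write in [incr_counter] could be interleaved with
       another thread's and updates would be lost. *)

    let counter = ref 0
    let lock = Mutex.create ()

    let incr_counter () =
      Mutex.lock lock;                  (* enter the critical section *)
      counter := !counter + 1;
      Mutex.unlock lock

    let () =
      let ts = List.init 8 (fun _ ->
        Thread.create (fun () ->
          for _i = 1 to 1000 do incr_counter () done) ())
      in
      List.iter Thread.join ts;
      Printf.printf "final count: %d\n" !counter   (* 8000 with the lock *)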

Threads can also be implemented in user space, without support from the operating system: either the language runtime [1] or a library [7, 6] arranges for several “threads” to run within a single system thread. In such a setting, scheduling is generally cooperative: threads decide themselves when to suspend so that others may execute. Care must be taken with I/O, since if one such thread blocks waiting for an I/O operation to complete, the whole set of threads is blocked. The solution is to use only non-blocking I/O operations and to switch to another thread when an operation would block, as sketched below. This approach avoids the cost of kernel-level context switches, so threads are cheap to create and to switch between; moreover, since a thread can only be suspended at the points where it explicitly yields, sequences of operations between two cooperation points are atomic and locks are rarely needed.

On the other hand, the programmer has to ensure that threads do indeed yield regularly.
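
The sketch below illustrates this non-blocking discipline; yield is only a placeholder for whatever suspension operation the chosen implementation provides, and the program is assumed to be linked with the unix library.

    (* Read from a non-blocking descriptor, yielding when the operation
       would block.  [yield] is a stand-in: a real scheduler would switch
       to another cooperative thread instead of doing nothing. *)

    let yield () = ()

    let rec cooperative_read fd buf ofs len =
      try Unix.read fd buf ofs len                (* returns immediately *)
      with Unix.Unix_error ((Unix.EAGAIN | Unix.EWOULDBLOCK), _, _) ->
        yield ();                                 (* let other threads run... *)
        cooperative_read fd buf ofs len           (* ...then try again *)

    let () =
      Unix.set_nonblock Unix.stdin;               (* fail instead of blocking *)
      let buf = Bytes.create 1024 in
      let n = cooperative_read Unix.stdin buf 0 (Bytes.length buf) in
      Printf.printf "read %d bytes\n" n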

In this paper we study several ways to implement very lightweight cooperative threads in a functional language such as OCaml, without any addition to the language. We first describe our concurrency model in Section 2. Three simple example applications making heavy use of concurrency are then presented in Section 3. The various implementations are described in Sections 4 and 5. Performance comparisons based on the example applications are given in Section 6. The paper closes with a conclusion and perspectives (Section 7).
