Lecture 9: Threads


Most programs we have seen so far have had a single thread of control. In a sequential program, execution starts at main and proceeds sequentially, branching via control structures and method calls. In an event-based program, control follows the event loop, which extracts events and calls their handlers. But there is still a single thread of control: if an event handler loops forever, the whole program locks up.

However, it is often convenient to have multiple threads of control, so that the program can do more than one thing at once. For example, a GUI program may want to remain responsive to user input while a long computation runs, and a server may want to handle several clients at the same time.

There are several ways to accomplish this:

- Use a variation on event-based programming in which control loops over all of the tasks in the system, running each until it blocks or surrenders control.
- For I/O-based programs that wait on multiple channels, sleep on a timer and periodically check all sources for something to do. This is called "polling".

These approaches in essence require building a mini operating system into the program. The scheme has some disadvantages: each task must remember to surrender control or the other tasks are starved, and the handler routine for each task cannot use the call stack and local variables to store task state.
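The first, cooperative approach can be sketched as a round-robin loop over tasks. This is only an illustrative sketch; the names `Task` and `MiniScheduler` are made up for this example, and note how a task that never returns from `step()` would starve all the others.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// A task does a small unit of work in step() and returns true if it
// wants to run again, false when it is finished. (Illustrative names.)
interface Task {
    boolean step();
}

// A "mini operating system" inside the program: round-robin over tasks,
// running each one step at a time. Tasks cannot keep state on the call
// stack between steps; they must store it in fields instead.
class MiniScheduler {
    private final Queue<Task> ready = new ArrayDeque<>();

    void add(Task t) { ready.add(t); }

    void run() {
        while (!ready.isEmpty()) {
            Task t = ready.remove();
            if (t.step()) {      // task surrendered control but is not done,
                ready.add(t);    // so it goes to the back of the queue
            }
        }
    }
}
```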

Since this is a common programming situation, most programming systems now provide a facility to address these issues, usually called threads.


To understand threads, we start with processes. A process is the operating system's notion of a 'program'. A process's state consists of several parts.

Modern operating systems manage multiple processes. Each process in effect sees its own version of the computer and is isolated from the other processes. Computing resources are shared via 'time-slicing': a module in the OS, called the scheduler, runs each process in turn for a certain amount of time; then the state of that process is saved, and the next process is run (saving process state and swapping processes involves major programming magic). There are two types of schedulers, pre-emptive and non-pre-emptive. In non-pre-emptive scheduling, a process runs until it blocks or surrenders control. In pre-emptive scheduling, the OS enforces limits on running time and can interrupt a process in the middle of a computation to let others run.


Java Threads
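A minimal example of creating and starting a thread in Java (the class name HelloThread and the `ran` flag are just for this illustration): pass a Runnable to the Thread constructor, call start() to begin execution, and join() to wait for the thread to finish.

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class HelloThread {
    // Flag so we can observe that the new thread really ran.
    static final AtomicBoolean ran = new AtomicBoolean(false);

    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(() -> {
            System.out.println("hello from " + Thread.currentThread().getName());
            ran.set(true);
        });
        t.start(); // begins executing the Runnable in a NEW thread
                   // (calling run() directly would execute it in this thread)
        t.join();  // block the current thread until t finishes
    }
}
```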

Java Thread Scheduling
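With a pre-emptive scheduler, the interleaving of two runnable threads is nondeterministic; Java also lets a thread surrender the processor voluntarily with Thread.yield() (a hint to the scheduler) or Thread.sleep(ms). A small demonstration (class name and the step counter are illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class ScheduleDemo {
    // Counts total steps across both threads, so we can see all work completed.
    static final AtomicInteger steps = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        Runnable worker = () -> {
            for (int i = 0; i < 3; i++) {
                System.out.println(Thread.currentThread().getName() + " step " + i);
                steps.incrementAndGet();
                Thread.yield(); // hint to the scheduler that another thread may run
            }
        };
        Thread a = new Thread(worker, "A");
        Thread b = new Thread(worker, "B");
        a.start(); b.start(); // both threads are now runnable; the scheduler interleaves them
        a.join(); b.join();   // wait for both to finish
    }
}
```

Run it several times: the interleaving of "A" and "B" lines may differ from run to run, which is exactly the point.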


Java Synchronization
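A classic illustration of why synchronization is needed: `count++` is a read-modify-write sequence, so two threads incrementing a shared counter without locking can lose updates. Declaring the methods `synchronized` makes each update atomic with respect to the object's lock (the class name Counter is illustrative):

```java
public class Counter {
    private int count = 0;

    // synchronized: only one thread at a time may hold this object's lock,
    // so the read-increment-write in count++ cannot be interleaved.
    public synchronized void increment() { count++; }
    public synchronized int get() { return count; }

    public static void main(String[] args) throws InterruptedException {
        Counter c = new Counter();
        Runnable work = () -> { for (int i = 0; i < 10000; i++) c.increment(); };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(c.get()); // 20000 with synchronization;
                                     // often less if synchronized is removed
    }
}
```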


More Thread Notes

Common Thread Patterns


Review threads and synchronization

Introduce explicit wait(), notify() and advanced threading

Pipeline Streams
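Java's pipeline streams connect two threads through an in-memory channel: bytes written by one thread to a PipedOutputStream become available to another thread reading the connected PipedInputStream. A minimal sketch (the class name PipeDemo and the `received` field are just for this illustration):

```java
import java.io.IOException;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;

public class PipeDemo {
    static String received; // what the reading thread saw, for inspection

    public static void main(String[] args) throws Exception {
        PipedOutputStream out = new PipedOutputStream();
        PipedInputStream in = new PipedInputStream(out); // connect the two ends

        Thread writer = new Thread(() -> {
            try {
                out.write("hello".getBytes());
                out.close(); // closing signals end-of-stream to the reader
            } catch (IOException e) { e.printStackTrace(); }
        });
        writer.start();

        // The main thread reads until end-of-stream; read() blocks
        // whenever the pipe is empty and the writer is still open.
        StringBuilder sb = new StringBuilder();
        int b;
        while ((b = in.read()) != -1) sb.append((char) b);
        writer.join();
        received = sb.toString();
        System.out.println(received);
    }
}
```

The writer and reader must be different threads: using both ends of a pipe from a single thread risks deadlocking it against itself.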