In addition to process scheduling, another important responsibility of the operating system is process synchronization. Synchronization involves the orderly sharing of system resources by processes. To illustrate, consider a railroad crossing, which we can think of as a system resource shared by two processes: the car process and the train process. If only one process is active, then no resource conflict exists. But what happens when both processes are active and they both arrive at the intersection simultaneously? In this case, the shared resource becomes a problem. They cannot both use the resource at the same time or a collision will occur. Similarly, processes sharing resources on a computer must be properly managed in order to avoid "collisions." Brookshear [1997] illustrates this concept with the example of a shared printer:

"Consider a machine with a single printer running a time-sharing operating system. If a process needs to print its results, it must request that the operating system give it access to the printer's device driver. At this point, the operating system must decide whether to grant this request, depending upon whether the printer is already being used by another process. If it is not, the operating system should grant the request and allow the process to continue; otherwise, the operating system should deny the request and perhaps classify the process as a waiting process until the printer becomes available. Indeed, if two processes were given simultaneous access to the machine's printer, the results would be worthless to both."

Now that the problem of synchronization is properly stated, consider the following related definitions [Balci 1996]:

Critical Resource:
a resource shared with constraints on its use (e.g., memory, files, printers, etc.)

Critical Section:
code that accesses a critical resource

Mutual Exclusion:
at most one process may be executing a Critical Section with respect to a particular critical resource simultaneously

In the example given by Brookshear, the printer is the critical resource. Let's suppose that the processes sharing this resource are called process A and process B. The critical sections of process A and process B are the sections of code that issue the print command. In order to ensure that both processes do not attempt to use the printer at the same time, they must be granted mutually exclusive access to the printer driver. We can illustrate the idea of mutual exclusion with our railroad intersection by adding a semaphore to the picture. Brookshear [1997] explains: "Semaphores are used in software systems in much the same way as they are in railway systems. Corresponding to the section of track that can contain only one train at a time is a sequence of instructions that can be executed by only one process at a time. Such a sequence of instructions is called a critical [section]."
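The printer scenario can be sketched in Python, modeling process A and process B as threads and the printer driver as a lock. (This is an illustrative sketch, not code from Brookshear; the names `print_job`, `printer_lock`, and `output` are our own.)

```python
import threading

printer_lock = threading.Lock()  # models exclusive access to the printer driver
output = []                      # stands in for the printed page

def print_job(name, lines):
    # Critical section: only one "process" may drive the printer at a time.
    with printer_lock:
        for line in lines:
            output.append(f"{name}: {line}")

a = threading.Thread(target=print_job, args=("A", ["page 1", "page 2"]))
b = threading.Thread(target=print_job, args=("B", ["page 1", "page 2"]))
a.start(); b.start()
a.join(); b.join()
# Each job's lines come out contiguous; the two jobs never interleave.
```

Because each job holds the lock for its entire print run, the output contains one job's pages followed by the other's, never a mixture of the two.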

One way to implement semaphores in computers is to use a flag (i.e., a bit of memory) that can be tested and set to either 0 or 1 in a single machine operation. Because both the test and set actions occur in the same machine instruction, this operation is indivisible and cannot be interrupted by another process which is running on the machine. By placing test-and-set operations around the critical section of a program, programmers can guarantee mutually exclusive access to the critical resource.
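As a sketch of this idea, the spinlock below is built on a simulated test-and-set operation. Python exposes no raw atomic instruction, so the indivisibility that real hardware provides in a single machine instruction is modeled here with an internal guard lock; everything else follows the description above.

```python
import threading

class SpinLock:
    """Binary semaphore built on a (simulated) atomic test-and-set flag."""
    def __init__(self):
        self._flag = 0
        self._guard = threading.Lock()  # models the indivisibility of test-and-set

    def _test_and_set(self):
        # Atomically: return the old value of the flag and set it to 1.
        with self._guard:
            old = self._flag
            self._flag = 1
            return old

    def acquire(self):
        # Spin until we observe the flag as 0 (and set it to 1 in the same step).
        while self._test_and_set() == 1:
            pass

    def release(self):
        self._flag = 0

lock = SpinLock()
counter = 0

def worker():
    global counter
    for _ in range(1000):
        lock.acquire()
        counter += 1          # critical section: update the shared counter
        lock.release()

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
# counter == 2000: no increment was lost to interleaving.
```

The busy-waiting loop in `acquire` is exactly what "placing test-and-set operations around the critical section" amounts to: a process that finds the flag already set simply retries until the holder clears it.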

The applet below [Magee and Kramer 1999] illustrates three processes sharing one critical resource. In order to access the critical section of their code, the processes must wait until the semaphore Mutex (mutual exclusion) is one. When this happens, the process uses the test-and-set operation to change the value of Mutex to zero and then it executes its critical section. Upon exiting the critical section, the process sets the value of Mutex back to one to allow other processes to access the critical resource. The critical section is represented by a light blue (cyan) color. The length of time each process spends in its critical section may be adjusted using the slider control. Moving the slider further to the right increases this time. The adjustment takes effect at the beginning of each revolution. Use the "Run" buttons to begin the execution of the processes.

[Applet: three processes sharing one critical resource]


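The applet's behavior can be rendered as a hypothetical Python sketch (not the applet's own code): three worker threads share one critical section guarded by a binary semaphore `mutex` initialized to one, and we record how many threads were ever inside at once.

```python
import threading

mutex = threading.Semaphore(1)   # binary semaphore: starts at 1 ("free")
in_critical = 0                  # threads currently in their critical section
max_in_critical = 0              # worst case ever observed

def process(pid):
    global in_critical, max_in_critical
    for _ in range(200):
        mutex.acquire()          # wait until Mutex is 1, then drop it to 0
        in_critical += 1
        max_in_critical = max(max_in_critical, in_critical)
        # ... use the critical resource here ...
        in_critical -= 1
        mutex.release()          # set Mutex back to 1

threads = [threading.Thread(target=process, args=(i,)) for i in range(3)]
for t in threads: t.start()
for t in threads: t.join()
# max_in_critical == 1: mutual exclusion held for all 600 entries.
```

Since `in_critical` is only touched while `mutex` is held, the recorded maximum can never exceed one; removing the `acquire`/`release` pair would let it climb, which is exactly the "collision" the semaphore prevents.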

Another way to implement a semaphore is to use a count rather than just two values. Such semaphores are called counting semaphores in contrast to the binary semaphore presented above. Counting semaphores are used to solve synchronization problems such as the Bounded Buffer problem. Nutt [1997] explains this problem:

"Suppose a system incorporates two processes, one of which produces information (the producer process) and another process that uses the information (the consumer process). The two processes communicate by having the producer obtain an empty buffer from an empty buffer pool, fill it with information, and place it in a pool of full buffers. The consumer obtains information by picking up a buffer from the full buffer pool, copying the information out of the buffer, and placing it in the empty buffer pool for recycling. The producer and consumer use a fixed, finite number, N, of buffers to pass an arbitrary amount of information between them."

The producer and consumer processes must be synchronized so that the consumer process blocks (i.e., pauses its execution) whenever all the buffers are empty, and the producer process blocks whenever all the buffers are full. To enforce this synchronization, counting semaphores are used to count the number of empty and full buffers.
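This synchronization scheme can be sketched with Python's counting semaphores (an illustrative sketch under our own naming, not code from Nutt): `empty` counts free buffers and starts at N, `full` counts filled buffers and starts at zero, and a lock protects the buffer pool itself.

```python
import threading
from collections import deque

N = 5
buffers = deque()
empty = threading.Semaphore(N)   # counting semaphore: number of empty buffers
full = threading.Semaphore(0)    # counting semaphore: number of full buffers
pool_lock = threading.Lock()     # protects the buffer pool data structure

def producer(items):
    for ch in items:
        empty.acquire()          # block while all N buffers are full
        with pool_lock:
            buffers.append(ch)   # fill a buffer with one character
        full.release()           # announce one more full buffer

def consumer(count, out):
    for _ in range(count):
        full.acquire()           # block while all buffers are empty
        with pool_lock:
            out.append(buffers.popleft())
        empty.release()          # recycle the buffer as empty

data = "hello world"
received = []
p = threading.Thread(target=producer, args=(data,))
c = threading.Thread(target=consumer, args=(len(data), received))
p.start(); c.start()
p.join(); c.join()
# received == list("hello world"): all characters arrive, in order.
```

With one producer and one consumer the FIFO pool delivers the characters in their original order, no matter how the two threads interleave; the semaphores only ever delay a thread, never lose data.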

The applet below [Magee and Kramer 1999] illustrates the Bounded Buffer problem. In this example, the size of the buffer pool is five (N = 5) and the data being shared are characters. The producer process fills one buffer with a character each time it executes, and the consumer process removes one character from the buffer each time it executes. Use the "Run" buttons to begin the execution of the processes.

[Applet: the Bounded Buffer problem]



References