
2. A Tour Through Pintos

This chapter is a brief tour through the Pintos code. It covers the entire code base, but you'll only be using Pintos one part at a time, so you may find that you want to read each part as you work on the corresponding project.

(Actually, the tour is currently incomplete. It fully covers only the threads project.)

We recommend using "tags" to follow along with references to function and variable names (see section F.1 Tags).

2.1 Loading

This section covers the Pintos loader and basic kernel initialization.

2.1.1 The Loader

The first part of Pintos that runs is the loader, in threads/loader.S. The PC BIOS loads the loader into memory. The loader, in turn, is responsible for initializing the CPU, loading the rest of Pintos into memory, and then jumping to its start. It's not important to understand exactly what the loader does, but if you're interested, read on. You should probably read along with the loader's source. You should also understand the basics of the 80x86 architecture as described by chapter 3 of [IA32-v1].

Because the PC BIOS loads the loader, the loader has to play by the BIOS's rules. In particular, the BIOS only loads 512 bytes (one disk sector) into memory. This is a severe restriction and it means that, practically speaking, the loader has to be written in assembly language.

Pintos' loader first initializes the CPU. The first important part of this is to enable the A20 line, that is, the CPU's address line numbered 20. For historical reasons, PCs start out with this address line fixed at 0, which means that attempts to access memory beyond the first 1 MB (2 raised to the 20th power) will fail. Pintos wants to access more memory than this, so we have to enable it.

Next, the loader asks the BIOS for the PC's memory size. Again for historical reasons, the function that we call in the BIOS to do this can only detect up to 64 MB of RAM, so that's the practical limit that Pintos can support. The memory size is stashed away in a location in the loader that the kernel can read after it boots.

Third, the loader creates a basic page table. This page table maps the 64 MB at the base of virtual memory (starting at virtual address 0) directly to the identical physical addresses. It also maps the same physical memory starting at virtual address LOADER_PHYS_BASE, which defaults to 0xc0000000 (3 GB). The Pintos kernel only wants the latter mapping, but there's a chicken-and-egg problem if we don't include the former: our current virtual address is roughly 0x7c00, the location where the BIOS loaded us, and we can't jump to 0xc0007c00 until we turn on the page table, but if we turn on the page table without jumping there, then we've just pulled the rug out from under ourselves.

After the page table is initialized, we load the CPU's control registers to turn on protected mode and paging, and then we set up the segment registers. We aren't equipped to handle interrupts in protected mode yet, so we disable interrupts.

Finally it's time to load the kernel from disk. We use a simple but inflexible method to do this: we program the IDE disk controller directly. We assume that the kernel is stored starting from the second sector of the first IDE disk (the first sector normally contains the boot loader). We also assume that the BIOS has already set up the IDE controller for us. We read KERNEL_LOAD_PAGES pages of data (4 kB per page) from the disk directly into virtual memory, starting LOADER_KERN_BASE bytes past LOADER_PHYS_BASE, which by default means that we load the kernel starting 1 MB into physical memory.

Then we jump to the start of the compiled kernel image. Using the "linker script" in threads/kernel.lds.S, the kernel has arranged that it begins with the assembly module threads/start.S. This assembly module just calls main(), which never returns.

There's one more trick: the Pintos kernel command line is stored in the boot loader. The pintos program actually modifies the boot loader on disk each time it runs the kernel, putting in whatever command line arguments the user supplies to the kernel, and then the kernel at boot time reads those arguments out of the boot loader in memory. This is something of a nasty hack, but it is simple and effective.

2.1.2 Kernel Initialization

The kernel proper starts with the main() function. The main() function is written in C, as will be most of the code we encounter in Pintos from here on out.

When main() starts, the system is in a pretty raw state. We're in protected mode with paging enabled, but hardly anything else is ready. Thus, the main() function consists primarily of calls into other Pintos modules' initialization functions. These are usually named module_init(), where module is the module's name, module.c is the module's source code, and module.h is the module's header.

First we initialize kernel RAM in ram_init(). The first step is to clear out the kernel's so-called "BSS" segment. The BSS is a segment that should be initialized to all zeros. In most C implementations, whenever you declare a variable outside a function without providing an initializer, that variable goes into the BSS. Because it's all zeros, the BSS isn't stored in the image that the loader brought into memory. We just use memset() to zero it out. The other task of ram_init() is to read out the machine's memory size from where the loader stored it and put it into the ram_pages variable for later use.

Next, thread_init() initializes the thread system. Full discussion is deferred until the tour reaches Pintos threads below. It is called this early in initialization because the console, initialized just afterward, tries to use locks, and locks in turn require there to be a running thread.

Then we initialize the console so that we can use printf(). main() calls vga_init(), which initializes the VGA text display and clears the screen. It also calls serial_init_poll() to initialize the first serial port in "polling mode," that is, where the kernel busy-waits for the port to be ready for each character to be output. (We use polling mode until we're ready to set up interrupts later.) Finally we initialize the console device and print a startup message to the console.

main() calls read_command_line() to break the kernel command line into arguments, then parse_options() to read any options at the beginning of the command line. (Executing actions specified on the command line happens later.)

The next block of functions we call initialize the kernel's memory system. palloc_init() sets up the kernel page allocator, which doles out memory one or more pages at a time. malloc_init() sets up the allocator that handles odd-sized allocations. paging_init() sets up a page table for the kernel.

In projects 2 and later, main() also calls tss_init() and gdt_init(), but we'll talk about those later.

main() calls random_init() to initialize the kernel random number generator. If the user specified -rs on the pintos command line, parse_options() has already done this, but calling it a second time is harmless and has no effect.

We initialize the interrupt system in the next set of calls. intr_init() sets up the CPU's interrupt descriptor table (IDT) to ready it for interrupt handling (see section Interrupt Infrastructure), then timer_init() and kbd_init() prepare for handling timer interrupts and keyboard interrupts, respectively. In projects 2 and later, we also prepare to handle interrupts caused by user programs using exception_init() and syscall_init().

Now that interrupts are set up, we can start preemptively scheduling threads with thread_start(), which also enables interrupts. With interrupts enabled, interrupt-driven serial port I/O becomes possible, so we use serial_init_queue() to switch to that mode. Finally, timer_calibrate() calibrates the timer for accurate short delays.

If the file system is compiled in, as it will starting in project 2, we now initialize the disks with disk_init(), then the file system with filesys_init().

Boot is complete, so we print a message.

Function run_actions() now parses and executes actions specified on the kernel command line, such as run to run a test (in project 1) or a user program (in later projects).

Finally, if -q was specified on the kernel command line, we call power_off() to terminate the machine simulator. Otherwise, main() calls thread_exit(), which allows any other running threads to continue running.

2.2 Threads Project

2.2.1 Thread Support

struct thread

The main Pintos data structure for threads is struct thread, declared in threads/thread.h.

Structure: struct thread
Represents a thread or a user process. In the projects, you will have to add your own members to struct thread. You may also change or delete the definitions of existing members.

Every struct thread occupies the beginning of its own page of memory. The rest of the page is used for the thread's stack, which grows downward from the end of the page. It looks like this:

        4 kB +---------------------------------+
             |          kernel stack           |
             |                |                |
             |                |                |
             |                V                |
             |         grows downward          |
             |                                 |
             |                                 |
             |                                 |
             |                                 |
             |                                 |
             |                                 |
             |                                 |
             |                                 |
             |              magic              |
             |                :                |
             |                :                |
             |              status             |
             |               tid               |
        0 kB +---------------------------------+

This has two consequences. First, struct thread must not be allowed to grow too big. If it does, then there will not be enough room for the kernel stack. The base struct thread is only a few bytes in size. It probably should stay well under 1 kB.

Second, kernel stacks must not be allowed to grow too large. If a stack overflows, it will corrupt the thread state. Thus, kernel functions should not allocate large structures or arrays as non-static local variables. Use dynamic allocation with malloc() or palloc_get_page() instead (see section 2.2.4 Memory Allocation).

Member of struct thread: tid_t tid
The thread's thread identifier or tid. Every thread must have a tid that is unique over the entire lifetime of the kernel. By default, tid_t is a typedef for int and each new thread receives the numerically next higher tid, starting from 1 for the initial process. You can change the type and the numbering scheme if you like.

Member of struct thread: enum thread_status status
The thread's state, one of the following:

Thread State: THREAD_RUNNING
The thread is running. Exactly one thread is running at a given time. thread_current() returns the running thread.

Thread State: THREAD_READY
The thread is ready to run, but it's not running right now. The thread could be selected to run the next time the scheduler is invoked. Ready threads are kept in a doubly linked list called ready_list.

Thread State: THREAD_BLOCKED
The thread is waiting for something, e.g. a lock to become available, an interrupt to be invoked. The thread won't be scheduled again until it transitions to the THREAD_READY state with a call to thread_unblock().

Thread State: THREAD_DYING
The thread will be destroyed by the scheduler after switching to the next thread.

Member of struct thread: char name[16]
The thread's name as a string, or at least the first few characters of it.

Member of struct thread: uint8_t *stack
Every thread has its own stack to keep track of its state. When the thread is running, the CPU's stack pointer register tracks the top of the stack and this member is unused. But when the CPU switches to another thread, this member saves the thread's stack pointer. No other members are needed to save the thread's registers, because the other registers that must be saved are saved on the stack.

When an interrupt occurs, whether in the kernel or a user program, a struct intr_frame is pushed onto the stack. When the interrupt occurs in a user program, the struct intr_frame is always at the very top of the page. See section 2.2.3 Interrupt Handling, for more information.

Member of struct thread: int priority
A thread priority, ranging from PRI_MIN (0) to PRI_MAX (63). Lower numbers correspond to lower priorities, so that priority 0 is the lowest priority and priority 63 is the highest. Pintos as provided ignores thread priorities, but you will implement priority scheduling in project 1 (see section 3.2.3 Priority Scheduling).

Member of struct thread: struct list_elem elem
A "list element" used to put the thread into doubly linked lists, either the list of threads ready to run or a list of threads waiting on a semaphore. Take a look at lib/kernel/list.h for information on how to use Pintos doubly linked lists.

Member of struct thread: uint32_t *pagedir
Only present in project 2 and later.

Member of struct thread: unsigned magic
Always set to THREAD_MAGIC, which is just a random number defined in threads/thread.c, and used to detect stack overflow. thread_current() checks that the magic member of the running thread's struct thread is set to THREAD_MAGIC. Stack overflow will normally change this value, triggering the assertion. For greatest benefit, as you add members to struct thread, leave magic as the final member.

Thread Functions

threads/thread.c implements several public functions for thread support. Let's take a look at the most useful:

Function: void thread_init (void)
Called by main() to initialize the thread system. Its main purpose is to create a struct thread for Pintos's initial thread. This is possible because the Pintos loader puts the initial thread's stack at the top of a page, in the same position as any other Pintos thread.

Before thread_init() runs, thread_current() will fail because the running thread's magic value is incorrect. Lots of functions call thread_current() directly or indirectly, including lock_acquire() for locking a lock, so thread_init() is called early in Pintos initialization.

Function: void thread_start (void)
Called by main() to start the scheduler. Creates the idle thread, that is, the thread that is scheduled when no other thread is ready. Then enables interrupts, which as a side effect enables the scheduler because the scheduler runs on return from the timer interrupt, using intr_yield_on_return() (see section External Interrupt Handling).

Function: void thread_tick (void)
Called by the timer interrupt at each timer tick. It keeps track of thread statistics and triggers the scheduler when a time slice expires.

Function: void thread_print_stats (void)
Called during Pintos shutdown to print thread statistics.

Function: tid_t thread_create (const char *name, int priority, thread_func *func, void *aux)
Creates and starts a new thread named name with the given priority, returning the new thread's tid. The thread executes func, passing aux as the function's single argument.

thread_create() allocates a page for the thread's struct thread and stack and initializes its members, then it sets up a set of fake stack frames for it (more about this later). The thread is initialized in the blocked state, so the final action before returning is to unblock it, which allows the new thread to be scheduled.

Type: void thread_func (void *aux)
This is the type of a thread function. Its aux argument is the value passed to thread_create().

Function: void thread_block (void)
Transitions the running thread from the running state to the blocked state. The thread will not run again until thread_unblock() is called on it, so you'd better have some way arranged for that to happen. Because thread_block() is so low-level, you should prefer to use one of the synchronization primitives instead (see section 2.2.2 Synchronization).

Function: void thread_unblock (struct thread *thread)
Transitions thread, which must be in the blocked state, to the ready state, allowing it to resume running. This is called when the event that the thread is waiting for occurs, e.g. when the lock that the thread is waiting on becomes available.

Function: struct thread *thread_current (void)
Returns the running thread.

Function: tid_t thread_tid (void)
Returns the running thread's thread id. Equivalent to thread_current ()->tid.

Function: const char *thread_name (void)
Returns the running thread's name. Equivalent to thread_current ()->name.

Function: void thread_exit (void) NO_RETURN
Causes the current thread to exit. Never returns, hence NO_RETURN (see section E.3 Function and Parameter Attributes).

Function: void thread_yield (void)
Yields the CPU to the scheduler, which picks a new thread to run. The new thread might be the current thread, so you can't depend on this function to keep this thread from running for any particular length of time.

Function: int thread_get_priority (void)
Function: void thread_set_priority (int new_priority)
Skeleton to set and get thread priority. See section 3.2.3 Priority Scheduling.

Function: int thread_get_nice (void)
Function: void thread_set_nice (int new_nice)
Function: int thread_get_recent_cpu (void)
Function: int thread_get_load_avg (void)
Skeletons for the advanced scheduler. See section B. 4.4BSD Scheduler.

Thread Switching

schedule() is the function responsible for switching threads. It is internal to threads/thread.c and called only by the three public thread functions that need to switch threads: thread_block(), thread_exit(), and thread_yield(). Before any of these functions call schedule(), they disable interrupts (or ensure that they are already disabled) and then change the running thread's state to something other than running.

schedule() is simple but tricky. It records the current thread in local variable cur, determines the next thread to run as local variable next (by calling next_thread_to_run()), and then calls switch_threads() to do the actual thread switch. The thread we switched to was also running inside switch_threads(), as are all the threads not currently running, so the new thread now returns out of switch_threads(), returning the previously running thread.

switch_threads() is an assembly language routine in threads/switch.S. It saves registers on the stack, saves the CPU's current stack pointer in the current struct thread's stack member, restores the new thread's stack into the CPU's stack pointer, restores registers from the stack, and returns.

The rest of the scheduler is implemented as schedule_tail(). It marks the new thread as running. If the thread we just switched from is in the dying state, then it also frees the page that contained the dying thread's struct thread and stack. The page couldn't be freed prior to the thread switch because the switch was still using that stack.

Running a thread for the first time is a special case. When thread_create() creates a new thread, it goes through a fair amount of trouble to get it started properly. In particular, a new thread hasn't started running yet, so there's no way for it to be running inside switch_threads() as the scheduler expects. To solve the problem, thread_create() creates some fake stack frames in the new thread's stack: the topmost frame makes the new thread appear to have been blocked inside switch_threads(), just like every other non-running thread, and the frames beneath it arrange for the thread, the first time it is switched to, to call the kernel thread entry function, which invokes the thread's function and then exits the thread.

2.2.2 Synchronization

If sharing of resources between threads is not handled in a careful, controlled fashion, then the result is usually a big mess. This is especially the case in operating system kernels, where faulty sharing can crash the entire machine. Pintos provides several synchronization primitives to help out.

Disabling Interrupts

The crudest way to do synchronization is to disable interrupts, that is, to temporarily prevent the CPU from responding to interrupts. If interrupts are off, no other thread will preempt the running thread, because thread preemption is driven by the timer interrupt. If interrupts are on, as they normally are, then the running thread may be preempted by another at any time, whether between two C statements or even within the execution of one.

Incidentally, this means that Pintos is a "preemptible kernel," that is, kernel threads can be preempted at any time. Traditional Unix systems are "nonpreemptible," that is, kernel threads can only be preempted at points where they explicitly call into the scheduler. (User programs can be preempted at any time in both models.) As you might imagine, preemptible kernels require more explicit synchronization.

You should have little need to set the interrupt state directly. Most of the time you should use the other synchronization primitives described in the following sections. The main reason to disable interrupts is to synchronize kernel threads with external interrupt handlers, which cannot sleep and thus cannot use most other forms of synchronization (see section External Interrupt Handling).

Types and functions for disabling and enabling interrupts are in threads/interrupt.h.

Type: enum intr_level
One of INTR_OFF or INTR_ON, denoting that interrupts are disabled or enabled, respectively.

Function: enum intr_level intr_get_level (void)
Returns the current interrupt state.

Function: enum intr_level intr_set_level (enum intr_level level)
Turns interrupts on or off according to level. Returns the previous interrupt state.

Function: enum intr_level intr_enable (void)
Turns interrupts on. Returns the previous interrupt state.

Function: enum intr_level intr_disable (void)
Turns interrupts off. Returns the previous interrupt state.

Semaphores

Pintos' semaphore type and operations are declared in threads/synch.h.

Type: struct semaphore
Represents a semaphore, a nonnegative integer together with two operators that manipulate it atomically:

"Down" or "P": wait for the value to become positive, then decrement it.

"Up" or "V": increment the value (and wake up one waiting thread, if any).

A semaphore initialized to 0 may be used to wait for an event that will happen exactly once. For example, suppose thread A starts another thread B and wants to wait for B to signal that some activity is complete. A can create a semaphore initialized to 0, pass it to B as it starts it, and then "down" the semaphore. When B finishes its activity, it "ups" the semaphore. This works regardless of whether A "downs" the semaphore or B "ups" it first.

A semaphore initialized to 1 is typically used for controlling access to a resource. Before a block of code starts using the resource, it "downs" the semaphore, then after it is done with the resource it "ups" the semaphore. In such a case a lock, described below, may be more appropriate.

Semaphores can also be initialized to values larger than 1. These are rarely used.

Function: void sema_init (struct semaphore *sema, unsigned value)
Initializes sema as a new semaphore with the given initial value.

Function: void sema_down (struct semaphore *sema)
Executes the "down" or "P" operation on sema, waiting for its value to become positive and then decrementing it by one.

Function: bool sema_try_down (struct semaphore *sema)
Tries to execute the "down" or "P" operation on sema, without waiting. Returns true if sema had a positive value that was successfully decremented, or false if it was already zero and thus could not be decremented. Calling this function in a tight loop wastes CPU time (use sema_down() instead, or find a different approach).

Function: void sema_up (struct semaphore *sema)
Executes the "up" or "V" operation on sema, incrementing its value. If any threads are waiting on sema, wakes one of them up.

Semaphores are internally built out of disabling interrupts (see section Disabling Interrupts) and thread blocking and unblocking (thread_block() and thread_unblock()). Each semaphore maintains a list of waiting threads, using the linked list implementation in lib/kernel/list.c.

Locks

Lock types and functions are declared in threads/synch.h.

Type: struct lock
Represents a lock, a specialized semaphore with an initial value of 1 (see section Semaphores). The difference between a lock and such a semaphore is twofold. First, a semaphore does not have an owner, meaning that one thread can "down" the semaphore and then another one "up" it, but a single thread must both acquire and release a lock. Second, a semaphore can have a value greater than 1, but a lock can only be owned by a single thread at a time. If these restrictions prove onerous, it's a good sign that a semaphore should be used, instead of a lock.

Locks in Pintos are not "recursive," that is, it is an error for the thread currently holding a lock to try to acquire that lock.

Function: void lock_init (struct lock *lock)
Initializes lock as a new lock.

Function: void lock_acquire (struct lock *lock)
Acquires lock for use by the current thread, first waiting for any current owner to release it if necessary.

Function: bool lock_try_acquire (struct lock *lock)
Tries to acquire lock for use by the current thread, without waiting. Returns true if successful, false if the lock is already owned. Calling this function in a tight loop is a bad idea because it wastes CPU time (use lock_acquire() instead).

Function: void lock_release (struct lock *lock)
Releases lock, which the current thread must own.

Function: bool lock_held_by_current_thread (const struct lock *lock)
Returns true if the running thread owns lock, false otherwise.

Condition Variables

Condition variable types and functions are declared in threads/synch.h.

Type: struct condition
Represents a condition variable, which allows one piece of code to signal a condition and cooperating code to receive the signal and act upon it. Each condition variable is associated with a lock. A given condition variable is associated with only a single lock, but one lock may be associated with any number of condition variables. A set of condition variables taken together with their lock is called a "monitor."

A thread that owns the monitor lock is said to be "in the monitor." The thread in the monitor has control over all the data protected by the lock. It may freely examine or modify this data. If it discovers that it needs to wait for some condition to become true, then it "waits" on the associated condition, which releases the lock and waits for the condition to be signaled. If, on the other hand, it has caused one of these conditions to become true, it "signals" the condition to wake up one waiter, or "broadcasts" the condition to wake all of them.

Pintos monitors are "Mesa" style, not "Hoare" style. That is, sending and receiving a signal are not an atomic operation. Thus, typically the caller must recheck the condition after the wait completes and, if necessary, wait again.

Function: void cond_init (struct condition *cond)
Initializes cond as a new condition variable.

Function: void cond_wait (struct condition *cond, struct lock *lock)
Atomically releases lock (the monitor lock) and waits for cond to be signaled by some other piece of code. After cond is signaled, reacquires lock before returning. lock must be held before calling this function.

Function: void cond_signal (struct condition *cond, struct lock *lock)
If any threads are waiting on cond (protected by monitor lock lock), then this function wakes up one of them. If no threads are waiting, returns without performing any action. lock must be held before calling this function.

Function: void cond_broadcast (struct condition *cond, struct lock *lock)
Wakes up all threads, if any, waiting on cond (protected by monitor lock lock). lock must be held before calling this function.

Monitor Example

The classical example of a monitor is handling a buffer into which one "producer" thread writes characters and out of which a second "consumer" thread reads characters. To implement this case we need, besides the monitor lock, two condition variables which we will call not_full and not_empty:

char buf[BUF_SIZE];     /* Buffer. */
size_t n = 0;           /* 0 <= n <= BUF_SIZE: # of characters in buffer. */
size_t head = 0;        /* buf index of next char to write (mod BUF_SIZE). */
size_t tail = 0;        /* buf index of next char to read (mod BUF_SIZE). */
struct lock lock;       /* Monitor lock. */
struct condition not_empty; /* Signaled when the buffer is not empty. */
struct condition not_full; /* Signaled when the buffer is not full. */

...initialize the locks and condition variables...

void put (char ch) {
  lock_acquire (&lock);
  while (n == BUF_SIZE)            /* Can't add to buf as long as it's full. */
    cond_wait (&not_full, &lock);
  buf[head++ % BUF_SIZE] = ch;     /* Add ch to buf. */
  n++;
  cond_signal (&not_empty, &lock); /* buf can't be empty anymore. */
  lock_release (&lock);
}

char get (void) {
  char ch;
  lock_acquire (&lock);
  while (n == 0)                  /* Can't read buf as long as it's empty. */
    cond_wait (&not_empty, &lock);
  ch = buf[tail++ % BUF_SIZE];    /* Get ch from buf. */
  n--;
  cond_signal (&not_full, &lock); /* buf can't be full anymore. */
  lock_release (&lock);
  return ch;
}

Memory Barriers

Suppose we add a "feature" that, whenever a timer interrupt occurs, the character in global variable timer_put_char is printed on the console, but only if global Boolean variable timer_do_put is true.

If interrupts are enabled, this code for setting up x to be printed is clearly incorrect, because the timer interrupt could intervene between the two assignments:

timer_do_put = true;            /* INCORRECT CODE */
timer_put_char = 'x';

It might not be as obvious that the following code is just as incorrect:

timer_put_char = 'x';           /* INCORRECT CODE */
timer_do_put = true;

The reason this second example might be a problem is that the compiler is, in general, free to reorder operations when it doesn't have a visible reason to keep them in the same order. In this case, the compiler doesn't know that the order of assignments is important, so its optimization pass is permitted to exchange their order. There's no telling whether it will actually do this, and it is possible that passing the compiler different optimization flags or changing compiler versions will produce different behavior.

The following is not a solution, because locks neither prevent interrupts nor prevent the compiler from reordering the code within the region where the lock is held:

lock_acquire (&timer_lock);     /* INCORRECT CODE */
timer_put_char = 'x';
timer_do_put = true;
lock_release (&timer_lock);

Fortunately, real solutions do exist. One possibility is to disable interrupts around the assignments. This does not prevent reordering, but it makes the assignments atomic as observed by the interrupt handler. It also has the extra runtime cost of disabling and re-enabling interrupts:

enum intr_level old_level = intr_disable ();
timer_put_char = 'x';
timer_do_put = true;
intr_set_level (old_level);

A second possibility is to mark the declarations of timer_put_char and timer_do_put as volatile. This keyword tells the compiler that the variables are externally observable and allows it less latitude for optimization. However, the semantics of volatile are not well-defined, so it is not a good general solution.

Usually, the best solution is to use a compiler feature called a memory barrier, a special statement that prevents the compiler from reordering memory operations across the barrier. In Pintos, threads/synch.h defines the barrier() macro as a memory barrier. Here's how we would use a memory barrier to fix this code:

timer_put_char = 'x';
barrier ();
timer_do_put = true;

The compiler also treats invocation of any function defined externally, that is, in another source file, as a limited form of a memory barrier. Specifically, the compiler assumes that any externally defined function may access any statically or dynamically allocated data and any local variable whose address is taken. This often means that explicit barriers can be omitted, and, indeed, this is why the base Pintos code does not need any barriers.

A function defined in the same source file, or in a header included by the source file, cannot be relied upon as a memory barrier. This applies even to invocation of a function before its definition, because the compiler may read and parse the entire source file before performing optimization.

2.2.3 Interrupt Handling

An interrupt notifies the CPU of some event. Much of the work of an operating system relates to interrupts in one way or another. For our purposes, we classify interrupts into two broad categories: internal interrupts, those caused directly by CPU instructions, such as system calls, page faults, and division by zero, which are therefore synchronous; and external interrupts, those originating in hardware devices outside the CPU, such as the system timer, keyboard, serial ports, and disks, which are asynchronous.

Because the CPU treats all interrupts largely the same way, regardless of source, Pintos uses the same infrastructure for both internal and external interrupts, to a point. The following section describes this common infrastructure, and sections after that give the specifics of external and internal interrupts.

If you haven't already read chapter 3 in [ IA32-v1], it is recommended that you do so now. You might also want to skim chapter 5 in [ IA32-v3].

Interrupt Infrastructure

When an interrupt occurs while the kernel is running, the CPU saves its most essential state on the stack and jumps to an interrupt handler routine. The 80x86 architecture allows for 256 possible interrupts, each of which can have its own handler. The handler for each interrupt is defined in an array called the interrupt descriptor table or IDT.

In Pintos, intr_init() in threads/interrupt.c sets up the IDT so that each entry points to a unique entry point in threads/intr-stubs.S named intrNN_stub(), where NN is the interrupt number in hexadecimal. Because the CPU doesn't give us any other way to find out the interrupt number, this entry point pushes the interrupt number on the stack. Then it jumps to intr_entry(), which pushes all the registers that the processor didn't already save for us, and then calls intr_handler(), which brings us back into C in threads/interrupt.c.

The main job of intr_handler() is to call any function that has been registered for handling the particular interrupt. (If no function is registered, it dumps some information to the console and panics.) It does some extra processing for external interrupts that we'll discuss later.

When intr_handler() returns, the assembly code in threads/intr-stubs.S restores all the CPU registers saved earlier and directs the CPU to return from the interrupt.

A few types and functions apply to both internal and external interrupts.

Type: void intr_handler_func (struct intr_frame *frame)
This is how an interrupt handler function must be declared. Its frame argument (see below) allows it to determine the cause of the interrupt and the state of the thread that was interrupted.

Type: struct intr_frame
The stack frame of an interrupt handler, as saved by the CPU, the interrupt stubs, and intr_entry(). Its most interesting members are described below.

Member of struct intr_frame: uint32_t edi
Member of struct intr_frame: uint32_t esi
Member of struct intr_frame: uint32_t ebp
Member of struct intr_frame: uint32_t esp_dummy
Member of struct intr_frame: uint32_t ebx
Member of struct intr_frame: uint32_t edx
Member of struct intr_frame: uint32_t ecx
Member of struct intr_frame: uint32_t eax
Member of struct intr_frame: uint16_t es
Member of struct intr_frame: uint16_t ds
Register values in the interrupted thread saved by intr_entry(). The esp_dummy value isn't actually used (refer to the description of PUSHA in [ IA32-v2b] for details).

Member of struct intr_frame: uint32_t vec_no
The interrupt vector number, ranging from 0 to 255.

Member of struct intr_frame: uint32_t error_code
The "error code" pushed on the stack by the CPU for some internal interrupts.

Member of struct intr_frame: void (*eip) (void)
The address of the next instruction to be executed by the interrupted thread.

Member of struct intr_frame: void *esp
The interrupted thread's stack pointer.

Function: const char *intr_name (uint8_t vec)
Returns the name of the interrupt numbered vec, or "unknown" if the interrupt has no registered name.

Internal Interrupt Handling

When an internal interrupt occurs, it is because the running kernel thread (or, starting from project 2, the running user process) has caused it. Thus, because it is related to a thread (or process), an internal interrupt is said to happen in a "process context."

In an internal interrupt, it can make sense to examine the struct intr_frame passed to the interrupt handler, or even to modify it. When the interrupt returns, modified members in struct intr_frame become changes to the thread's registers. We'll use this in project 2 to return values from system call handlers.

There are no special restrictions on what an internal interrupt handler can or can't do. Generally they should run with interrupts enabled, just like other code, and so they can be preempted by other kernel threads. Thus, they do need to synchronize with other threads on shared data and other resources (see section 2.2.2 Synchronization).

Function: void intr_register_int (uint8_t vec, int dpl, enum intr_level level, intr_handler_func *handler, const char *name)
Registers handler to be called when internal interrupt numbered vec is triggered. The interrupt is named name for debugging purposes.

If level is INTR_OFF then handling of further interrupts will be disabled while the interrupt is being processed. Interrupts should normally be turned on during the handling of an internal interrupt.

dpl determines how the interrupt can be invoked. If dpl is 0, then the interrupt can be invoked only by kernel threads. Otherwise dpl should be 3, which allows user processes to invoke the interrupt as well (this is useful only starting with project 2).

External Interrupt Handling

Whereas an internal interrupt runs in the context of the thread that caused it, external interrupts do not have any predictable context. They are asynchronous, so they can be invoked at any time that interrupts have not been disabled. We say that an external interrupt runs in an "interrupt context."

In an external interrupt, the struct intr_frame passed to the handler is not very meaningful. It describes the state of the thread or process that was interrupted, but there is no way to predict which one that is. It is possible, although rarely useful, to examine it, but modifying it is a recipe for disaster.

The activities of an external interrupt handler are severely restricted. First, only one external interrupt may be processed at a time, that is, nested external interrupt handling is not supported. This means that external interrupts must be processed with interrupts disabled (see section Disabling Interrupts) and that interrupts may not be enabled at any point during their execution.

Second, an interrupt handler must not call any function that can sleep, which rules out thread_yield(), lock_acquire(), and many others. This is because external interrupts use space on the stack of the kernel thread that was running at the time the interrupt occurred. If the interrupt handler tried to sleep and that thread resumed, then the two uses of the single stack would interfere, which cannot be allowed.

Because an external interrupt runs with interrupts disabled, it effectively monopolizes the machine and delays all other activities. Therefore, external interrupt handlers should complete as quickly as they can. Any activities that require much CPU time should instead run in a kernel thread, possibly a thread whose activity is triggered by the interrupt using some synchronization primitive.

External interrupts are also special because they are controlled by a pair of devices outside the CPU called programmable interrupt controllers, PICs for short. When intr_init() sets up the CPU's IDT, it also initializes the PICs for interrupt handling. The PICs also must be "acknowledged" at the end of processing for each external interrupt. intr_handler() takes care of that by calling pic_end_of_interrupt(), which sends the proper signals to the right PIC.

The following additional functions are related to external interrupts.

Function: void intr_register_ext (uint8_t vec, intr_handler_func *handler, const char *name)
Registers handler to be called when external interrupt numbered vec is triggered. The interrupt is named name for debugging purposes. The handler will run with interrupts disabled.

Function: bool intr_context (void)
Returns true if we are running in an interrupt context, otherwise false. Mainly used at the beginning of functions that might sleep or that otherwise should not be called from interrupt context, in this form:
ASSERT (!intr_context ());

Function: void intr_yield_on_return (void)
When called in an interrupt context, causes thread_yield() to be called just before the interrupt returns. This is used, for example, in the timer interrupt handler to cause a new thread to be scheduled when a thread's time slice expires.

2.2.4 Memory Allocation

Pintos contains two memory allocators, one that allocates memory in units of a page, and one that can allocate blocks of any size.

Page Allocator

The page allocator declared in threads/palloc.h allocates memory in units of a page. It is most often used to allocate memory one page at a time, but it can also allocate multiple contiguous pages at once.

The page allocator divides the memory it allocates into two pools, called the kernel and user pools. By default, each pool gets half of system memory, but this can be changed with a kernel command line option (see Why PAL_USER?). An allocation request draws from one pool or the other. If one pool becomes empty, the other may still have free pages. The user pool should be used for allocating memory for user processes and the kernel pool for all other allocations. This will only become important starting with project 3. Until then, all allocations should be made from the kernel pool.

Each pool's usage is tracked with a bitmap, one bit per page in the pool. A request to allocate n pages scans the bitmap for n consecutive bits set to false, indicating that those pages are free, and then sets those bits to true to mark them as used. This is a "first fit" allocation strategy.

The page allocator is subject to fragmentation. That is, it may not be possible to allocate n contiguous pages even though n or more pages are free, because the free pages are separated by used pages. In fact, in pathological cases it may be impossible to allocate 2 contiguous pages even though half of the pool's pages are free! Single-page requests can't fail due to fragmentation, so it is best to limit, as much as possible, the need for multiple contiguous pages.
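The first-fit bitmap scan described above, and the fragmentation hazard it creates, can be illustrated with a standalone sketch. This is only an illustration of the strategy, not the actual code in threads/palloc.c, which works through Pintos's bitmap abstraction:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* First-fit sketch: find CNT consecutive free (false) bits in a
   usage bitmap covering BIT_CNT pages, mark them used (true), and
   return the index of the first, or (size_t) -1 on failure. */
static size_t
scan_and_flip (bool *used, size_t bit_cnt, size_t cnt)
{
  for (size_t start = 0; start + cnt <= bit_cnt; start++)
    {
      size_t i;
      for (i = 0; i < cnt && !used[start + i]; i++)
        continue;
      if (i == cnt)
        {
          for (i = 0; i < cnt; i++)
            used[start + i] = true;   /* Mark the run as allocated. */
          return start;
        }
    }
  return (size_t) -1;                 /* No run of CNT free pages. */
}
```

With an alternating pattern of used and free pages, half the pages are free yet no two-page run exists, which is exactly the pathological case described above; a single-page request still succeeds.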

Pages may not be allocated from interrupt context, but they may be freed.

When a page is freed, all of its bytes are cleared to 0xcc, as a debugging aid (see section E.8 Tips).

Page allocator types and functions are described below.

Type: enum palloc_flags
A set of flags that describe how to allocate pages. These flags may be combined in any combination.

Page Allocator Flag: PAL_ASSERT
If the pages cannot be allocated, panic the kernel. This is only appropriate during kernel initialization. User processes should never be permitted to panic the kernel.

Page Allocator Flag: PAL_ZERO
Zero all the bytes in the allocated pages before returning them. If not set, the contents of newly allocated pages are unpredictable.

Page Allocator Flag: PAL_USER
Obtain the pages from the user pool. If not set, pages are allocated from the kernel pool.

Function: void *palloc_get_page (enum palloc_flags flags)
Obtains and returns a single page, allocating it in the manner specified by flags. Returns a null pointer if no pages are free.

Function: void *palloc_get_multiple (enum palloc_flags flags, size_t page_cnt)
Obtains page_cnt contiguous free pages, allocating them in the manner specified by flags, and returns them. Returns a null pointer if no pages are free.

Function: void palloc_free_page (void *page)
Frees page, which must have been obtained using palloc_get_page() or palloc_get_multiple().

Function: void palloc_free_multiple (void *pages, size_t page_cnt)
Frees the page_cnt contiguous pages starting at pages. All of the pages must have been obtained using palloc_get_page() or palloc_get_multiple().

Block Allocator

The block allocator, declared in threads/malloc.h, can allocate blocks of any size. It is layered on top of the page allocator described in the previous section. Blocks returned by the block allocator are obtained from the kernel pool.

The block allocator uses two different strategies for allocating memory. The first of these applies to "small" blocks, those 1 kB or smaller (one fourth of the page size). These allocations are rounded up to the nearest power of 2, or 16 bytes, whichever is larger. Then they are grouped into a page used only for allocations of the same size.

The second strategy applies to allocating "large" blocks, those larger than 1 kB. These allocations (plus a small amount of overhead) are rounded up to the nearest page in size, and then the block allocator requests that number of contiguous pages from the page allocator.
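The two rounding rules can be sketched in a few lines. PGSIZE and the 8-byte overhead below are illustrative assumptions only; the real per-block overhead in threads/malloc.c is a descriptor header whose exact size is not stated here:

```c
#include <assert.h>
#include <stddef.h>

#define PGSIZE 4096              /* x86 page size. */
#define OVERHEAD 8               /* Hypothetical per-block overhead. */

/* Small blocks (1 kB or less): round up to the nearest power of 2,
   with a 16-byte minimum. */
static size_t
small_block_size (size_t size)
{
  size_t block = 16;
  while (block < size)
    block *= 2;
  return block;
}

/* Large blocks (over 1 kB): round size plus overhead up to a whole
   number of pages, which are then requested from the page allocator. */
static size_t
large_block_pages (size_t size)
{
  return (size + OVERHEAD + PGSIZE - 1) / PGSIZE;
}
```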

In either case, the difference between the requested allocation size and the actual block size is wasted. A real operating system would carefully tune its allocator to minimize this waste, but this is unimportant in an instructional system like Pintos.

As long as a page can be obtained from the page allocator, small allocations always succeed. Most small allocations will not require a new page from the page allocator at all. However, large allocations always require calling into the page allocator, and any allocation that needs more than one contiguous page can fail due to fragmentation, as already discussed in the previous section. Thus, you should minimize the number of large allocations in your code, especially those over approximately 4 kB each.

The interface to the block allocator is through the standard C library functions malloc(), calloc(), and free().

When a block is freed, all of its bytes are cleared to 0xcc, as a debugging aid (see section E.8 Tips).

The block allocator may not be called from interrupt context.

2.3 User Programs Project

The tour for this project has not yet been written.

2.4 Virtual Memory Project

The tour for this project is under construction.

2.4.1 Hash Table

Most implementations of the virtual memory project use a hash table to translate virtual page frames to physical page frames. It is possible to do this translation without adding a new data structure, by modifying the code in userprog/pagedir.c. However, if you do that you'll need to carefully study and understand section 3.7 in [ IA32-v3], and in practice it is probably easier to add a new data structure. You may find other uses for hash tables as well.

Pintos provides a hash table data structure in lib/kernel/hash.c. To use it you will need to manually include its header file, lib/kernel/hash.h, with #include <hash.h>. Intentionally, no code provided with Pintos uses the hash table, which means that you are free to use it as is, modify its implementation for your own purposes, or ignore it, as you wish.

Data Types

A hash table is represented by struct hash.

Type: struct hash
Represents an entire hash table. The actual members of struct hash are "opaque." That is, code that uses a hash table should not access struct hash members directly, nor should it need to. Instead, use hash table functions and macros.

The hash table operates on elements of type struct hash_elem.

Type: struct hash_elem
Embed a struct hash_elem member in the structure you want to include in a hash table. Like struct hash, struct hash_elem is opaque. All functions for operating on hash table elements actually take and return pointers to struct hash_elem, not pointers to your hash table's real element type.

You will often need to obtain a struct hash_elem given a real element of the hash table, and vice versa. Given a real element of the hash table, obtaining a pointer to its struct hash_elem is trivial: take the address of the struct hash_elem member. Use the hash_entry() macro to go the other direction.

Macro: type *hash_entry (struct hash_elem *elem, type, member)
Returns a pointer to the structure that elem, a pointer to a struct hash_elem, is embedded within. You must provide type, the name of the structure that elem is inside, and member, the name of the member in type that elem points to.

For example, suppose h is a struct hash_elem * variable that points to a struct thread member (of type struct hash_elem) named h_elem. Then, hash_entry (h, struct thread, h_elem) yields the address of the struct thread within which h points.
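The trick behind hash_entry() is worth seeing in isolation. The following standalone sketch (not the actual macro from lib/kernel/hash.h, and with hypothetical type names) recovers the enclosing structure by subtracting the member's offset, computed with offsetof, from the member's address:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

struct elem { int dummy; };          /* Stand-in for struct hash_elem. */

/* Given ELEM, a pointer to the member named MEMBER, recover a
   pointer to the enclosing structure of type STRUCT. */
#define ENTRY(ELEM, STRUCT, MEMBER) \
  ((STRUCT *) ((uint8_t *) (ELEM) - offsetof (STRUCT, MEMBER)))

/* Hypothetical container structure with an embedded element. */
struct thread_sketch
  {
    int tid;
    struct elem h_elem;
  };
```

Given only a pointer to h_elem, ENTRY walks back to the start of the containing struct thread_sketch, which is exactly how all of the hash table functions hand your real element type back to you.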

Each hash table element must contain a key, that is, data that identifies and distinguishes elements in the hash table. Every element in a hash table at a given time must have a unique key. (Elements may also contain non-key data that need not be unique.) While an element is in a hash table, its key data must not be changed. For each hash table, you must write two functions that act on keys: a hash function and a comparison function. These functions must match the following prototypes:

Type: unsigned hash_hash_func (const struct hash_elem *element, void *aux)
Returns a hash of element's data, as a value anywhere in the range of unsigned int. The hash of an element should be a pseudo-random function of the element's key. It must not depend on non-key data in the element or on any non-constant data other than the key. Pintos provides the following functions as a suitable basis for hash functions.

Function: unsigned hash_bytes (const void *buf, size_t size)
Returns a hash of the size bytes starting at buf. The implementation is the general-purpose Fowler-Noll-Vo hash for 32-bit words.

Function: unsigned hash_string (const char *s)
Returns a hash of null-terminated string s.

Function: unsigned hash_int (int i)
Returns a hash of integer i.

If your key is a single piece of data of an appropriate type, it is sensible for your hash function to directly return the output of one of these functions. For multiple pieces of data, you may wish to combine the output of more than one call to them using, e.g., the ^ operator. Finally, you may entirely ignore these functions and write your own hash function from scratch, but remember that your goal is to build an operating system kernel, not to design a hash function.

See section Auxiliary Data, for an explanation of aux.
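To make the Fowler-Noll-Vo basis of hash_bytes() concrete, here is a standalone 32-bit FNV-1 sketch using the standard published constants; the actual code in lib/kernel/hash.c may differ in detail:

```c
#include <assert.h>
#include <stddef.h>

/* Standard published 32-bit FNV constants. */
#define FNV_32_PRIME 16777619u
#define FNV_32_BASIS 2166136261u

/* FNV-1 over the SIZE bytes at BUF: for each byte, multiply the
   running hash by the prime, then XOR in the byte. */
static unsigned
fnv1_hash (const void *buf_, size_t size)
{
  const unsigned char *buf = buf_;
  unsigned hash = FNV_32_BASIS;
  while (size-- > 0)
    hash = (hash * FNV_32_PRIME) ^ *buf++;
  return hash;
}
```

A hash function over two key members a and b could then combine their hashes as fnv1_hash (&a, sizeof a) ^ fnv1_hash (&b, sizeof b), the ^ idiom mentioned below.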

Type: bool hash_less_func (const struct hash_elem *a, const struct hash_elem *b, void *aux)
Compares the keys stored in elements a and b. Returns true if a is less than b, false if a is greater than or equal to b.

If two elements compare equal, then they must hash to equal values.

See section Auxiliary Data, for an explanation of aux.

A few functions that act on hashes accept a pointer to a third kind of function as an argument:

Type: void hash_action_func (struct hash_elem *element, void *aux)
Performs some kind of action, chosen by the caller, on element.

See section Auxiliary Data, for an explanation of aux.

Basic Functions

These functions create and destroy hash tables and obtain basic information about their contents.

Function: bool hash_init (struct hash *hash, hash_hash_func *hash_func, hash_less_func *less_func, void *aux)
Initializes hash as a hash table using hash_func as hash function, less_func as comparison function, and aux as auxiliary data. Returns true if successful, false on failure. hash_init() calls malloc() and fails if memory cannot be allocated.

See section Auxiliary Data, for an explanation of aux, which is most often a null pointer.

Function: void hash_clear (struct hash *hash, hash_action_func *action)
Removes all the elements from hash, which must have been previously initialized with hash_init().

If action is non-null, then it is called once for each element in the hash table, which gives the caller an opportunity to deallocate any memory or other resources used by the element. For example, if the hash table elements are dynamically allocated using malloc(), then action could free() the element. This is safe because hash_clear() will not access the memory in a given hash element after calling action on it. However, action must not call any function that may modify the hash table, such as hash_insert() or hash_delete().

Function: void hash_destroy (struct hash *hash, hash_action_func *action)
If action is non-null, calls it for each element in the hash, with the same semantics as a call to hash_clear(). Then, frees the memory held by hash. Afterward, hash must not be passed to any hash table function, absent an intervening call to hash_init().

Function: size_t hash_size (struct hash *hash)
Returns the number of elements currently stored in hash.

Function: bool hash_empty (struct hash *hash)
Returns true if hash currently contains no elements, false if hash contains at least one element.

Search Functions

Each of these functions searches a hash table for an element that compares equal to one provided. Based on the success of the search, they perform some action, such as inserting a new element into the hash table, or simply return the result of the search.

Function: struct hash_elem *hash_insert (struct hash *hash, struct hash_elem *element)
Searches hash for an element equal to element. If none is found, inserts element into hash and returns a null pointer. If the table already contains an element equal to element, returns the existing element without modifying hash.

Function: struct hash_elem *hash_replace (struct hash *hash, struct hash_elem *element)
Inserts element into hash. Any element equal to element already in hash is removed. Returns the element removed, or a null pointer if hash did not contain an element equal to element.

The caller is responsible for deallocating any resources associated with the element returned, as appropriate. For example, if the hash table elements are dynamically allocated using malloc(), then the caller must free() the element after it is no longer needed.

The element passed to the following functions is only used for hashing and comparison purposes. It is never actually inserted into the hash table. Thus, only the key data in the element need be initialized, and other data in the element will not be used. It often makes sense to declare an instance of the element type as a local variable, initialize the key data, and then pass the address of its struct hash_elem to hash_find() or hash_delete(). See section Hash Table Example, for an example. (Large structures should not be allocated as local variables. See section struct thread, for more information.)

Function: struct hash_elem *hash_find (struct hash *hash, struct hash_elem *element)
Searches hash for an element equal to element. Returns the element found, if any, or a null pointer otherwise.

Function: struct hash_elem *hash_delete (struct hash *hash, struct hash_elem *element)
Searches hash for an element equal to element. If one is found, it is removed from hash and returned. Otherwise, a null pointer is returned and hash is unchanged.

The caller is responsible for deallocating any resources associated with the element returned, as appropriate. For example, if the hash table elements are dynamically allocated using malloc(), then the caller must free() the element after it is no longer needed.

Iteration Functions

These functions allow iterating through the elements in a hash table. Two interfaces are supplied. The first requires writing and supplying a hash_action_func to act on each element (see section Data Types).

Function: void hash_apply (struct hash *hash, hash_action_func *action)
Calls action once for each element in hash, in arbitrary order. action must not call any function that may modify the hash table, such as hash_insert() or hash_delete(). action must not modify key data in elements, although it may modify any other data.

The second interface is based on an "iterator" data type. Idiomatically, iterators are used as follows:

struct hash_iterator i;

hash_first (&i, h);
while (hash_next (&i))
  {
    struct foo *f = hash_entry (hash_cur (&i), struct foo, elem);
    ...do something with f...
  }

Type: struct hash_iterator
Represents a position within a hash table. Calling any function that may modify a hash table, such as hash_insert() or hash_delete(), invalidates all iterators within that hash table.

Like struct hash and struct hash_elem, struct hash_iterator is opaque.

Function: void hash_first (struct hash_iterator *iterator, struct hash *hash)
Initializes iterator to just before the first element in hash.

Function: struct hash_elem *hash_next (struct hash_iterator *iterator)
Advances iterator to the next element in hash, and returns that element. Returns a null pointer if no elements remain. After hash_next() returns null for iterator, calling it again yields undefined behavior.

Function: struct hash_elem *hash_cur (struct hash_iterator *iterator)
Returns the value most recently returned by hash_next() for iterator. Yields undefined behavior after hash_first() has been called on iterator but before hash_next() has been called for the first time.

Hash Table Example

Suppose you have a structure, called struct page, that you want to put into a hash table. First, define struct page to include a struct hash_elem member:

struct page
  {
    struct hash_elem hash_elem; /* Hash table element. */
    void *addr;                 /* Virtual address. */
    /* ...other members... */
  };

We write a hash function and a comparison function using addr as the key. A pointer can be hashed based on its bytes, and the < operator works fine for comparing pointers:

/* Returns a hash value for page p. */
unsigned
page_hash (const struct hash_elem *p_, void *aux UNUSED)
{
  const struct page *p = hash_entry (p_, struct page, hash_elem);
  return hash_bytes (&p->addr, sizeof p->addr);
}

/* Returns true if page a precedes page b. */
bool
page_less (const struct hash_elem *a_, const struct hash_elem *b_,
           void *aux UNUSED)
{
  const struct page *a = hash_entry (a_, struct page, hash_elem);
  const struct page *b = hash_entry (b_, struct page, hash_elem);
  return a->addr < b->addr;
}

(The use of UNUSED in these functions' prototypes suppresses a warning that aux is unused. See section E.3 Function and Parameter Attributes, for information about UNUSED. See section Auxiliary Data, for an explanation of aux.)

Then, we can create a hash table like this:

struct hash pages;

hash_init (&pages, page_hash, page_less, NULL);

Now we can manipulate the hash table we've created. If p is a pointer to a struct page, we can insert it into the hash table with:

hash_insert (&pages, &p->hash_elem);

If there's a chance that pages might already contain a page with the same addr, then we should check hash_insert()'s return value.

To search for an element in the hash table, use hash_find(). This takes a little setup, because hash_find() takes an element to compare against. Here's a function that will find and return a page based on a virtual address, assuming that pages is defined at file scope:

/* Returns the page containing the given virtual address,
   or a null pointer if no such page exists. */
struct page *
page_lookup (const void *address)
{
  struct page p;
  struct hash_elem *e;

  p.addr = (void *) address;
  e = hash_find (&pages, &p.hash_elem);
  return e != NULL ? hash_entry (e, struct page, hash_elem) : NULL;
}

struct page is allocated as a local variable here on the assumption that it is fairly small. Large structures should not be allocated as local variables. See section struct thread, for more information.

A similar function could delete a page by address using hash_delete().

Auxiliary Data

In simple cases like the example above, there's no need for the aux parameters. In these cases, just pass a null pointer to hash_init() for aux and ignore the values passed to the hash function and comparison functions. (You'll get a compiler warning if you don't use the aux parameter, but you can turn that off with the UNUSED macro, as shown in the example, or you can just ignore it.)

aux is useful when you have some property of the data in the hash table that's both constant and needed for hashing or comparisons, but which is not stored in the data items themselves. For example, if the items in a hash table contain fixed-length strings, but the items themselves don't indicate what that fixed length is, you could pass the length as an aux parameter.

Synchronization

The hash table does not do any internal synchronization. It is the caller's responsibility to synchronize calls to hash table functions. In general, any number of functions that examine but do not modify the hash table, such as hash_find() or hash_next(), may execute simultaneously. However, these functions cannot safely execute at the same time as any function that may modify a given hash table, such as hash_insert() or hash_delete(), nor may more than one function that can modify a given hash table execute safely at once.

It is also the caller's responsibility to synchronize access to data in hash table elements. How to synchronize access to this data depends on how it is designed and organized, as with any other data structure.

2.5 File Systems Project

The tour for this project has not yet been written.


This document was generated by CS3204 on February 2, 2006 using texi2html