
Typical Driver Operations

There are five different kinds of operations that a device driver can support:

  - open and close
  - control (ioctl)
  - programmed kernel I/O
  - memory mapping
  - DMA I/O

The following topics present a conceptual overview of the relationship between the user process, the kernel, and the kernel-level device driver. The software architecture that supports these interactions is documented in detail in Part III, "Kernel-Level Drivers," especially Chapter 8, "Structure of a Kernel-Level Driver."


Overview of Device Open

Before a user process can use a kernel-controlled device, the process must open the device as a file. A high-level overview of this process, as it applies to a character device driver, is shown in Figure 3-1.

Figure 3-1: Overview of Device Open

The steps illustrated in Figure 3-1 are:

  1. The user process calls the open() kernel function, passing the name of a device special file (see "Device Special Files" and the open(2) reference page).

  2. The kernel notes the device major and minor numbers from the inode of the device special file (see "Device Representation"). The kernel uses the major device number to select the device driver, and calls the driver's open entry point, passing the minor number and other data.

  3. The device driver verifies that the device is operable, and prepares whatever is needed to operate it.

  4. The device driver returns a return code to the kernel, which returns either an error code or a file descriptor to the process.

It is up to the device driver whether the device can be used by only one process at a time, or by more than one process. If the device can support only one user and is already in use, the driver returns the EBUSY error code.
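
The user-process side of this interaction is an ordinary system call sequence. The following minimal sketch is illustrative only; the device path /dev/hypo is an assumption standing in for whatever device special file applies:

    /*
     * Sketch: a user process opens a device special file.  The path
     * /dev/hypo is hypothetical; EBUSY is what a driver returns when
     * it allows only one process to use the device at a time.
     */
    #include <fcntl.h>
    #include <errno.h>
    #include <stdio.h>
    #include <unistd.h>

    int
    main(void)
    {
        int fd = open("/dev/hypo", O_RDWR);

        if (fd < 0) {
            if (errno == EBUSY)
                fprintf(stderr, "device already in use\n");
            else
                perror("open /dev/hypo");
            return 1;
        }
        /* ... use the device through fd ... */
        close(fd);      /* terminate the connection to the device */
        return 0;
    }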

The open() interaction on a block device is similar, except that the operation is initiated from the filesystem code responding to a mount() request, rather than coming from a user process open() request (see the mount(1) reference page).

There is also a close() interaction so a process can terminate its connection to a device.


Overview of Device Control

After the user process has successfully opened a character device, it can request control operations. Figure 3-2 shows an overview of this operation.

Figure 3-2: Overview of Device Control

The steps illustrated in Figure 3-2 are:

  1. The user process calls the ioctl() kernel function, passing the file descriptor returned by open() and one or more other parameters (see the ioctl(2) reference page).

  2. The kernel uses the major device number to select the device driver, and calls the device driver, passing the minor device number, the request number, and an optional third parameter from ioctl().

  3. The device driver interprets the request number and the optional parameter, notes changes in its own data structures, and possibly issues commands to the device.

  4. The device driver returns an exit code to the kernel, and the kernel (then or later) redispatches the user process.

Block device drivers are not asked to provide a control interaction; the user process is not allowed to issue ioctl() for a block device.

The interpretation of ioctl request codes and parameters is entirely up to the device driver. For examples of the range of ioctl functions, review some of the reference pages in volume 7, such as termio(7), ei(7), and arp(7P).
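
For instance, the following user-level sketch uses two request codes defined by termio(7), TCGETA and TCSETA, to fetch and change serial line settings. The helper name and the device path passed to it are assumptions:

    /*
     * Sketch: ioctl() requests from termio(7).  TCGETA fetches the
     * current terminal modes; TCSETA applies modified ones.
     */
    #include <sys/termio.h>
    #include <fcntl.h>
    #include <unistd.h>

    int
    open_at_9600(const char *path)      /* hypothetical helper */
    {
        struct termio tio;
        int fd = open(path, O_RDWR);

        if (fd < 0)
            return -1;
        if (ioctl(fd, TCGETA, &tio) < 0) {   /* request code + pointer */
            close(fd);
            return -1;
        }
        tio.c_cflag = (tio.c_cflag & ~CBAUD) | B9600;  /* select 9600 baud */
        if (ioctl(fd, TCSETA, &tio) < 0) {
            close(fd);
            return -1;
        }
        return fd;
    }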


Overview of Programmed Kernel I/O

Figure 3-3 shows a high-level overview of data transfer for a character device driver that uses programmed I/O.

Figure 3-3: Overview of Programmed Kernel I/O

The steps illustrated in Figure 3-3 are:

  1. The user process invokes the read() kernel function for the file descriptor returned by open() (see the read(2) and write(2) reference pages).

  2. The kernel uses the major device number to select the device driver, and calls the device driver, passing the minor device number and other information.

  3. The device driver directs the device to operate by storing into its registers in physical memory.

  4. The device driver retrieves data from the device registers and uses a kernel function to store the data into the buffer in the address space of the user process.

  5. The device driver returns to the kernel, which (then or later) dispatches the user process.

The operation of write() is similar. A kernel-level driver that uses programmed I/O is conceptually simple, since it is basically a subroutine of the kernel.
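
A skeletal read entry point for such a driver might look like the following. This is only a sketch in the style of the SVR4 DDI/DKI used by IRIX kernel-level drivers; the register layout (struct hyporeg), the hypo_regs table, and the busy-wait loop are illustrative assumptions, not a real device interface:

    /*
     * Sketch: programmed-I/O read entry point, SVR4 DDI/DKI style.
     * The device register layout and the hypo_* names are hypothetical.
     */
    #include <sys/types.h>
    #include <sys/errno.h>
    #include <sys/uio.h>
    #include <sys/cred.h>
    #include <sys/ddi.h>

    struct hyporeg {                        /* hypothetical registers */
        volatile unsigned int status;
        volatile unsigned int data;
    };
    #define HYPO_READY 0x1

    extern struct hyporeg *hypo_regs[];     /* filled in at open time */

    int
    hyporead(dev_t dev, uio_t *uiop, cred_t *crp)
    {
        struct hyporeg *regs = hypo_regs[getminor(dev)];
        unsigned int word;

        while (uiop->uio_resid >= sizeof word) {
            while ((regs->status & HYPO_READY) == 0)
                ;                           /* spin: a real driver would sleep */
            word = regs->data;              /* PIO: load from device register */
            /* uiomove() copies the data into the user process buffer */
            if (uiomove((caddr_t)&word, sizeof word, UIO_READ, uiop))
                return EFAULT;
        }
        return 0;
    }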


Overview of Memory Mapping

It is possible to allow the user process to perform I/O directly, by mapping the physical addresses of device registers into the address space of the user process. Figure 3-4 shows a high-level overview of this interaction.

Figure 3-4: Overview of Memory Mapping

The steps illustrated in Figure 3-4 are:

  1. The user process calls the mmap() kernel function, passing the file descriptor returned by open() and various other parameters (see the mmap(2) reference page).

  2. The kernel uses the major device number to select the device driver, and calls the device driver, passing the minor device number and certain other parameters from mmap().

  3. The device driver validates the request and uses a kernel function to map the necessary range of physical addresses into the address space of the user process.

  4. The device driver returns an exit code to the kernel, and the kernel (then or later) redispatches the user process.

  5. The user process accesses data in device registers by accessing the virtual address returned to it from the mmap() call.

Memory mapping can be supported by either a character or a block device driver. When a block device driver supports it, the memory-mapping request comes from the filesystem when it responds to an mmap() call to map a file into memory. The filesystem calls the driver to map different pages of the file into memory, as required.

Memory mapping by a character device driver has the purpose of making device registers directly accessible to the process as memory addresses.

The Silicon Graphics device drivers for the VME and EISA buses support memory mapping. This enables user-level processes to perform PIO to devices on these buses, as described under "EISA Mapping Support" and "VME Mapping Support".

A memory-mapping character device driver is very simple; it needs to support only open(), mmap(), and close() interactions. Data throughput can be higher when PIO is performed in the user process, since the overhead of the read() and write() system calls is avoided.
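
The user-process side can be as brief as the following sketch; the device path, the one-page mapping length, and the register offsets are assumptions:

    /*
     * Sketch: mapping hypothetical device registers into user space
     * and performing user-level PIO.  Path, length, and register
     * offsets are assumptions.
     */
    #include <sys/mman.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int
    main(void)
    {
        int fd = open("/dev/hypo", O_RDWR);
        void *p;
        volatile unsigned int *regs;

        if (fd < 0) {
            perror("open");
            return 1;
        }
        p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        regs = (volatile unsigned int *)p;
        regs[1] = 0x1;                      /* store directly to a register */
        printf("status = %#x\n", regs[0]);  /* load directly from a register */
        munmap(p, 4096);
        close(fd);
        return 0;
    }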

It is possible to write a kernel-level driver that only maps memory and controls no device at all. Such drivers are called pseudo-device drivers. For examples of pseudo-device drivers, see the prf(7) and imon(7) reference pages.


Overview of DMA I/O

Block devices and block device drivers normally use DMA (see "Direct Memory Access"). With DMA, the driver can avoid the time-consuming process of transferring data between memory and device registers. Figure 3-5 shows a high-level overview of a DMA transfer.

Figure 3-5: Overview of DMA I/O

The steps illustrated in Figure 3-5 are:

  1. The user process invokes the read() kernel function for a normal file descriptor (not necessarily one open on a device special file). The filesystem (not shown) asks for a block of data.

  2. The kernel uses the major device number to select the device driver, and calls the device driver, passing the minor device number and other information.

  3. The device driver uses kernel functions to locate the filesystem buffer in physical memory, then programs the device with the target addresses by storing into its registers.

  4. The device driver returns to the kernel after telling it to block the user process from running.

  5. The device itself stores the data to the physical memory locations that represent the filesystem buffer. During this time the kernel may dispatch other processes.

  6. When the device presents a hardware interrupt, the kernel invokes the device driver. The driver notifies the kernel that the user process can now resume execution. The filesystem code moves the requested data into the user process buffer.

DMA is fundamentally asynchronous: there is no necessary timing relation between the operation of the device and the operation of the various user processes. A DMA device driver has a more complex structure because it must deal with factors such as queuing concurrent requests and mapping buffers between physical memory and process address space, discussed in the following paragraphs.

When a DMA driver permits multiple processes to open its device, it must be prepared to receive new requests from other processes while the device is still busy with an earlier operation. This implies that the driver must implement some method of queuing requests until they can be serviced in turn.
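
One minimal shape for such a queue is a FIFO list of pending requests, as in the following sketch. The request structure is hypothetical, and a real driver must also protect the list from its own interrupt handler, for example by raising the interrupt priority level while the list is manipulated:

    /*
     * Hypothetical FIFO of pending I/O requests for a busy DMA device.
     * A real driver must guard these manipulations against its own
     * interrupt handler.
     */
    #include <stddef.h>

    struct hypo_req {
        struct hypo_req *next;
        /* ... target addresses, byte count, direction ... */
    };

    static struct hypo_req *q_head, *q_tail;

    static void
    enqueue(struct hypo_req *r)           /* called for each new request */
    {
        r->next = NULL;
        if (q_tail)
            q_tail->next = r;
        else
            q_head = r;
        q_tail = r;
    }

    static struct hypo_req *
    dequeue(void)                 /* called when the device becomes free */
    {
        struct hypo_req *r = q_head;

        if (r && (q_head = r->next) == NULL)
            q_tail = NULL;
        return r;
    }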

The mapping between physical memory and process address space can be complicated. For example, the buffer can span multiple pages, and the pages need not be contiguous in physical memory. If the device supports scatter/gather, it can be programmed with the starting address and length of each page in the buffer. If it does not support scatter/gather, the device driver has to program a separate DMA operation for each page or part of a page, or else obtain a contiguous buffer in the kernel address space, do the I/O from that buffer, and copy the data between that buffer and the process buffer.
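
The following sketch shows the page-by-page bookkeeping for a scatter/gather device. The descriptor layout is hypothetical, and the page size constant and kernel-virtual-to-physical translation function are assumptions standing in for whatever the target kernel provides:

    /*
     * Sketch: building a scatter/gather list for a buffer that spans
     * discontiguous physical pages.  The descriptor layout, the page
     * size, and kvtophys() are assumptions.
     */
    #define HYPO_NBPP 4096                 /* page size (assumed) */

    struct hypo_sg {                       /* hypothetical descriptor */
        unsigned long phys_addr;
        unsigned long byte_count;
    };

    extern unsigned long kvtophys(void *); /* kernel virtual -> physical */

    int
    build_sg(void *kvaddr, unsigned long nbytes, struct hypo_sg *sg, int max)
    {
        int n = 0;

        while (nbytes > 0 && n < max) {
            /* bytes remaining in the current page */
            unsigned long off = (unsigned long)kvaddr & (HYPO_NBPP - 1);
            unsigned long len = HYPO_NBPP - off;

            if (len > nbytes)
                len = nbytes;
            sg[n].phys_addr = kvtophys(kvaddr); /* one entry per page piece */
            sg[n].byte_count = len;
            kvaddr = (char *)kvaddr + len;
            nbytes -= len;
            n++;
        }
        return n;                          /* number of descriptors built */
    }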

The reward for the extra complexity of DMA is the possibility of much higher performance. The device can store or read data from memory at its maximum rated speed, while other processes can execute in parallel.

