Memory-mapped I/O
From Wikipedia, the free encyclopedia
For more generic meanings of input/output port, see Computer port (hardware).
Memory-mapped I/O (MMIO) and port I/O (also called port-mapped I/O or PMIO) are two complementary methods of performing input/output between the CPU and I/O devices in a computer. Another method is using dedicated I/O processors (channels, used in IBM mainframe computers).
Memory-mapped I/O (not to be confused with memory-mapped file I/O) uses the same bus to address both memory and I/O devices, and the CPU instructions used to read and write to memory are also used in accessing I/O devices. In order to accommodate the I/O devices, areas of CPU addressable space must be reserved for I/O rather than memory. This does not have to be permanent, for example the Commodore 64 could bank switch between its I/O devices and regular memory. The I/O devices monitor the CPU's address bus and respond to any CPU access of their assigned address space, mapping the address to their hardware registers.
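The behaviour described above can be sketched in simulation. The sketch below models a 64 KiB address space in which one hypothetical device register is mapped at an assumed address ($8000 here, purely for illustration): the same write operation reaches RAM or the device depending only on the address.

```python
# A minimal sketch of memory-mapped I/O, simulated in Python.
# The device and its address ($8000) are hypothetical illustrations.

class Bus:
    """64 KiB address space: ordinary RAM plus one mapped device register."""
    DEVICE_ADDR = 0x8000  # hypothetical address of the device's data register

    def __init__(self):
        self.ram = bytearray(0x10000)
        self.device_output = []  # bytes "sent" to the device

    def write(self, addr, value):
        # The device monitors the bus and claims its assigned address;
        # every other address falls through to ordinary RAM.
        if addr == self.DEVICE_ADDR:
            self.device_output.append(value)
        else:
            self.ram[addr] = value

    def read(self, addr):
        return self.ram[addr]

bus = Bus()
bus.write(0x0200, 0x41)           # ordinary memory store
bus.write(Bus.DEVICE_ADDR, 0x42)  # same operation, but it reaches the device
```

Note that the CPU issues the identical write in both cases; only the address decoding decides where the value lands.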
Port-mapped I/O uses a special class of CPU instructions specifically for performing I/O. This is generally found on Intel microprocessors, specifically the IN and OUT instructions which can read and write a single byte to an I/O device. I/O devices have a separate address space from general memory, either accomplished by an extra "I/O" pin on the CPU's physical interface, or an entire bus dedicated to I/O.
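By contrast, the separate I/O address space of port-mapped I/O can be sketched as follows. The port number is hypothetical; the `out`/`inp` methods stand in for the dedicated IN and OUT instructions, and the point is that the port space is disjoint from memory, so no memory addresses are consumed by devices.

```python
# Sketch of port-mapped I/O: ports live in their own small address
# space, reached only by dedicated operations. The port number used
# below is a hypothetical illustration.

class PortIOCpu:
    def __init__(self):
        self.memory = bytearray(0x10000)  # full 64 KiB usable as memory
        self.ports = {}                   # separate I/O address space

    def out(self, port, value):
        """Analogue of the x86 OUT instruction."""
        self.ports[port] = value

    def inp(self, port):
        """Analogue of the x86 IN instruction ('in' is a Python keyword)."""
        return self.ports.get(port, 0xFF)  # floating bus reads as all ones

cpu = PortIOCpu()
cpu.out(0x60, 0x1C)  # write a byte to hypothetical port 0x60
```

Writing to port 0x60 leaves memory location 0x60 untouched, which is exactly the separation the extra "I/O" pin or bus provides in hardware.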
Relative merits of the two I/O methods
The main advantage of port-mapped I/O is on CPUs with limited addressing capability: because it separates I/O access from memory access, the full address space remains available for memory. It also makes it obvious to a person reading an assembly language listing when I/O is being performed, because the special instructions can be used only for that purpose.
The advantage of memory-mapped I/O is that, by discarding the extra complexity that port I/O brings, a CPU requires less internal logic and is thus cheaper, faster and easier to build; this follows the basic tenets of reduced instruction set computing. As 16-bit CPU architectures have become obsolete and been replaced by 32-bit and 64-bit architectures in general use, reserving space on the memory map for I/O devices is no longer a problem. The fact that regular memory instructions are used to address devices also means that all of the CPU's addressing modes are available for I/O as well as for memory.
Example
Consider a simple system built around an 8-bit microprocessor. Such a CPU might provide a 16-bit address bus, allowing it to address up to 64 kilobytes (65,536 bytes) of memory. On such a system, perhaps the first 32K of address space would be allotted to random access memory (RAM), a further 16K to read only memory (ROM) and the remainder to a variety of other devices such as timers, counters, video display chips, sound generating devices, and so forth. The hardware of the system is arranged so that devices on the address bus respond only to the particular addresses intended for them; all other addresses are ignored. This is the job of the address decoding circuitry, and it is this that establishes the memory map of the system. Sometimes the microprocessor's architecture imposes limitations on the specific configuration of the memory map; for instance, a MOS Technology 6502 needs ROM in at least the last 6 bytes of the memory map ($FFFA through $FFFF) to hold the NMI, Reset, and IRQ/BRK vectors read during the processor's cold start sequence. However, where those vectors point is entirely up to the computer's designer, just as long as they initially point into ROM. The processor can even fetch these vectors from RAM after cold start, provided the underlying computer supports bank switching.
Addresses may be decoded completely or incompletely. Complete decoding checks every line of the address bus, so an access to an unmapped region selects no device and leaves the data bus open (floating). Incomplete decoding uses simpler and often cheaper logic that examines only some address lines. Such simple decoding circuitry might allow a device to respond to several different addresses, effectively creating virtual copies of the device at different places in the memory map. All of these copies refer to the same real device, so there is no particular advantage in doing this, except to simplify the decoder. The decoding itself may be programmable, allowing the system to reconfigure its own memory map as required; this is commonly done (see bank switching).
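Incomplete decoding can be sketched concretely. Suppose, as an assumed illustration, that a 256-byte device's decoder examines only the top four address lines (A15 through A12): the device then answers everywhere in a 4 KiB range, so its registers are mirrored many times over.

```python
# Sketch of incomplete address decoding. The decoder below is a
# hypothetical illustration: it checks only address lines A15..A12,
# so a device nominally at $9000 is selected for all of $9000-$9FFF.

def selects_device(addr):
    # Full decoding would compare all 16 lines: addr == 0x9000 exactly.
    # Checking only the top 4 bits selects the whole $9xxx range.
    return (addr >> 12) == 0x9

# Every address in the range reaches the same real device, creating
# mirrored copies of its registers throughout the decoded region:
mirrors = [a for a in (0x9000, 0x9100, 0x9FFF) if selects_device(a)]
```

Comparing four bits instead of sixteen is exactly the hardware saving the text describes: fewer gates in the decoder, at the cost of wasted address space.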
Thus we might end up with a memory map like so:
Device                                    Address range   Size
RAM                                       $0000 - $7FFF   32 KiB
General purpose I/O                       $8000 - $80FF   256 bytes
Sound controller                          $9000 - $90FF   256 bytes
Video controller/text-mapped display RAM  $A000 - $A7FF   2048 bytes
ROM                                       $C000 - $FFFF   16 KiB
Note that this memory map contains gaps; that is also quite common. Addresses prefixed with $ are in hexadecimal notation.
For example, if the fourth register of the video controller sets the background colour of the screen, the CPU can set this colour by writing a value to memory location $A003 using its standard memory write instruction. Using the same method, glyphs can be displayed on a screen by writing character values into a special area of RAM within the video controller. Before cheap RAM made bit-mapped displays practical, this character cell method was a popular technique for computer video displays (see Text user interface).
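Both writes above can be sketched against the memory map in the table. The colour value and the text-RAM layout are hypothetical; the point is that one ordinary store routine serves for the colour register and the display RAM alike.

```python
# Sketch of the background-colour example, using the article's memory
# map: the video controller claims $A000-$A7FF. The colour code and
# the placement of text RAM within that range are hypothetical.

video_space = bytearray(0x800)  # $A000-$A7FF: registers + display RAM

def mem_write(addr, value):
    # Address decoding for the video controller; RAM, ROM and the
    # other devices from the memory map are omitted from this sketch.
    if 0xA000 <= addr <= 0xA7FF:
        video_space[addr - 0xA000] = value

mem_write(0xA003, 0x06)      # fourth register: set background colour
mem_write(0xA400, ord('A'))  # put a glyph in the text display RAM
```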