      Word addressing


      In computer architecture, word addressing means that addresses of memory on a computer uniquely identify words of memory. It is usually used in contrast with byte addressing, where addresses uniquely identify bytes. Almost all modern computer architectures use byte addressing, and word addressing is largely only of historical interest. A computer that uses word addressing is sometimes called a word machine.


      Basics


      Consider a computer which provides 524,288 (2^19) bits of memory. If that memory is arranged in a byte-addressable flat address space using 8-bit bytes, then there are 65,536 (2^16) valid addresses, from 0 to 65,535, each denoting an independent 8 bits of memory. If instead it is arranged in a word-addressable flat address space using 32-bit words, then there are 16,384 (2^14) valid addresses, from 0 to 16,383, each denoting an independent 32 bits.
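      The general relationship can be sketched in C as a single division (the helper name is illustrative):

      #include <stdint.h>

      /* Number of valid addresses = total memory size / minimum addressable unit.
         For the example above: 524288/8 = 65536 byte addresses, 524288/32 = 16384 word addresses. */
      static uint64_t address_count(uint64_t total_bits, uint64_t mau_bits)
      {
          return total_bits / mau_bits;
      }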
      More generally, the minimum addressable unit (MAU) is a property of a specific memory abstraction. Different abstractions within a computer may use different MAUs, even when they are representing the same underlying memory. For example, a computer might use 32-bit addresses with byte addressing in its instruction set, but the CPU's cache coherence system might work with memory only at a granularity of 64-byte cache lines, allowing any particular cache line to be identified with only a 26-bit address and decreasing the overhead of the cache.
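      As an illustrative sketch of that coarser granularity (a hypothetical helper, assuming 32-bit byte addresses and 64-byte cache lines), the coherence system only needs the upper bits of an address:

      #include <stdint.h>

      /* With 64-byte (2^6-byte) cache lines, only the upper 26 bits of a
         32-bit byte address are needed to identify a line. */
      static uint32_t cache_line_number(uint32_t byte_address)
      {
          return byte_address >> 6;
      }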
      The address translation done by virtual memory often affects the structure and width of the address space, but it does not change the MAU.


      Trade-offs of different minimum addressable units


      The size of the minimum addressable unit of memory involves complex trade-offs. Using a larger MAU allows the same amount of memory to be covered with a smaller address, which can substantially decrease the memory requirements of a program. However, using a smaller MAU makes it easier to work efficiently with small items of data.
      Suppose a program wishes to store one of the 12 traditional signs of Western astrology. A single sign can be stored in 4 bits. If a sign is stored in its own MAU, then 4 bits will be wasted with byte addressing (50% efficiency), while 28 bits will be wasted with 32-bit word addressing (12.5% efficiency). If a sign is "packed" into a MAU with other data, then it may be relatively more expensive to read and write. For example, to write a new sign into a MAU that other data has been packed into, the computer must read the current value of the MAU, overwrite just the appropriate bits, and then store the new value back. This will be especially expensive if it is necessary for the program to allow other threads to concurrently modify the other data in the MAU.
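      A minimal C sketch of such a read-modify-write, assuming a hypothetical layout in which eight 4-bit signs are packed into one 32-bit word, with sign i occupying bits 4i through 4i+3:

      #include <stdint.h>

      /* Extract the 4-bit sign in slot i of a packed 32-bit word. */
      static uint32_t get_sign(uint32_t word, unsigned i)
      {
          return (word >> (4 * i)) & 0xFu;
      }

      /* Store a new 4-bit sign into slot i: read, mask, merge, return. */
      static uint32_t set_sign(uint32_t word, unsigned i, uint32_t sign)
      {
          uint32_t mask = 0xFu << (4 * i);
          return (word & ~mask) | ((sign & 0xFu) << (4 * i));
      }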
      A more common example is a string of text. Common string formats such as UTF-8 and ASCII store strings as a sequence of 8-bit code points. With byte addressing, each code point can be placed in its own independently-addressable MAU with no overhead. With 32-bit word addressing, placing each code point in a separate MAU would increase the memory usage by 300%, which is not viable for programs that work with large amounts of text. Packing adjacent code points into a single word avoids this cost. However, many algorithms for working with text prefer to be able to independently address code points; to do this with packed code points, the algorithm must use a "wide" address which also stores the offset of the character within the word. If this wide address needs to be stored elsewhere within the program's memory, it may require more memory than an ordinary address.
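      A minimal C sketch of indexing such a packed string (hypothetical helper name, assuming four 8-bit code points per 32-bit word, with the first code point in the lowest-order bits); the word index and the offset within the word together form a wide address:

      #include <stddef.h>
      #include <stdint.h>

      /* Read code point i from a string packed four-per-word. The pair
         (i / 4, (i % 4) * 8) is effectively a wide address. */
      static uint8_t get_code_point(const uint32_t *words, size_t i)
      {
          size_t   word_index = i / 4;
          unsigned bit_offset = (unsigned)(i % 4) * 8;
          return (uint8_t)(words[word_index] >> bit_offset);
      }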
      To evaluate these effects on a complete program, consider a web browser displaying a large and complex page. Some of the browser's memory will be used to store simple data such as images and text; the browser will likely choose to store this data as efficiently as possible, and it will occupy about the same amount of memory regardless of the size of the MAU. Other memory will represent the browser's model of various objects on the page, and these objects will include many references: to each other, to the image and text data, and so on. The amount of memory needed to store these objects will depend greatly on the address width of the computer.
      Suppose that, if all the addresses in the program were 32-bit, this web page would occupy about 10 gigabytes of memory.

      If the web browser is running on a computer with 32-bit addresses and byte-addressable memory, the address space will cover 4 gigabytes of memory, which is insufficient. The browser will either be unable to display this page, or it will need to be able to opportunistically move some of the data to slower storage, which will substantially hurt its performance.
      If the web browser is running on a computer with 64-bit addresses and byte-addressable memory, it will require substantially more memory in order to store the larger addresses. The exact overhead will depend on how much of the 10 gigabytes is simple data and how much is object-like and dense with references, but a figure of 40% is not implausible, for a total of 14 gigabytes required. This is, of course, well within the capabilities of a 64-bit address space. However, the browser will generally exhibit worse locality and make worse use of the computer's memory caches, assuming equal resources with the alternatives.
      If the web browser is running on a computer with 32-bit addresses and 32-bit-word-addressable memory, it will likely require extra memory because of suboptimal packing and the need for a few wide addresses. This impact is likely to be relatively small, as the browser will use packing and non-wide addresses for most important purposes, and the browser will fit comfortably within the maximum addressable range of 16 gigabytes. However, there may be a significant runtime overhead due to the widespread use of packed data for images and text. More importantly, 16 gigabytes is a relatively low limit, and if the web page grows significantly, this computer will exhaust its address space and begin to have some of the same difficulties as the byte-addressed computer.
      If the web browser is running on a computer with 64-bit addresses and 32-bit-word-addressable memory, it will suffer from both of the above runtime overheads: it will require substantially more memory to accommodate the larger 64-bit addresses, hurting locality, while also incurring the runtime overhead of working with extensive packing of text and image data. Word addressing means that the program can theoretically address up to 64 exabytes of memory instead of only 16 exabytes, but since the program is nowhere near needing this much memory (and in practice no real computer is capable of providing it), this provides no benefit.
      Thus, word addressing allows a computer to address substantially more memory without increasing its address width and incurring the corresponding large increase in memory usage. However, this is valuable only within a relatively narrow range of working set sizes, and it can introduce substantial runtime overheads depending on the application. Programs which do relatively little work with byte-oriented data like images, text, files, and network traffic may be able to benefit most.


      Sub-word accesses and wide addresses


      A program running on a computer that uses word addressing can still work with smaller units of memory by emulating an access to the smaller unit. For a load, this requires loading the enclosing word and then extracting the desired bits. For a store, this requires loading the enclosing word, shifting the new value into place, overwriting the desired bits, and then storing the enclosing word.
      Suppose that four consecutive code points from a UTF-8 string need to be packed into a 32-bit word. The first code point might occupy bits 0–7, the second 8–15, the third 16–23, and the fourth 24–31. (If the memory were byte-addressable, this would be a little-endian byte order.)
      In order to illustrate the code necessary for sub-word accesses without tying the example too closely to any particular word-addressed architecture, the following examples use MIPS assembly. In reality, MIPS is a byte-addressed architecture with direct support for loading and storing 8-bit and 16-bit values, but the example will pretend that it only provides 32-bit loads and stores and that offsets within a 32-bit word must be stored separately from an address. MIPS has been chosen because it is a simple assembly language with no specialized facilities that would make these operations more convenient.
      Suppose that a program wishes to read the third code point into register r1 from the word at an address in register r2. In the absence of any other support from the instruction set, the program must load the full word, right-shift by 16 to drop the first two code points, and then mask off the fourth code point:

      lw $r1, 0($r2) # Load the full word
      srl $r1, $r1, 16 # Shift right by 16 to drop the first two code points
      andi $r1, $r1, 0xFF # Mask off the fourth code point, keeping only the third
      If the offset is not known statically, but instead a bit-offset is stored in the register r3, a slightly more complex approach is required:

      lw $r1, 0($r2) # Load the full word
      srlv $r1, $r1, $r3 # Shift right by the bit offset in r3
      andi $r1, $r1, 0xFF # Mask off the other code points
      Suppose instead that the program wishes to assign the code point in register r1 to the third code point in the word at the address in r2. In the absence of any other support from the instruction set, the program must load the full word, mask off the old value of that code point, shift the new value into place, merge the values, and store the full word back:

      sll $r1, $r1, 16 # Shift the new value left by 16, into bits 16-23
      lui $r5, 0x00FF # Construct the constant mask 0x00FF0000 selecting the third byte
      nor $r5, $r5, $zero # Flip the mask so that it clears the third byte
      lw $r4, 0($r2) # Load the full word
      and $r4, $r5, $r4 # Clear the third byte from the word
      or $r4, $r4, $r1 # Merge the new value into the word
      sw $r4, 0($r2) # Store the result as the full word
      Again, if the offset is instead stored in r3, a more complex approach is required:

      sllv $r1, $r1, $r3 # Shift the new value left by the bit offset in r3
      ori $r5, $zero, 0x00FF # Construct the constant mask 0x000000FF selecting a byte
      sllv $r5, $r5, $r3 # Shift the mask left by the bit offset
      nor $r5, $r5, $zero # Flip the mask so that it clears the selected byte
      lw $r4, 0($r2) # Load the full word
      and $r4, $r5, $r4 # Clear the selected byte from the word
      or $r4, $r4, $r1 # Merge the new value into the word
      sw $r4, 0($r2) # Store the result as the full word
      This code sequence assumes that another thread cannot modify other bytes in the word concurrently. If concurrent modification is possible, then one of the modifications might be lost. To solve this problem, the last few instructions must be turned into an atomic compare-exchange loop so that a concurrent modification will simply cause it to repeat the operation with the new value. No memory barriers are required in this case.
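      A C-level sketch of such a loop using C11 atomics (on MIPS the underlying primitive would be the LL/SC instruction pair); the helper name and packing layout are illustrative, and the relaxed memory order reflects that no barriers are needed:

      #include <stdatomic.h>
      #include <stdint.h>

      /* Atomically replace the byte at bit_offset within *word, retrying if
         another thread changes the word between the load and the store. */
      static void store_packed_byte(_Atomic uint32_t *word,
                                    unsigned bit_offset, uint8_t new_byte)
      {
          uint32_t old = atomic_load_explicit(word, memory_order_relaxed);
          uint32_t mask = 0xFFu << bit_offset;
          uint32_t desired;
          do {
              desired = (old & ~mask) | ((uint32_t)new_byte << bit_offset);
          } while (!atomic_compare_exchange_weak_explicit(
                       word, &old, desired,
                       memory_order_relaxed, memory_order_relaxed));
      }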
      A pair of a word address and an offset within the word is called a wide address (also known as a fat address or fat pointer). (This should not be confused with other uses of wide addresses for storing other kinds of supplemental data, such as the bounds of an array.) The stored offset may be either a bit offset or a byte offset. The code sequences above benefit from the offset being denominated in bits because they use it as a shift count; an architecture with direct support for selecting bytes might prefer to just store a byte offset.
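      In C, a wide address might be represented roughly as follows (the field names and widths are illustrative only):

      #include <stdint.h>

      /* A wide (fat) address: the address of the containing word plus the
         offset of the byte within it, stored here as a bit offset so it can
         be used directly as a shift count. */
      struct wide_address {
          uint32_t word_address;
          uint8_t  bit_offset;    /* 0, 8, 16, or 24 */
      };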
      In the MIPS code sequences above, the additional offset would have to be stored alongside the base address, effectively doubling the overall storage requirements of an address. This is not always true on word machines, primarily because addresses themselves are often not packed with other data, in order to make accesses to them more efficient. For example, the Cray X1 uses 64-bit words, but addresses are only 32 bits; when an address is stored in memory, it is stored in its own word, and so the byte offset can be placed in the upper 32 bits of the word. On that system, the only cost of using wide addresses is the extra logic needed to manipulate the offset and to extract and insert bytes within words; there is no memory-use impact.


      Related concepts


      The minimum addressable unit of a computer isn't necessarily the same as the minimum memory access size of the computer's instruction set. For example, a computer might use byte addressing without providing any instructions to directly read or write a single byte. Programs would be expected to emulate those operations in software with bit manipulations, just as the example code sequences above do. This is relatively common in 64-bit computer architectures designed as successors to 32-bit supercomputers or minicomputers, such as the DEC Alpha and the Cray X1.
      The C standard states that a pointer is expected to have the usual representation of an address. C also allows a pointer to be formed to any object except a bit-field; this includes each individual element of an array of bytes. C compilers for computers that use word addressing often use different representations for pointers to different types depending on their size. A pointer to a type that's large enough to fill a word will be a simple address, while a pointer such as char* or void* will be a wide pointer: a pair of the address of a word and the offset of a byte within that word. Converting between pointer types is therefore not necessarily a trivial operation and can lose information if done incorrectly.
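      As a rough sketch of such a scheme (the type and helper names are purely illustrative and do not describe any particular compiler), an int * might lower to a bare word address while a char * lowers to a word address paired with a byte offset, so that narrowing a char * back to an int * silently discards the offset:

      #include <stdint.h>

      typedef uint32_t word_ptr;                                   /* e.g. int *  */
      typedef struct { uint32_t word; uint32_t byte; } byte_ptr;   /* e.g. char * */

      /* (char *)(int *)p: the byte offset starts at zero. */
      static byte_ptr widen(word_ptr p)
      {
          byte_ptr q = { p, 0 };
          return q;
      }

      /* (int *)(char *)q: the byte offset is discarded, losing information
         whenever it was nonzero. */
      static word_ptr narrow(byte_ptr q)
      {
          return q.word;
      }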
      Because the size of a C struct is not always known at the point where the representation of a pointer to that struct must be decided, it is not possible to reliably apply the rule above. Compilers may need to align the start of a struct so that pointers to it can use the more efficient representation.


      Examples


      The ERA 1103 uses word addressing with 36-bit words. Only addresses 0-1023 refer to random-access memory; others are either unmapped or refer to drum memory.
      The PDP-10 uses word addressing with 36-bit words and 18-bit addresses.
      Most Cray supercomputers from the 1980s and 1990s use word addressing with 64-bit words. The Cray-1 and Cray X-MP use 24-bit addresses, while most others use 32-bit addresses.
      The Cray X1 uses byte addressing with 64-bit addresses. It does not directly support memory accesses smaller than 64 bits, and such accesses must be emulated in software. The C compiler for the X1 was the first Cray compiler to support emulating 16-bit accesses.
      The DEC Alpha uses byte addressing with 64-bit addresses. Early Alpha processors do not provide any direct support for 8-bit and 16-bit memory accesses, and programs must, for example, load a byte by loading the containing 64-bit word and then separately extracting the byte. Because the Alpha uses byte addressing, this offset is still represented in the least significant bits of the address (rather than separately as a wide address), and the Alpha conveniently provides load and store unaligned instructions (ldq_u and stq_u) which ignore those bits and simply load and store the containing aligned word. The later byte-word extensions to the architecture (BWX) added 8-bit and 16-bit loads and stores, starting with the Alpha 21164A. Again, this extension was possible without serious software incompatibilities because the Alpha had always used byte addressing.


      See also


      Byte addressing

