IBM hexadecimal floating-point
Hexadecimal floating point (now called HFP by IBM) is a format for encoding floating-point numbers first introduced on the IBM System/360 computers, and supported on subsequent machines based on that architecture, as well as machines which were intended to be application-compatible with System/360.
In comparison to IEEE 754 floating point, the HFP format has a longer significand and a shorter exponent. All HFP formats have 7 bits of exponent with a bias of 64. The normalized range of representable numbers is from 16⁻⁶⁵ to 16⁶³ (approximately 5.39761 × 10⁻⁷⁹ to 7.237005 × 10⁷⁵).
The number is represented by the following formula: (−1)^sign × 0.significand × 16^(exponent − 64).
Single-precision 32-bit
A single-precision HFP number (called "short" by IBM) is stored in a 32-bit word: 1 sign bit, a 7-bit biased exponent, and a 24-bit (six hexadecimal digit) fraction.
In this format the initial bit is not suppressed, and the radix (hexadecimal) point is set to the left of the significand (called the fraction in IBM documentation).
Since the base is 16, the exponent in this form is about twice as large as the equivalent in IEEE 754; to have a similar exponent range in binary, 9 exponent bits would be required.
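As a concrete illustration of the layout and formula above, the following Python sketch decodes a 32-bit HFP word held in an integer. It is a minimal sketch, not IBM's hardware algorithm, and the name hfp32_decode is chosen here purely for illustration.

    # Minimal decoder for the HFP "short" format, applying
    # (-1)^sign * 0.fraction * 16^(exponent - 64) as described above.
    def hfp32_decode(word: int) -> float:
        sign = -1.0 if (word >> 31) & 1 else 1.0
        exponent = ((word >> 24) & 0x7F) - 64   # 7-bit exponent, bias 64
        fraction = (word & 0xFFFFFF) / 16**6    # 24-bit fraction, radix point at its left
        return sign * fraction * 16.0 ** exponent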
Example
Consider encoding the value −118.625 as an HFP single-precision floating-point value.
The value is negative, so the sign bit is 1.
The value 118.625₁₀ in binary is 1110110.101₂. This value is normalized by moving the radix point left four bits (one hexadecimal digit) at a time until the leftmost digit is zero, yielding 0.01110110101₂. The remaining rightmost digits are padded with zeros, yielding a 24-bit fraction of .0111 0110 1010 0000 0000 0000₂.
Normalization moved the radix point two hexadecimal digits to the left, yielding a multiplier and exponent of 16⁺². A bias of +64 is added to the exponent (+2), yielding +66, which is 100 0010₂.
Combining the sign, exponent plus bias, and normalized fraction produces the encoding 1 100 0010 0111 0110 1010 0000 0000 0000₂, that is, C276A000₁₆.
In other words, the number represented is −0.76A000₁₆ × 16⁶⁶⁻⁶⁴ = −0.4633789… × 16⁺² = −118.625.
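Packing the three fields derived above and evaluating the formula confirms the encoding; this is a small Python check of the worked example, not production conversion code.

    # sign = 1, biased exponent = 66 (0x42), fraction = 0x76A000
    word = (1 << 31) | (66 << 24) | 0x76A000
    assert word == 0xC276A000                   # the encoded 32-bit pattern
    # Apply (-1)^sign * 0.fraction * 16^(exponent - 64):
    value = -(0x76A000 / 16**6) * 16 ** (66 - 64)
    print(value)                                # -118.625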
Largest representable number
The number represented is +0.FFFFFF₁₆ × 16¹²⁷⁻⁶⁴ = (1 − 16⁻⁶) × 16⁶³ ≈ +7.2370051 × 10⁷⁵.
Smallest positive normalized number
The number represented is +0.1₁₆ × 16⁰⁻⁶⁴ = 16⁻¹ × 16⁻⁶⁴ ≈ +5.397605 × 10⁻⁷⁹.
Zero
Zero (0.0) is represented in normalized form as all zero bits, which is arithmetically the value +0.0₁₆ × 16⁰⁻⁶⁴ = +0 × 16⁻⁶⁴ ≈ +0.000000 × 10⁻⁷⁹ = 0. Given a fraction of all-bits zero, any combination of positive or negative sign bit and a non-zero biased exponent will yield a value arithmetically equal to zero. However, the normalized form generated for zero by CPU hardware is all-bits zero. This is true for all three floating-point precision formats. Addition or subtraction with other exponent values can lose precision in the result.
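These boundary cases can be checked against the same formula; the short Python sketch below simply re-applies it (the helper repeats the decoder sketched earlier).

    def decode(w):                       # same formula as the sketch above
        frac = (w & 0xFFFFFF) / 16**6
        exp = ((w >> 24) & 0x7F) - 64
        return (-frac if w >> 31 else frac) * 16.0 ** exp

    largest  = (0x7F << 24) | 0xFFFFFF   # +0.FFFFFF_16 * 16^(127-64)
    smallest = (0x00 << 24) | 0x100000   # +0.1_16 * 16^(0-64)
    zero     = 0x00000000                # normalized zero: all bits zero
    for w in (largest, smallest, zero):
        print(hex(w), decode(w))         # approx. 7.237005e+75, 5.397605e-79, 0.0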
Precision issues
Since the base is 16, there can be up to three leading zero bits in the binary significand, so a normalized number can have as few as 21 bits of significand precision. Because of this "wobbling precision" effect, some calculations can be very inaccurate, which has drawn considerable criticism.
A good example of the inaccuracy is the representation of the decimal value 0.1, which has no exact binary or hexadecimal representation. In hexadecimal format it is represented as 0.19999999…₁₆, that is, 0.0001 1001 1001 1001 1001 1001 1001…₂. Truncated to the 24-bit fraction, this has only 21 significant bits, whereas the binary (IEEE 754) version has 24 bits of precision.
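The effect can be shown numerically. In the sketch below, the constant 0x40199999 is the short-format encoding of 0.1 obtained by truncating the fraction above to six hexadecimal digits (truncating conversion is assumed).

    word = 0x40199999                            # 0.1 with fraction truncated to 0x199999
    exponent = ((word >> 24) & 0x7F) - 64        # 0
    fraction = word & 0xFFFFFF                   # 0x199999
    approx = (fraction / 16**6) * 16.0 ** exponent
    print(approx)                                # ~ 0.09999996423721313
    print(abs(approx - 0.1) / 0.1)               # relative error ~ 3.6e-7

    # The leading hex digit 1 contributes only one significant bit, so the
    # 24-bit fraction carries just 21 bits of precision for this value.
    print(fraction.bit_length())                 # 21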
Six hexadecimal digits of precision is roughly equivalent to six decimal digits (i.e. (6 − 1) log₁₀(16) ≈ 6.02). A conversion of single-precision hexadecimal float to decimal string would require at least 9 significant digits (i.e. 6 log₁₀(16) + 1 ≈ 8.22) in order to convert back to the same hexadecimal float value.
Double-precision 64-bit
The double-precision HFP format (called "long" by IBM) is the same as the "short" format except that the fraction field is wider (56 bits, or 14 hexadecimal digits) and the number is stored in a double word (8 bytes).
The exponent for this format covers only about a quarter of the range of the corresponding IEEE binary format.
14 hexadecimal digits of precision is roughly equivalent to 17 decimal digits. A conversion of double precision hexadecimal float to decimal string would require at least 18 significant digits in order to convert back to the same hexadecimal float value.
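The decoding idea extends directly to the long format; the sketch below assumes the 64-bit layout of 1 sign bit, 7 exponent bits, and a 56-bit fraction. Note that a Python float (an IEEE 754 double with a 53-bit significand) cannot hold all 56 fraction bits exactly, so the last few bits may be rounded away.

    # Long-format decoder sketch: identical to the short format except
    # for the 14-hex-digit (56-bit) fraction.
    def hfp64_decode(word: int) -> float:
        sign = -1.0 if (word >> 63) & 1 else 1.0
        exponent = ((word >> 56) & 0x7F) - 64
        fraction = (word & ((1 << 56) - 1)) / 16**14
        return sign * fraction * 16.0 ** exponent

    print(hfp64_decode(0xC276A00000000000))      # -118.625, the earlier example as a long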
Extended-precision 128-bit
Called extended-precision by IBM, a quadruple-precision HFP format was added to the System/370 series and was available on some S/360 models (S/360-85, -195, and others by special request or simulated by OS software). The extended-precision fraction field is wider (28 hexadecimal digits in total), and the extended-precision number is stored as two double words (16 bytes), each with its own sign and exponent field.
28 hexadecimal digits of precision is roughly equivalent to 32 decimal digits. A conversion of extended precision HFP to decimal string would require at least 35 significant digits in order to convert back to the same HFP value. The stored exponent in the low-order part is 14 less than the high-order part, unless this would be less than zero.
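A decoding sketch for the extended format follows, under the assumption (consistent with the description above) that the low-order doubleword simply contributes hexadecimal fraction digits 15 through 28, so its own sign and exponent fields can be ignored when reading a value. Exact rationals are used because 28 hexadecimal digits exceed what a Python float can hold.

    from fractions import Fraction

    # Extended-format decoder sketch: treat the value as a 28-hex-digit
    # fraction scaled by the high-order exponent.  The low-order word's
    # own sign/exponent fields (high-order exponent minus 14) are ignored.
    def hfp128_decode(high: int, low: int) -> Fraction:
        sign = -1 if (high >> 63) & 1 else 1
        exponent = ((high >> 56) & 0x7F) - 64
        hi_frac = Fraction(high & ((1 << 56) - 1), 16**14)
        lo_frac = Fraction(low & ((1 << 56) - 1), 16**28)   # digits 15-28
        return sign * (hi_frac + lo_frac) * Fraction(16) ** exponent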
Arithmetic operations
Available arithmetic operations are add and subtract, both normalized and unnormalized, and compare. Prenormalization is done based on the exponent difference. Multiply and divide prenormalize unnormalized values, and truncate the result after one guard digit. There is a halve operation to simplify dividing by two. Starting in ESA/390, there is a square root operation. All operations have one hexadecimal guard digit to avoid precision loss. Most arithmetic operations truncate like simple pocket calculators. Therefore, 1 − 16⁻⁸ = 1. In this case, the result is rounded away from zero.
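The 1 − 16⁻⁸ = 1 behaviour can be reproduced with a toy model. The Python sketch below keeps six fraction digits plus one guard digit and truncates both on the pre-shift and when dropping the guard digit; it is an illustration of the behaviour described above, not IBM's exact algorithm.

    # Toy subtraction with one guard digit.  Operands are (exponent, fraction)
    # pairs, the fraction holding six hex digits in an integer; both operands
    # are assumed positive with a >= b.
    DIGITS, GUARD = 6, 1

    def hfp_sub(a, b):
        (ea, fa), (eb, fb) = a, b
        fa <<= 4 * GUARD                  # widen to 6 + 1 hex digits
        fb <<= 4 * GUARD
        fb >>= 4 * (ea - eb)              # pre-shift; digits beyond the guard digit are lost
        diff, e = fa - fb, ea
        while diff and diff < 16 ** (DIGITS + GUARD - 1):
            diff, e = diff * 16, e - 1    # post-normalize
        return e, diff >> (4 * GUARD)     # truncate away the guard digit

    one  = (1, 0x100000)                  # 1.0   = 0.100000_16 * 16^1
    tiny = (-7, 0x100000)                 # 16^-8 = 0.100000_16 * 16^-7
    print(hfp_sub(one, tiny) == one)      # True: 1 - 16^-8 yields exactly 1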
IEEE 754 on IBM mainframes
Starting with the S/390 G5 in 1998, IBM mainframes have also included IEEE binary floating-point units which conform to the IEEE 754 Standard for Floating-Point Arithmetic. IEEE decimal floating-point was added to IBM System z9 GA2 in 2007 using millicode and in 2008 to the IBM System z10 in hardware.
Modern IBM mainframes support three floating-point radices, with three hexadecimal (HFP) formats, three binary (BFP) formats, and three decimal (DFP) formats. There are two floating-point units per core: one supporting HFP and BFP, and one supporting DFP. A single floating-point register file (the FPRs) holds all three formats. Starting with the z13 in 2015, processors have added a vector facility that includes 32 vector registers, each 128 bits wide; a vector register can contain two 64-bit or four 32-bit floating-point numbers. The traditional 16 floating-point registers are overlaid on the new vector registers, so some data can be manipulated with traditional floating-point instructions or with the newer vector instructions.
Special uses
The IBM HFP format is used in:
SAS 5 Transport files (.XPT) as required by the Food and Drug Administration (FDA) for New Drug Application (NDA) study submissions,
GRIB (GRIdded Binary) data files to exchange the output of weather prediction models (IEEE single-precision floating-point format in current version),
GDS II (Graphic Database System II) format files (OASIS is the replacement), and
SEG Y (Society of Exploration Geophysicists Y) format files (IEEE single-precision floating-point was added to the format in 2002).
As IBM is the only remaining provider of hardware using the HFP format, and as the only IBM machines that support that format are their mainframes, few file formats require it. One exception is the SAS 5 Transport file format, which the FDA requires; in that format, "All floating-point numbers in the file are stored using the IBM mainframe representation. [...] Most platforms use the IEEE representation for floating-point numbers. [...] To assist you in reading and/or writing transport files, we are providing routines to convert from IEEE representation (either big endian or little endian) to transport representation and back again." Code for IBM's format is also available under LGPLv2.1.
Systems that use the IBM floating-point format
IBM System/360 and successors
RCA Spectra 70
English Electric System 4
GEC 4000 series minicomputers
Interdata 16- and 32-bit computers
SDS Sigma series
Data General minicomputers
ICL 2900 Series computers
Siemens 7.700 and 7.500 series mainframes and successors
The decision for hexadecimal floating-point
The article "Architecture of the IBM System/360" explains the choice as being because "the frequency of pre-shift, overflow, and precision-loss post-shift on floating-point addition are substantially reduced by this choice." This allowed higher performance for the large System/360 models, and reduced cost for the small ones. The authors were aware of the potential for precision loss, but assumed that this would not be significant for 64-bit floating-point variables. Unfortunately, the designers seem not to have been aware of Benford's Law which means that a large proportion of numbers will suffer reduced precision.
The book "Computer Architecture" by two of the System/360 architects quotes Sweeney's study of 1958-65 which showed that using a base greater than 2 greatly reduced the number of shifts required for alignment and normalisation, in particular the number of different shifts needed. They used a larger base to make the implementations run faster, and the choice of base 16 was natural given 8-bit bytes. The intention was that 32-bit floats would only be used for calculations that would not propagate rounding errors, and 64-bit double precision would be used for all scientific and engineering calculations. The initial implementation of double precision lacked a guard digit to allow proper rounding, but this was changed soon after the first customer deliveries.
See also
IEEE 754 Standard for Floating-Point Arithmetic
Microsoft Binary Format
Further reading
Sweeney, D. W. (1965). "An analysis of floating-point addition". IBM Systems Journal. 4 (1): 31–42. doi:10.1147/sj.41.0031.
Tomayko, J. (Summer 1995). "System 360 Floating-Point Problems". IEEE Annals of the History of Computing. 17 (2): 62–63. doi:10.1109/MAHC.1995.10006. ISSN 1058-6180.
Harding, L. J. (1966), "Idiosyncrasies of System/360 Floating-Point", Proceedings of SHARE 27, August 8–12, 1966, Presented at SHARE XXVII, Toronto, Canada doi:10.5281/zenodo.10566524.
Harding, L. J. (1966), "Modifications of System/360 Floating-Point", Proceedings of SHARE 27, August 8-12, 1966, Presented at SHARE XXVII, Toronto, Canada doi:10.5281/zenodo.10566780.
Harding, L. J. (1966), "Proposed Modification Of Floating-Point Multiplication", proposed Engineering Change to IBM Corporation, doi:10.5281/zenodo.10567044.
Anderson, Stanley F.; Earle, John G.; Goldschmidt, Robert Elliott; Powers, Don M. (January 1967). "The IBM System/360 Model 91: Floating-Point Execution Unit". IBM Journal of Research and Development. 11 (1): 34–53. doi:10.1147/rd.111.0034.
Padegs, A. (1968). "Structural aspects of the System/360 Model 85, III: Extensions to floating-point architecture". IBM Systems Journal. 7 (1): 22–29. doi:10.1147/sj.71.0022.
Schwarz, E. M.; Sigal, L.; McPherson, T. J. (July 1997). "CMOS floating-point unit for the S/390 Parallel Enterprise Server G4". IBM Journal of Research and Development. 41 (4.5): 475–488. doi:10.1147/rd.414.0475.