Precision (computer science)
In computer science, the precision of a numerical quantity is a measure of the detail in which the quantity is expressed. This is usually measured in bits, but sometimes in decimal digits. It is related to precision in mathematics, which describes the number of digits that are used to express a value.
Some of the standardized precision formats are:
Half-precision floating-point format
Single-precision floating-point format
Double-precision floating-point format
Quadruple-precision floating-point format
Octuple-precision floating-point format
Of these, the octuple-precision format is rarely used. The single- and double-precision formats are the most widely used and are supported on nearly all platforms. The use of the half-precision format has been increasing, especially in the field of machine learning, since many machine learning algorithms are inherently error-tolerant.
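The effect of format width on detail can be seen directly. The following is a minimal sketch (assuming NumPy is available) that stores the same value in half, single, and double precision and prints each with enough digits to expose the representation error:

```python
# Sketch: how the same value loses detail in progressively narrower formats.
# Assumes NumPy is installed; 0.1 is not exactly representable in binary floating point.
import numpy as np

value = 0.1

for dtype, name in [(np.float16, "half"), (np.float32, "single"), (np.float64, "double")]:
    stored = dtype(value)
    # Print with more digits than the format can actually hold,
    # so the stored approximation becomes visible.
    print(f"{name:>6} precision: {stored:.20f}")
```

Half precision retains roughly 3 decimal digits of the value, single precision roughly 7, and double precision roughly 15-16.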
Rounding error
Precision is often the source of rounding errors in computation: the limited number of bits used to store a number often causes some loss of accuracy. An example is storing sin(0.1) in the IEEE single-precision floating-point format. The error is then often magnified as subsequent computations are made using the data (although it can also be reduced).
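As a minimal sketch of this example (assuming NumPy is available), the value sin(0.1) can be computed and stored in single precision and compared against a double-precision reference; multiplying the stored value afterwards scales the absolute error by the same factor:

```python
# Sketch: rounding error from storing sin(0.1) in single precision,
# and how a subsequent computation can magnify it. Assumes NumPy is installed.
import numpy as np

x = 0.1
single = np.sin(np.float32(x))   # computed and stored in 32-bit precision
double = np.sin(np.float64(x))   # 64-bit reference value

print("single:", single)
print("double:", double)
print("initial error:      ", abs(float(single) - double))

# A later computation using the stored value (here, scaling by 1e6)
# multiplies the absolute error by the same factor.
print("error after * 1e6:  ", abs(float(single) * 1e6 - double * 1e6))
```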
See also
Arbitrary-precision arithmetic
Extended precision
Granularity
IEEE 754 (IEEE floating-point standard)
Integer (computer science)
Significant figures
Truncation
Approximate computing