Information | |
---|---|
has gloss | eng: A scale factor is used in computer science when a real-world set of numbers needs to be represented on a different scale in order to fit a specific number format. For instance, a 16-bit unsigned integer (uint16) can only hold a value as large as 65,535₁₀. If uint16s are to be used to represent values from 0 to 131,070₁₀, then a scale factor of 1/2 would be introduced. Notice that while the scale factor extends the range, it also decreases the precision. In this example, for instance, the real-world value 3 could not be represented, because a stored 1 represents a real-world 2 and a stored 2 represents a real-world 4. |
lexicalization | eng: Scale Factor |
instance of | c/Data types |
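The gloss above can be illustrated with a minimal sketch in Python. The helper names `encode` and `decode` and the range check are illustrative assumptions, not part of the source; the scale factor of 1/2 and the uint16 limits are taken from the example in the gloss.

```python
def encode(real_value, scale=0.5):
    # Map a real-world value onto the uint16 range by applying
    # the scale factor (1/2 maps 0..131,070 onto 0..65,535).
    stored = round(real_value * scale)
    if not 0 <= stored <= 65535:  # uint16 bounds (assumed check)
        raise ValueError("value out of range for scaled uint16")
    return stored

def decode(stored_value, scale=0.5):
    # Recover the (approximate) real-world value.
    return stored_value / scale

print(encode(131070))      # -> 65535
print(decode(65535))       # -> 131070.0
print(decode(encode(3)))   # precision loss: 3 cannot round-trip exactly
```

The round trip of 3 demonstrates the precision loss described in the gloss: the scaled value 1.5 must be rounded to a whole stored number, so the original 3 is not recoverable.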
Lexvo © 2008-2024 Gerard de Melo.