| Value | Binary |
|---|---|
| Sign | |
| Integral | |
| Fractional | |
| Exponent | |
| Exponent + bias | |
| Mantissa | |
| All combined | |
| Hexadecimal | 0x |
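To make the encoding table concrete, here is a minimal Python sketch of the same manual steps for single precision, assuming a bias of 127, 23 mantissa bits, and a truncated (not rounded) mantissa as the converter uses. The function name and the 150-bit fractional expansion are illustrative choices, and special values such as 0 are not handled.

```python
def decimal_to_float32_steps(x: float) -> None:
    """Walk the table rows: sign, integral, fractional, exponent,
    exponent + bias, mantissa, all combined, hexadecimal."""
    sign = 0 if x >= 0 else 1
    x = abs(x)

    # Integral part in binary
    integral = bin(int(x))[2:]

    # Fractional part in binary (150 bits covers the normal float32 range)
    frac, frac_bits = x - int(x), ""
    for _ in range(150):
        frac *= 2
        frac_bits += str(int(frac))
        frac -= int(frac)

    # Normalise: the exponent is fixed by the position of the leading 1
    combined = integral + frac_bits
    first_one = combined.index("1")          # fails for 0, a special case
    exponent = len(integral) - 1 - first_one
    biased = exponent + 127                  # single-precision bias

    # Mantissa: the 23 bits after the leading 1, truncated (not rounded)
    mantissa = combined[first_one + 1:first_one + 24].ljust(23, "0")

    bits = (sign << 31) | (biased << 23) | int(mantissa, 2)
    print(f"sign            : {sign}")
    print(f"integral        : {integral}")
    print(f"fractional      : {frac_bits[:40]}...")   # truncated for display
    print(f"exponent        : {exponent}")
    print(f"exponent + bias : {biased} ({biased:08b})")
    print(f"mantissa        : {mantissa}")
    print(f"all combined    : {bits:032b}")
    print(f"hexadecimal     : 0x{bits:08x}")

decimal_to_float32_steps(0.001)   # hexadecimal: 0x3a83126e
```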
| Bits/notes | Value |
|---|---|
| All | |
| Sign | |
| Exponent (w bias) | |
| Mantissa | |
| Exponent - bias | |
| Result | (-1)^sign * (1 + mantissa) * 2^(exponent - bias) |
| Decimal (input value) | |
| % error | \|Decimal input - Result\| / Decimal input |
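Going the other way, here is a minimal sketch of the decoding table for a single-precision bit pattern, again assuming a bias of 127 and 23 mantissa bits (the function name and print layout are only for illustration):

```python
def decode_float32(bits: int, decimal_input: float) -> None:
    """Walk the table rows: all, sign, exponent (w bias), mantissa,
    exponent - bias, result, % error."""
    sign      = (bits >> 31) & 0x1
    exp_bias  = (bits >> 23) & 0xFF         # exponent with bias
    mant_bits = bits & 0x7FFFFF             # 23 stored mantissa bits
    exponent  = exp_bias - 127              # exponent - bias

    # Result = (-1)^sign * (1 + mantissa) * 2^(exponent - bias)
    result = (-1) ** sign * (1 + mant_bits / 2 ** 23) * 2.0 ** exponent

    error = abs(decimal_input - result) / decimal_input
    print(f"all              : {bits:032b}")
    print(f"sign             : {sign}")
    print(f"exponent (w bias): {exp_bias:08b} ({exp_bias})")
    print(f"mantissa         : {mant_bits:023b}")
    print(f"exponent - bias  : {exponent}")
    print(f"result           : {result!r}")
    print(f"% error          : {error:.3e}")

decode_float32(0x3A83126E, 0.001)   # % error: 6.892e-08
```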
This was made to help visualise the steps for manually converting decimal numbers to IEEE floating-point format. Excellent video walkthroughs of these steps can be found here and here.
Decimal-to-hex results use the raw calculated (truncated) values. Changing the last bit can sometimes produce a binary representation with a smaller error,
so some results will be "wrong" when compared with the IEEE 754 standard, which rounds to the nearest representable value.
E.g. here 0.001 => 0x3a83126e, which has an error of 6.892e-8 (try it out).
0.001 could also be 0x3a83126f, which has an error of 4.750e-8 (try it out).
Since this error is smaller, 0x3a83126f is the "correct" representation.
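The comparison can be reproduced with a quick check in Python (the helper name is hypothetical; struct is used only to reinterpret the bit patterns):

```python
import struct

def f32_from_bits(bits: int) -> float:
    """Reinterpret a 32-bit pattern as an IEEE-754 single-precision value."""
    return struct.unpack(">f", bits.to_bytes(4, "big"))[0]

for bits in (0x3A83126E, 0x3A83126F):
    value = f32_from_bits(bits)
    rel_err = abs(0.001 - value) / 0.001
    print(f"0x{bits:08x} -> {value!r:<25} error {rel_err:.3e}")

# Round-to-nearest, as the IEEE 754 standard uses by default:
ieee_bits = struct.unpack(">I", struct.pack(">f", 0.001))[0]
print(f"standard encoding: 0x{ieee_bits:08x}")   # 0x3a83126f
```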
Compare the results to this tool.
Each precision can only represent decimal values within a certain range, and only up to a certain number of significant figures. You can enter values outside of this range, but the behaviour is undefined (for this converter, at least). Some values are represented with special bit patterns that don't follow the general rules; 0 is an example, and others are not covered here. Some discussion about this can be found here.
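As a quick, hedged illustration of such a special case: 0 cannot be produced by (-1)^sign * (1 + mantissa) * 2^(exponent - bias), since (1 + mantissa) is never 0, so IEEE 754 reserves an all-zero exponent and mantissa field for it:

```python
import struct

for x in (0.0, -0.0):
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    print(f"{x!r:>5} -> 0x{bits:08x}")
# 0.0 -> 0x00000000 and -0.0 -> 0x80000000: only the sign bit differs,
# and the general (1 + mantissa) formula does not apply.
```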
Other references not already mentioned that helped when creating this: