Single-precision floating-point format (IEEE 754)
Calculation method
$Value = (-1)^{b_{31}} \times 2^{(b_{30}b_{29}\ldots b_{23})_2 - 127} \times (1.b_{22}b_{21}\ldots b_{0})_2$
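The formula above can be checked with a small sketch that pulls the sign, exponent, and fraction fields out of a 32-bit pattern and evaluates the expression directly (the helper name `decode_float32` is illustrative; this handles normal numbers only, not zero, subnormals, infinities, or NaN):

```python
import struct

def decode_float32(bits: int) -> float:
    # Value = (-1)^sign * 2^(exponent - 127) * 1.fraction (normal numbers only)
    sign = (bits >> 31) & 0x1          # b31
    exponent = (bits >> 23) & 0xFF     # b30..b23, interpreted as an unsigned int
    fraction = bits & 0x7FFFFF         # b22..b0
    assert 0 < exponent < 255, "normal numbers only"
    return (-1) ** sign * 2.0 ** (exponent - 127) * (1 + fraction / 2**23)

# Cross-check against the machine's native float32 interpretation
bits = struct.unpack("<I", struct.pack("<f", -6.25))[0]
print(decode_float32(bits))  # -6.25
```

The `(1 + fraction / 2**23)` term is the implicit leading 1 of a normalized significand, matching the $1.b_{22}b_{21}\ldots b_{0}$ part of the formula.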
bfloat16 floating-point format (Brain Floating Point)
1-bit sign + 8-bit exponent + 8-bit significand (7 bits explicitly stored)
Developed by Google Brain; commonly used in machine learning
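Because bfloat16 keeps float32's sign bit and 8-bit exponent and only shortens the fraction to 7 stored bits, a float32 value can be converted by keeping its top 16 bits. A minimal sketch, assuming simple truncation rather than round-to-nearest (the helper names are illustrative):

```python
import struct

def float32_to_bfloat16_bits(x: float) -> int:
    # bfloat16 keeps float32's sign and 8-bit exponent; the 23-bit
    # fraction is cut down to 7 bits by dropping the low 16 bits.
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    return bits >> 16

def bfloat16_bits_to_float32(b: int) -> float:
    # Re-expand to float32: the dropped fraction bits become zero.
    return struct.unpack("<f", struct.pack("<I", b << 16))[0]

x = 3.140625  # = 1.1001001_2 * 2^1, fits exactly in 7 fraction bits
b = float32_to_bfloat16_bits(x)
print(bfloat16_bits_to_float32(b))  # 3.140625
```

The shared exponent width is why bfloat16 covers the same dynamic range as float32, trading precision for range relative to IEEE 754 half precision (1+5+10).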