Conclusion first
Why not equal?
Because floating-point numbers lose precision when representing certain decimal fractions
Why is there a loss of precision?
Because computer hardware stores data in binary (e.g. 10101010) form
For example, each byte is 8 bits, and the int type occupies 4 bytes, i.e. 32 bits, so a 32-bit integer can represent 2^32 distinct values. As shown below:
Each bit can hold one of two values, 0 or 1. By convention the highest bit is the sign bit (1 means negative, 0 means positive), so a signed int is left with 31 value bits, i.e. 2 × 2 × ... × 2 (31 twos) = 2^31 values on each side of zero, giving the range -2147483648 ~ 2147483647. There are also unsigned integers, which we will not discuss for now.
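To see this range in action, note that JS bitwise operators coerce their operands to signed 32-bit integers; a minimal sketch of the wrap-around (illustrative only, not part of the original derivation):

```js
// Bitwise OR with 0 coerces a JS number to a signed 32-bit integer.
console.log((2 ** 31 - 1) | 0); //  2147483647  (the int32 maximum)
console.log((2 ** 31) | 0);     // -2147483648  (overflow wraps to the minimum)
```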
So how are decimals stored? In computers, decimals are floating-point types. JavaScript runs on browser engines written in C++, yet JS has only one numeric type, number, so which C++ numeric type does it correspond to?
It is the double-precision type, double, which occupies eight bytes, i.e. 64 bits of storage. Integer storage was covered above; here we focus on how floating-point values are stored.
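A quick way to see that number really is a 64-bit double: its 52 mantissa bits plus the implicit leading 1 give exactly 53 bits of integer precision, which JS exposes as Number.MAX_SAFE_INTEGER:

```js
// 53 significand bits => integers are exact only up to 2^53 - 1.
console.log(Number.MAX_SAFE_INTEGER === 2 ** 53 - 1); // true
console.log(2 ** 53 + 1 === 2 ** 53);                 // true: 2^53 + 1 cannot be represented
```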
These 64 bits are likewise divided into three parts, following the IEEE 754 format:
The first part: the sign bit (S), occupying 1 bit, i.e. bit 63;
The second part: the exponent field (E), occupying 11 bits, i.e. bits 52 to 62 inclusive;
The third part: the mantissa field (F), occupying 52 bits, i.e. bits 0 to 51 inclusive;
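These three fields can be inspected directly from JS; here is a minimal sketch (float64Bits is a helper name chosen here, not a built-in):

```js
// Split a double into its IEEE 754 sign / exponent / mantissa bit strings.
function float64Bits(x) {
  const view = new DataView(new ArrayBuffer(8));
  view.setFloat64(0, x); // write the number as a 64-bit float
  const bits = view.getBigUint64(0).toString(2).padStart(64, "0");
  return {
    sign: bits.slice(0, 1),      // bit 63
    exponent: bits.slice(1, 12), // bits 62..52 (11 bits)
    mantissa: bits.slice(12),    // bits 51..0  (52 bits)
  };
}
```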
How is a decimal converted into this 64-bit representation? Take 12.52571 as an example.
First convert it to binary (any online decimal-to-binary converter will do):
- 12.52571 => 1100.100001101001010011101110001110010010111000011111
Shift its binary point three places to the left:
- 1.100100001101001010011101110001110010010111000011111 * 2^3
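As a quick sanity check, Number.prototype.toString(2) prints the binary expansion of the value a double actually stores; it should match the hand conversion above up to rounding in the last bit:

```js
// Prints roughly 1100.100001101001010011101110001110010010111000011111
console.log((12.52571).toString(2));
```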
This gives us the following:
- Because the number is positive, the sign bit S is 0;
- Because the point was shifted three places to the left, the true exponent is 3, so E = 1023 + 3 = 1026, which in binary is 10000000010; that is already 11 bits, and if it were shorter we would pad with leading zeros;
- Why add 1023? IEEE 754 stores the exponent with a bias of 1023, so the stored value is the true exponent plus 1023; this lets the field encode negative exponents without its own sign bit. And a left shift of the point by 3 means the true exponent is +3, which is why we add 3 rather than subtract it;
- The mantissa (F) is everything after the binary point: 100100001101001010011101110001110010010111000011111;
Final representation: 0 10000000010 100100001101001010011101110001110010010111000011111;
The total length above is 63 bits, one bit short of 64, so a zero is appended to the end of the mantissa, giving
0 10000000010 1001000011010010100111011100011100100101110000111110
This is the 64-bit form in which the computer stores 12.52571.
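Using the float64Bits sketch from earlier, we can compare the machine's bit pattern against the hand-derived one (the last mantissa bit may differ if the engine rounds where we truncated):

```js
const { sign, exponent, mantissa } = float64Bits(12.52571);
console.log(sign);     // expected from the derivation: 0
console.log(exponent); // expected: 10000000010 (1026 = 1023 + 3)
console.log(mantissa); // expected: 1001000011...0111110 (52 bits)
```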
Looking back at 0.1 + 0.2
The derivation above may feel a bit dense (the author keeps it mainly as a note for later review), so let's not dwell on it. How, then, are 0.1 and 0.2 converted?
Here lies the problem: when 0.1 and 0.2 are converted to binary, the fractional part repeats forever
// 0.1 converted to binary
0.0 0011 0011 0011 0011...(0011 repeats infinitely)
// 0.2 converted to binary
0.0011 0011 0011 0011 0011...(0011 repeats infinitely)
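You can confirm the repetition, and the eventual rounding, in the console; these expansions show the values actually stored after rounding to 52 mantissa bits:

```js
console.log((0.1).toString(2));
// 0.0001100110011001100110011001100110011001100110011001101
console.log((0.2).toString(2));
// 0.001100110011001100110011001100110011001100110011001101
```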
Since the mantissa field holds only 52 bits, the expansion is rounded to 52 bits by the computer, giving (with the implicit leading 1 written out):
0.1: E = -4; F = 1.1001100110011001100110011001100110011001100110011010 (52 bits)
0.2: E = -3; F = 1.1001100110011001100110011001100110011001100110011010 (52 bits)
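The exponents can be checked directly, since Math.log2 locates the leading 1 bit:

```js
console.log(Math.floor(Math.log2(0.1))); // -4
console.log(Math.floor(Math.log2(0.2))); // -3
```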
To add the two numbers, the exponents E must first be equal, so 0.1's representation is shifted to match:
E = -3; F = 0.1100110011001100110011001100110011001100110011001101 (52 bits) // the excess bit is cut off
E = -3; F = 1.1001100110011001100110011001100110011001100110011010 (52 bits)
Adding the above two gives
E = -3; F = 10.0110011001100110011001100110011001100110011001100111
-------------------------------------------------------------------
E = -2; F = 1.00110011001100110011001100110011001100110011001100111
The conclusion is
2^-2 * 1.00110011001100110011001100110011001100110011001100111
The mantissa here is 53 bits, so when it is stored it is rounded back to 52 bits; converting that value to decimal gives: 0.30000000000000004
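The whole chain is easy to reproduce in the console:

```js
console.log(0.1 + 0.2);         // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3); // false
console.log((0.1 + 0.2).toString(2));
// 0.0100110011001100110011001100110011001100110011001101
// i.e. 2^-2 * 1.00110011...0100 after the final rounding
```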
How can we get accurate results?
JavaScript's BigInt type (added in ES2020) gives exact integer arithmetic, though decimals must be scaled to integers first
TypeScript supports this as the bigint type
There are libraries such as big.js and bigInt that solve the precision problem
Other languages have the same lack of precision, for example
Python, where 0.1 + 0.2 also evaluates to 0.30000000000000004
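For everyday JS code, a workaround that needs no library (an illustrative sketch, not from the list above) is to compare within Number.EPSILON, or to scale decimals to integers before doing arithmetic:

```js
// Compare with a tolerance instead of ===.
const nearlyEqual = (a, b) => Math.abs(a - b) < Number.EPSILON;
console.log(nearlyEqual(0.1 + 0.2, 0.3)); // true

// Or scale to integers first: doubles represent integers exactly up to 2^53.
console.log((0.1 * 10 + 0.2 * 10) / 10); // 0.3
```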
Summary
JavaScript numbers are ultimately handled by engines implemented in C++, which store them as C++ doubles
The IEEE 754 standard defines two common floating-point representations, single precision (32-bit) and double precision (64-bit), and JS uses the latter. Unlike an integer, a floating-point number has both an integer part and a fractional part, so it is stored in the sign/exponent/mantissa form described above. 0.1 and 0.2 are first converted to binary (where both repeat and are rounded to 52 mantissa bits), then brought to the same exponent and added; converting the binary sum back to decimal gives 0.30000000000000004