Tracing the root cause
Digital computers store numbers in binary, so first we need to know the rules for converting decimal to binary, and binary back to decimal.
There are two rules for converting decimal to binary:
- Integer part to binary: repeatedly divide by 2, take the remainders, and read them in reverse order.
- Fractional part to binary: repeatedly multiply by 2, take the integer part each time, and read them in order.
How to understand these two rules? Take the number 9.375: the integer part is 9 and the fractional part is 0.375.
The integer part 9 converted to binary is 1001.
The fractional part 0.375 converted to binary is 011.
So 9.375 converted to binary is 1001.011.
This can be verified as follows:

```js
console.log((9.375).toString(2));                              // "1001.011"
console.log(Number.prototype.toString.call(9.375, 2));         // "1001.011"
console.log(Number.prototype.toString.call(Number(9.375), 2)); // "1001.011"
```
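If you want to see the two rules in action, here is a rough sketch of my own (the helper names `integerToBinary` and `fractionToBinary` are made up for illustration, not a standard API) that reproduces the result above:

```js
// Hypothetical helper: divide by 2, collect remainders, read them in reverse order.
function integerToBinary(n) {
  let bits = '';
  while (n > 0) {
    bits = (n % 2) + bits;
    n = Math.floor(n / 2);
  }
  return bits || '0';
}

// Hypothetical helper: multiply by 2, take the integer part each time, read in order.
function fractionToBinary(f, maxBits = 52) {
  let bits = '';
  while (f > 0 && bits.length < maxBits) {
    f *= 2;
    bits += Math.floor(f);
    f -= Math.floor(f);
  }
  return bits;
}

console.log(integerToBinary(9));      // "1001"
console.log(fractionToBinary(0.375)); // "011"
console.log((9.375).toString(2));     // "1001.011"
```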
Converting binary back to decimal works the other way around (see the sketch after this list):
- Before the decimal point: from right to left, multiply each binary digit by the corresponding power of 2 (2^0, 2^1, 2^2, ...) and sum.
- After the decimal point: from left to right, multiply each binary digit by the corresponding negative power of 2 (2^-1, 2^-2, ...) and sum.
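A small sketch of the reverse direction (again, `binaryToDecimal` is just an illustrative helper I am assuming here), applied to 1001.011:

```js
// Hypothetical helper: apply the two rules above to a binary string.
function binaryToDecimal(bin) {
  const [intPart, fracPart = ''] = bin.split('.');
  let value = 0;
  // Before the point: right to left, digit * 2^0, 2^1, 2^2, ...
  for (let i = 0; i < intPart.length; i++) {
    value += Number(intPart[intPart.length - 1 - i]) * 2 ** i;
  }
  // After the point: left to right, digit * 2^-1, 2^-2, ...
  for (let i = 0; i < fracPart.length; i++) {
    value += Number(fracPart[i]) * 2 ** -(i + 1);
  }
  return value;
}

console.log(binaryToDecimal('1001.011')); // 9.375
```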
IEEE 754 double-precision 64-bit floating point number
In decimal we have scientific notation; IEEE 754 uses the binary equivalent. A double is stored as a sign bit s, an 11-bit exponent e, and a 52-bit mantissa m, representing the value (-1)^s * 1.m * 2^e. So:
0.1 in binary scientific notation:
e = -4;
m = 1.1001100110011001100110011001100110011001100110011010 (52 bits)
0.2 in binary scientific notation:
e = -3;
m = 1.1001100110011001100110011001100110011001100110011010 (52 bits)
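If you want to check these values yourself, the following sketch (my own; `dumpDouble` is a made-up helper built on a standard DataView) reads back the sign, exponent, and 52-bit mantissa actually stored for a double:

```js
// Hypothetical helper: dump the raw IEEE 754 fields of a double.
function dumpDouble(x) {
  const view = new DataView(new ArrayBuffer(8));
  view.setFloat64(0, x); // big-endian byte order
  let bits = '';
  for (let i = 0; i < 8; i++) {
    bits += view.getUint8(i).toString(2).padStart(8, '0');
  }
  const sign = bits[0];
  const exponent = parseInt(bits.slice(1, 12), 2) - 1023; // remove the bias
  const mantissa = bits.slice(12);                        // the 52 stored bits
  return { sign, exponent, mantissa };
}

console.log(dumpDouble(0.1));
// { sign: '0', exponent: -4, mantissa: '1001100110011001100110011001100110011001100110011010' }
console.log(dumpDouble(0.2));
// { sign: '0', exponent: -3, mantissa: '1001100110011001100110011001100110011001100110011010' }
```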
Now we add them. There is a problem here: the exponents differ. Alignment is generally done by shifting the smaller number's mantissa to the right, because bits that overflow off the right end lose far less precision than overflowing on the left would. So 0.1 is converted to
e = -3; m = 0.1100110011001100110011001100110011001100110011001101 (52 bits)
+
e = -3; m = 1.1001100110011001100110011001100110011001100110011010 (52 bits)
which gives
e = -3; m = 10.0110011001100110011001100110011001100110011001100111 (52 bits)
Normalizing so that only one digit remains before the point:
e = -2; m = 1.00110011001100110011001100110011001100110011001100111 (53 bits)
The mantissa now exceeds 52 bits, so it is rounded to the nearest representable value; when the discarded part is exactly halfway, the last kept bit is made even (round half to even). The result is
1.0011001100110011001100110011001100110011001100110100 * 2 ^ -2
= 0.010011001100110011001100110011001100110011001100110100
which, converted back to decimal, is 0.30000000000000004.
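A quick check in the console confirms it:

```js
console.log(0.1 + 0.2);               // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3);       // false
console.log((0.1 + 0.2).toString(2)); // "0.0100110011001100110011001100110011001100110011001101"
console.log((0.3).toString(2));       // "0.010011001100110011001100110011001100110011001100110011"
```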
How can we avoid this?
The number looks perfectly ordinary to us, but inside the computer it is an infinitely repeating binary fraction. The machine has no way around this: the mantissa has only 52 bits, so all it can do is truncate, normalize, and round, which introduces a small precision deviation. The result is relatively accurate, but not absolutely accurate.
Is there any way to make 0.1 + 0.2 equal 0.3?
`(0.2 * 100 + 0.1 * 100) / 100`
That is, scale the operands up to integers so the calculation involves no decimal places, then scale the result back down.
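For example (`addScaled` below is just an illustrative helper of my own, not part of any library):

```js
console.log((0.2 * 100 + 0.1 * 100) / 100);         // 0.3
console.log((0.2 * 100 + 0.1 * 100) / 100 === 0.3); // true

// Hypothetical helper: scale both operands to integers, add, scale back.
// Math.round guards against products that are not exactly integral.
function addScaled(a, b, factor = 100) {
  return (Math.round(a * factor) + Math.round(b * factor)) / factor;
}

console.log(addScaled(0.1, 0.2));         // 0.3
console.log(addScaled(0.1, 0.2) === 0.3); // true
```

Note that the scale factor has to cover the number of decimal places involved, so this sidesteps the problem for simple cases rather than fixing floating-point arithmetic in general.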