
Tracing the root cause

Digital computers store numbers in binary, so we first need to know the rules for converting decimal to binary and binary to decimal.
There are two rules for converting decimal to binary:

  1. Integer part to binary: repeatedly divide by 2, take the remainder each time, and arrange the remainders in reverse order.
  2. Fractional part to binary: repeatedly multiply by 2, take the integer part each time, and arrange the digits in order.

How to understand these two rules? Take the number 9.375 as an example: the integer part is 9 and the fractional part is 0.375.

The integer part 9 converted to binary is 1001.
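The division steps can be sketched in a few lines of JavaScript (toBinaryInt is just an illustrative name, not a built-in):

```js
// Repeatedly divide by 2, collect the remainders, and read them in reverse.
function toBinaryInt(n) {
  let bits = '';
  while (n > 0) {
    bits = (n % 2) + bits; // prepend the remainder
    n = Math.floor(n / 2);
  }
  return bits || '0';
}

console.log(toBinaryInt(9)); // "1001"
```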

The fractional part 0.375 converted to binary is 0.011.
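The multiplication steps can be sketched the same way (toBinaryFraction is again an illustrative helper; maxBits stops infinitely repeating fractions):

```js
// Repeatedly multiply by 2 and take the integer part, reading the digits in order.
function toBinaryFraction(x, maxBits = 52) {
  let bits = '';
  while (x > 0 && bits.length < maxBits) {
    x *= 2;
    const digit = Math.floor(x);
    bits += digit; // append the integer part
    x -= digit;    // keep only the fractional part
  }
  return bits;
}

console.log(toBinaryFraction(0.375)); // "011"
```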
Putting the two parts together, 9.375 converted to binary is 1001.011.

We can verify this in JavaScript:

console.log((9.375).toString(2)); // "1001.011"
console.log(Number.prototype.toString.call(9.375, 2)); // "1001.011"
console.log(Number.prototype.toString.call(Number(9.375), 2)); // "1001.011"


Converting binary back to decimal works similarly:

  1. Before the decimal point: from right to left, multiply each binary digit by the corresponding power of 2 (2^0, 2^1, 2^2, …) and sum the results.
  2. After the decimal point: from left to right, multiply each binary digit by the corresponding negative power of 2 (2^-1, 2^-2, 2^-3, …) and sum the results.
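For example, 1001.011 = 1*2^3 + 0*2^2 + 0*2^1 + 1*2^0 + 0*2^-1 + 1*2^-2 + 1*2^-3 = 8 + 1 + 0.25 + 0.125 = 9.375. A minimal sketch of the same idea in code (binaryToDecimal is a hypothetical helper, not a built-in):

```js
// Sum digit * 2^position for the integer part (right to left)
// and digit * 2^(-position) for the fractional part (left to right).
function binaryToDecimal(str) {
  const [intPart, fracPart = ''] = str.split('.');
  let result = 0;
  [...intPart].reverse().forEach((bit, i) => {
    result += Number(bit) * 2 ** i;
  });
  [...fracPart].forEach((bit, i) => {
    result += Number(bit) * 2 ** -(i + 1);
  });
  return result;
}

console.log(binaryToDecimal('1001.011')); // 9.375
```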

IEEE 754 double-precision 64-bit floating point numbers

A 64-bit double is laid out as 1 sign bit, 11 exponent bits, and 52 mantissa (fraction) bits.

Binary also has a form of scientific notation, just as decimal writes 123.45 as 1.2345 * 10^2: a binary number is written as (-1)^s * m * 2^e, where the mantissa m satisfies 1 ≤ m < 2 and e is the exponent. For example, 1001.011 = 1.001011 * 2^3. IEEE 754 stores s, e (with a bias of 1023), and the fractional bits of m; the leading 1 of m is implicit and not stored.

So, writing 0.1 and 0.2 in normalized binary scientific notation:

0.1 in binary:
e = -4;
m = 1.1001100110011001100110011001100110011001100110011010 (52 bits)
0.2 in binary:
e = -3;
m = 1.1001100110011001100110011001100110011001100110011010 (52 bits)
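We can confirm both of these by reading the raw bits of a double; the sketch below assumes a small illustrative helper called dumpDouble built on DataView:

```js
// Dump the raw IEEE 754 bits of a double and split them into
// sign, exponent, and mantissa fields.
function dumpDouble(x) {
  const buf = new ArrayBuffer(8);
  new DataView(buf).setFloat64(0, x); // big-endian by default
  const bits = [...new Uint8Array(buf)]
    .map(b => b.toString(2).padStart(8, '0'))
    .join('');
  return {
    sign: bits[0],
    exponent: bits.slice(1, 12), // 11 bits, biased by 1023
    mantissa: bits.slice(12),    // 52 bits, implicit leading 1 not stored
  };
}

console.log(dumpDouble(0.1));
// exponent '01111111011' = 1019, 1019 - 1023 = -4
// mantissa '1001100110011001100110011001100110011001100110011010'
console.log(dumpDouble(0.2));
// exponent '01111111100' = 1020, 1020 - 1023 = -3, same mantissa as 0.1
```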
Now we add the two numbers. There is a problem here: the exponents are different, so they must be aligned first. The mantissa of the number with the smaller exponent is shifted to the right, because even if bits fall off the right end, the loss of precision is far smaller than the overflow a left shift could cause. So the addition becomes:

 e = -3; m = 0.1100110011001100110011001100110011001100110011001101 (52 bits)
+
 e = -3; m = 1.1001100110011001100110011001100110011001100110011010 (52 bits)

which gives:

 e = -3; m = 10.0110011001100110011001100110011001100110011001100111 (52 bits)

Keep only one digit before the point (normalize the mantissa):

e = -2; m = 1.00110011001100110011001100110011001100110011001100111 (53 bits)

The mantissa now has more than 52 bits, so it is rounded to the nearest representable value; when the discarded part is exactly halfway, the tie is broken by keeping the result whose last bit is even (round half to even):

 1.0011001100110011001100110011001100110011001100110100 * 2 ^ -2
 = 0.010011001100110011001100110011001100110011001100110100

Converted back to decimal, this is 0.30000000000000004.
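This is easy to confirm in the console; the comments show the expected output of toString(2) for the two values:

```js
console.log(0.1 + 0.2);               // 0.30000000000000004
console.log((0.1 + 0.2).toString(2)); // "0.0100110011001100110011001100110011001100110011001101"
console.log((0.3).toString(2));       // "0.010011001100110011001100110011001100110011001100110011"
```

The last two outputs differ in their final bits, which is exactly why 0.1 + 0.2 !== 0.3.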

How can we avoid this?

A number that looks perfectly ordinary to us may be an infinitely repeating fraction inside the computer.
The computer has no way around this: the mantissa has only 52 bits, so it can only truncate, normalize, and round, and that introduces a precision deviation. The result is relatively accurate, but not absolutely accurate.
Is there any way to make 0.1 + 0.2 equal 0.3? One option:
(0.2*100 + 0.1*100)/100
That is, avoid fractional values during the calculation by scaling the operands to integers first.
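A quick check of that idea, plus a commonly used tolerance-based alternative (the Number.EPSILON comparison is an addition, not part of the original walkthrough):

```js
console.log(0.1 + 0.2 === 0.3);                     // false
console.log((0.2 * 100 + 0.1 * 100) / 100 === 0.3); // true: both products round to exact integers here

// A more general alternative: compare against a small tolerance.
console.log(Math.abs(0.1 + 0.2 - 0.3) < Number.EPSILON); // true
```

The scaling trick only works because 0.1 * 100 and 0.2 * 100 happen to round to exact integers; the tolerance comparison is the safer general-purpose check.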

