
Interview history of blood and tears

Don't ask me whether I'm an architect; I'm just a lowly front-end developer doing the grunt work. Putting this summary together wasn't easy, so give it a like if you find it useful.

css

1. GPU acceleration

The full name of GPU is Graphics Processing Unit. Unlike the CPU, the GPU is a chip built specifically for graphics tasks. Given that positioning, GPUs are found not only on computer graphics cards but also in phones, game consoles, and anywhere else with multimedia processing needs.

The GPU's job is to composite and display images made of millions of pixels on the screen; in other words, it has millions of pieces of work to handle in parallel at the same time. The GPU is therefore designed to process many tasks in parallel, rather than to finish a single task quickly the way a CPU does.

As a result, CPU and GPU architectures differ greatly. A CPU has many functional modules and can adapt to complex computing environments; a GPU's structure is comparatively simple.

The GPU contains far more processing units than the CPU and has higher bandwidth, which makes it more efficient at multimedia processing. For example, a current top-end CPU has only 4 or 6 cores, simulating 8 or 12 hardware threads, while an ordinary GPU contains hundreds or thousands of processing units, and high-end ones even more. That gives it a natural advantage for the large amount of repetitive work in multimedia computing.

CSS hardware acceleration:

After the browser receives the page document, it parses the markup into a DOM tree. The DOM tree is combined with the CSS to form the render tree the browser uses to build the page. The render tree contains a large number of render elements; each element is assigned to a layer, and each layer is uploaded to the GPU as a rendering texture. Transforming a layer on the GPU does not trigger a repaint, much like 3D drawing, and layers that use transform are ultimately handled by a separate compositor process.

The difference between 3D and 2D transforms is that for a 3D animation the browser creates a separate composite layer before the page is rendered, while for a 2D animation it creates one at runtime. When the animation starts, a new composite layer is generated and uploaded to the GPU as a texture, which triggers an initial repaint. The GPU's compositor then drives the entire animation. Finally, when the animation ends, another repaint is performed and the composite layer is removed.
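
As a minimal illustration (not from the original article; the .box selector and element are assumed), the snippet below applies, from JS, the CSS properties that commonly promote an element to its own compositor layer:

const box = document.querySelector('.box')        // assumed element

// transform/opacity changes can be animated by the compositor without repainting
box.style.transform = 'translate3d(0, 0, 0)'      // the classic translateZ(0)/translate3d "hack"
box.style.willChange = 'transform'                // hints the browser to create a layer ahead of time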

Disadvantages:

  • Promoting an element to a composite layer requires an extra repaint, and sometimes this is slow (i.e. we get a full-layer repaint instead of an incremental one).
  • The painted layers must be transferred to the GPU. Depending on how many layers there are and how large they are, this transfer can also be very slow, which can make elements flicker on low-end and mid-range devices.
  • Every composite layer consumes extra memory. Memory is a precious resource on mobile devices, and excessive memory use can crash the browser.
  • If you ignore implicit compositing and promote layers carelessly, then beyond the extra memory use, the chance of crashing the browser is also very high.
  • There can be visual artifacts, such as text rendering glitches in Safari, and in some cases page content disappears or becomes distorted.

2. Stacking context

The stacking context is a three-dimensional concept for HTML elements: they extend along an imaginary z-axis pointing toward the user facing the window (the computer screen) or web page. HTML elements occupy space in their stacking context in order of priority, according to their own attributes.

Therefore, a page often has more than one stacking context (there are many ways to create one). Within a single stacking context, elements are stacked according to the stacking-level rules.

How stacking order is determined:

  1. First check whether the two elements being compared are in the same stacking context:

1.1 If they are, the one with the higher stacking level is displayed on top (for how to judge the stacking level, see the "stacking order" diagram).
1.2 If they are not in the same stacking context, first compare the stacking levels of the stacking contexts they belong to.

  2. When two elements have the same stacking level and the same stacking order, the one that appears later in the DOM is displayed above the one that appears earlier.

(stacking order diagram omitted)

JS

1. What is the difference between wrapper types and primitive types? What is the difference between new String() and String()?

Answer: JS provides three wrapper types: new String(), new Number(), and new Boolean(). Since primitive values cannot have properties and methods added to them, the purpose of the wrapper classes is to wrap a primitive value in an object so that it can have properties and methods.

Tip: when we access properties or methods on a primitive value, the browser temporarily converts it to an object using the corresponding wrapper class and calls the property or method on that object; once the call finishes, the value is converted back to a primitive.
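
A small sketch of that difference, for illustration:

typeof new String('hi')              // "object"  - a wrapper object
typeof String('hi')                  // "string"  - plain conversion to a primitive
new String('hi') instanceof String   // true
String('hi') instanceof String       // false

// temporary wrapping: the primitive is boxed just for the method call below
'abc'.toUpperCase()                  // "ABC"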

2. How does promise.then implement chained calls?

Answer: Chaining is implemented by having then return a new Promise.
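
A short sketch of the behaviour (illustrative, not the engine internals): every call to then returns a brand-new promise that resolves with the callback's return value, which is what makes chaining possible.

const p1 = Promise.resolve(1)
const p2 = p1.then((v) => v + 1)   // p2 is a NEW promise, not p1
console.log(p2 === p1)             // false
p2.then((v) => console.log(v))     // 2

// conceptually, then looks roughly like:
// then(onFulfilled) {
//   return new Promise((resolve, reject) => {
//     // call onFulfilled with this promise's value,
//     // then resolve the new promise with its return value
//   })
// }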

3. If bind is called several times, e.g. fn.bind(a).bind(b), what does this finally point to?

Answer: this always points to the context passed to the first bind call, because every later bind is applied to an already-bound function.
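
A quick sketch (names are illustrative):

function who() { return this.name }
const a = { name: 'a' }
const b = { name: 'b' }

const bound = who.bind(a).bind(b)
console.log(bound())   // "a" - the later bind(b) has no effect on this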

4. Briefly describe the V8 garbage collection mechanism.

Answer: v8 garbage collection mainly adopts two strategies:

  • Mark-and-sweep
  • Reference counting

    Mark-and-sweep is the most commonly used garbage collection mechanism for JS. When the collector runs, it marks every variable stored in memory. It then removes the marks from all variables that are in a context and from the variables referenced by those in-context variables. Whatever remains marked is slated for deletion, because nothing in any context can reach it any more. The collector then performs a memory sweep, destroying all the marked values and reclaiming their memory.

    Reference counting records, for each value, how many times it is referenced. When a variable is declared and a reference value is assigned to it, that value's reference count is 1. If the same value is assigned to another variable, the count increases by 1. Likewise, if a variable holding a reference to the value is overwritten with something else, the count decreases by 1. When a value's reference count reaches 0, there is no way to reach it any more, so its memory can be safely reclaimed; the next time the garbage collector runs, it frees the memory of values whose count is 0. (The above is adapted from the 4th edition of the JS "Red Book", Professional JavaScript for Web Developers.)

5. Does the V8 garbage collection algorithm block JS while it is running? Why?

Answer: Yes, it blocks: garbage collection runs on the main thread, so JS execution pauses while the collector works (a "stop-the-world" pause).

6. How to optimize the garbage collection mechanism

Answer: see https://www.cnblogs.com/chengxs/p/10919311.html. In short, rely on the generational (new-generation / old-generation) collection algorithm.

7. Briefly describe the scope chain. How can a function's internal variables be accessed from outside?

Answer: The scope chain is the chain of nested scopes along which variable lookups proceed. You can expose a function's internal variables by returning an inner function (a closure) that references them, so the outside can access those variables, as in the sketch below.
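
A minimal sketch (illustrative names):

function counter() {
  let count = 0                // internal variable
  return function () {         // the returned closure carries count with it
    count += 1
    return count
  }
}

const next = counter()
next()   // 1
next()   // 2 - the outside can only reach count through the closure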

8. Briefly describe closures. How can memory leaks be avoided?

Answer: Whenever you declare a new function and assign it to a variable, the engine stores both the function definition and its closure. The closure contains all the variables that were in scope when the function was created; it is like a small backpack the function carries around, holding those variables. Setting the reference to null as soon as it is no longer needed avoids memory leaks (see the sketch below).
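
A small sketch of releasing a closure by dropping the last reference to it (illustrative):

function heavy() {
  const big = new Array(1e6).fill('x')   // captured by the closure below
  return () => big.length
}

let fn = heavy()
fn()        // 1000000 - big stays alive as long as fn exists
fn = null   // drop the reference; big becomes unreachable and can be collected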

9. Are the methods of a class enumerable? What does class instanceof Function output?

Answer: Methods defined inside a class are not enumerable. A class is essentially a function, and the class itself points to its constructor, as the code below shows:

class Point {
  constructor(x, y) {
    // ...
  }

  toString() {
    // ...
  }
}

Object.keys(Point.prototype)
// []
Object.getOwnPropertyNames(Point.prototype)
// ["constructor","toString"]
class Fn{}
Fn instanceof Function // true

const a = new Fn()
a instanceof Function // false

webpack

1. How does webpack split code into separate chunks?

Answer: It can be implemented with splitChunks.

There are three common code-splitting approaches in webpack:

  • Entry points: manually separate code via the entry configuration. This way is the simplest: just configure multiple entries in entry.
  • Dynamic imports: split code via inline import() calls inside modules (see the sketch below).
  • Prevent duplication: use splitChunks to deduplicate and split out shared chunks.
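
A hedged sketch of the dynamic-import approach: webpack turns each import() call into a separate chunk loaded on demand (the button element and the lodash dependency are assumed for illustration).

const button = document.querySelector('button')

button.addEventListener('click', () => {
  import(/* webpackChunkName: "lodash" */ 'lodash').then(({ default: _ }) => {
    console.log(_.join(['loaded', 'on', 'demand'], ' '))
  })
})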

splitChunks code-splitting configuration:

splitChunks: {
    // which chunks to split; possible values: "async", "initial", and "all"
    chunks: "async",
    // a newly split chunk must be at least minSize bytes; default 30000 (about 30 KB)
    minSize: 30000,
    // a module must be contained in at least minChunks chunks before it is split out; default 1
    minChunks: 1,
    // maximum number of parallel requests when loading files on demand; default 5
    maxAsyncRequests: 5,
    // maximum number of parallel requests when loading an entry point; default 3
    maxInitialRequests: 3,
    // delimiter used in the names of split chunks; default "~", e.g. chunk~vendors.js
    automaticNameDelimiter: '~',
    // sets the chunk file name; default true. When true, splitChunks names chunks
    // automatically based on the chunk and the cacheGroups key.
    name: true,
    // multiple groups can be configured under cacheGroups; each group filters modules
    // with its test condition, and matching modules are assigned to that group.
    // A module can match several groups, but priority decides which group it ends up in.
    // By default, modules from node_modules go to the vendors group, and modules shared
    // by two or more chunks go to the default group.
    cacheGroups: {
        vendors: {
            test: /[\\/]node_modules[\\/]/,
            priority: -10 // priority of this cache group
        },
        default: {
            minChunks: 2,
            priority: -20,
            reuseExistingChunk: true  // whether an existing chunk can be reused
        }
    }
}

Through cacheGroups we can define custom chunk groups, filter modules with the test condition, and assign matching modules to the same group.

2. What is tree-shaking in webpack 4? How does it work? Under what circumstances does it fail, and why?

Answer: tree-shaking is essentially a way to drop unused code when webpack bundles.

How it works: before ES6 we imported modules with CommonJS require(). That kind of import is dynamic, which means we can decide what to import based on conditions:

let module
if(true){
    module = require('a')
}else{
    module = require('b')
}

The CommonJS specification cannot determine, before the code actually runs, which modules are needed and which are not, so CommonJS is not suited to tree-shaking.

ES6's import syntax works perfectly with tree shaking, because unused code can be identified without running the code.

This is because tree shaking only works with static modules: ECMAScript 6 module loading is static, so the whole dependency tree can be derived statically from the syntax tree at parse time.

A side effect is code that performs some action when it is imported but does not necessarily expose an export (for example, a polyfill).

Tree shaking cannot automatically identify which code has side effects, so it is important to mark that manually. If none of the code has side effects, we can simply set the attribute to false to tell webpack it can safely delete unused exports.

Summary: ES6 modules can be statically analyzed, so at compile time webpack can correctly determine which modules are loaded, find the modules and variables that are never used or referenced, and remove the corresponding code.

In addition, you can add a "sideEffects" field to the project's package.json to manually mark which scripts have side effects, as the sketch below suggests.
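
A hedged sketch, with the file names assumed for illustration: with ES module syntax, webpack can see at build time that only used is imported, so unused can be dropped.

// math.js
export function used() { return 1 }
export function unused() { return 2 }

// index.js
import { used } from './math'
console.log(used())

// package.json (assumed project file) - declares the code has no import-time
// side effects, so removing unused exports is safe:
// { "sideEffects": false }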

3. Do you know process.env? What is it for? Does a project need to install it separately? Why?

Answer: env is a property of Node.js's built-in process object; process.env exposes information about the environment the current project is running in. It does not need to be installed separately because it is built into Node.js.
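
A small sketch of reading the environment inside a webpack config (NODE_ENV is an assumed variable set by whoever launches the build):

const isProduction = process.env.NODE_ENV === 'production'

module.exports = {
  mode: isProduction ? 'production' : 'development',
  devtool: isProduction ? false : 'eval-source-map',
}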

4. The difference between import and require

Answer:

  1. A CommonJS module outputs a copy of the value; an ES6 module outputs a reference to the value (see the sketch below).
  2. A CommonJS module is loaded at runtime; an ES6 module is a compile-time output interface.
  3. CommonJS exports a single value; an ES6 module can export several.
  4. CommonJS is dynamic syntax and can be written inside conditionals; ES6 module syntax is static and can only appear at the top level.
  5. In CommonJS, this refers to the current module; in an ES6 module, this is undefined.
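
A hedged sketch of point 1, with the file names assumed: CommonJS hands out a copy of the value, while an ES module import is a live binding.

// counter.cjs
let count = 0
exports.count = count            // the primitive is copied at this moment
exports.inc = () => { count++ }

// main.cjs
const counter = require('./counter.cjs')
counter.inc()
console.log(counter.count)       // 0 - still the copy taken at export time

// counter.mjs
// export let count = 0
// export const inc = () => { count++ }
// main.mjs
// import { count, inc } from './counter.mjs'
// inc(); console.log(count)     // 1 - the import is a live reference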

5. Do you know what static analysis is?

Answer: ES modules can be analyzed without running the code, so you can tell which modules and exports are used and which are not.

6. How does Babel (as used by webpack) work?

Answer:

    1. Lexical analysis: the source code string is converted into tokens. Tokens can be regarded as an array of small syntax fragments.
    2. Syntax analysis: the tokens are converted into an Abstract Syntax Tree (AST).
    3. Transformation: the AST is traversed, and nodes are added, removed, or modified along the way. All Babel plugins work at this stage, e.g. syntax transforms and code compression.
    4. Output: the transformed AST is turned back into JS code by babel-generator, which does a depth-first traversal of the whole AST and builds the string representing the transformed code. A source map is also generated at this stage.
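
A hedged sketch of the four stages using the standalone packages (assumes @babel/parser, @babel/traverse and @babel/generator are installed):

const parser = require('@babel/parser')
const traverse = require('@babel/traverse').default
const generate = require('@babel/generator').default

// 1-2. lexical + syntax analysis: source string -> tokens -> AST
const ast = parser.parse('const answer = 1 + 41;')

// 3. transformation: plugins visit and mutate AST nodes
traverse(ast, {
  Identifier(path) {
    if (path.node.name === 'answer') path.node.name = 'result'
  },
})

// 4. output: AST -> code string (plus an optional source map)
console.log(generate(ast).code)   // const result = 1 + 41;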

7. When do webpack plugins run?

Answer: throughout the build, from loading files to just before output is emitted; different plugins hook into different stages, so their timing differs. A minimal plugin sketch follows.
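
A hedged minimal plugin (names are assumed): apply() is called once when the compiler is set up, and each hook callback runs at its own point in the build, here just before assets are written to disk.

class LogAssetsPlugin {
  apply(compiler) {
    compiler.hooks.emit.tap('LogAssetsPlugin', (compilation) => {
      // runs after compilation, right before files are emitted to the output directory
      console.log(Object.keys(compilation.assets))
    })
  }
}

module.exports = LogAssetsPlugin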

8. How can webpack optimize first-screen loading?

node

1. Do you know the koa source code? How is it implemented?

Answer: Internally koa wraps the http module and implements the idea of a context: it puts res and req onto ctx and adds elegant getters/setters for req and res, which makes them much nicer to use.

The onion model works by recursively dispatching the async functions in the middleware array: when a middleware registered with app.use calls next(), the next middleware is invoked.

Onion model implementation (pseudo-code):

function compose(middlewares) {
    return function (ctx) {
        return dispatch(0)
        function dispatch(i) {
            const fn = middlewares[i]
            if (!fn) return Promise.resolve()
            // run the current middleware; calling next() dispatches the next one,
            // which only runs after the returned promise settles
            return Promise.resolve(fn(ctx, function next() {
                return dispatch(i + 1)
            }))
        }
    }
}
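
A quick usage sketch of the compose function above, showing the onion-style ordering (the middleware bodies are illustrative):

const middlewares = [
  async (ctx, next) => { console.log('1 in');  await next(); console.log('1 out') },
  async (ctx, next) => { console.log('2 in');  await next(); console.log('2 out') },
]

compose(middlewares)({})   // logs: 1 in, 2 in, 2 out, 1 out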

2. koa onion model

Answer: see above

3. How does cdn acceleration work?

Answer: in short, it is caching plus load balancing.

  1. The browser sends a resolution request to the DNS server. Because the CDN has adjusted the domain name resolution process, the client usually gets a CNAME record for the domain, so the browser has to resolve that CNAME again to obtain the actual IP address of a cache server. Note: during this step, the global load-balancing DNS server uses the source IP of the user's request (geographic location, e.g. Beijing vs. Shanghai; access network type, e.g. Telecom vs. Netcom) to direct the request to the cache server that is nearest and most lightly loaded, achieving "nearest access". The routing policy can be based on location, route, load, and so on.
  2. After this second resolution, the browser has the actual IP address of the CDN cache server for the domain and sends its request to that cache server.
  3. Based on the domain name the browser asked for, the cache server uses the CDN's internal DNS to obtain the real IP address of the origin server for that domain, and then submits a request to that real IP address.
  4. Once the cache server gets the content from the origin, it saves a local copy for later use and at the same time returns the data to the client browser, completing the response.

4. If a file on the CDN origin server is modified, does the CDN (and its load balancing) still work correctly?

Answer: A CDN is generally used to store static resources. Take a website as an example: when a user visits the site, static resources are loaded from the CDN; the CDN fetches the resources from the origin server and caches them. This back-to-origin process is periodic and automatic. If you update a file just before the CDN's automatic refresh and want users to see the new version immediately, you have to purge/refresh the CDN cache manually; CDN consoles usually provide this option.

5. What are the modes of load balancing

  1. Round robin (default): each request is assigned to a different backend server in turn. If a backend server goes down, it is removed automatically.
  2. weight: specifies the polling probability; the weight is proportional to the share of traffic. Used when backend servers have uneven performance: the higher the weight, the more likely a server is to be chosen.
  3. ip_hash: if a client has already visited a certain server, later requests from that client are routed to the same server by hashing the client IP.
  4. fair (third party): assigns requests according to backend response time; servers with shorter response times are preferred.
  5. url_hash (third party): assigns requests according to a hash of the requested URL, so each URL always goes to the same backend server; this works well when the backend servers cache content.

6. How to configure load balancing

Answer: Redis, Zookeeper (ps: a rather silly question)

7. How can a service on server A call a script on server B?

Answer: (a senior colleague told me) RPC. RPC means remote procedure call: given two servers A and B, an application deployed on server A wants to call a function/method provided by an application on server B. Because they are not in the same memory space, the call cannot be made directly; the call semantics and the call data have to be conveyed over the network.

8. Can websocket keep heartbeat after gateway forwarding?

Answer: No, it can't.

9. The front-end page uses a strong cache, but something has been updated. How do you make sure users get the latest page?

Answer:

  1. Use negotiated (conditional) caching for static resources.
  2. Append a version number, timestamp, or content hash to static resource URLs.
  3. (I hope everyone can help provide a better solution.)

10. Node event loop

Answer:

Micro and macro tasks in node

  1. Common macro tasks: setTimeout, setInterval, setImmediate, script (the overall code), I/O operations, etc.
  2. Common micro tasks: process.nextTick, new Promise().then(callback), etc.

Microtasks are executed between the phases of the event loop.

  • timers phase: executes callbacks scheduled by setTimeout and setInterval
  • I/O callbacks phase: handles a few I/O callbacks deferred from the previous cycle
  • idle, prepare phase: used only internally by Node
  • poll phase: retrieves new I/O events; Node blocks here when appropriate
  • check phase: executes setImmediate() callbacks
  • close callbacks phase: executes socket close event callbacks

  1. timers phase

The timers phase executes setTimeout and setInterval callbacks and is controlled by the poll phase. As usual, the time specified for a timer in Node is not exact; the callback simply runs as soon as possible after it.

  2. I/O callbacks phase

This stage performs callbacks for certain system operations, such as TCP errors.

  3. poll phase
  • Calculate how long to block and poll for I/O
  • Process events in the poll queue

When the event loop enters the poll phase and there are no timers scheduled, one of two things will happen:

  • If the poll queue is not empty, the event loop will traverse its callback queue to execute synchronously until the queue is exhausted or the system-related hard limit is reached
  • If the poll queue is empty: if scripts have been scheduled with setImmediate, the event loop ends the poll phase and moves on to the check phase to run them. If no setImmediate callbacks are scheduled, the event loop waits for callbacks to be added to the poll queue and executes them immediately.
  4. check phase

This phase allows callbacks to run immediately after the poll phase completes. If the poll phase goes idle and scripts have been queued with setImmediate, the event loop proceeds to the check phase instead of waiting in poll.

  5. close callbacks phase

setImmediate vs setTimeout

setImmediate and setTimeout are similar, but they behave differently depending on when they are called.

  • setImmediate is designed to execute its script once the current poll phase completes.
  • setTimeout schedules its script to run after a minimum threshold (in milliseconds) has elapsed.
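
A small sketch of that difference: inside an I/O callback the loop is in the poll phase, so the check phase (setImmediate) always runs before the next timers phase (setTimeout).

const fs = require('fs')

fs.readFile(__filename, () => {
  setTimeout(() => console.log('timeout'), 0)
  setImmediate(() => console.log('immediate'))
})
// always prints: immediate, then timeout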

11. What happens in Node during require?

Answer:

  1. Get the absolute path of the file to load; if no extension is given, try adding extensions.
  2. Try to read the exported content from the cache; if it is cached, return it, otherwise continue.
  3. Create a new module instance and put it into the cache object.
  4. Try to load the module.
  5. Process it according to the file type.
  6. For a .js file: read the file content, wrap it in the module function wrapper, use the vm module to create a sandbox that runs the wrapped code, obtain the exports, and return them.
  7. For a .json file: read the file content, convert it into a JS object with JSON.parse, and return it.
  8. Return the exported value. (A simplified sketch follows.)
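
A heavily simplified sketch of the steps above (not Node's real implementation, which resolves paths far more carefully and uses the vm module; this one only handles plain .js files):

const fs = require('fs')
const path = require('path')

const cache = {}

function fakeRequire(request) {
  // 1. resolve to an absolute path, trying an extension if none was given
  let filename = path.resolve(request)
  if (!fs.existsSync(filename)) filename += '.js'

  // 2. return the cached exports if the module was loaded before
  if (cache[filename]) return cache[filename].exports

  // 3. create a module record and put it into the cache first
  const module = { exports: {} }
  cache[filename] = module

  // 4-6. read the file, wrap it in a function, and execute it
  const src = fs.readFileSync(filename, 'utf8')
  const wrapper = new Function('exports', 'require', 'module', '__filename', '__dirname', src)
  wrapper(module.exports, fakeRequire, module, filename, path.dirname(filename))

  // 8. whatever the module assigned to module.exports is returned
  return module.exports
}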

http

1. The difference between http and https

Answer: HTTPS is HTTP running on top of the TLS/SSL encryption layer.

  • HTTPS requires applying for a CA certificate
  • HTTPS is more secure because the traffic is encrypted
  • HTTPS uses port 443, HTTP uses port 80

2. The difference between udp and tcp

Answer: See my other article TCP/IP

3. HTTP/3 is based on UDP. Why choose connectionless UDP?

Answer: because UDP is efficient, and UDP's unreliability is dealt with at the application layer.

4. How does HTTP/3 solve UDP packet loss?

Answer: HTTP/3 does more than simply swap the transport protocol for UDP. On top of UDP it implements the QUIC protocol at the application layer, which provides TCP-like features such as connection management, congestion windows, and flow control, effectively turning unreliable UDP into a reliable transport, so packet loss is not a concern. QUIC guarantees packet reliability: every packet carries a unique sequence number. When a packet in a stream is lost, even if other packets of that stream arrive, the data cannot be read by HTTP/3; only after QUIC retransmits the lost packet is the data handed up to HTTP/3.

5. Besides the window control you just mentioned, what other controls does TCP have?

Answer: Retransmission control, flow control, congestion control

6. What time value is the TCP retransmission mechanism based on?

Answer: two concepts are involved:

  • RTT (Round Trip Time): the round-trip delay, i.e. the time from sending a data packet to receiving the corresponding ACK. RTT is per connection; each connection has its own independent RTT.
  • RTO (Retransmission Time Out): the retransmission timeout, i.e. the timeout period mentioned earlier.

The RTO is generally taken to be somewhat larger than the RTT; I usually say roughly twice the RTT.

7. The role of HTTP chunked transfer encoding

Sometimes the server cannot determine the size of the message when generating an HTTP response, so Content-Length cannot be written in advance and the body has to be produced on the fly. In that case the server generally uses chunked encoding: the response carries the header Transfer-Encoding: chunked, indicating chunked transfer encoding (see the Node sketch below).

The encoded body consists of several chunks and ends with a chunk of length 0. Each chunk has two parts: the first is the chunk's length and the second is content of exactly that length; the parts are separated by CRLF. Chunked transfer encoding is only available in HTTP/1.1.
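
A hedged Node sketch: the built-in http server switches to Transfer-Encoding: chunked automatically when the response is written incrementally and no Content-Length header is set (port 3000 is arbitrary).

const http = require('http')

http.createServer((req, res) => {
  res.setHeader('Content-Type', 'text/plain')
  res.write('first chunk\n')         // total length is not known up front
  setTimeout(() => {
    res.write('second chunk\n')
    res.end('last chunk\n')          // a zero-length chunk terminates the body
  }, 100)
}).listen(3000)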

8. How does tcp ensure the reliability of data transmission?

TCP mainly relies on checksums, sequence numbers/acknowledgements, timeout retransmission, maximum segment size, and sliding-window control to achieve reliable transmission.

1. Checksum: the receiver uses the checksum to detect whether the data is corrupted or abnormal; if so, the TCP segment is simply discarded and will be resent. When computing the checksum, TCP prepends a 12-byte pseudo header to the TCP header; the checksum covers three parts: the TCP header, the TCP data, and the TCP pseudo header.

2. Sequence numbers / acknowledgements: if the sender transmits a packet and the receiver does not respond with an acknowledgement (ACK), the packet is retransmitted; likewise, if the receiver's acknowledgement is lost and never reaches the sender, the data is retransmitted. This guarantees the integrity of the data.

With window control, if an acknowledgement fails to return, the data has in fact already reached the other end, so it does not need to be retransmitted (a later acknowledgement covers it); without window control, any data whose acknowledgement is not received would be retransmitted.

Next, consider a segment lost on the way. If the receiving host receives data with a sequence number other than the one it expects, it keeps returning acknowledgements for the data it has received so far.

If the sending host receives the same acknowledgement three times in a row, it retransmits the corresponding segment. This mechanism is more efficient than the timeout-based retransmission described earlier, so it is also called fast retransmit (high-speed retransmission control).

3. Flow control: TCP provides a mechanism that lets the sender adjust the amount of data it sends according to the receiver's actual receiving capacity; this is called flow control. Concretely, the receiving host tells the sending host how much data it can accept, and the sender never sends more than that limit. The limit is called the window size.

4. Congestion control: if the network is already congested, sending more data only adds to the load, and segments are likely to exceed their maximum lifetime without reaching the receiver, causing packet loss. For this reason TCP introduces congestion control: using slow start, it first probes the current congestion state of the network and then decides how fast to transmit.

Slow start: at the beginning the congestion window grows exponentially; a slow-start threshold is set, and once the exponential growth reaches the threshold, the window grows linearly instead. When linear growth runs into network congestion, the congestion window is immediately reset to 1, a new round of slow start begins, and the new threshold becomes half of the previous one.

(congestion window growth diagram omitted)

TCP transmission

Every time the sending host sends data, TCP assigns a sequence number to each packet and waits for the receiving host to acknowledge that sequence number within a specific time. If no acknowledgement arrives within that time, the sending host retransmits the packet. The receiving host uses the sequence numbers to check the received data for loss or reordering; once it has received the data in sequence, it reassembles it into the data stream in the correct order and passes it up to the higher layer for processing.

10. Threads and processes

1. The difference between thread and process?

  • Address space: threads of the same process share that process's address space, while each process has its own independent address space.
  • Resource ownership: threads in the same process share the process's resources such as memory, I/O, and CPU, whereas resources are independent between processes.
  • Robustness: in protected mode, one process crashing does not affect other processes, but if one thread crashes the whole process dies, so multi-process is more robust than multi-thread.
  • Performance: switching between processes consumes more resources and is less efficient, so when frequent switching is needed threads are preferable to processes. Likewise, if concurrent operations must run at the same time while sharing variables, only threads can be used, not processes.
  • Execution: each independent process has its own program entry point and sequential execution flow. Threads cannot run independently; they exist inside an application, and the application controls the execution of its threads.
  • Fundamental difference: the thread is the basic unit of processor scheduling; the process is not.

2. What is multithreading, and what are its advantages and disadvantages?

Multithreading exists to balance load and make full use of CPU resources: to raise CPU utilization, multiple threads complete several tasks at the same time without interfering with each other. It helps when a program spends a lot of time on heavy I/O or processing, such as reading and writing files, or capturing, processing, displaying, and saving video images.

Benefits of multithreading:

1. Threads let long-running work be handled in the background.

2. The user interface stays responsive; for example, when a user clicks a button that triggers a lengthy operation, a progress bar can be shown to report progress.

3. The program may run more efficiently.

4. Threads are useful for tasks that involve waiting, such as user input, file reading, and sending or receiving data over the network.

Disadvantages of multi-threading:

(1) The program slows down while threads wait for shared resources, mainly exclusive resources such as printers.

(2) Managing threads costs extra CPU, and using threads adds the burden of context switching. When this burden exceeds a certain level, multithreading mostly shows its downsides, for example using a separate thread to update each element of an array.

(3) Thread deadlock: long waits or resource contention can lead to deadlocks and other multithreading problems.

(4) Simultaneous reads and writes of shared variables: when multiple threads write to a shared variable, a later thread often overwrites data stored by an earlier one, corrupting the earlier thread's state. Moreover, when reads and writes of shared variables are not atomic, the unpredictable timing of interrupts on different machines can corrupt the data a thread is operating on, producing baffling errors that the programmer cannot foresee.

Why is JavaScript single-threaded?

Because JavaScript is a browser-side scripting language, its main job is to handle user interaction, and user interaction boils down to responding to DOM events and adding, deleting, or modifying DOM elements.

Event responses are handled asynchronously, but the events are still processed in sequence on a single thread: (roughly speaking) they are all added to the macro-task queue, and only one event is handled per turn of the event loop.

So the main reason JavaScript was designed to be single-threaded is DOM manipulation, including manipulating the DOM inside asynchronous event handlers.

Imagine JavaScript were designed as a multithreaded language: DOM manipulation would inevitably involve resource contention, the language implementation would become very bloated, and running such programs on the client would consume resources and hurt performance. At the same time, full multithreading on the client is not really necessary; a single thread with a complete asynchronous queue has a much lower running cost than a multithreaded design.

To exploit multi-core CPUs, HTML5 introduced the Web Worker standard, which lets JavaScript create additional threads; however, the child threads are completely controlled by the main thread and must not manipulate the DOM. So this standard does not change the single-threaded nature of JavaScript. A small sketch follows.
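
A hedged sketch of the Web Worker API mentioned above, using an inline Blob so the example is self-contained; the worker only exchanges messages with the main thread and never touches the DOM.

const workerSource = `
  self.onmessage = (e) => {
    // heavy computation runs off the main thread
    let sum = 0
    for (let i = 0; i < e.data; i++) sum += i
    self.postMessage(sum)
  }
`

const worker = new Worker(URL.createObjectURL(new Blob([workerSource], { type: 'text/javascript' })))
worker.onmessage = (e) => console.log('result from worker:', e.data)
worker.postMessage(1e7)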

Performance monitoring

1. What are the ways to catch errors?

1) try catch

try...catch can only catch synchronous runtime errors; it cannot do anything about syntax errors or asynchronous errors.

2) window.onerror and window.addEventListener('error') catch JS runtime errors.

3) Resource loading errors: listen for the error event with addEventListener in the capture phase (these events do not bubble).

4) Promise errors that are not handled with .catch can be captured with window.addEventListener('unhandledrejection').

5) "script error”

This is sometimes called a cross-domain error. It can occur when a site requests and runs a script hosted on a third-party domain; the most common case is hosting JS resources on a CDN. Solution:

Add the crossorigin="anonymous" attribute to the script tag and add the cross-origin HTTP response headers on the server; after that, window.onerror can capture the full error details of the cross-domain script. A combined sketch of these capture methods follows.
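
A hedged sketch combining the capture methods listed above (the handler bodies are illustrative):

// runtime JS errors
window.onerror = (msg, source, line, col, err) => {
  console.log('runtime error:', msg)
}

// resource loading errors (img/script/link) - must use the capture phase,
// because resource error events do not bubble
window.addEventListener('error', (e) => {
  if (e.target !== window) console.log('resource failed:', e.target.src || e.target.href)
}, true)

// unhandled promise rejections
window.addEventListener('unhandledrejection', (e) => {
  console.log('unhandled rejection:', e.reason)
})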

2. H5 white screen problem analysis

Possible causes:

  • The user has no network connection
  • DNS hijacking
  • HTTP hijacking
  • CDN or other resource files fail to load
  • Server error
  • Front-end code error
  • Front-end compatibility issues
  • User operation error

Handwriting

1. Detect a "pretty number": 4 or more consecutive repeated digits, or 4 or more consecutively increasing digits.

// match 4 or more consecutive repeated digits
var reg1 = /(\d)\1{3,}/
// match 4 or more consecutive increasing digits
var reg2 = /(?:0(?=1)|1(?=2)|2(?=3)|3(?=4)|4(?=5)|5(?=6)|6(?=7)|7(?=8)|8(?=9)){3,}\d/
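
A quick check of the two patterns (the sample numbers are made up):

reg1.test('13877779999')   // true  - contains "7777"
reg2.test('13912345678')   // true  - contains "12345678"
reg1.test('13812357902')   // false
reg2.test('13812357902')   // false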

React

For React interview topics, see my separate article on React interview questions.



For technical questions, feel free to add me and discuss: zhi794855679