This article was originally published on the WeChat public account: Programmer Success.

This article records the pitfalls I stepped on while using flv.js to play surveillance video. The Getting Started guide on the official site is only a few lines of code, and it is easy to get a demo running that plays video, but the assorted anomalies that appear during playback will make you question your life.

The reason is that, on the one hand, the documentation on GitHub is rather obscure and the instructions are brief; on the other hand, shaped by a "video playback" mindset, I did not understand streams well enough and had too little experience processing them.

Below I summarize the pitfalls I stepped on and the background knowledge I picked up along the way.

Outline preview

The content presented in this article includes the following aspects:

  • On-demand and live
  • Static Data vs Streaming Data
  • Why choose flv?
  • Protocol and Basic Implementation
  • Detail handling points
  • Style customization

On-demand and live

What is live streaming? What is on-demand?

Live streaming needs no introduction: with the popularity of Douyin, everyone knows what it is for. On-demand is simply video playback, exactly like watching videos on Bilibili: playing a video that was prepared in advance is called on-demand.

For us on the front end, on-demand means taking an mp4 link and putting it in a video tag. The browser handles video parsing, playback and everything in between, and we can drag the progress bar to whatever moment we want to watch.

Live streaming is different. It has two characteristics:

  1. the data arrives as a stream
  2. it must be real-time

Let's first look at what streaming data is. Most front-end developers who have not worked with audio or video mostly deal with JSON fetched from an interface via ajax, plus the occasional file upload. The characteristic of such data is that it arrives in one shot: one request, one response, and the complete data comes back.

A stream is different: stream data is acquired frame by frame, and you can think of each frame as a small piece. The data of a live stream is not a complete video segment, just a small chunk of binary data; only by splicing chunk after chunk together can a video be produced.

Now look at the real-time aspect. With on-demand, we simply store the complete video on the server, return a link, and play it on the front end with a video tag or a player. The real-time nature of live streaming, however, means the data source cannot sit on the server; it has to be some client.

The data source is on a client, so how does it reach the other clients?

For this question, please see the flow chart below:

[Flow chart: the live client pushes its stream to a streaming media server, from which the playback clients pull it]

As the figure shows, the client that initiates the broadcast connects to a streaming media server and pushes the video stream it produces to the server in real time; this process is called pushing the stream. The other clients also connect to the same streaming media server, but as the playback side: they pull the broadcasting client's video stream in real time, a process called pulling the stream.

push stream -> server -> pull stream: this is today's popular, standard live streaming architecture. As you can see, the entire live pipeline is streaming data transmission, and the data processing deals in binary, which is far more complicated than on-demand.

Our business scenario, real-time preview of surveillance cameras, works exactly the same way, except that the client initiating the broadcast is a camera and the client watching it is a browser.

Static Data vs Streaming Data

The text, JSON, images and so on that we usually deal with are all static data; whatever the front end requests from an interface via ajax is static data.

As mentioned above, the video and audio produced by a live broadcast are streaming data. Streaming data arrives frame by frame and is binary in essence. Because each piece is small, the data flows continuously like water, which makes it well suited to real-time transmission.

Static data has corresponding data types in front-end code, such as string, json and array. So what is the data type of stream data (binary data)? How is it stored on the front end? How is it manipulated?

First, to be clear: the front end can store and manipulate binary. The most basic binary object is ArrayBuffer, which represents a fixed-length chunk of raw memory, for example:

let buffer = new ArrayBuffer(16) // create a 16-byte buffer, filled with zeros
alert(buffer.byteLength) // 16

ArrayBuffer only stores binary data; if you want to manipulate it, you need a view object.

View objects do not store any data themselves; they impose a structure on the data in the ArrayBuffer so that we can work with it. Put bluntly, they are the interface for manipulating binary data.

View objects include:

  • Uint8Array : 1 byte per item
  • Uint16Array : 2 bytes per item
  • Uint32Array : 4 bytes per item
  • Float64Array : 8 bytes per item

By this standard, a 16-byte ArrayBuffer converts to the following view objects with the following lengths:

  • Uint8Array: length 16
  • Uint16Array: length 8
  • Uint32Array: length 4
  • Float64Array: length 2

This is only a brief introduction to how stream data is stored on the front end, so that when you see a long ArrayBuffer in the browser you are not left wondering what it is: remember, it is always binary data.
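
As a quick illustration (a sketch you can run in the browser console), here is the same 16-byte buffer viewed through each of the view objects listed above:

let buf = new ArrayBuffer(16) // 16 bytes, zero-filled

new Uint8Array(buf).length    // 16 (1 byte per item)
new Uint16Array(buf).length   // 8  (2 bytes per item)
new Uint32Array(buf).length   // 4  (4 bytes per item)
new Float64Array(buf).length  // 2  (8 bytes per item)

// All views share the same underlying buffer, so a write through
// one view is visible through the others:
new Uint8Array(buf)[0] = 255
new Uint16Array(buf)[0]       // 255 (on little-endian platforms)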

Why choose flv?

As mentioned earlier, live streaming needs to be real-time, and of course the shorter the delay the better. Many factors determine transmission speed; one of them is the size of the video data itself.

For on-demand scenarios, our most common format, mp4, has the best front-end compatibility. But mp4 files are comparatively large and more complex to parse, which is mp4's disadvantage in a live scenario.

flv is different. Its header is tiny, its structure is simple, and it can be parsed chunk by chunk. Under the real-time demands of live streaming this is a big advantage, which is why it has become one of the most common live solutions.
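
To give a sense of how small and simple the container is: a standard FLV stream begins with a fixed 9-byte header whose first three bytes are the ASCII signature "FLV". A minimal sketch of checking that signature on a received chunk (assuming chunk is a Uint8Array like the ones above):

// true if the chunk starts with the 3-byte FLV signature 'F', 'L', 'V'
const isFlvStream = chunk =>
  chunk[0] === 0x46 && chunk[1] === 0x4c && chunk[2] === 0x56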

Of course there are formats other than flv, each tied to a live streaming protocol; let's compare them one by one:

  • RTMP : built on TCP; relies on Flash in the browser
  • HTTP-FLV : transmits FLV over streaming HTTP I/O; needs browser-side support for playing FLV
  • WebSocket-FLV : transmits FLV over WebSocket; likewise needs browser-side FLV support
  • HLS : HTTP Live Streaming, Apple's HTTP-based streaming protocol; HTML5 can play it directly
  • RTP : UDP-based, about 1 second of latency; not supported by browsers

The common live solution in the early days was in fact RTMP, which had good compatibility but depends on Flash. Flash is now disabled by default in browsers and has been left behind by the times, so RTMP is out.

The HLS protocol is also very common; its corresponding video format is m3u8. It was introduced by Apple and has excellent mobile support, but its fatal flaw is high latency (10 to 30 seconds), so it is out as well.

RTP, as noted, is not supported by browsers at all, which leaves only flv.

But flv splits into HTTP-FLV and WebSocket-FLV. They look like brothers; what is the difference?

As we said earlier, a live stream is transmitted in real time: the connection is not torn down once created, and push/pull continues over it. For a scenario that needs a long-lived connection, the first solution that comes to mind is WebSocket, because WebSocket was built precisely for real-time, bidirectional transmission over a long-lived connection.

However, as the native capabilities of JavaScript have expanded, "black tech" such as fetch has emerged that outclasses ajax. fetch not only offers the friendlier Promise API, it can also consume streaming data natively, performs well and is simple to use, which is more convenient for us developers; hence the HTTP flavor of the flv solution.
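
To make that concrete, here is a minimal sketch (an illustration, not flv.js source code) of consuming a stream chunk by chunk with fetch and a ReadableStream reader; the url is the placeholder address used later in this article:

async function readStream(url) {
  const response = await fetch(url)
  // response.body is a ReadableStream; read it chunk by chunk
  const reader = response.body.getReader()
  while (true) {
    // each chunk arrives as a Uint8Array: a small piece of the
    // stream, never the complete video
    const { done, value } = await reader.read()
    if (done) break
    console.log('received a chunk of', value.byteLength, 'bytes')
  }
}

readStream('http://test.stream.com/fetch-media.flv')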

To sum up, flv is the best fit for live streaming in the browser, but flv is not a silver bullet. Its drawback is that the front-end video tag cannot play it directly; the stream needs processing first.

The solution is our protagonist today: flv.js

Protocol and Basic Implementation

As we mentioned earlier, flv supports both WebSocket and HTTP transports. Fortunately, flv.js supports both protocols as well.

Whether you choose http or ws makes little practical difference in function or performance; the key is which protocol the back-end team provides. My choice here is http, which is more convenient for both ends to handle.

Next, let's walk through the concrete steps of integrating flv.js; the official website is here.

Assuming there is a live stream address, http://test.stream.com/fetch-media.flv, the first step is to spin up a quick demo following the official site:

import flvjs from 'flv.js'
if (flvjs.isSupported()) {
  var videoEl = document.getElementById('videoEl')
  var flvPlayer = flvjs.createPlayer({
    type: 'flv',
    url: 'http://test.stream.com/fetch-media.flv'
  })
  flvPlayer.attachMediaElement(videoEl)
  flvPlayer.load()
  flvPlayer.play()
}

First install flv.js. The first line of the code detects whether the browser supports flv.js; most browsers do. The next step is to grab the DOM element of the video tag: flv.js outputs the processed flv stream into the video element, which is how the stream gets played.

The next key point is creating the flvjs.Player object, which we will call the player instance. It is created by the flvjs.createPlayer function, whose parameter is a configuration object; the commonly used fields are:

  • type : media type, flv or mp4, default flv
  • isLive : optional, whether this is a live stream, default true
  • hasAudio : whether the stream has audio
  • hasVideo : whether the stream has video
  • url : the stream address, which can be http(s) or ws(s)

Whether to set the audio/video options above depends on whether the stream address actually carries audio and video. For example, a surveillance stream has only a video stream with no audio, so even if you configure hasAudio: true, there cannot possibly be any sound.
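
For example, a video-only surveillance stream (using the placeholder address from above) might be configured like this; treat it as a sketch rather than a required setup:

var monitorPlayer = flvjs.createPlayer({
  type: 'flv',
  isLive: true,     // a live stream, not VOD
  hasAudio: false,  // the surveillance stream carries no audio track
  hasVideo: true,
  url: 'http://test.stream.com/fetch-media.flv'
})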

After the player instance is created, three steps remain:

  • Mount the element: flvPlayer.attachMediaElement(videoEl)
  • Load the stream: flvPlayer.load()
  • Play the stream: flvPlayer.play()

That is the whole basic integration process. Next, let's talk about the details and key points of handling things in practice.

Detail handling points

The basic usage is covered above; below are the key issues that come up in practice.

Pause and play

Pausing and playing is trivial in on-demand: there is a play/pause button under the player, you pause whenever you like, and clicking play again resumes from where you paused. Live streaming is different.

Under normal circumstances, a live player should have no play/pause button and no progress bar, because we are watching real-time footage. If you pause the video, clicking play again cannot resume from the pause point. Why? Because this is real time: when you click play again you should fetch the latest live stream and play the newest video.

In technical terms, the front-end video tag shows a progress bar and a pause button by default, and flv.js outputs the live stream into that tag. Click the pause button and the video stops, consistent with on-demand logic. But click play again and the video resumes from the pause point, which is wrong for live.

So let's look at live play/pause logic from another angle.

Why does live streaming need a pause at all? Take our video surveillance as an example: one page hosts the feeds of several cameras. If every player stayed connected to the server and kept pulling its stream, that would mean a great many connections and a lot of consumption, and the bill adds up fast.

So can we do this instead: on entering the page, find the camera you want to watch and click play, and only then pull its stream; when you stop watching, click pause and the player disconnects. That way we save the useless data consumption.

Therefore, the core play/pause logic for live streaming is really pull stream / disconnect.

With that understood, the solution is to hide the video tag's own pause/play button and implement the play and pause logic ourselves.

Taking the code above as an example, the player instance (the flvPlayer variable) stays unchanged, and the play/pause code looks like this:

const onClick = isPlaying => {
  // the isPlaying argument indicates whether the stream is currently playing
  if (isPlaying) {
    // currently playing: disconnect the stream
    flvPlayer.unload()
    flvPlayer.detachMediaElement()
  } else {
    // already disconnected: re-pull the stream and play
    flvPlayer.attachMediaElement(videoEl)
    flvPlayer.load()
    flvPlayer.play()
  }
}

Exception handling

Hooking flv.js up to a live stream surfaces all kinds of problems: some lie in the back-end data stream, others in the front-end handling logic. Because the stream is fetched in real time and flv.js converts and outputs it in real time, once an error occurs the browser console prints exceptions nonstop.

If you use React and TS, the screen fills up with exceptions and development grinds to a halt. Many kinds of exceptions can occur in live streaming, so error handling is critical.

The official description of exception handling is not very clear, so let me summarize briefly.

First, flv.js exceptions come in two levels; think of them as first-level and second-level exceptions.

Second, flv.js has a peculiarity: its events and errors are all represented by enums, as follows:

  • flvjs.Events : events
  • flvjs.ErrorTypes : first-level exceptions
  • flvjs.ErrorDetails : second-level exceptions

The exceptions and events described below all come from these enums; you can think of each one as a key under its enum.

There are three first-level exceptions:

  • NETWORK_ERROR : network error, indicating a connection problem
  • MEDIA_ERROR : media error, a format or decoding problem
  • OTHER_ERROR : any other error

Three second-level exceptions come up most often:

  • NETWORK_STATUS_CODE_INVALID : bad HTTP status code, usually meaning the url is wrong
  • NETWORK_TIMEOUT : connection timeout, a network or back-end issue
  • MEDIA_FORMAT_UNSUPPORTED : unsupported media format, usually meaning the stream data is not flv

Knowing this, we can listen for exceptions on the player instance:

// listen for error events
flvPlayer.on(flvjs.Events.ERROR, (err, errdet) => {
  // err is the first-level exception, errdet the second-level one
  if (err === flvjs.ErrorTypes.MEDIA_ERROR) {
    console.log('media error')
    if (errdet === flvjs.ErrorDetails.MEDIA_FORMAT_UNSUPPORTED) {
      console.log('unsupported media format')
    }
  }
  if (err === flvjs.ErrorTypes.NETWORK_ERROR) {
    console.log('network error')
    if (errdet === flvjs.ErrorDetails.NETWORK_STATUS_CODE_INVALID) {
      console.log('bad HTTP status code')
    }
  }
  if (err === flvjs.ErrorTypes.OTHER_ERROR) {
    console.log('other error:', errdet)
  }
})

Beyond that, the custom play/pause logic also needs to know the loading state. You can detect that the video stream has finished loading like this:

flvPlayer.on(flvjs.Events.METADATA_ARRIVED, () => {
  console.log('video finished loading')
})
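
For example, a hypothetical loading flag (the variable name is made up for illustration) could drive a spinner like this:

let loading = true // show a spinner while the stream is loading

flvPlayer.load()
flvPlayer.play()

flvPlayer.on(flvjs.Events.METADATA_ARRIVED, () => {
  loading = false // metadata has arrived, frames are about to render
})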

Style customization

Why style customization? As we said earlier, the play/pause logic of live streaming differs from on-demand, so we need to hide the video tag's control-bar elements and implement the related functions with custom elements.

First, hide the play/pause button, progress bar and volume controls. This can be done with CSS (note that these ::-webkit-media-controls-* pseudo-elements are specific to WebKit/Blink browsers):

/* all controls */
video::-webkit-media-controls-enclosure {
  display: none;
}
/* progress bar */
video::-webkit-media-controls-timeline {
  display: none;
}
video::-webkit-media-controls-current-time-display {
  display: none;
}
/* mute button */
video::-webkit-media-controls-mute-button {
  display: none;
}
video::-webkit-media-controls-toggle-closed-captions-button {
  display: none;
}
/* volume slider */
video::-webkit-media-controls-volume-slider {
  display: none;
}
/* play button */
video::-webkit-media-controls-play-button {
  display: none;
}

The play/pause logic was covered above, and you can style a custom button for it here. Beyond that we may also need a full-screen button; here is what the full-screen logic looks like:

const fullPage = () => {
  let dom = document.querySelector('.video')
  // standard Fullscreen API, with the WebKit-prefixed fallback
  if (dom.requestFullscreen) {
    dom.requestFullscreen()
  } else if (dom.webkitRequestFullScreen) {
    dom.webkitRequestFullScreen()
  }
}

As for other custom UI, if you want danmaku (bullet comments) for example, you can implement it yourself by laying an overlay element on top of the video.
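
A minimal sketch of that overlay idea (the class names and the CSS scroll animation are assumptions for illustration):

const addDanmaku = text => {
  // '.video-overlay' is assumed to be an absolutely positioned
  // layer covering the video element
  const overlay = document.querySelector('.video-overlay')
  const item = document.createElement('span')
  item.className = 'danmaku-item' // assumed to scroll via a CSS animation
  item.textContent = text
  overlay.appendChild(item)
  // clean up the element once its scroll animation finishes
  item.addEventListener('animationend', () => item.remove())
}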

I want to learn more

To better protect original work, future articles will be published first on my WeChat public account. The account is original-content only, publishing at least one high-quality article a week, focused on front-end engineering and architecture, exploration of the front end's boundaries at the BFF layer, and practice and thinking around integrated development and delivery.

I have also set up a WeChat group for exchange and learning among students interested in this direction. The group has veterans from big companies, Juejin Lv6 experts, and plenty of others who want to study the same things; come communicate and learn with us~

If you are interested and want to learn more, add me on WeChat (ruidoc) and I will pull you into the group~

