
Recently I implemented a screen-recording feature based on a WebRTC video stream; at its core it simply uses the native MediaRecorder API.

For the MediaRecorder API itself, see the documentation: MediaRecorder

<!--more-->

Problems encountered and how they were solved

When a WebM video is played for the first time, the progress bar cannot load; the progress bar (and the video duration) only appears from the second playback onward.

Chrome has officially marked this issue as Won't Fix, presumably because Chrome does not consider it a bug: the recorded file's header contains no video duration, so determining the duration would require reading the entire file, which would hurt loading performance for larger videos.

Solutions:

Manually calculate the video duration and write it into the Blob.

Use the fix-webm-duration library to fill in the duration field. You have to track the recording duration yourself, so it is not perfectly accurate (the error stays under 1s), but the approach is minimally intrusive and easy to implement (see the sketch below).
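A minimal sketch of the second approach, matching how the library is used in the hook later in this post (patchRecording, chunks, and durationMs are illustrative names):

import ysFixWebmDuration from 'fix-webm-duration'

// chunks: Blob chunks collected from MediaRecorder's ondataavailable
// durationMs: the duration you tracked yourself, e.g. via Date.now() deltas
const patchRecording = (chunks: Blob[], durationMs: number): Promise<Blob> =>
  new Promise((resolve) => {
    const raw = new Blob(chunks, { type: 'video/webm' })
    // fix-webm-duration writes durationMs into the webm header
    // and invokes the callback with the fixed Blob
    ysFixWebmDuration(raw, durationMs, (fixed: Blob) => resolve(fixed))
  })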

Even with the progress bar fixed, the WebM video still cannot automatically receive focus, which is needed for the keyboard's left/right arrow keys to fast-forward and rewind.

With a normal video you call the native video element's focus() method, after which the keyboard's left/right arrow keys seek the video. Because a recorded WebM video inherently lacks this support, it still does not work even once the progress bar is fixed.

Solution:

Set currentTime via JS: jump the playback position to the end, then back to the beginning, simulating a completed playback. This fixes keyboard left/right fast-forward and rewind.

// Fix keyboard-event focus and playback control for the webm video
useEffect(() => {
  const videoEle = document.querySelector(
    '#video-homework-popup',
  ) as HTMLVideoElement
  const duration = videoEle?.duration
  if (typeof duration === 'number' && !isNaN(duration)) {
    // Seek to the end, then back to the start, to simulate a completed playback
    videoEle.currentTime = duration
    videoEle.currentTime = 0
  }
  videoEle?.focus()
  videoEle?.play()
}, [homeworkVideoUrl])

Extracting useVideoRecorder

The screen recording here does not use the computer's camera or the screen-sharing API. It is based on the remote video: the video is continuously drawn onto a canvas, and the stream captured from that canvas is passed to MediaRecorder.
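In other words, the core pipeline looks roughly like this (a simplified sketch; the full hook below drives drawing with timers rather than requestAnimationFrame and adds watermarking, pausing, and cleanup):

// Draw the remote <video> onto a canvas, capture the canvas as a stream,
// and feed that stream into MediaRecorder
const canvas = document.createElement('canvas')
const ctx = canvas.getContext('2d')!
const stream = canvas.captureStream(0) // 0 fps: frames are pushed manually
const track = stream.getVideoTracks()[0] as MediaStreamTrack & {
  requestFrame: () => void
}
const recorder = new MediaRecorder(stream, { mimeType: 'video/webm;codecs=vp8' })

const drawLoop = (video: HTMLVideoElement) => {
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height)
  track.requestFrame() // push the freshly drawn frame into the stream
  requestAnimationFrame(() => drawLoop(video))
}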

The code below strips out the business logic and keeps only what implements front-end screen recording. There is plenty of room for optimization; it is for reference only:

import React, { useEffect, useRef } from 'react'
import throttle from 'lodash/throttle'
import ysFixWebmDuration from 'fix-webm-duration'

const TimeInterval = 16 // timer tick interval (ms)
const DefaultMaxRecordMinutes = 15 // default maximum recording duration: ~15 minutes
const WatermarkParams = {
  width: 118,
  height: 42,
  marginRight: 25,
  marginTop: 17,
}
// recording bitrate presets by resolution
enum BitsPerSecond {
  '360P' = 1000000,
  '480P' = 2500000,
  '720P' = 5000000,
  '1080P' = 8000000,
}

interface RecorderOptions {
  videoRef: React.MutableRefObject<HTMLVideoElement | null> // the video element to record
  videoContainerRef: React.MutableRefObject<HTMLDivElement | null> // the div wrapping the video element
  watermark?: string
  maxRecordMinutes?: number // maximum recording duration (minutes)
  debug?: boolean
  getResolution: () => { width: number; height: number }
}

interface StartRecorderOptions {
  bitrate?: number
}

type CanvasCaptureMediaStreamTrack = MediaStreamTrack & {
  requestFrame: () => void
}

// Current state of the recording
enum RecordingState {
  INACTIVE = 'inactive', // not recording: it has not started or has already stopped
  PAUSED = 'paused', // recording has started and is currently paused
  RECORDING = 'recording', // recording is in progress
}

const useVideoRecorder = ({
  videoRef,
  videoContainerRef,
  watermark,
  maxRecordMinutes = DefaultMaxRecordMinutes,
  debug,
  getResolution,
}: RecorderOptions) => {
  const recorder = useRef<MediaRecorder>()
  const recorderCanvas = useRef<HTMLCanvasElement>()
  const recorderChunks = useRef<Blob[]>([])
  const recorderStream = useRef<MediaStream | null>(null)
  const recorderVideoTrack = useRef<CanvasCaptureMediaStreamTrack>()
  const recorderContext = useRef<CanvasRenderingContext2D>()

  const watermarkImage = useRef<HTMLImageElement>()
  const cursorImage = useRef<HTMLImageElement>()
  const cursorContainer = useRef<HTMLDivElement>()
  const mousePosition = useRef<{ x: number; y: number }>({ x: 0, y: 0 })

  const refreshTimer = useRef<number>()
  const refreshTicks = useRef<number>(0)
  // timer enforcing the maximum recording duration
  const recordTimer = useRef<number>()
  const durationTicks = useRef<number>(0)
  // actual recording-duration tracking
  const startRecordTime = useRef<number>(0)
  const durationTime = useRef<number>(0)

  const isRecording = useRef<boolean>(false)

  // create the recording canvas on mount
  useEffect(() => {
    recorderCanvas.current = document.createElement('canvas')
    const $recorderCanvas = recorderCanvas.current
    $recorderCanvas.setAttribute('style', 'display: none')
    $recorderCanvas.id = 'video-recorder-canvas'
    recorderContext.current = ($recorderCanvas.getContext(
      '2d',
    ) as unknown) as CanvasRenderingContext2D
    // debug canvas
    debug &&
      recorderCanvas.current.setAttribute(
        'style',
        'display: block; position: fixed; bottom: 0; left: 0; height: 350px; background: #fff; z-index: 10; border: 1px solid #fff',
      )

    document.body.appendChild(recorderCanvas.current)
    // watermark image
    watermarkImage.current = document.createElement('img')
    watermark && watermarkImage.current.setAttribute('src', watermark)
    // mouse cursor
    cursorImage.current = document.createElement('img')
    cursorContainer.current = document.createElement('div')
    cursorContainer.current.setAttribute(
      'style',
      'pointer-events: none; z-index: 100; display: inline-block; position: absolute;',
    )
    cursorContainer.current.appendChild(cursorImage.current)
  }, [])

  useEffect(() => {
    videoContainerRef.current?.addEventListener('mousemove', handleMousemove)

    return () => {
      videoContainerRef.current?.removeEventListener(
        'mousemove',
        handleMousemove,
      )
    }
  }, [])

  // reset the recording if the network goes offline
  useEffect(() => {
    window.addEventListener('offline', resetVideoRecord)

    return () => {
      window.removeEventListener('offline', resetVideoRecord)
    }
  }, [])

  const handleMousemove = throttle((e: MouseEvent) => {
    mousePosition.current.x = e.offsetX
    mousePosition.current.y = e.offsetY
  }, 16)

  const onRefreshTimer = () => {
    refreshTicks.current++
    // capture a frame roughly every 64ms
    if (
      isRecording.current &&
      refreshTicks.current % Math.round(64 / TimeInterval) === 0
    ) {
      recorderVideoTrack.current?.requestFrame()
      recorderDrawFrame()
    }
  }

  // track elapsed time and pause at the maximum recording duration
  const onRecordTimer = () => {
    durationTicks.current++
    if (durationTicks.current >= maxRecordMinutes * 60) {
      pauseRecord()
    }
  }

  const recorderDrawFrame = () => {
    const $recorderCanvas = recorderCanvas.current!
    const $player = videoRef.current!
    const ctx = recorderContext.current!
    const { width, height } = getResolution() // returns the video's current width/height
    $recorderCanvas.width = width // $player.videoWidth
    $recorderCanvas.height = height // $player.videoHeight

    ctx.drawImage(
      $player,
      0,
      0,
      $player.videoWidth,
      $player.videoHeight,
      0,
      0,
      $recorderCanvas.width,
      $recorderCanvas.height,
    )
    drawWatermark(ctx, width)
  }

  // draw the watermark; an image watermark must be base64-encoded
  const drawWatermark = (
    ctx: CanvasRenderingContext2D,
    canvasWidth: number,
  ) => {
    if (watermark) {
      ctx.drawImage(
        watermarkImage.current!,
        canvasWidth - WatermarkParams.width - WatermarkParams.marginRight,
        WatermarkParams.marginTop,
      )
    }
  }

  // start recording
  const startRecord = (options: StartRecorderOptions = {}) => {
    if (
      recorder.current?.state === RecordingState.RECORDING ||
      recorder.current?.state === RecordingState.PAUSED
    ) {
      return
    }

    console.log('start record')
    recorderStream.current = recorderCanvas.current!.captureStream(0) // 0 fps: frames are pushed via requestFrame()
    recorderVideoTrack.current = recorderStream.current!.getVideoTracks()[0] as CanvasCaptureMediaStreamTrack
    const audioTrack = (videoRef.current?.srcObject as MediaStream | null)?.getAudioTracks()[0]
    if (audioTrack) {
      recorderStream.current!.addTrack(audioTrack) // record the audio track as well
    }

    if (!window.MediaRecorder) {
      return false
    }

    const mimeType = 'video/webm;codecs=vp8'
    recorder.current = new MediaRecorder(recorderStream.current, {
      mimeType,
      // combined audio and video bitrate
      bitsPerSecond: options.bitrate || BitsPerSecond['360P'],
    })
    isRecording.current = true
    refreshTimer.current = window.setInterval(onRefreshTimer, 16)
    recordTimer.current = window.setInterval(onRecordTimer, 1000)
    recorder.current.ondataavailable = handleRecordData // called with a Blob of recorded data every timeslice and on stop
    recorder.current.start(10000) // start recording, emitting a data chunk every 10s
    startRecordTime.current = Date.now()
  }

  // Pause recording - used when the maximum recording duration is exceeded
  const pauseRecord = () => {
    if (
      recorder.current &&
      recorder.current?.state === RecordingState.RECORDING
    ) {
      recorder.current.pause()
      isRecording.current = false
      clearInterval(recordTimer.current)
      clearInterval(refreshTimer.current)
      durationTime.current = Date.now() - startRecordTime.current
    }
  }

  // stop recording
  const stopRecord = () => {
    return new Promise((resolve, reject) => {
      if (
        recorder.current?.state === RecordingState.RECORDING ||
        recorder.current?.state === RecordingState.PAUSED
      ) {
        console.log('stop record')
        if (!window.MediaRecorder) {
          reject(new Error('Your browser does not support the MediaRecorder API'))
          return
        }

        recorder.current?.stop()
        recorderVideoTrack.current!.stop()
        clearInterval(refreshTimer.current)
        clearInterval(recordTimer.current)
        isRecording.current = false
        recorder.current.onstop = () => {
          if (!durationTime.current) {
            durationTime.current = Date.now() - startRecordTime.current
          }

          // Fix the missing duration in the recorded webm by writing the duration into the blob
          ysFixWebmDuration(
            new Blob(recorderChunks.current, { type: 'video/webm' }),
            durationTime.current,
            function (fixedBlob: Blob) {
              resolve(fixedBlob)
              recorderChunks.current = []
              durationTime.current = 0
            },
          )
        }
      } else {
        reject(new Error('Recorder is not started'))
      }
    })
  }

  const resetVideoRecord = () => {
    if (
      recorder.current?.state === RecordingState.RECORDING ||
      recorder.current?.state === RecordingState.PAUSED
    ) {
      recorder.current?.stop()
      recorderVideoTrack.current!.stop()
      recorder.current.onstop = () => {
        recorderChunks.current = []
        recorderStream.current = null
      }
    }
    isRecording.current = false
    clearInterval(refreshTimer.current)
    clearInterval(recordTimer.current)
  }

  // handle recorded video stream data
  const handleRecordData = (e: BlobEvent) => {
    if (e.data.size > 0 && recorderChunks.current) {
      recorderChunks.current.push(e.data)
    }
  }

  // download the recorded video
  const download = (blob: Blob) => {
    if (recorder.current && blob.size > 0) {
      const name = new Date().getTime()
      const a = document.createElement('a')
      a.href = URL.createObjectURL(blob)
      a.download = `${name}.webm`
      document.body.appendChild(a)
      a.click()
    }
  }

  return {
    startRecord,
    stopRecord,
    resetVideoRecord,
    download,
  }
}

export default useVideoRecorder
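
A hypothetical usage sketch of the hook (the component, the fixed resolution, and the button wiring are all illustrative):

import React, { useRef } from 'react'
import useVideoRecorder from './useVideoRecorder'

const RecordablePlayer: React.FC = () => {
  const videoRef = useRef<HTMLVideoElement | null>(null)
  const videoContainerRef = useRef<HTMLDivElement | null>(null)
  const { startRecord, stopRecord, download } = useVideoRecorder({
    videoRef,
    videoContainerRef,
    getResolution: () => ({ width: 1280, height: 720 }), // illustrative fixed size
  })

  const handleStop = async () => {
    const blob = (await stopRecord()) as Blob // stopRecord resolves with the fixed Blob
    download(blob)
  }

  return (
    <div ref={videoContainerRef}>
      <video ref={videoRef} autoPlay />
      <button onClick={() => startRecord()}>Start recording</button>
      <button onClick={handleStop}>Stop and download</button>
    </div>
  )
}

export default RecordablePlayer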

Compatibility

If you want to record the screen on the front end, you need to consider compatibility issues:

MediaRecorder API

  • Compatibility with older versions of Safari (mainly the built-in WeChat browser on Mac)

WebM format
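
A runtime feature-detection sketch can guard both concerns before starting a recording (the candidate mime types are illustrative):

// Returns the first supported mime type, or null if MediaRecorder
// (or every candidate format) is unavailable in this browser
const pickRecordingMimeType = (): string | null => {
  if (typeof MediaRecorder === 'undefined') return null
  const candidates = ['video/webm;codecs=vp8', 'video/webm', 'video/mp4']
  return candidates.find((t) => MediaRecorder.isTypeSupported(t)) ?? null
}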

