Welcome to my GitHub

All of Xinchen's original articles (including supporting source code) are classified and summarized here: https://github.com/zq2599/blog_demos

About the "JavaCV Camera in Action" series

  • "JavaCV's Camera Combat", as the name implies, is a collection of actual combats that use the JavaCV framework to process various cameras. This is an original series by Xinchen as a Java programmer in the field of computer vision. Through continuous coding Actual combat, learn with you various operations of video, audio, pictures and other resources
  • In addition, it should be noted that the camera used in the whole series is a USB camera or the built-in camera of the notebook, <font color="red">not a smart camera based on network access</font>

Overview of this article

  • As the opening of the whole series, this article is important: from the environment to the code, it lays the foundation for the subsequent articles. In short, this article covers the following:
  • Environment and version information
  • Basic routine analysis
  • Basic framework coding
  • Deploy the media server
  • Let's start with the environment and version information.

Environment and version information

  • First, here is the software and hardware environment used throughout this series, for your reference:
  1. Operating system: win10
  2. JDK:1.8.0_291
  3. maven:3.8.1
  4. IDEA:2021.2.2(Ultimate Edition)
  5. JavaCV:1.5.6
  6. Media server: nginx-rtmp deployed with docker, image: <font color="blue">alfg/nginx-rtmp:v1.3.1</font>

Source code download

  • The complete source code of "JavaCV Camera in Action" can be downloaded from GitHub; the addresses and links are shown in the table below ( https://github.com/zq2599/blog_demos ):
| Name | Link | Remark |
| --- | --- | --- |
| Project homepage | https://github.com/zq2599/blog_demos | The project's homepage on GitHub |
| git repository address (https) | https://github.com/zq2599/blog_demos.git | Repository address of the project source code, https protocol |
| git repository address (ssh) | git@github.com:zq2599/blog_demos.git | Repository address of the project source code, ssh protocol |
  • There are multiple folders in this git project. The source code of this article is in the <font color="blue">javacv-tutorials</font> folder, as shown in the red box below:

(figure: repository layout with the javacv-tutorials folder highlighted)

  • There are multiple sub-projects under <font color="blue">javacv-tutorials</font>. The code of the "JavaCV Camera in Action" series is in the <font color="red">simple-grab-push</font> sub-project:

(figure: the simple-grab-push sub-project inside javacv-tutorials)

Basic routine analysis

  • The series includes several camera-based exercises, such as window preview, saving video to a file, and pushing video to a media server. Their basic routine is roughly the same; the simplest flowchart is as follows:

(figure: flowchart of the basic routine: grab a frame from the camera, process it, output it)

  • As can be seen from the figure above, the whole process is to continuously grab frames from the camera, then process and output them; a minimal sketch of this loop is shown below.
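  • To make the routine concrete, here is a minimal, self-contained sketch of the grab-process-output loop using JavaCV's OpenCVFrameGrabber and CanvasFrame. It is only an illustration under my own assumptions (camera index 0, a preview window as the "output" step, and the class name GrabLoopSketch are mine); the reusable version of this flow is built into the abstract class developed in the next section:
import org.bytedeco.javacv.CanvasFrame;
import org.bytedeco.javacv.Frame;
import org.bytedeco.javacv.OpenCVFrameGrabber;

public class GrabLoopSketch {
    public static void main(String[] args) throws Exception {
        // assume the first local camera (index 0)
        OpenCVFrameGrabber grabber = new OpenCVFrameGrabber(0);
        grabber.start();

        // the "output" here is simply a preview window; other exercises in the
        // series replace it with a file recorder or an RTMP pusher
        CanvasFrame canvas = new CanvasFrame("preview");
        canvas.setDefaultCloseOperation(javax.swing.JFrame.EXIT_ON_CLOSE);

        try {
            while (canvas.isVisible()) {
                Frame frame = grabber.grab();   // take one frame from the camera
                if (frame == null) {
                    break;                      // the camera stopped delivering frames
                }
                canvas.showImage(frame);        // process/output step
            }
        } finally {
            canvas.dispose();
            grabber.close();
        }
    }
}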

Basic framework coding

  • After reading the basic routine above, you may already have this idea: since the routine is fixed, the common part of the code can be fixed as well.
  • That's right. The next step is to consider how to capture the <font color="blue">routine</font> in code. My idea is to develop an abstract class named <font color="red">AbstractCameraApplication</font> as the parent class of every application in the "JavaCV Camera in Action" series. It is responsible for the whole flow of initialization, frame grabbing, processing, and output, while its subclasses focus on how the frame data is actually processed and output. The UML diagram of the whole design is as follows:

(figure: UML diagram of AbstractCameraApplication and its subclasses)

  • Next it is time to develop the abstract class <font color="red">AbstractCameraApplication.java</font>. Before coding, let's design it: the following figure shows the main methods and execution flow of AbstractCameraApplication, where the bold text is the method names and the red blocks are abstract methods left for subclasses to implement:

(figure: main methods and execution flow of AbstractCameraApplication)

  • The next step is to create the project. What I created here is a maven project, with the following pom.xml:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <parent>
        <artifactId>javacv-tutorials</artifactId>
        <groupId>com.bolingcavalry</groupId>
        <version>1.0-SNAPSHOT</version>
    </parent>
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.bolingcavalry</groupId>
    <version>1.0-SNAPSHOT</version>
    <artifactId>simple-grab-push</artifactId>
    <packaging>jar</packaging>

    <properties>
<!-- current javacpp version -->
        <javacpp.version>1.5.6</javacpp.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
        </dependency>
        <dependency>
            <groupId>ch.qos.logback</groupId>
            <artifactId>logback-classic</artifactId>
            <version>1.2.3</version>
        </dependency>
        <dependency>
            <groupId>org.apache.logging.log4j</groupId>
            <artifactId>log4j-to-slf4j</artifactId>
            <version>2.13.3</version>
        </dependency>

<!-- JavaCV dependency; this single artifact is enough -->
        <dependency>
            <groupId>org.bytedeco</groupId>
            <artifactId>javacv-platform</artifactId>
            <version>${javacpp.version}</version>
        </dependency>
    </dependencies>
</project>
  • Next is the complete code of <font color="red">AbstractCameraApplication.java</font>. Its flow and method names are consistent with the figure above, and detailed comments are included; several points worth noting are discussed after the code:
package com.bolingcavalry.grabpush.camera;

import lombok.Getter;
import lombok.extern.slf4j.Slf4j;
import org.bytedeco.ffmpeg.global.avutil;
import org.bytedeco.javacv.*;
import org.bytedeco.opencv.global.opencv_imgproc;
import org.bytedeco.opencv.opencv_core.Mat;
import org.bytedeco.opencv.opencv_core.Scalar;

import java.text.SimpleDateFormat;
import java.util.Date;

/**
 * @author will
 * @email zq2599@gmail.com
 * @date 2021/11/19 8:07 AM
 * @description Base class for camera applications. It defines the basic flow of grabbing and outputting;
 *              subclasses only need to implement the concrete business methods
 */
@Slf4j
public abstract class AbstractCameraApplication {

    /**
     * Camera index; if there is only one camera, it is 0
     */
    protected static final int CAMERA_INDEX = 0;

    /**
     * Frame grabber
     */
    protected FrameGrabber grabber;

    /**
     * Output frame rate
     */
    @Getter
    private final double frameRate = 30;

    /**
     * Width of the camera image
     */
    @Getter
    private final int cameraImageWidth = 1280;

    /**
     * Height of the camera image
     */
    @Getter
    private final int cameraImageHeight = 720;

    /**
     * Converter between Frame and Mat
     */
    private final OpenCVFrameConverter.ToIplImage openCVConverter = new OpenCVFrameConverter.ToIplImage();

    /**
     * Instantiate and initialize the resources related to output
     */
    protected abstract void initOutput() throws Exception;

    /**
     * Output one frame
     */
    protected abstract void output(Frame frame) throws Exception;

    /**
     * Release the resources related to output
     */
    protected abstract void releaseOutputResource() throws Exception;

    /**
     * Interval between two frames
     * @return
     */
    protected int getInterval() {
        // e.g. at 15 frames per second, the interval between two frames is (1000/15) milliseconds
        return (int)(1000/ frameRate);
    }

    /**
     * Instantiate the frame grabber, an OpenCVFrameGrabber object by default;
     * subclasses can override this as needed
     * @throws FFmpegFrameGrabber.Exception
     */
    protected void instanceGrabber() throws FrameGrabber.Exception {
        grabber = new OpenCVFrameGrabber(CAMERA_INDEX);
    }

    /**
     * Grab one frame with the frame grabber, calling grab() by default;
     * subclasses can override this as needed
     * @return
     */
    protected Frame grabFrame() throws FrameGrabber.Exception {
        return grabber.grab();
    }

    /**
     * Initialize the frame grabber
     * @throws Exception
     */
    protected void initGrabber() throws Exception {
        // instantiate the frame grabber
        instanceGrabber();

        // the camera may support multiple resolutions, so specify one here;
        // you can also leave width and height unset and call grabber.getImageWidth to query them instead
        grabber.setImageWidth(cameraImageWidth);
        grabber.setImageHeight(cameraImageHeight);

        // start the grabber
        grabber.start();
    }

    /**
     * Preview and output
     * @param grabSeconds duration in seconds
     * @throws Exception
     */
    private void grabAndOutput(int grabSeconds) throws Exception {
        // date formatter used for the watermark
        SimpleDateFormat simpleDateFormat = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");

        long endTime = System.currentTimeMillis() + 1000L * grabSeconds;

        // interval between two output frames, 1000 divided by the frame rate by default; subclasses may adjust it
        int interVal = getInterval();

        // position of the watermark on the image
        org.bytedeco.opencv.opencv_core.Point point = new org.bytedeco.opencv.opencv_core.Point(15, 35);

        Frame captureFrame;
        Mat mat;

        // end the loop once the specified duration has elapsed
        while (System.currentTimeMillis()<endTime) {
            // grab one frame
            captureFrame = grabFrame();

            if (null==captureFrame) {
                log.error("frame is null");
                break;
            }

            // convert the Frame object into a Mat object
            mat = openCVConverter.convertToMat(captureFrame);

            // add a watermark to the image: the current time, in the upper-left corner
            opencv_imgproc.putText(mat,
                    simpleDateFormat.format(new Date()),
                    point,
                    opencv_imgproc.CV_FONT_VECTOR0,
                    0.8,
                    new Scalar(0, 200, 255, 0),
                    1,
                    0,
                    false);

            // let the subclass do the output
            output(openCVConverter.convert(mat));

            // sleep briefly so the human eye does not perceive flicker
            if(interVal>0) {
                Thread.sleep(interVal);
            }
        }

        log.info("output finished");
    }

    /**
     * Release all resources
     */
    private void safeRelease() {
        try {
            // resources that the subclass needs to release
            releaseOutputResource();
        } catch (Exception exception) {
            log.error("do releaseOutputResource error", exception);
        }

        if (null!=grabber) {
            try {
                grabber.close();
            } catch (Exception exception) {
                log.error("close grabber error", exception);
            }
        }
    }

    /**
     * All initialization operations in one place
     * @throws Exception
     */
    private void init() throws Exception {
        long startTime = System.currentTimeMillis();

        // set the ffmpeg log level
        avutil.av_log_set_level(avutil.AV_LOG_INFO);
        FFmpegLogCallback.set();

        // instantiate and initialize the frame grabber
        initGrabber();

        // instantiate and initialize the resources related to output;
        // the concrete output is decided by the subclass, e.g. window preview, saving to a video file, etc.
        initOutput();

        log.info("initialization finished, time cost [{}] ms, frame rate [{}], image width [{}], image height [{}]",
                System.currentTimeMillis()-startTime,
                frameRate,
                cameraImageWidth,
                cameraImageHeight);
    }

    /**
     * Perform the grab and output operations
     */
    public void action(int grabSeconds) {
        try {
            // initialization
            init();
            // keep grabbing and outputting
            grabAndOutput(grabSeconds);
        } catch (Exception exception) {
            log.error("execute action error", exception);
        } finally {
            // release resources no matter what
            safeRelease();
        }
    }
}
  • There are a few points to note about the above code:
  1. The object responsible for taking data from the camera is the frame grabber, an OpenCVFrameGrabber instance
  2. In the initGrabber method, setImageWidth and setImageHeight set the image width and height for the frame grabber. The grabber can actually adapt automatically without them, but since some cameras support multiple resolutions, set them according to your actual situation
  3. In the grabAndOutput method, a while loop continuously grabs, processes, and outputs frames. The loop ends after the specified duration; if that end condition does not meet your needs, adjust it to your situation (for example, end the loop when a key is pressed)
  4. In the grabAndOutput method, the grabbed frame is converted into a Mat object, the current time is drawn on the Mat as a watermark in the upper-left corner, the Mat is converted back into a Frame, and that Frame is passed to the subclass's <font color="blue">output</font> method, so the frames that subclasses process and output already carry the time watermark
  • At this point the parent class is complete. In the following exercises we only need to focus on processing and outputting frame data in the subclasses; a minimal subclass sketch is shown below.
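  • To show how a subclass plugs into this flow, here is a minimal sketch of a window-preview subclass based on JavaCV's CanvasFrame. The class name PreviewCameraSketch and its details are my own illustration, not the exact code used later in the series:
package com.bolingcavalry.grabpush.camera;

import org.bytedeco.javacv.CanvasFrame;
import org.bytedeco.javacv.Frame;

/**
 * Hypothetical example: a subclass that only previews the camera frames in a window
 */
public class PreviewCameraSketch extends AbstractCameraApplication {

    private CanvasFrame canvasFrame;

    @Override
    protected void initOutput() {
        // create the preview window; the frame grabber is already initialized by the parent class
        canvasFrame = new CanvasFrame("camera-preview");
        canvasFrame.setDefaultCloseOperation(javax.swing.JFrame.EXIT_ON_CLOSE);
    }

    @Override
    protected void output(Frame frame) {
        // each frame handed over by the parent class already carries the time watermark
        canvasFrame.showImage(frame);
    }

    @Override
    protected void releaseOutputResource() {
        if (null != canvasFrame) {
            canvasFrame.dispose();
        }
    }

    public static void main(String[] args) {
        // preview for 30 seconds, then release all resources
        new PreviewCameraSketch().action(30);
    }
}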

Deploy the media server

  • Some exercises in the "JavaCV Camera in Action" series involve pushing streams and playing them remotely, which requires a streaming media server. Its role is shown in the figure below; we deploy it in advance in this article:

(figure: role of the streaming media server between the push client and the players)

  • As for the type of media server, I chose the commonly used <font color="blue">nginx-rtmp</font>. For simplicity, I found a Linux computer and deployed it with docker, which is a one-line command:
docker run -d --name nginx_rtmp -p 1935:1935 -p 18080:80 alfg/nginx-rtmp:v1.3.1
  • In addition, there is a special case: I have an idle Raspberry Pi 3B that can also serve as the media server, likewise deployed with docker. Note that in this case the image should be <font color="blue">shamelesscookie/nginx-rtmp-ffmpeg:latest</font>, which has an ARM64 version suitable for the Raspberry Pi:
docker run -d --name nginx_rtmp -p 1935:1935 -p 18080:80 shamelesscookie/nginx-rtmp-ffmpeg:latest
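  • With the media server in place, a subclass of AbstractCameraApplication can push the camera stream to it. The sketch below only illustrates the idea using JavaCV's FFmpegFrameRecorder; the class name PushCameraSketch, the server address, and the application/stream names in RTMP_URL are my own assumptions and must be adapted to your nginx-rtmp configuration. The streaming exercises later in the series cover this in detail:
package com.bolingcavalry.grabpush.camera;

import org.bytedeco.ffmpeg.global.avcodec;
import org.bytedeco.ffmpeg.global.avutil;
import org.bytedeco.javacv.FFmpegFrameRecorder;
import org.bytedeco.javacv.Frame;

/**
 * Hypothetical sketch: a subclass that pushes the watermarked frames to the nginx-rtmp server deployed above
 */
public class PushCameraSketch extends AbstractCameraApplication {

    // assumption: replace the host with your server's address, and make sure the
    // application/stream names match your nginx-rtmp configuration
    private static final String RTMP_URL = "rtmp://192.168.50.43:1935/stream/camera";

    private FFmpegFrameRecorder recorder;

    @Override
    protected void initOutput() throws Exception {
        // recorder that encodes frames as H.264 and pushes them over RTMP in FLV format
        recorder = new FFmpegFrameRecorder(RTMP_URL, getCameraImageWidth(), getCameraImageHeight());
        recorder.setFormat("flv");
        recorder.setVideoCodec(avcodec.AV_CODEC_ID_H264);
        recorder.setPixelFormat(avutil.AV_PIX_FMT_YUV420P);
        recorder.setFrameRate(getFrameRate());
        recorder.start();
    }

    @Override
    protected void output(Frame frame) throws Exception {
        // each frame handed over by the parent class already carries the time watermark
        recorder.record(frame);
    }

    @Override
    protected void releaseOutputResource() throws Exception {
        if (null != recorder) {
            recorder.close();
        }
    }

    public static void main(String[] args) {
        // push for 60 seconds, then release all resources
        new PushCameraSketch().action(60);
    }
}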
  • At this point, the preparations for the "JavaCV Camera in Action" series are complete. Starting from the next article, let's begin a wonderful journey; Xinchen's originals will not let you down~

    You are not alone, Xinchen's originals are with you all the way

    https://github.com/zq2599/blog_demos

