Abstract: To facilitate the development of AI applications for video scenarios, the ModelArts inference platform extracts the common steps of the video inference workflow and presets them in the base image. Users only need to write simple pre-processing and post-processing scripts to develop video AI services just as they would image AI services.

This article is shared from the Huawei Cloud Community post " on the ModelArts Platform"; original author: HW007.

Users familiar with ModelArts inference know that by customizing a model's pre-processing, inference, and post-processing scripts on the ModelArts platform, you can easily deploy an AI service that performs inference on image, text, audio, and video inputs. For video inference, however, users previously had to download the video files, decode them in their own scripts, and upload the processed results to OBS themselves. To facilitate the development of AI applications for video scenarios, the ModelArts inference platform now extracts the common steps of the video inference workflow and presets them in the base image. Users only need to write pre-processing and post-processing scripts to develop video AI services just as they would image AI services.

1. General design description

The generalized inference workflow extracted for video scenarios is as follows:
image.png

As shown in the figure above, the video processing workflow can be divided into six stages: "video source input", "video decoding", "pre-processing", "model inference", "post-processing", and "inference result output". ModelArts presets the three gray stages ("video source input", "video decoding", and "inference result output"); the other three ("pre-processing", "model inference", and "post-processing") can be freely customized by users. The specific customization methods are as follows:

1) Custom model: ModelArts already provides the model-loading logic. Users only need to place their own model, in TensorFlow SavedModel format, into the designated model directory.

2) Custom pre-processing: ModelArts passes the decoded video frame data to the user. The user only needs to override the static method "_preprocess" of the "VideoService" class in "customize_service.py". The constraints on the input and output parameters of "_preprocess" are as follows:
image.png
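Since the exact parameter contract is defined by the constraint table in the figure above, the following is only a minimal sketch. The parameter name "frame", the input key "images", and the normalization scheme are assumptions for illustration, not the platform's actual specification.

```python
import numpy as np

class VideoService:
    @staticmethod
    def _preprocess(frame):
        """Hypothetical sketch: turn one decoded video frame into model input.

        Assumes `frame` is an H x W x 3 uint8 array handed over by the
        preset decoding stage, and that the SavedModel expects a float
        batch under the (assumed) input key "images".
        """
        data = frame.astype(np.float32) / 255.0   # scale pixel values to [0, 1]
        return {"images": data[np.newaxis, ...]}  # add a batch dimension
```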

3) Custom post-processing: ModelArts provides the user with the model's inference output together with the decoded video frame data. The user only needs to override the static method "_postprocess" of the "VideoService" class in "customize_service.py". The constraints on the input and output parameters of "_postprocess" are as follows:
image.png
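Again, the real signature comes from the constraint table in the figure above; this sketch only illustrates the idea. The parameter names, the output key "scores", and the returned fields are assumptions.

```python
import numpy as np

class VideoService:
    @staticmethod
    def _postprocess(inference_result, frame):
        """Hypothetical sketch: convert the raw model output for one frame
        into a JSON-serializable record.

        Assumes `inference_result` is a dict of output arrays (here, a
        batch of class scores under the assumed key "scores") and that
        `frame` is the original decoded H x W x 3 frame.
        """
        scores = np.asarray(inference_result["scores"])[0]
        return {
            "class_id": int(np.argmax(scores)),  # index of the best class
            "score": float(np.max(scores)),      # its confidence
            "frame_shape": list(frame.shape),    # provenance of the frame
        }
```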

2. Demo experience

1) Download the attachment of this article, shown below. It provides a "model" folder containing a debugged video inference model package, along with a validation case written with the tox framework that users can use to debug their own model packages offline.
image.png

2) Upload the "model" folder from the attachment package to Huawei Cloud OBS.

Place the "test/test_data/input" and "test/test_data/output" folders in the accessory package in the same level of the Huawei Cloud OBS folder as the previous "model" folder.
image.png

3) Import the model: on the ModelArts model import page, select "Import from OBS" and choose the model directory just uploaded to OBS, as shown below:
image.png

Fill in the model configuration as follows and click "Create Model":
image.png

You can see that the model was created successfully:
image.png

4) Deploy the service: deploy the model above as an online service, selecting resource nodes with GPUs during deployment (either the public pool or a dedicated pool works):
image.png

You can see that the service has been successfully deployed:
image.png

5) Create a job: select "Create Job" on the service page.
image.png

Select the input video: choose the video files uploaded to the "input" folder in OBS in step 2), as follows:
image.png

Select the output path: choose the "output" folder uploaded to OBS in step 2), as follows:
image.png

6) Wait for the video processing to complete:
image.png

Check the "output" folder in OBS; you can see the inference results for the video after it has been split into frames.
image.png

7) Users can replace the SavedModel-format model file in the "model" folder and modify the "_preprocess" and "_postprocess" functions in "customize_service.py" to implement their own business logic. After making changes, run "test/run_test.sh" first to verify offline that the modified model package performs inference correctly; only then upload the model package to OBS and deploy it as a ModelArts service following the steps above.

The requirements for a video inference model package are as follows:

Model package structure requirements:

└── model
    ├── config.json            (required: ModelArts inference configuration file)
    ├── customize_service.py   (required: inference script)
    ├── saved_model.pb         (required: model file in SavedModel format)
    └── variables              (required: variables directory of the SavedModel)
        ├── variables.data-00000-of-00001
        └── variables.index

The format of the config.json file follows the ModelArts specification: https://support.huaweicloud.com/engineers-modelarts/modelarts_23_0092.html

Currently, only TensorFlow's "tf1.13-python3.7-gpu-async" runtime supports video inference; that is, the "model_type" field in config.json must be "TensorFlow", and the "runtime" field must be "tf1.13-python3.7-gpu-async".
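As a concrete illustration, a minimal config.json fragment might look like the sketch below. Only the "model_type" and "runtime" values are mandated by the text above; any other fields (such as the "model_algorithm" shown here) are assumptions and must follow the linked ModelArts specification.

```json
{
    "model_algorithm": "video_analysis",
    "model_type": "TensorFlow",
    "runtime": "tf1.13-python3.7-gpu-async"
}
```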

The "customize_service.py" text must have a "VideoService" class, and the "VideoService" class must have two static methods "_preprocess" and "_postprocess". The corresponding function signature constraints are as follows:
image.png
image.png


