Introduction

Most readers have probably already heard of, or worked with, OpenAtom OpenHarmony (hereinafter "OpenHarmony"), but chances are you have not yet implemented face recognition on it. This article walks you through quickly implementing face recognition on an OpenHarmony standard-system device using SeetaFace2 and OpenCV.

Project overview

The project implements three functions: importing a face model, framing faces (drawing bounding boxes), and recognizing faces. The workflow is as follows:

  1. Tap the button in the lower-right corner of the entry page to jump to the camera page and take a photo;
  2. Select one or more faces as the training model and set the corresponding name;
  3. Select an unregistered face image and tap the framing button to draw frames around the detected faces;
  4. Finally, tap Recognize; the application matches the current picture and displays the recognition result on the page.

Getting started quickly

Device-side development

The device side processes images with OpenCV, detects and recognizes the faces in the image data with SeetaFace2, and finally exposes the corresponding NAPI interfaces for the application to call. Device-side development therefore mainly involves porting OpenCV and SeetaFace2 and developing the NAPI interfaces.

OpenCV library porting
OpenCV is a very powerful open-source computer vision library. It has been ported to OpenHarmony by the Knowledge System Working Group and will be merged into the main repository later. Until it lands there, the following steps are all that is needed to port and use OpenCV.

    1. Download the ported OpenCV with the following command:
 git clone git@gitee.com:zhong-luping/ohos_opencv.git
  1. Copy OpenCV into third_party of the OpenHarmony source tree:
 cp -raf opencv ~/openharmony/third_party/
  1. Open BUILD.gn in the OpenCV directory and trim the build appropriately. The video and flann modules are not needed for this project, so simply comment out the corresponding entries:
 import("//build/ohos.gni")
group("opencv") {
    deps = [
        "//third_party/opencv/modules/core:opencv_core",
      //  "//third_party/opencv/modules/flann:opencv_flann",
        "//third_party/opencv/modules/imgproc:opencv_imgproc",
        "//third_party/opencv/modules/ml:opencv_ml",
        "//third_party/opencv/modules/photo:opencv_photo",
        "//third_party/opencv/modules/dnn:opencv_dnn",
        "//third_party/opencv/modules/features2d:opencv_features2d",
        "//third_party/opencv/modules/imgcodecs:opencv_imgcodecs",
        "//third_party/opencv/modules/videoio:opencv_videoio",
        "//third_party/opencv/modules/calib3d:opencv_calib3d",
        "//third_party/opencv/modules/highgui:opencv_highgui",
        "//third_party/opencv/modules/objdetect:opencv_objdetect",
        "//third_party/opencv/modules/stitching:opencv_stitching",
        "//third_party/opencv/modules/ts:opencv_ts",
     //   "//third_party/opencv/modules/video:opencv_video",
       "//third_party/opencv/modules/gapi:opencv_gapi",
    ]
  1. Add the part_name of the owning component so that the build framework copies the compiled libraries into the system image.
    In this project we created a new SeetafaceApp subsystem whose component name (part_name) is SeetafaceApi, so part_name = "SeetafaceApi" must be added to the BUILD.gn of each OpenCV module.
    Taking modules/core as an example:
 ohos_shared_library("opencv_core"){
 sources = [ ... ]
configs = [  ... ]
deps = [ ... ]
part_name = "SeetafaceApi"
}
  1. To compile the project against OpenCV, add the OpenCV dependency.
    Add the following to the BUILD.gn that generates the NAPI library:
 deps += [ "//third_party/opencv:opencv" ]

At this point, the OpenCV port for face recognition is complete.
SeetaFace2 library porting
SeetaFace2 is the second-generation open-source face recognition library from SeetaTech (中科视拓). It contains the three core modules needed to build a fully automatic face recognition system: the face detection module FaceDetector, the facial landmark localization module FaceLandmarker, and the face feature extraction and comparison module FaceRecognizer.
For porting SeetaFace2, refer to the SeetaFace2 porting development document (see references).
NAPI interface development

For NAPI development on OpenHarmony, refer to the video tutorial on NAPI development in OpenHarmony (see references). This article focuses on how the NAPI interfaces call into OpenCV and SeetaFace2.

  1. Implementation of the NAPI interface that returns the face frames.
    int GetRecognizePoints(const char *image_path);
    The application layer passes in a picture path; the interface loads the image data through OpenCV's imread interface, analyzes it with the face detection module FaceDetector to obtain all face rectangles in the picture (each rectangle expressed as x, y, w, h), and returns the rectangles to the application layer as an array.
    The main code for obtaining the face rectangles is as follows:
 static int RecognizePoint(string image_path, FaceRect *rect, int num)
{
    if (rect == nullptr) {
        cerr << "NULL POINT!" << endl;
        LOGE("NULL POINT! \n");
        return -1;
    }
    seeta::ModelSetting::Device device = seeta::ModelSetting::CPU;
    int id = 0;

    /* Set up the face detection and landmark models. */
    seeta::ModelSetting FD_model( "/system/usr/model/fd_2_00.dat", device, id );
    seeta::ModelSetting FL_model( "/system/usr/model/pd_2_00_pts81.dat", device, id );

    seeta::FaceDetector FD(FD_model);
    seeta::FaceLandmarker FL(FL_model);

    FD.set(seeta::FaceDetector::PROPERTY_VIDEO_STABLE, 1);

    /* Read the image data */
    auto frame = imread(image_path);
    seeta::cv::ImageData simage = frame;
    if (simage.empty()) {
        cerr << "Can not open image: " << image_path << endl;
        LOGE("Can not open image: %{public}s", image_path.c_str());
        return -1;
    }
    /* Run face detection on the image and collect all face-frame objects */
    auto faces = FD.detect(simage);
    if (faces.size <= 0) {
        cerr << "detect " << image_path << " failed!" << endl;
        LOGE("detect image: %s failed!", image_path.c_str());
        return -1;
    }
    for (int i = 0; (i < faces.size && i < num); i++) {
        /* Copy each face-frame object out as x/y/w/h coordinates */
        auto &face = faces.data[i];
        memcpy(&rect[i], &(face.pos), sizeof(FaceRect));
    }
    return faces.size;
}

Here, FD_model is the face detection model and FL_model is the facial landmark model (available in 5-point and 81-point variants; this project uses the 81-point model). Both models can be obtained free of charge from the open-source project.
After the face rectangles are obtained as above, they are returned to the application as an array:

 string image = path;
    p = (FaceRect *)malloc(sizeof(FaceRect) * MAX_FACE_RECT);
    /* Detect faces in the picture and get the face-frame coordinates */
    int retval = RecognizePoint(image, p, MAX_FACE_RECT);
    if (retval <= 0) { /* no faces found, or an error occurred */
        LOGE("RecognizePoint failed!");
        free(p);
        return result;
    }
    /* Return all coordinates to the application side as an array */
    for (int i = 0; i < retval; i++) {
        int arry_int[4] = {p[i].x, p[i].y, p[i].w, p[i].h};
        int arraySize = (sizeof(arry_int) / sizeof(arry_int[0]));
        for (int j = 0; j < arraySize; j++) {
            napi_value num_val;
            if (napi_create_int32(env, arry_int[j], &num_val) != napi_ok) {
                LOGE("napi_create_int32 failed!");
                return result;
            }
            napi_set_element(env, array, i * arraySize + j, num_val);
        }
    }
    if (napi_create_object(env, &result) != napi_ok) {
        LOGE("napi_create_object failed!");
        free(p);
        return result;
    }
    if (napi_set_named_property(env, result, "recognizeFrame", array) != napi_ok) {
        LOGE("napi_set_named_property failed!");
        free(p);
        return result;
    }
    free(p);
    return result;

Here, array is a NAPI array object created with napi_create_array. All rectangle data is stored into it with napi_set_element, and finally napi_set_named_property attaches the array to result, an object type the application side understands, which is then returned.
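
For completeness, the array itself has to be created before the fill loop runs. A minimal sketch, assuming the variable names used in the snippet above:

 /* Sketch: create the NAPI array that the loop above fills in.
  * "env", "array" and "result" are names assumed from the snippet. */
napi_value array = nullptr;
napi_value result = nullptr;
if (napi_create_array(env, &array) != napi_ok) {
    LOGE("napi_create_array failed!");
    return result;
}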

  1. Face search/recognition initialization and deinitialization.

    1. int FaceSearchInit();
    2. int FaceSearchDeinit();

These two interfaces serve the face search and recognition calls. Initialization mainly covers model registration and initialization of the recognition engine:

 static int FaceSearchInit(FaceSearchInfo *info)
{
    if (info == nullptr) {
        info = (FaceSearchInfo *)malloc(sizeof(FaceSearchInfo));
        if (info == nullptr) {
            cerr << "NULL POINT!" << endl;
            return -1;
        }
    }

    seeta::ModelSetting::Device device = seeta::ModelSetting::CPU;
    int id = 0;
    seeta::ModelSetting FD_model( "/system/usr/model/fd_2_00.dat", device, id );
    seeta::ModelSetting PD_model( "/system/usr/model/pd_2_00_pts5.dat", device, id );
    seeta::ModelSetting FR_model( "/system/usr/model/fr_2_10.dat", device, id );

    /* Create the engine: detector, 5-point landmarker and recognizer */
    info->engine = make_shared<seeta::FaceEngine>(FD_model, PD_model, FR_model, 2, 16);
    info->engine->FD.set( seeta::FaceDetector::PROPERTY_MIN_FACE_SIZE, 80);

    info->GalleryIndexMap.clear();

    return 0;
}

Deinitialization mainly releases memory:

 static void FaceSearchDeinit(FaceSearchInfo *info, int need_delete)
{
    if (info != nullptr) {
        if (info->engine != nullptr) {
            info->engine = nullptr; /* drop the reference so the shared_ptr releases the engine */
        }

        info->GalleryIndexMap.clear();
        if (need_delete) {
            free(info);
            info = nullptr;
        }
    }
}
  1. Implementation of the face search/recognition registration interface.
    int FaceSearchRegister(const char *value);
    Note that this interface expects the application side to pass in JSON data containing the name of the registered face, the pictures, and the number of pictures, for example {"name":"Andy Lau","sum":2,"image":["11.jpg","12.jpg"]}. When parsing the parameter, call napi_get_named_property for each field of the JSON object. The specific code is as follows:
 napi_get_cb_info(env, info, &argc, &argv, &thisVar, &data);
    napi_value object = argv;
    napi_value value = nullptr;

    if (napi_get_named_property(env, object, (const char *)"name", &value) == napi_ok) {
        char name[64] = {0};
        if (GetNapiValueString(env, value, (char *)name, sizeof(name)) < 0) {
            LOGE("GetNapiValueString failed!");
            return result;
        }
        reg_info.name = name;
    }
    LOGI("name = %{public}s", reg_info.name.c_str());
    if (napi_get_named_property(env, object, (const char *)"sum", &value) == napi_ok) {
        
        if (napi_get_value_uint32(env, value, &sum) != napi_ok) {
            LOGE("napi_get_value_uint32 failed!");
            return result;
        }
    }
    LOGI("sum = %{public}d", sum);
    if (napi_get_named_property(env, object, (const char *)"image", &value) == napi_ok) {
        bool res = false;
        if (napi_is_array(env, value, &res) != napi_ok || res == false) {
            LOGE("napi_is_array failed!");
            return result;
        }
        for (int i = 0; i < sum; i++) {
            char image[256] = {0};
            napi_value imgPath = nullptr;
            if (napi_get_element(env, value, i, &imgPath) != napi_ok) {
                LOGE("napi_get_element failed!");
                return result;
            }
            if (GetNapiValueString(env, imgPath, (char *)image, sizeof(image)) < 0) {
                LOGE("GetNapiValueString failed!");
                return result;
            }
            reg_info.path = image;
            if (FaceSearchRegister(g_FaceSearch, reg_info) != napi_ok) {
                retval = -1;
                break;
            }
        }
    }

The parameters from the application are obtained through napi_get_cb_info, the name and picture count through napi_get_named_property, and each image in the picture array through napi_get_element; each name/image pair is then registered into the SeetaFace2 recognition engine through the FaceSearchRegister interface. The specific implementation is as follows:

 static int FaceSearchRegister(FaceSearchInfo &info, RegisterInfo &reg)
{
    if (info.engine == nullptr) {
        cerr << "NULL POINT!" << endl;
        return -1;
    }

    seeta::cv::ImageData image = cv::imread(reg.path);
    /* Register the face with the engine and remember the id -> name mapping */
    auto id = info.engine->Register(image);
    if (id >= 0) {
        info.GalleryIndexMap.insert(make_pair(id, reg.name));
    }

    return 0;
}

Once the data is registered, the engine can match these faces in later queries.

  1. Implementation of the interface that returns the face search/recognition result.
 char *FaceSearchGetRecognize(const char *image_path);

This interface searches the recognition engine with the picture passed in. If a similar face has been registered with the engine, the corresponding registered name is returned; otherwise a failure string such as "ignored" or "recognize failed" is returned. The method is implemented through an asynchronous callback:

 // Create the async work; on success its handle is returned through the
    // last parameter (commandStrData->asyncWork)
    napi_value resourceName = nullptr;
    napi_create_string_utf8(env, "FaceSearchGetPersonRecognizeMethod", NAPI_AUTO_LENGTH, &resourceName);
    napi_create_async_work(env, nullptr, resourceName, FaceSearchRecognizeExecuteCB, FaceSearchRecognizeCompleteCB,
            (void *)commandStrData, &commandStrData->asyncWork);

    // Queue the async work; the runtime schedules its execution
    napi_queue_async_work(env, commandStrData->asyncWork);
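
The deferred handle stored in commandStrData is produced by napi_create_promise when the NAPI method is invoked, and the resulting promise is what the method returns to the caller. A minimal sketch, with names assumed from the snippets above:

 // Sketch: create the promise whose deferred handle the complete callback
// resolves; "commandStrData" is assumed to be allocated by the caller.
napi_value promise = nullptr;
if (napi_create_promise(env, &commandStrData->deferred, &promise) != napi_ok) {
    LOGE("napi_create_promise failed!");
    return nullptr;
}
// ... create and queue the async work as shown above, then:
return promise; // the application side awaits this promise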

Here, FaceSearchRecognizeExecuteCB performs the actual face recognition:

 static void FaceSearchRecognizeExecuteCB(napi_env env, void *data)
{
    CommandStrData *commandStrData = static_cast<CommandStrData *>(data);
    if (commandStrData == nullptr) {
        HILOG_ERROR("%{public}s:%{public}d nullptr!", __FUNCTION__, __LINE__);
        return;
    }

    FaceSearchInfo faceSearch = *(commandStrData->mFaceSearch);
    commandStrData->result = FaceSearchSearchRecognizer(faceSearch, commandStrData->filename);
    LOGI("Recognize result : %{public}s !", commandStrData->result.c_str());
}

The FaceSearchRecognizeCompleteCB function then returns the recognition result to the application through the napi_resolve_deferred interface:

 static void FaceSearchRecognizeCompleteCB(napi_env env, napi_status status, void *data)
{
    CommandStrData *commandStrData = static_cast<CommandStrData *>(data);
    napi_value result;

    if (commandStrData == nullptr || commandStrData->deferred == nullptr) {
        LOGE("%{public}s:%{public}d nullptr!", __FUNCTION__, __LINE__);
        if (commandStrData != nullptr) {
            napi_delete_async_work(env, commandStrData->asyncWork);
            delete commandStrData;
        }

        return;
    }

    const char *result_str = (const char *)commandStrData->result.c_str();
    if (napi_create_string_utf8(env, result_str, strlen(result_str), &result) != napi_ok) {
        LOGE("%{public}s:%{public}d napi_create_string_utf8 failed!", __FUNCTION__, __LINE__);
        napi_delete_async_work(env, commandStrData->asyncWork);
        delete commandStrData;
        return;
    }

    /* Resolve the promise with the recognition result, then clean up */
    napi_resolve_deferred(env, commandStrData->deferred, result);
    napi_delete_async_work(env, commandStrData->asyncWork);

    delete commandStrData;
}

Using the face feature extraction and comparison module, the incoming image is compared against the registered data; the returned similarity determines whether the current face is recognized, and the recognition result is returned. The specific implementation:

 static string FaceSearchSearchRecognizer(FaceSearchInfo &info, string filename)
{
    if (info.engine == nullptr) {
        cerr << "NULL POINT!" << endl;
        return "recognize error 0";
    }
    string name;
    float threshold = 0.7f;
    seeta::QualityAssessor QA;
    auto frame = cv::imread(filename);
    if (frame.empty()) {
        LOGE("read image %{public}s failed!", filename.c_str());
        return "recognize error 1!";
    }
    seeta::cv::ImageData image = frame;
    std::vector<SeetaFaceInfo> faces = info.engine->DetectFaces(image);

    for (SeetaFaceInfo &face : faces) {
        int64_t index = 0;
        float similarity = 0;

        auto points = info.engine->DetectPoints(image, face);

        auto score = QA.evaluate(image, face.pos, points.data());
        if (score == 0) {
            name = "ignored";
        } else {
            auto queried = info.engine->QueryTop(image, points.data(), 1, &index, &similarity);
            // no face queried from the database
            if (queried < 1) continue;
            // similarity above the threshold means the face is recognized
            if (similarity > threshold) {
                name = info.GalleryIndexMap[index];
            }
        }
    }
    LOGI("name : %{public}s \n", name.length() > 0 ? name.c_str() : "null");
    return name.length() > 0 ? name : "recognize failed";
}

At this point, all the NAPI interfaces have been developed.
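The article does not show how these interfaces are exported to the JS side; a typical OpenHarmony NAPI module registration is sketched below. The native method names and the module name are assumptions inferred from the application-side calls:

 // Sketch: export the native methods and register the NAPI module.
static napi_value Init(napi_env env, napi_value exports)
{
    napi_property_descriptor desc[] = {
        DECLARE_NAPI_FUNCTION("FaceSearchInit", FaceSearchInitMethod),
        DECLARE_NAPI_FUNCTION("GetRecognizePoints", GetRecognizePointsMethod),
        DECLARE_NAPI_FUNCTION("FaceSearchRegister", FaceSearchRegisterMethod),
        DECLARE_NAPI_FUNCTION("FaceSearchGetRecognize", FaceSearchGetRecognizeMethod),
        DECLARE_NAPI_FUNCTION("FaceSearchDeinit", FaceSearchDeinitMethod),
    };
    napi_define_properties(env, exports, sizeof(desc) / sizeof(desc[0]), desc);
    return exports;
}

static napi_module g_module = {
    1,                 // nm_version
    0,                 // nm_flags
    nullptr,           // nm_filename
    Init,              // nm_register_func
    "seetafaceapp",    // nm_modname (assumed import name)
    nullptr,           // nm_priv
    { 0 },             // reserved
};

extern "C" __attribute__((constructor)) void RegisterSeetafaceModule(void)
{
    napi_module_register(&g_module);
}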

  1. After the NAPI interfaces are developed, the library must be added to the system build, which requires adding a subsystem of our own.
    First create a new ohos.build in the library directory:
 {
    "subsystem": "SeetafaceApp",
    "parts": {
        "SeetafaceApi": {
            "module_list": [
               "//seetaface:seetafaceapp_napi"
            ],
            "test_list": [ ]
        }
    }
}

Next, create a new BUILD.gn in the same directory and add the library's source files and the corresponding dependencies, as follows:

 import("//build/ohos.gni")

config("lib_config") {
    cflags_cc = [
        "-frtti",
        "-fexceptions",
        "-DCVAPI_EXPORTS",
        "-DOPENCV_ALLOCATOR_STATS_COUNTER_TYPE=int",
        "-D_USE_MATH_DEFINES",
        "-D__OPENCV_BUILD=1",
        "-D__STDC_CONSTANT_MACROS",
        "-D__STDC_FORMAT_MACROS",
        "-D__STDC_LIMIT_MACROS",
        "-O2",
        "-Wno-error=header-hygiene",
    ]
}

ohos_shared_library("seetafaceapp_napi") {
    sources = [
        "app.cpp",
    ]

    include_dirs = [
        "./",
        "//third_party/opencv/include",
        "//third_party/opencv/common",
        "//third_party/opencv/modules/core/include",
        "//third_party/opencv/modules/highgui/include",
        "//third_party/opencv/modules/imgcodecs/include",
        "//third_party/opencv/modules/imgproc/include",
        "//third_party/opencv/modules/calib3d/include",
        "//third_party/opencv/modules/dnn/include",
        "//third_party/opencv/modules/features2d/include",
        "//third_party/opencv/modules/flann/include",
        "//third_party/opencv/modules/ts/include",
        "//third_party/opencv/modules/video/include",
        "//third_party/opencv/modules/videoio/include",
        "//third_party/opencv/modules/ml/include",
        "//third_party/opencv/modules/objdetect/include",
        "//third_party/opencv/modules/photo/include",
        "//third_party/opencv/modules/stitching/include",
        "//third_party/SeetaFace2/FaceDetector/include",
        "//third_party/SeetaFace2/FaceLandmarker/include",
        "//third_party/SeetaFace2/FaceRecognizer/include",
        "//third_party/SeetaFace2/QualityAssessor/include",
        "//base/accessibility/common/log/include",
        "//base/hiviewdfx/hilog_lite/interfaces/native/innerkits"
    ]

    deps = [ "//foundation/ace/napi:ace_napi" ]
    deps += [ "//third_party/opencv:opencv" ]
    deps += [ "//third_party/SeetaFace2:SeetaFace2" ]

    external_deps = [
        "hiviewdfx_hilog_native:libhilog",
    ]

    configs = [
       ":lib_config"
    ]

    # Install path for the generated library
    relative_install_dir = "module"
    # Subsystem and component names, referenced by the build configuration below
    subsystem_name = "SeetafaceApp"
    part_name = "SeetafaceApi"
}

After adding these files, we need to add our subsystem to the system build: open build/subsystem_config.json and append the following:

 "SeetafaceApp": {
    "path": "seetaface",
    "name": "SeetafaceApp"
  }

After adding the subsystem, modify the corresponding product configuration: open productdefine/common/products/rk3568.json and append the following:

 "SeetafaceApp:SeetafaceApi":{}

With the above modifications in place, the NAPI library can be compiled directly with the following command:

 ./build.sh --product-name rk3568 --ccache

Refer to the image-flashing section of the RK3568 quick start to flash the resulting image.

Application-side development

With the device-side NAPI functionality complete, the application side implements the corresponding features by calling the face recognition interfaces the NAPI component exposes. The following walks through using NAPI to implement the face recognition features.
Development preparation

  1. Download DevEco Studio 3.0 Beta4;
  2. Set up the development environment; see the application development preparation guide;
  3. Get familiar with eTS development; see the eTS language quick start.
SeetaFace2 initialization

    1. First, place the SeetaFace2 NAPI interface declaration file under the api directory of the SDK (a sketch of such a declaration file follows the code below);
    2. Then import the SeetaFace2 NAPI module;
    3. Call the initialization interface:
 // After the home page instance is created
async aboutToAppear() {
  await StorageUtils.clearModel();
  CommonLog.info(TAG,'aboutToAppear')
  // Initialize face recognition
  let res = SeetafaceApp.FaceSearchInit()
  CommonLog.info(TAG,`FaceSearchInit res=${res}`)
  this.requestPermissions()
}

// Request permissions
requestPermissions(){
  CommonLog.info(TAG,'requestPermissions')
  let context = featureAbility.getContext()
  context.requestPermissionsFromUser(PERMISSIONS, 666,(res)=>{
    this.getMediaImage()
  })
}
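
For steps 1 and 2, the declaration file placed under the SDK's api directory might look like the following minimal sketch; the file name, module name, and type shapes are assumptions, covering only the interfaces used in this article:

 // @ohos.seetafaceapp.d.ts -- hypothetical declaration file
declare namespace SeetafaceApp {
  interface RegisterParams {
    name: string;     // person name to register
    image: string[];  // image paths
    sum: number;      // number of images
  }
  function FaceSearchInit(): number;
  function FaceSearchRegister(params: RegisterParams): number;
  function GetRecognizePoints(imagePath: string): { recognizeFrame: number[] };
  function FaceSearchGetRecognize(imagePath: string): Promise<string>;
  function FaceSearchDeinit(): number;
}
export default SeetafaceApp;

The module is then imported on the page, e.g. import SeetafaceApp from '@ohos.seetafaceapp' (the exact module specifier depends on how the declaration file is named).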

Get all face pictures

Using the file management module fileio and the media library module mediaLibrary, obtain all the picture information in the specified application data directory and assign the paths to faceList; the Image component then loads the pictures from the faceList urls.

 // Get all images
async getMediaImage(){
  let context = featureAbility.getContext();
  // Get the local application sandbox path
  let localPath = await context.getOrCreateLocalDir()
  CommonLog.info(TAG, `localPath:${localPath}`)
  let facePath = localPath + "/files"
  // Get all photos
  this.faceList = await FileUtil.getImagePath(facePath)
}
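
FileUtil here is a helper from the demo project. A rough sketch of how such a helper could enumerate image files with the @ohos.fileio module; the API usage follows OpenHarmony 3.x, and the details should be treated as assumptions about the helper's internals:

 import fileio from '@ohos.fileio';

// Sketch: list image files in a directory and build Image-component urls.
function getImagePath(dirPath: string): Array<{ url: string, isSelect: boolean }> {
  let list = [];
  let dir = fileio.opendirSync(dirPath);
  while (true) {
    let entry;
    try {
      entry = dir.readSync(); // assumed to throw once all entries are consumed
    } catch (e) {
      break;
    }
    if (entry.name.endsWith('.jpg') || entry.name.endsWith('.png')) {
      list.push({ url: 'file://' + dirPath + '/' + entry.name, isSelect: false });
    }
  }
  dir.closeSync();
  return list;
}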

Set the face model

Get the paths of the selected face images and the entered name, then call SeetafaceApp.FaceSearchRegister(params) to set the face model. The parameter params consists of name (the person's name), image (the set of image paths), and sum (the number of images).

 async submit(name) {
    if (!name || name.length == 0) {
        CommonLog.info(TAG, 'name is empty')
        return
    }
    let selectArr = this.faceList.filter(item => item.isSelect)
    if (selectArr.length == 0) {
        CommonLog.info(TAG, 'faceList is empty')
        return
    }
    // Close the dialog
    this.dialogController.close()
    try {
        let urls = []
        let files = []
        selectArr.forEach(item => {
            let source = item.url.replace('file://', '')
            CommonLog.info(TAG, `source:${source}`)
            urls.push(item.url)
            files.push(source)
        })

        // Build the face-model registration parameters
        let params = {
            name: name,
            image: files,
            sum: files.length
        }
        CommonLog.info(TAG, 'FaceSearchRegister' + JSON.stringify(params))
        let res = SeetafaceApp.FaceSearchRegister(params)
        CommonLog.info(TAG, 'FaceSearchRegister res ' + res)
        // Save the registered face model to lightweight storage
        let data = {
            name:name,
            urls:urls
        }
        let modelStr = await StorageUtils.getModel()
        let modelList = JSON.parse(modelStr)
        modelList.push(data)
        StorageUtils.setModel(modelList)
        router.back()
    } catch (err) {
        CommonLog.error(TAG, 'submit fail ' + err)
    }
}

Implement face framing

Call SeetafaceApp.GetRecognizePoints with the current picture path to get the face-frame coordinates (x, y, width, height per face, as returned by the NAPI layer), then draw the frames through a CanvasRenderingContext2D object, as sketched below.
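
A minimal sketch, assuming the flat recognizeFrame array from the NAPI section and a CanvasRenderingContext2D stored in this.context:

 // Sketch: frame every detected face on the canvas.
drawFaceRect() {
    let res = SeetafaceApp.GetRecognizePoints(this.url)
    let points = res.recognizeFrame // flat array: [x, y, w, h, x, y, w, h, ...]
    this.context.strokeStyle = '#FF0000'
    this.context.lineWidth = 2
    for (let i = 0; i + 3 < points.length; i += 4) {
        this.context.strokeRect(points[i], points[i + 1], points[i + 2], points[i + 3])
    }
}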
Realize face recognition

Call SeetafaceApp.FaceSearchGetRecognize(url) with the picture path; it recognizes the face and returns the corresponding registered name.

 // Face recognition
recognize(){
    SeetafaceApp.FaceSearchGetRecognize(this.url).then(res=>{
        CommonLog.info(TAG,'recognize success' + JSON.stringify(res))
        if(res && res != 'ignored' && res != "recognize failed" && res != 'recognize error 1!'){
            // Store the recognized person's name
            this.name = res
        }else{
            this.name = 'No matching model found'
        }
    }).catch(err=>{
        CommonLog.error(TAG,'recognize' + err)
        this.name = 'No matching model found'
    })
}

Reference documentation

SeetaFace2 porting development document:
https://gitee.com/openharmony-sig/knowledge_demo_smart_home/blob/master/docs/SeetaFace2/%E4%BA%BA%E8%84%B8%E8%AF%86%E5%88%AB%E5%BA%93%E7%9A%84%E7%A7%BB%E6%A4%8D.md
Video tutorial on NAPI development in OpenHarmony:
https://www.bilibili.com/video/BV1L44y1p7KE?spm_id_from=333.999.0.0
RK3568 quick start:
https://growing.openharmony.cn/mainPlay/learnPathMaps?id=27
Face recognition application:
https://gitee.com/openharmony-sig/knowledge_demo_travel/blob/master/docs/FaceRecognition_ETS/README_en.md
Application development preparation:
https://docs.openharmony.cn/pages/v3.2Beta/zh-cn/application-dev/quick-start/start-overview.md/
eTS language quick start:
https://docs.openharmony.cn/pages/v3.2Beta/zh-cn/application-dev/quick-start/start-with-ets.md/
Knowledge System Working Group:
https://gitee.com/openharmony-sig/knowledge

