1. Description
Fate supports model prediction in two modes: offline prediction and online prediction. The two produce the same results, but they differ significantly in usage, applicable scenarios, high availability, and performance. This article walks through offline prediction using a model trained by Fate with the hetero (vertical) logistic regression algorithm.
- The prediction task is based on the model trained in "Privacy Computing FATE - Model Training" above.
- For a basic overview of Fate and its installation and deployment, see the article "Privacy Computing FATE - Key Concepts and Single Machine Deployment Guide".
2. Query model information
Execute the following command to enter the Fate container:
docker exec -it $(docker ps -aqf "name=standalone_fate") bash
First, we need the model's model_id and model_version, which can be queried by job_id with the following command:
flow job config -j 202205070226373055640 -r guest -p 9999 --output-path /data/projects/fate/examples/my_test/
The job_id can be found in FATE Board.
After successful execution, the model information is returned and a folder named job_202205070226373055640_config is generated under the specified directory:
{
"data": {
"job_id": "202205070226373055640",
"model_info": {
"model_id": "arbiter-10000#guest-9999#host-10000#model",
"model_version": "202205070226373055640"
},
"train_runtime_conf": {}
},
"retcode": 0,
"retmsg": "download successfully, please check /data/projects/fate/examples/my_test/job_202205070226373055640_config directory",
"directory": "/data/projects/fate/examples/my_test/job_202205070226373055640_config"
}
The job_202205070226373055640_config folder contains 4 files:
- dsl.json: The dsl configuration for the task.
- model_info.json: Model information.
- runtime_conf.json: The runtime configuration for the task.
- train_runtime_conf.json: empty.
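The model_id and model_version can also be pulled out of the response programmatically. A minimal sketch in Python, using the response shape shown above (the retmsg string is abbreviated here):

```python
import json

# Response body returned by `flow job config` (shape as shown above).
raw = """
{
  "data": {
    "job_id": "202205070226373055640",
    "model_info": {
      "model_id": "arbiter-10000#guest-9999#host-10000#model",
      "model_version": "202205070226373055640"
    },
    "train_runtime_conf": {}
  },
  "retcode": 0,
  "retmsg": "download successfully"
}
"""
resp = json.loads(raw)
assert resp["retcode"] == 0  # non-zero retcode means the query failed

info = resp["data"]["model_info"]
print(info["model_id"])       # arbiter-10000#guest-9999#host-10000#model
print(info["model_version"])  # 202205070226373055640
```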
3. Model deployment
Execute the following commands:
flow model deploy --model-id arbiter-10000#guest-9999#host-10000#model --model-version 202205070226373055640
The --model-id and --model-version options take the model_id and model_version queried in the previous step.
Return after successful deployment:
{
"data": {
"arbiter": {
"10000": 0
},
"detail": {
"arbiter": {
"10000": {
"retcode": 0,
"retmsg": "deploy model of role arbiter 10000 success"
}
},
"guest": {
"9999": {
"retcode": 0,
"retmsg": "deploy model of role guest 9999 success"
}
},
"host": {
"10000": {
"retcode": 0,
"retmsg": "deploy model of role host 10000 success"
}
}
},
"guest": {
"9999": 0
},
"host": {
"10000": 0
},
"model_id": "arbiter-10000#guest-9999#host-10000#model",
"model_version": "202205070730131040240"
},
"retcode": 0,
"retmsg": "success"
}
A new model_version is returned after successful deployment; it is this new version, not the training one, that the prediction task must reference.
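Since the deploy response reports a retcode per role and party, it is worth confirming every party succeeded before moving on. A sketch against the JSON shape shown above (retmsg values abbreviated):

```python
# Sketch: verify the deploy response before using the new model_version.
deploy_resp = {
    "data": {
        "detail": {
            "arbiter": {"10000": {"retcode": 0, "retmsg": "ok"}},
            "guest": {"9999": {"retcode": 0, "retmsg": "ok"}},
            "host": {"10000": {"retcode": 0, "retmsg": "ok"}},
        },
        "model_id": "arbiter-10000#guest-9999#host-10000#model",
        "model_version": "202205070730131040240",
    },
    "retcode": 0,
}

# Collect any (role, party) pair whose deployment did not succeed.
failed = [
    (role, party)
    for role, parties in deploy_resp["data"]["detail"].items()
    for party, result in parties.items()
    if result["retcode"] != 0
]
assert not failed, f"deployment failed for: {failed}"

# The NEW version returned by deploy is what prediction must use.
new_version = deploy_resp["data"]["model_version"]
print(new_version)  # 202205070730131040240
```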
4. Prepare the prediction configuration
Execute the following commands:
cp /data/projects/fate/examples/dsl/v2/hetero_logistic_regression/hetero_lr_normal_predict_conf.json /data/projects/fate/examples/my_test/
Copy the hetero (vertical) logistic regression prediction configuration example that ships with Fate into our my_test directory.
The prediction configuration file consists of three parts:
- The top part configures the initiator and participant roles.
- The middle part holds the model information.
- The bottom part specifies the data tables used for prediction.
Only the middle model information part needs to be modified. Note that the version number entered here must be the new one returned after model deployment, and job_type must be set to predict to mark the task as a prediction job.
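To illustrate the edits, here is a sketch that builds the patched conf as a Python dict and dumps it to JSON. The field layout follows FATE's DSL v2 runtime conf as used in this walkthrough; the exact keys in your copied example file may differ by FATE version, and the model_version below is the placeholder returned by deploy above:

```python
import json

# Sketch of the relevant parts of hetero_lr_normal_predict_conf.json.
# "202205070730131040240" is the NEW model_version returned by
# `flow model deploy`; replace it with your own value.
conf = {
    "dsl_version": 2,
    "initiator": {"role": "guest", "party_id": 9999},
    "role": {"guest": [9999], "host": [10000], "arbiter": [10000]},
    "job_parameters": {
        "common": {
            "job_type": "predict",  # marks this job as a prediction task
            "model_id": "arbiter-10000#guest-9999#host-10000#model",
            "model_version": "202205070730131040240",
        }
    },
}
print(json.dumps(conf, indent=2))
```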
5. Run the prediction task
Execute the following commands:
flow job submit -c hetero_lr_normal_predict_conf.json
As with model training, the submit command is used, and the configuration file is specified by -c.
Return after successful execution:
{
"data": {
"board_url": "http://127.0.0.1:8080/index.html#/dashboard?job_id=202205070731385067720&role=guest&party_id=9999",
"code": 0,
"dsl_path": "/data/projects/fate/fateflow/jobs/202205070731385067720/job_dsl.json",
"job_id": "202205070731385067720",
"logs_directory": "/data/projects/fate/fateflow/logs/202205070731385067720",
"message": "success",
"model_info": {
"model_id": "arbiter-10000#guest-9999#host-10000#model",
"model_version": "202205070730131040240"
},
"pipeline_dsl_path": "/data/projects/fate/fateflow/jobs/202205070731385067720/pipeline_dsl.json",
"runtime_conf_on_party_path": "/data/projects/fate/fateflow/jobs/202205070731385067720/guest/9999/job_runtime_on_party_conf.json",
"runtime_conf_path": "/data/projects/fate/fateflow/jobs/202205070731385067720/job_runtime_conf.json",
"train_runtime_conf_path": "/data/projects/fate/fateflow/jobs/202205070731385067720/train_runtime_conf.json"
},
"jobId": "202205070731385067720",
"retcode": 0,
"retmsg": "success"
}
6. View the prediction results
You can view the results in FATE Board via the returned board_url or the job_id, but the graphical interface shows at most 100 records. To export all output data of a specified component, use the output-data command:
flow tracking output-data -j 202205070731385067720 -r guest -p 9999 -cpn hetero_lr_0 -o /data/projects/fate/examples/my_test/predict
- -j: specifies the job_id of the prediction task.
- -r: specifies the role (guest here).
- -p: specifies the party id.
- -cpn: specifies the component name.
- -o: specifies the output directory.
Return after successful execution:
{
"retcode": 0,
"directory": "/data/projects/fate/examples/my_test/predict/job_202205070731385067720_hetero_lr_0_guest_9999_output_data",
"retmsg": "Download successfully, please check /data/projects/fate/examples/my_test/predict/job_202205070731385067720_hetero_lr_0_guest_9999_output_data directory"
}
In the directory /data/projects/fate/examples/my_test/predict/job_202205070731385067720_hetero_lr_0_guest_9999_output_data you will find two files:
- data.csv: all output data.
- data.meta: the column headers of the data.
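The two files can be combined into labeled records. A sketch, assuming data.meta holds the column names as a single comma-separated line (the exact meta format may vary across FATE versions, so adjust the parsing if needed); the sample column names and values below are hypothetical:

```python
import csv
import io

def load_output(meta_text: str, data_text: str):
    """Combine a data.meta header line with data.csv rows into dicts.

    Assumes data.meta contains the column names as one
    comma-separated line; adapt if your FATE version differs.
    """
    header = next(csv.reader(io.StringIO(meta_text)))
    rows = csv.reader(io.StringIO(data_text))
    return [dict(zip(header, row)) for row in rows]

# Hypothetical sample: column names plus two predicted rows.
meta = "id,label,predict_result,predict_score\n"
data = "1,1,1,0.93\n2,0,0,0.12\n"
records = load_output(meta, data)
print(records[0]["predict_score"])  # → 0.93
```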