
Background

Alluxio is a distributed cache layer in the big data technology stack. It delivers a significant performance improvement when warming up data from S3, HDFS and other storage, and integrates deeply with upper-level compute engines such as Hive, Spark, and Trino, making it a rare general-purpose component for query acceleration in the big data field.

The Alluxio community works closely with the Amazon EMR service and provides an official integration solution for Amazon EMR; for details, see the Alluxio community documentation. Amazon Cloud Technology also provides bootstrap scripts and configurations for quick installation and deployment; for details, see the official Amazon Cloud Technology blog.

The documents above target EMR 5.2x, whose Hive, Spark and other component versions are older, and they do not cover EMR multi-master deployments or the Task instance group integration scenario. When customers use a newer Amazon EMR release with HA and Task instances enabled, the installation and deployment are defective and fail.

Taking the overall Alluxio architecture as its entry point, this document explains Alluxio's design in detail so that readers gain a deeper understanding of the integration approach on Amazon EMR. It also reorganizes and corrects the defects in the Alluxio community's Amazon EMR integration solution, adding support for EMR task instance groups and multi-master high-availability clusters, so that Alluxio better fits customers' production environments on Amazon EMR.

Alluxio Community Documentation:
https://docs.alluxio.io/os/user/stable/en/cloud/AWS-EMR.html

Amazon Cloud Technology Official Blog:
https://aws.amazon.com/cn/blogs/china/five-minitues-to-use-alluxio-guide-emr-spark/

Alluxio architecture overview

The main functional components are:

Master node: similar in design to the HDFS NameNode, with the concepts of a standby master (HA) and a secondary master (metadata image merging); journal nodes start alongside the masters to enable fast recovery


Worker node: similar to the HDFS DataNode. The cache layer provides tiered storage (MEM, SSD and HDD tiers), short-circuit reads and regular cache pass-through, and three cache write modes (MUST_CACHE writes to the cache only, CACHE_THROUGH writes the backend synchronously or asynchronously, THROUGH bypasses the cache entirely)


Job master & job worker: handle reading and writing of cache data. Alluxio provides a framework similar to Hadoop MapReduce: the job master is responsible for resource allocation, job workers execute the data pipelines, and the cache replica count defaults to 1

The main business scenarios of Alluxio are:

  • HDFS/S3 caching and query acceleration
  • A unified UFS path over multiple object stores
  • Data caching across buckets and HDFS clusters

Main features:

  • Multi-tier backend storage over HDFS and S3
  • Cache reads and writes; writes support CACHE_THROUGH mode, updating backend storage asynchronously, while reads fall through directly to backend storage on a cache miss
  • TTL-based cache expiration configuration
 e.g.:
alluxio.user.file.create.ttl=-1
alluxio.user.file.create.ttl.action=DELETE
alluxio.master.ttl.checker.interval=1hour
  • Impersonation/ACL/SASL: HDFS-style permission controls also apply to Alluxio
  • Cache synchronization and cleaning
 e.g.:
Cache cleanup: alluxio fs rm -R -U alluxio:///<path>
Cache sync: alluxio fs load alluxio:///<path>

Alluxio on Amazon EMR integration

Integrated Architecture

The architecture of Alluxio on Amazon EMR is as follows:

[Figure: Alluxio on Amazon EMR deployment architecture]

As shown in the figure above, the Alluxio Master component serves as the management module and is installed in the Amazon EMR master instance group. If Alluxio HA is required, deploy EMR with multiple masters and enable the HA switch (-h) in the bootstrap script; the deployment script then installs an Alluxio master on each EMR master node and uses Raft elections over a registration directory on S3 for Alluxio master failover.

The Alluxio Worker component is installed in the core and task instance groups of Amazon EMR. Because customers may configure scaling for the task instance group, Alluxio workers would scale along with the task compute nodes, and the cache blocks on them would be rebalanced, causing cache-layer performance jitter. The bootstrap script therefore also provides a switch (-g) to control whether Alluxio is installed on the task instance group.

Alluxio tiered storage is configured with a MEM tier, and the UFS backend is configured as S3 data lake storage.
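As a sketch, the tier and UFS settings above map to a handful of `alluxio-site.properties` keys. The bucket name, ramdisk path and quota below are placeholders, not values from this deployment:

```shell
# Write a minimal alluxio-site.properties for a MEM-only tier with an S3 UFS root.
# The bucket, ramdisk path and quota are placeholders; adjust for your cluster.
conf_dir=$(mktemp -d)
cat > "${conf_dir}/alluxio-site.properties" <<'EOF'
# Single in-memory cache tier on each worker
alluxio.worker.tieredstore.levels=1
alluxio.worker.tieredstore.level0.alias=MEM
alluxio.worker.tieredstore.level0.dirs.path=/mnt/ramdisk
alluxio.worker.tieredstore.level0.dirs.quota=8GB
# S3 bucket as the root under-storage (UFS)
alluxio.master.mount.table.root.ufs=s3://example-bucket/alluxio-root/
EOF
grep -c '^alluxio\.' "${conf_dir}/alluxio-site.properties"
```

The bootstrap script merges settings like these (plus anything passed via `-p`) into the same file before starting the daemons.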

The Alluxio job master and job worker components are deployed in the same way as the master and worker nodes: job masters in the EMR master instance group, and job workers in the core and task instance groups.

Integration steps

The following sections detail the implementation steps for Alluxio integration on Amazon EMR.

  • Download the community edition tar package from the Alluxio official website (this article uses 2.7.3)
  • Alluxio can be installed and deployed on EMR through the Amazon CLI or the EMR console, by specifying the initial configuration JSON and the bootstrap script
  • Amazon EMR CLI method:
 aws emr create-cluster \
--release-label emr-6.5.0 \
--instance-groups '[{"InstanceCount":2,"EbsConfiguration":{"EbsBlockDeviceConfigs":[{"VolumeSpecification":{"SizeInGB":64,"VolumeType":"gp2"},"VolumesPerInstance":2}]},"InstanceGroupType":"CORE","InstanceType":"m5.xlarge","Name":"Core-2"},
{"InstanceCount":1,"EbsConfiguration":{"EbsBlockDeviceConfigs":[{"VolumeSpecification":{"SizeInGB":64,"VolumeType":"gp2"},"VolumesPerInstance":2}]},"InstanceGroupType":"MASTER","InstanceType":"m5.xlarge","Name":"Master-1"}]' \
--applications Name=Spark Name=Presto Name=Hive \
--name try-alluxio \
--bootstrap-actions \
Path=s3://xxxxxx.serverless-analytics/alluxiodemobucket/alluxio-emr.sh,\
Args=[s3://xxxxxx.serverless-analytics/alluxiodemobucket/data/,-d,"s3://xxxxxx.serverless-analytics/alluxiodemobucket/install/alluxio-2.7.3-bin.tar.gz",-p,"alluxio.user.block.size.bytes.default=122M|alluxio.user.file.writetype.default=CACHE_THROUGH",-s,"|"] \
--configurations s3://xxxxxx.serverless-analytics/alluxiodemobucket/ \
--ec2-attributes KeyName=ec203.pem
  • EMR console method:

Bootstrap initialization parameters:
s3://xxxxxx.serverless-analytics/alluxiodemobucket/data/ -d s3://xxxxxx.serverless-analytics/alluxiodemobucket/install/alluxio-2.7.3-bin.tar.gz -p alluxio.user.block.size.bytes.default=122M|alluxio.user.file.writetype.default=CACHE_THROUGH -s |

Configuration files and bootstrap script:
s3://xxxxxx.serverless-analytics/alluxiodemobucket/install: installation tar package
s3://xxxxxx.serverless-analytics/alluxiodemobucket/data: test under-store (underlying storage)
s3://xxxxxx.serverless-analytics/alluxiodemobucket/*.sh|*.json: bootstrap script and initial configuration

Initial Alluxio cluster JSON configuration:
{"Classification":"presto-connector-hive","Properties":{"hive.force-local-scheduling":"true","hive.metastore":"glue","hive.s3-file-system-type":"PRESTO"}},{"Classification":"hadoop-env","Configurations":[{"Classification":"export","Properties":{"HADOOP_CLASSPATH":"/opt/alluxio/client/alluxio-client.jar:${HADOOP_CLASSPATH}"}}],"Properties":{}}
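The two classification objects above sit inside a JSON array in the configuration file passed to `--configurations`. A quick local sanity check of that array (the file path here is a temp file, not the real S3 object):

```shell
# Wrap the two EMR classification objects in a JSON array and validate it.
cfg=$(mktemp)
cat > "${cfg}" <<'EOF'
[{"Classification":"presto-connector-hive","Properties":{"hive.force-local-scheduling":"true","hive.metastore":"glue","hive.s3-file-system-type":"PRESTO"}},
 {"Classification":"hadoop-env","Configurations":[{"Classification":"export","Properties":{"HADOOP_CLASSPATH":"/opt/alluxio/client/alluxio-client.jar:${HADOOP_CLASSPATH}"}}],"Properties":{}}]
EOF
python3 -m json.tool "${cfg}" > /dev/null && echo "valid JSON"
```

The `hadoop-env` classification is what puts the Alluxio client jar on the Hadoop classpath, so Hive and Spark can resolve `alluxio://` paths.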

Bootstrap startup script description

  • The bootstrap script performs the Alluxio integration steps: it unpacks the Alluxio tar package, waits for key components such as EMR HDFS to start, then modifies the Alluxio configuration files and starts each Alluxio component process
  • The Alluxio community officially provides a bootstrap integration for Amazon EMR, but it targets older releases: on newer versions (e.g., EMR 6.5) component ports conflict, and scenarios such as task instance group scaling and HA are not considered. This solution upgrades and optimizes the original script as follows:

The official bootstrap script hangs on task nodes: it cannot find a DataNode process there, does not detect the task instance type, and waits in a loop.

 The wait_for_hadoop function needs a change: on a task node, do not wait for the DataNode process, and move on to the next step

  local is_task="false"
  if grep -i "instanceRole" /mnt/var/lib/info/job-flow-state.txt | grep -qi task; then
    is_task="true"
  fi
  • If you do not want Alluxio workers on Task instances, pass the corresponding parameter to the bootstrap script so that the Alluxio installation and deployment process skips task instance nodes
  e) ignore_task_node="true"
     ;;

  if [[ "${ignore_task_node}" = "true" ]]; then
    echo "don't install alluxio on task node, bootstrap exit!"
    exit 0
  fi
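The switches can be sketched with a standard `getopts` loop. The flag letters here (-h for HA, -e to skip task nodes) are taken from the snippets in this article; the option string in the real bootstrap script may differ:

```shell
# Minimal sketch of the bootstrap's option parsing. Flag letters follow the
# snippets in this article; the real script's getopts string may differ.
ha_mode="false"
ignore_task_node="false"
parse_opts() {
  local OPTIND opt
  ha_mode="false"; ignore_task_node="false"
  while getopts "he" opt; do
    case "${opt}" in
      h) ha_mode="true" ;;          # enable Alluxio master HA
      e) ignore_task_node="true" ;; # skip installing workers on task nodes
    esac
  done
}
parse_opts -h -e
echo "ha=${ha_mode} ignore_task=${ignore_task_node}"
```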
  • The default bootstrap script does not support HA; the script must detect multiple master nodes and start the standby Alluxio masters

The embedded journal is used here, so no Zookeeper resources on EMR are consumed:

In Alluxio HA mode, nodes need the list of master RPC addresses:

 if [[ "${ha_mode}" = "true" ]]; then
      namenodes=$(xmllint --xpath "/configuration/property[name='${namenode_prop}']/value/text()" "${ALLUXIO_HOME}/conf/hdfs-site.xml")

      alluxio_journal_addrs=""
      alluxio_rpc_addrs=""
      for namenode in ${namenodes//,/ }; do
        if [[ "${alluxio_rpc_addrs}" != "" ]]; then
          alluxio_rpc_addrs=$alluxio_rpc_addrs","
          alluxio_journal_addrs=$alluxio_journal_addrs","
        fi
        alluxio_rpc_addrs=$alluxio_rpc_addrs"${namenode}:19998"
        alluxio_journal_addrs=$alluxio_journal_addrs"${namenode}:19200"
      done
      set_alluxio_property alluxio.master.rpc.addresses "${alluxio_rpc_addrs}"
      set_alluxio_property alluxio.master.embedded.journal.addresses "${alluxio_journal_addrs}"
 fi
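The comma-join logic above can be exercised in isolation. `build_addr_list` is a name introduced for this sketch, not a function in the real script:

```shell
# Join a comma-separated host list with a port suffix, as the HA branch does
# for alluxio.master.rpc.addresses (19998) and the embedded journal (19200).
build_addr_list() {
  local hosts="$1" port="$2" out="" h
  for h in ${hosts//,/ }; do
    [ -n "${out}" ] && out="${out},"
    out="${out}${h}:${port}"
  done
  echo "${out}"
}
build_addr_list "master1,master2,master3" 19998
```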

Verify that Alluxio works

After the EMR cluster starts, the Alluxio master and worker processes start automatically. On the Alluxio web console (port 19999 on the master), you can easily view the cluster status, capacity, UFS paths and other information.

Alluxio console

[Alluxio web console screenshots: cluster status and capacity overview]

Computing Framework Integration

Create a Hive external table on the S3 data path:

 create external table s3_test1 (
  userid INT,
  age INT,
  gender CHAR(1),
  occupation STRING,
  zipcode STRING)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '|'
LOCATION 's3://xxxxxx.serverless-analytics/alluxiodemobucket/data/s3_test1';

Hive read/write via Alluxio
0: jdbc:hive2://xx.xx.xx.xx:10000/default> show create table alluxiodb.test1;
|                   createtab_stmt                   |
+----------------------------------------------------+
| CREATE EXTERNAL TABLE `alluxiodb.test1`(           |
|   `userid` int,                                    |
|   `age` int,                                       |
|   `gender` char(1),                                |
|   `occupation` string,                             |
|   `zipcode` string)                                |
| ROW FORMAT SERDE                                   |
|   'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'  |
| WITH SERDEPROPERTIES (                             |
|   'field.delim'='|',                               |
|   'serialization.format'='|')                      |
| STORED AS INPUTFORMAT                              |
|   'org.apache.hadoop.mapred.TextInputFormat'       |
| OUTPUTFORMAT                                       |
|   'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat' |
| LOCATION                                           |
|   'alluxio:/testTable'                             |
| TBLPROPERTIES (                                    |
|   'bucketing_version'='2')                         |
+----------------------------------------------------+
0: jdbc:hive2://xx.xx.xx.xx:10000/default>INSERT INTO alluxiodb.test1 VALUES (2, 24, 'F', 'Developer', '12345');
0: jdbc:hive2://xx.xx.xx.xx:10000/default> select * from test1;
+---------------+------------+---------------+-------------------+----------------+
| test1.userid  | test1.age  | test1.gender  | test1.occupation  | test1.zipcode  |
+---------------+------------+---------------+-------------------+----------------+
| 1             | 24         | F             | Developer         | 12345          |
| 4             | 46         | F             | Developer         | 12345          |
| 5             | 56         | A             | Developer         | 12345          |
| 2             | 224        | F             | Developer         | 12345          |
+---------------+------------+---------------+-------------------+----------------+



Trino query via Alluxio:
trino:alluxiodb> select * from test1;
 userid | age | gender | occupation | zipcode
--------+-----+--------+------------+---------
      1 |  24 | F      | Developer  | 12345
      2 | 224 | F      | Developer  | 12345


Spark read/write via Alluxio
>>> spark.sql("insert into alluxiodb.test1 values (3,33,'T','Admin','222222')")
>>> spark.sql("select * from alluxiodb.test1").show(1000, False)
+------+---+------+----------+-------+
|userid|age|gender|occupation|zipcode|
+------+---+------+----------+-------+
|2     |224|F     |Developer |12345  |
|3     |33 |T     |Admin     |222222 |
|1     |24 |F     |Developer |12345  |
+------+---+------+----------+-------+

Benchmark test

The Hive TPC-DS benchmark utility is used to generate and load test data, making it easy to compare query performance between the plain S3 path and the Alluxio cache path.
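The comparison ultimately comes down to the relative difference in wall-clock time for the same query against the two locations. With hypothetical timings (not measurements from this article), the arithmetic looks like this:

```shell
# Hypothetical query runtimes in seconds; real numbers come from the
# "Time taken" lines printed by the hive CLI for each run.
s3_time=48.2
alluxio_time=31.5
# percentage improvement = (s3 - alluxio) / s3 * 100
improvement=$(awk -v a="$alluxio_time" -v s="$s3_time" 'BEGIN { printf "%.1f", (s - a) / s * 100 }')
echo "query55 improvement: ${improvement}%"
```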

  • Alluxio Hive benchmark result:
 hive -i testbench_alluxio.settings
hive> use tpcds_bin_partitioned_orc_30;
hive> source query55.sql;
+-----------+------------------------+---------------------+
| brand_id  |         brand          |      ext_price      |
+-----------+------------------------+---------------------+
| 2002002   | importoimporto #2      | 328158.27           |
| 4004002   | edu packedu pack #2    | 278740.06999999995  |
| 2004002   | edu packimporto #2     | 243453.09999999998  |
| 2001002   | amalgimporto #2        | 226828.09000000003  |
| 4003002   | exportiedu pack #2     | 194363.72000000003  |
| 5004002   | edu packscholar #2     | 178895.29000000004  |
| 5003002   | exportischolar #2      | 158463.69           |
| 3003002   | exportiexporti #2      | 126980.51999999999  |
| 4001002   | amalgedu pack #2       | 107703.01000000001  |
| 5002002   | importoscholar #2      | 104491.46000000002  |
| 3002002   | importoexporti #2      | 87758.88            |
| 8010006   | univmaxi #6            | 87110.54999999999   |
| 10004013  | edu packunivamalg #13  | 76879.23            |
| 8008006   | namelessnameless #6    | 74991.82            |
| 6010006   | univbrand #6           | 72163.57            |
| 7006008   | corpbrand #8           | 71066.42            |
| 2003002   | exportiimporto #2      | 69029.02            |
| 6015006   | scholarbrand #6        | 66395.84            |
| 4002002   | importoedu pack #2     | 65223.01999999999   |
| 8013002   | exportimaxi #2         | 63271.69            |
| 9007002   | brandmaxi #2           | 61539.36000000001   |
| 3001001   | edu packscholar #2     | 60449.65            |
| 10003014  | exportiunivamalg #14   | 56505.57000000001   |
| 3001001   | exportiexporti #2      | 55458.64            |
| 7015004   | scholarnameless #4     | 55006.78999999999   |
| 5002001   | exportischolar #2      | 54996.270000000004  |
| 6014008   | edu packbrand #8       | 54793.31999999999   |
| 4003001   | amalgcorp #8           | 53875.51000000001   |
| 8011006   | amalgmaxi #6           | 52845.8             |
| 1002002   | importoamalg #2        | 52328.259999999995  |
| 2003001   | maxinameless #6        | 50577.89            |
| 9016008   | corpunivamalg #8       | 49700.12            |
| 7015006   | scholarnameless #6     | 49592.7             |
| 9005004   | scholarmaxi #4         | 49205.19            |
| 4003001   | exportiimporto #2      | 48604.97            |
| 2002001   | edu packamalg #2       | 48451.979999999996  |
| 9012002   | importounivamalg #2    | 48429.990000000005  |
| 7012004   | importonameless #4     | 48303.979999999996  |
| 10009004  | edu packamalg #2       | 48301.05            |
| 1004001   | amalgexporti #2        | 48215.880000000005  |
| 1001002   | amalgamalg #2          | 47018.94            |
| 9015010   | scholarunivamalg #10   | 46495.380000000005  |
| 6005007   | importobrand #6        | 46233.630000000005  |
| 9010004   | univunivamalg #4       | 46164.04            |
| 8015006   | scholarmaxi #6         | 46143.41            |
| 7016002   | corpnameless #2        | 46133.31            |
| 10006011  | corpunivamalg #11      | 46085.81            |
| 9001003   | importoamalg #2        | 45303.18            |
| 10015011  | scholarnameless #2     | 45299.06            |
| 5002001   | importoexporti #2      | 44757.73000000001   |
| 10010004  | univamalgamalg #4      | 43347.899999999994  |
| 2004001   | importoamalg #2        | 43127.46000000001   |
| 9002011   | edu packcorp #8        | 41740.42            |
| 10008009  | namelessunivamalg #9   | 41369.479999999996  |
| 8002010   | importonameless #10    | 41046.02            |
| 6002008   | importocorp #8         | 40795.42999999999   |
| 7007010   | brandbrand #10         | 40591.95            |
| 6012002   | importobrand #2        | 40545.72            |
| 2003001   | amalgexporti #2        | 39679.76            |
| 8005007   | exportischolar #2      | 39593.39            |
| 9015011   | importoscholar #2      | 39419.41            |
| 9005012   | scholarmaxi #12        | 39151.020000000004  |
| 9016012   | corpunivamalg #12      | 39117.53            |
| 5003001   | exportiexporti #2      | 39061.0             |
| 9002002   | importomaxi #2         | 38763.61            |
| 6010004   | univbrand #4           | 38375.29            |
| 8016009   | edu packamalg #2       | 37759.44            |
| 8003010   | exportinameless #10    | 37605.38            |
| 10010013  | univamalgamalg #13     | 37567.33            |
| 4003001   | importoexporti #2      | 37455.68            |
| 4001001   | importoedu pack #2     | 36809.149999999994  |
| 8006003   | edu packimporto #2     | 36687.04            |
| 6004004   | edu packcorp #4        | 36384.1             |
| 5004001   | scholarbrand #8        | 36258.58            |
| 10006004  | importonameless #10    | 36226.62            |
| 2002001   | scholarbrand #4        | 36138.93            |
| 7001010   | amalgbrand #10         | 35986.36            |
| 8015005   | edu packunivamalg #4   | 35956.33            |
| 10014008  | edu packamalgamalg #8  | 35371.05            |
| 7004005   | amalgamalg #2          | 35265.32            |
| 6016004   | corpbrand #4           | 35256.990000000005  |
| 4002001   | amalgedu pack #2       | 35183.9             |
+-----------+------------------------+---------------------+
  • S3 Hive benchmark result:
 hive -i testbench_s3.settings
hive> use tpcds_bin_partitioned_orc_30;
hive> source query55.sql;
+-----------+------------------------+---------------------+
| brand_id  |         brand          |      ext_price      |
+-----------+------------------------+---------------------+
| 4003002   | exportiedu pack #2     | 324254.89           |
| 4004002   | edu packedu pack #2    | 241747.01000000004  |
| 2004002   | edu packimporto #2     | 214636.82999999996  |
| 3003002   | exportiexporti #2      | 158815.92           |
| 2002002   | importoimporto #2      | 126878.37000000002  |
| 2001002   | amalgimporto #2        | 123531.46           |
| 4001002   | amalgedu pack #2       | 114080.09000000003  |
| 5003002   | exportischolar #2      | 103824.98000000001  |
| 5004002   | edu packscholar #2     | 97543.4             |
| 3002002   | importoexporti #2      | 90002.6             |
| 6010006   | univbrand #6           | 72953.48000000001   |
| 6015006   | scholarbrand #6        | 67252.34000000001   |
| 7001010   | amalgbrand #10         | 60368.53            |
| 4002001   | amalgmaxi #12          | 59648.09            |
| 5002002   | importoscholar #2      | 59202.14            |
| 9007008   | brandmaxi #8           | 57989.22            |
| 2003002   | exportiimporto #2      | 57869.27            |
| 1002002   | importoamalg #2        | 57119.29000000001   |
| 3001001   | exportiexporti #2      | 56381.43            |
| 7010004   | univnameless #4        | 55796.41            |
| 4002002   | importoedu pack #2     | 55696.91            |
| 8001010   | amalgnameless #10      | 54025.19            |
| 9016012   | corpunivamalg #12      | 53992.149999999994  |
| 5002001   | exportischolar #2      | 53784.57000000001   |
| 4003001   | amalgcorp #8           | 52727.09            |
| 9001002   | amalgmaxi #2           | 52115.3             |
| 1002001   | amalgnameless #2       | 51994.130000000005  |
| 8003010   | exportinameless #10    | 51100.64            |
| 9003009   | edu packamalg #2       | 50413.2             |
| 10007003  | scholarbrand #6        | 50027.27            |
| 7006008   | corpbrand #8           | 49443.380000000005  |
| 9016010   | corpunivamalg #10      | 49181.66000000001   |
| 9005010   | scholarmaxi #10        | 49019.619999999995  |
| 4001001   | importoedu pack #2     | 47280.47            |
| 4004001   | amalgcorp #2           | 46830.21000000001   |
| 10007011  | brandunivamalg #11     | 46815.659999999996  |
| 9003008   | exportimaxi #8         | 46731.72            |
| 1003001   | amalgnameless #2       | 46250.08            |
| 8010006   | univmaxi #6            | 45460.4             |
| 8013002   | exportimaxi #2         | 44836.49            |
| 5004001   | scholarbrand #8        | 43770.06            |
| 10006011  | corpunivamalg #11      | 43461.3             |
| 2002001   | edu packamalg #2       | 42729.89            |
| 6016001   | importoamalg #2        | 42298.35999999999   |
| 5003001   | univunivamalg #4       | 42290.45            |
| 7004002   | edu packbrand #2       | 42222.060000000005  |
| 6009004   | maxicorp #4            | 42131.72            |
| 2002001   | importoexporti #2      | 41864.04            |
| 8006006   | corpnameless #6        | 41825.83            |
| 10008009  | namelessunivamalg #9   | 40665.31            |
| 4003001   | univbrand #2           | 40330.67            |
| 7016002   | corpnameless #2        | 40026.4             |
| 2004001   | corpmaxi #8            | 38924.82            |
| 7009001   | amalgedu pack #2       | 38711.04            |
| 6013004   | exportibrand #4        | 38703.41            |
| 8002010   | importonameless #10    | 38438.670000000006  |
| 9010004   | univunivamalg #4       | 38294.21            |
| 2004001   | importoimporto #2      | 37814.93            |
| 9010002   | univunivamalg #2       | 37780.55            |
| 3003001   | amalgexporti #2        | 37501.25            |
| 8014006   | edu packmaxi #6        | 35914.21000000001   |
| 8011006   | amalgmaxi #6           | 35302.51            |
| 8013007   | amalgcorp #4           | 34994.01            |
| 7003006   | exportibrand #6        | 34596.55            |
| 6009006   | maxicorp #6            | 44116.12            |
| 8002004   | importonameless #4     | 43876.82000000001   |
| 8001008   | amalgnameless #8       | 43666.869999999995  |
| 7002006   | importobrand #6        | 43574.33            |
| 7013008   | exportinameless #8     | 43497.73            |
| 6014008   | edu packbrand #8       | 43381.46            |
| 10014007  | edu packamalgamalg #7  | 42982.090000000004  |
| 9006004   | corpmaxi #4            | 42437.49            |
| 9016008   | corpunivamalg #8       | 41782.0             |
| 10006015  | amalgamalg #2          | 31716.129999999997  |
| 2003001   | univnameless #4        | 31491.340000000004  |
+-----------+------------------------+---------------------+

As the results show, average query performance improves by roughly 30%~40%, and some queries improve by more than 50%.

References

Alluxio on Amazon EMR installation and deployment:
https://aws.amazon.com/cn/blogs/china/five-minitues-to-use-alluxio-guide-emr-spark

Alluxio Community EMR Integration Guide:
https://docs.alluxio.io/os/user/stable/en/cloud/AWS-EMR.html

Amazon EMR clusters:
https://docs.aws.amazon.com/en_us/emr/latest/ManagementGuide/emr-what-is-emr.html

Summary

This article described in detail how to install and deploy an Alluxio cluster on Amazon EMR, including the bootstrap script and the EMR cluster's initial JSON configuration, and used the Hive TPC-DS standard benchmark to compare the Hive SQL query performance improvement on an EMR cluster with Alluxio acceleration enabled.

About the authors

Tang Qingyuan

Amazon Cloud Technology data analytics solution architect, responsible for solution architecture design and deep-dive support, such as performance optimization, migration, and governance, for Amazon data analytics services. He has over ten years of R&D and architecture experience in the data field, having served as a senior consultant at Oracle, senior architect of the Migu Culture data mart, and data analytics architect at ANZ Bank, with rich hands-on experience in big data, data lake, and lakehouse projects.

Chen Hao

Amazon Cloud Technology partner solution architect with nearly 20 years of IT industry experience and rich hands-on experience in enterprise application development, architecture design, and delivery. He is currently responsible for solution architecture consulting and design for Amazon Cloud Technology (China) partners, and is committed to promoting Amazon cloud services in China and helping partners build more efficient solutions on them.

