Author: Winter Island, Aveda

Overview

Event-Driven Architecture (EDA) is an architectural model whose core capability is detecting "events" in a system, i.e., important business moments (such as transaction nodes or site visits), and taking the necessary actions in response. So what can EDA and containers achieve together? This article walks you through building a complete event-driven architecture on the cloud using the ASK container service together with EventBridge (EB).

This article uses an online file decompression scenario as an example to show how to combine the classic EDA event-driven pattern with containers.

Follow the [Apache RocketMQ] official account for more information!

Service Architecture

Online file decompression is driven by OSS event notifications delivered through EB. The architecture is as follows:

The core of the EDA architecture is working with events. OSS events are collected on the default cloud service bus, and event notifications can be customized. In this scenario, OSS file-upload events are delivered in real time through EventBridge to a service running in ASK; that service then downloads the uploaded ZIP file, decompresses it, and uploads the extracted files back to OSS.

Cloud services used:

  • ASK: Serverless Kubernetes, a secure and reliable container product based on Alibaba Cloud's elastic computing infrastructure and fully compatible with the Kubernetes ecosystem.
  • EB: EventBridge, a serverless event bus service that ingests events from Alibaba Cloud services, custom applications, and SaaS applications, making it easy to build loosely coupled, distributed event-driven architectures.
  • OSS: Object Storage Service, which provides massive, secure, low-cost, and highly reliable cloud storage.

Scenario practice

The configuration steps for the three cloud products (OSS, ASK, and EventBridge) are as follows:

OSS resource configuration

  • Create a bucket in the target region on the OSS console and note the bucket information. Two directories (key prefixes) are used:

zip/: holds the ZIP packages to be decompressed

unzip/: holds the extracted files

  • Create a RAM user that can operate on OSS files, and generate an AccessKey (AK/SK) for it.

Open the RAM console [1] and click "Create User".

  • Enter the "Login Name" and "Display Name" and check "Open API Call Access" and click OK to create a user identity.

  • After the user is created, open the RAM console [1] to see the newly created account.

  • Click the account name to go to the account details page, then click "Create AccessKey".

  • The AK/SK pair is then generated as shown below. Save both values; you will need them later when configuring the decompression service in ASK. You can also click "Download CSV file" to download and save them.

  • The AK/SK alone is not enough. To access OSS, you also need to grant OSS permissions to this RAM user: click "Permission Management".

  • Click "Add Permission".

  • Enter OSS under "System Policy", check "AliyunOSSFullAccess", and click OK to complete the OSS authorization. A quick way to verify the new credentials is shown in the sketch below.
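
If you want to double-check the new AK/SK before moving on, here is a minimal sketch using the Python oss2 SDK. The bucket name eb-ask and the Hangzhou endpoint are assumptions taken from this walkthrough; substitute your own values.

# pip install oss2
import oss2

# Assumed values from this walkthrough; replace with your own.
access_key_id = "<your-ak>"
access_key_secret = "<your-sk>"
endpoint = "oss-cn-hangzhou.aliyuncs.com"
bucket_name = "eb-ask"

auth = oss2.Auth(access_key_id, access_key_secret)
bucket = oss2.Bucket(auth, endpoint, bucket_name)

# Write, read back, and delete a marker object to confirm the RAM user has OSS access.
bucket.put_object("zip/.permission-check", b"ok")
print(bucket.get_object("zip/.permission-check").read())  # b'ok'
bucket.delete_object("zip/.permission-check")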

ASK resource configuration and code analysis

ASK (Serverless Kubernetes) runs the application that receives OSS events from EventBridge, so you first need to create an ASK cluster and then deploy the decompression service.

1) Create an ASK cluster

  • Open the Container Service console [2] and click "Create Cluster" in the upper right corner.

  • Select ASK cluster

Fill in the cluster name, check the "Service Agreement" at the bottom, and keep the default configuration for everything else.

  • Click Create in the upper right corner to start creating the cluster.

  • A dialog pops up to confirm the creation. After checking that everything is correct, click "OK" to start creating the cluster.

  • The cluster is ready after a few minutes.

If you run into problems during cluster creation, refer to the cluster creation documentation [3], or ask for help in the DingTalk group (group number 31544226).

2) Deploy the decompression service

  • After the cluster is created, open the Container Service page [4] to see the new cluster, then click the cluster name to go to the cluster details page.

  • Click "Stateless" in "Workload" on the cluster details page to create Deployment and Service by submitting and decompressing service YAML

  • Click "Create Resource Using YAML" in the upper right corner

  • Submit the following YAML. Note that the two environment variables OSS_ACCESSKEYID and OSS_ACCESSKEYSECRET must be set to your AK and SK, which must have permission to download files from and upload files to OSS. Then click Create to finish creating the service.
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: eb-ask-demo
spec:
  selector:
    matchLabels:
      app: eb-ask-demo
  template:
    metadata:
      labels:
        app: eb-ask-demo
    spec:
      containers:
      - name: eb-ask-demo
        image: "registry.cn-hangzhou.aliyuncs.com/kubeway/demo-ossunzip:v0.0.1-20211218152144_master_37323b1"
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - containerPort: 8080
        env:
        - name: OSS_ACCESSKEYID
          value: "ak"
        - name: OSS_ACCESSKEYSECRET
          value: "sk"
        - name: OSS_ENDPOINT
          value: "oss-cn-hangzhou.aliyuncs.com"
  • The decompression service source code is available on GitHub [5].
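
For reference, the core logic of such a decompression service can be sketched in a few lines of Python. This is not the code from [5]; it is a minimal illustration using Flask, oss2, and the standard zipfile module, and it assumes the incoming request carries the same data.oss.bucket.name and data.oss.object.key fields used in the EventBridge event pattern configured later, with results written back under the unzip/ prefix.

# pip install flask oss2
import io
import os
import zipfile

import oss2
from flask import Flask, request

app = Flask(__name__)

auth = oss2.Auth(os.environ["OSS_ACCESSKEYID"], os.environ["OSS_ACCESSKEYSECRET"])
endpoint = os.environ.get("OSS_ENDPOINT", "oss-cn-hangzhou.aliyuncs.com")


@app.route("/unzip", methods=["POST"])
def unzip():
    # Assumed event shape: the same data.oss.* fields as the event pattern below.
    event = request.get_json(force=True)
    bucket_name = event["data"]["oss"]["bucket"]["name"]
    object_key = event["data"]["oss"]["object"]["key"]  # e.g. "zip/example.zip"

    bucket = oss2.Bucket(auth, endpoint, bucket_name)

    # Download the uploaded ZIP into memory.
    zip_bytes = bucket.get_object(object_key).read()

    # Extract every entry and upload it back under the unzip/ prefix.
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as archive:
        for name in archive.namelist():
            if name.endswith("/"):  # skip directory entries
                continue
            bucket.put_object("unzip/" + name, archive.read(name))

    return "ok", 200


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)

The image referenced in the Deployment above implements the same idea; see [5] for the actual code.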

3) Get the service's exposed URL

  • After the service is deployed, click "Services" under "Network" to see the Service eb-ask-demo and its "External Endpoint" address. This external endpoint is the address of the decompression service; the full path is http://121.43.97.107/unzip
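
Before wiring up EventBridge, you can optionally smoke-test the endpoint by hand. The sketch below (Python with the requests library) posts a hand-built event whose data.oss.* fields mirror the event pattern in the next section; the URL and object key are the example values from this walkthrough, and it assumes a zip/example.zip object already exists in the bucket and that the service accepts an event body of this shape.

# pip install requests
import requests

# Example values from this walkthrough; replace with your own endpoint and key.
url = "http://121.43.97.107/unzip"
event = {
    "source": "acs.oss",
    "type": "oss:ObjectCreated:PutObject",
    "data": {
        "oss": {
            "bucket": {"name": "eb-ask"},
            "object": {"key": "zip/example.zip"},
        }
    },
}

resp = requests.post(url, json=event, timeout=30)
print(resp.status_code, resp.text)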

EB resource configuration

EB buses come in two kinds: the cloud service bus and custom buses. The cloud service bus receives events from cloud services (e.g., OSS), while custom buses receive custom events (e.g., a notification that decompression has finished).

  • Configure Cloud Service Bus

  • Configure the event pattern. The event pattern is the filter rule for events and must be configured carefully, otherwise it can cause a trigger loop: the decompressed files the service writes back to the bucket could themselves match the rule and be processed again. Filtering on the zip/ prefix and the .zip suffix below ensures that files written to unzip/ do not re-trigger the flow.

For more on event patterns, see [6] in the references at the end.

{
    "source": [
        "acs.oss"
    ],
    "type": [
        "oss:ObjectCreated:PostObject",
        "oss:ObjectCreated:UploadPart",
        "oss:ObjectCreated:PutObject",
        "oss:ObjectCreated:UploadPartCopy",
        "oss:ObjectCreated:InitiateMultipartUpload",
        "oss:ObjectCreated:AppendObject",
        "oss:ObjectCreated:CompleteMultipartUpload"
    ],
    "subject": [
        {
            "suffix": ".zip"
        }
    ],
    "data": {
        "oss": {
            "bucket": {
                "name": [
                    "eb-ask"
                ]
            },
            "object": {
                "key": [
                    {
                        "prefix": "zip/"
                    }
                ]
            }
        }
    }
}

Effect verification

The service is now ready. Find a ZIP file and upload it to OSS to see the effect: after uploading a ZIP file to the zip/ directory, you should see the extracted files in the unzip/ directory.

  • Prepare the zip file

You can create a text file and compress it into ZIP format, or open our example GitHub project [7] and download the project source code as a ZIP file.

  • Upload the ZIP file (a scripted alternative is sketched at the end of this section).

Open the OSS console [8], select the configured bucket, and enter the zip directory.

  • Click "Upload File".

  • Click "Scan Files" and locate the ZIP file you just downloaded.

  • Select the zip file you just downloaded.

  • Click "Upload File".

  • The file was uploaded quickly and successfully.

  • Now open the unzip directory; you can see that the extracted files have been uploaded there.

  • Now open the EventBridge console [9] to view the event trace. Select the last 5 minutes of events.

  • Click on "Event Track" to see that the event was successfully delivered to the decompression service exposed by ASK through EventBridge.

Advantages and Summary

  • The event-driven architecture in this scheme is a highly loosely coupled and distributed model. The creator (source) of an event only knows that the event occurred; it neither knows how the event is processed nor needs to care how many parties subscribe to it.
  • ASK + EB covers most container event-driven scenarios at the business layer: specific events are decoupled and distributed through EB, and business components can be brought online quickly and flexibly in a loosely coupled architecture, giving enterprises a more agile and efficient way to deploy containerized business workloads.

References

[1] RAM console:

https://ram.console.aliyun.com/users

[2] Container Service console:

https://cs.console.aliyun.com/

[3] Cluster creation documentation:

https://help.aliyun.com/document_detail/86377.htm?spm=a2c4g.11186623.0.0.350f3e068qu6bW#task-e3c-311-ydb

[4] Container Service page:

https://cs.console.aliyun.com/#/k8s/cluster/list

[5] Decompression service source code on GitHub:

https://github.com/AliyunContainerService/serverless-k8s-examples/oss-unzip

[6] Event pattern descriptions:

https://help.aliyun.com/document_detail/181432.html

[7] Example GitHub project:

https://github.com/AliyunContainerService/serverless-k8s-examples

[8] OSS console:

https://oss.console.aliyun.com/

[9] EventBridge console:

https://eventbridge.console.aliyun.com

[10] oss-unzip source code:

https://github.com/AliyunContainerService/serverless-k8s-examples/tree/master/oss-unzip

