AWS Serverless is a serverless computing model for application engineers. The basic idea is to hand the infrastructure required to run a service over to AWS to manage. Engineers using AWS Serverless services can focus on developing the customer-facing business logic without spending energy on tasks such as building, managing, and scaling infrastructure. At the core of AWS Serverless development is a compute service called Lambda.
Today we will focus on Lambda, introduce how Lambda can be combined with various AWS services in different application scenarios, and discuss development and deployment based on AWS Serverless.
What?
Let's first look at what Serverless development is.
Unlike the classic develop-compile-deploy-run model, with the AWS Serverless compute service Lambda you only need to upload your source files, select an execution environment, and run the function to get results. In this process, server provisioning, runtime installation, and compilation are all managed and executed by the AWS Serverless platform. Developers only need to maintain the source code and the relevant configuration of the AWS Serverless execution environment.
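As a concrete illustration, a minimal Python Lambda handler might look like the sketch below; the function name and event fields are purely illustrative, since Lambda only requires a callable that accepts an event and a context:

```python
# A minimal AWS Lambda handler in Python (illustrative sketch).
# Lambda calls this function with the invocation event and a context object
# and expects a JSON-serializable return value.
def lambda_handler(event, context):
    name = event.get("name", "world")
    return {"message": f"hello, {name}"}
```

Once this source is uploaded and a Python runtime is selected, Lambda takes care of everything else needed to run it.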
Why?
Why choose Serverless?
For developers, AWS Serverless services save a lot of effort in managing infrastructure and make it easier to focus on business logic. On the service side, the nature of AWS itself supports elastic scaling and high-concurrency scenarios well. In addition, AWS Serverless-based development often brings rapid updates and rapid deployment, and its pay-as-you-go pricing helps reduce costs in scenarios such as lightweight test environments and quick verification.
How?
So, let's take a look at how to use AWS Serverless-related services to quickly assemble a simple Web Service.
AWS Serverless provides a rich catalog of services to cover various functional needs. Building a Web Service usually requires, in addition to the core compute service Lambda, services such as request routing (API Gateway), persistent storage (S3), CDN (CloudFront), firewall (WAF), and domain name resolution (Route 53) used in combination. If you need to support HTTPS, you can also use the certificate management service (ACM).
After assembling the above services, a complete response request process will look like this:
- The user request reaches CloudFront via domain name resolution. After the WAF applies rate limiting, IP filtering, header verification, and other security checks, the request is routed through API Gateway to the core Lambda compute service.
- Lambda processes the request, reading from or writing to the persistent storage S3 if necessary, and finally returns the result to the client through API Gateway.
- Logs generated by Lambda during execution are written to the CloudWatch log management service for later querying. You can also apply further optimizations, such as configuring CloudFront to serve static resources directly from S3 to reduce latency and compute overhead.
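A rough sketch of the Lambda part of this flow is shown below: the handler responds to an API Gateway proxy request and reads an object from S3 via boto3. The bucket name, object key, and query parameter are hypothetical placeholders:

```python
import json
import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    # API Gateway proxy integration passes query parameters in the event.
    params = event.get("queryStringParameters") or {}
    key = params.get("key", "default.json")

    # Read data from a (hypothetical) S3 bucket used as persistent storage.
    obj = s3.get_object(Bucket="my-webservice-data", Key=key)
    body = obj["Body"].read().decode("utf-8")

    # Return the response shape that API Gateway proxy integration expects.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"data": body}),
    }
```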
How to invoke Lambda
In the Web Service example above, Lambda is invoked by the API Gateway service. In fact, Lambda can be invoked in many ways. First of all, AWS's own services are often used together with Lambda, including message publishing (SNS), message queues (SQS), load balancers (ALB), and state machines (Step Functions).
Of course, you can also invoke Lambda functions through the SDK, the command line, or the API. Invocation comes in two modes, synchronous and asynchronous:
- Synchronous invocation: the call waits for the Lambda function to finish executing and then returns the result.
- Asynchronous invocation: the call to Lambda returns immediately, and the execution result of the Lambda function has to be obtained through other means.
These two invocation modes can be chosen flexibly according to the scenario.
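With the boto3 SDK, for example, the two modes differ only in the InvocationType parameter; the function name and payload below are placeholder values in this sketch:

```python
import json
import boto3

client = boto3.client("lambda")
payload = json.dumps({"name": "demo"})

# Synchronous call: blocks until the function finishes and returns its result.
sync_resp = client.invoke(
    FunctionName="my-function",          # hypothetical function name
    InvocationType="RequestResponse",
    Payload=payload,
)
print(json.load(sync_resp["Payload"]))

# Asynchronous call: returns immediately once the event is accepted; the
# result must be collected elsewhere (logs, a destination, a queue, etc.).
async_resp = client.invoke(
    FunctionName="my-function",
    InvocationType="Event",
    Payload=payload,
)
print(async_resp["StatusCode"])  # 202 when the event was accepted
```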
Message-driven example
Let's look at an example of using AWS Serverless services in a message-driven alarm processing system.
Suppose we have a running system that sends an alarm message to the SNS service whenever an abnormal condition occurs. SNS is a Pub/Sub messaging service that performs a basic fan-out of the alarm message: on one side it notifies the person on call by phone or email, and on the other side it invokes Lambda, which can automate part of the alarm handling. This is the simplest alarm handling system.
Note, however, that SNS itself does not store messages. As soon as SNS receives a message it publishes it; if there is no subscriber at that moment, the message is discarded. Likewise, once a message is successfully delivered, that is, once the Lambda invocation succeeds, the message is discarded regardless of the processing result. If Lambda then fails because of an internal logic error or a failure in an external dependency, the already-discarded message cannot be retried. To improve the reliability of message processing, you can add the message queue service (SQS) between SNS and Lambda.
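A hedged sketch of the Lambda side of this setup might look like the following; the alarm message format (JSON with name and severity fields) and the handling logic are assumptions for illustration only:

```python
import json

def lambda_handler(event, context):
    # SNS delivers one or more records; the alarm payload is in Sns.Message.
    for record in event["Records"]:
        alarm = json.loads(record["Sns"]["Message"])  # assumes a JSON alarm payload
        # Automated handling of the alarm would go here, e.g. restarting a
        # service or opening a ticket (hypothetical logic).
        print(f"handling alarm: {alarm.get('name')} severity={alarm.get('severity')}")
```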
The SQS standard queue provides an unordered, reliable, highly concurrent queue that can store messages for up to 14 days. SNS publishes messages to SQS, where they are stored first. With SQS configured as an event source for Lambda, the messages are then delivered to Lambda for further processing. The invocation of Lambda by SQS is synchronous, which means that if the Lambda execution fails and returns an error, SQS does not delete the message from the queue. The message that failed to be processed is temporarily marked as invisible, and once the visibility timeout expires SQS invokes Lambda again to process it. This greatly improves the reliability of message processing.
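The key point is that raising an error from the handler leaves the message in the queue for retry. A minimal sketch, where the message format and processing logic are hypothetical:

```python
import json

def lambda_handler(event, context):
    # SQS delivers a batch of records; each record's body is the raw message.
    for record in event["Records"]:
        message = json.loads(record["body"])
        # If processing raises here, Lambda reports a failure and SQS keeps the
        # message; it becomes visible again after the visibility timeout and
        # will be delivered for another attempt.
        process_alarm(message)

def process_alarm(message):
    # Hypothetical business logic; failure triggers an SQS retry.
    if "name" not in message:
        raise ValueError("malformed alarm message")
    print(f"processed alarm: {message['name']}")
```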
However, this approach introduces a new problem: abnormal messages can accumulate in large numbers and reduce the throughput of normal messages. To address this, we can configure a Dead-Letter Queue for the message queue. If a message still cannot be processed successfully after multiple attempts, it is removed from the original queue and moved to the Dead-Letter Queue. The Dead-Letter Queue of a standard queue is itself a standard queue, so further processing of these "abandoned" messages can continue from there.
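A Dead-Letter Queue is attached to the source queue through its RedrivePolicy attribute; in the sketch below the queue URL, ARN, and the maxReceiveCount of 5 are placeholder values:

```python
import json
import boto3

sqs = boto3.client("sqs")

# Attach a (hypothetical) dead-letter queue to the source queue: after 5 failed
# receives, a message is moved to the DLQ instead of being retried forever.
sqs.set_queue_attributes(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/alarm-queue",
    Attributes={
        "RedrivePolicy": json.dumps({
            "deadLetterTargetArn": "arn:aws:sqs:us-east-1:123456789012:alarm-dlq",
            "maxReceiveCount": "5",
        })
    },
)
```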
Standard queues are well suited to high-concurrency scenarios: a standard queue can receive a large number of messages at the same time and concurrently invoke a large number of Lambda instances to process them. Correspondingly, the standard queue cannot guarantee message ordering, and the same message may be delivered more than once, so when using an SQS standard queue you need to consider message deduplication and the idempotence of the processing logic. Besides the standard queue, SQS also offers a first-in, first-out (FIFO) queue, which sacrifices concurrency to guarantee ordered delivery and message uniqueness. Different queue types can be chosen flexibly according to the needs of the scenario.
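For a FIFO queue, ordering and deduplication are controlled per message when it is sent; in this sketch the queue URL, group ID, and deduplication ID are placeholders:

```python
import boto3

sqs = boto3.client("sqs")

# FIFO queues require a MessageGroupId (the ordering scope) and, unless
# content-based deduplication is enabled, a MessageDeduplicationId.
sqs.send_message(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/alarms.fifo",  # hypothetical
    MessageBody='{"name": "disk-usage-high"}',
    MessageGroupId="alarm-service",
    MessageDeduplicationId="alarm-20240101-0001",
)
```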
Summary
AWS Serverless services have natural advantages in terms of decoupling, elastic expansion, and cross-regional deployment, but they also have limitations:
- A single Lambda execution is limited to 15 minutes, so support for long-running tasks is limited.
- The availability of services built on the Serverless architecture depends heavily on the availability of AWS.
- Serverless-based development incurs a learning cost for the AWS ecosystem, and debugging and troubleshooting become more difficult.
In actual production, you need to weigh requirements comprehensively and balance cost against benefit. In scenarios suitable for microservices, especially for short-lived, stateless, or ad-hoc tasks, development based on AWS Serverless can be a very convenient approach.
That is all for this sharing. You can also click [ here ] to view the video of this talk.
About the author
Ge Xinni, back-end development engineer at NetEase Yunxin, has overseas development experience with AWS Serverless and currently works on Yunxin back-end scheduling development.