Introduction: SAE is the fusion of advanced cloud-native technologies: containerization + microservices + serverless best practices. It requires no Kubernetes expertise, no container background, and no code changes, delivering an out-of-the-box K8s + Serverless + microservices experience. It can host microservice applications, web applications, and open-source scheduled tasks, helping enterprises quickly enter the fast lane of cloud-native practice. Read on to learn more.
Author: Dai Xin, Alibaba Cloud SAE Product Manager
In recent years, with the spread of the Internet, enterprise digitalization has accelerated and technical architectures have evolved several times, especially for online business: from the initial monolithic application, to distributed applications, to cloud-native applications. But alongside the convenience, this evolution has brought a degree of complexity. New technologies have a high barrier to entry, and containers and microservices are two typical roadblocks. Even after adopting microservices and containers, enterprises still have to worry about server configuration, operations and maintenance, and capacity evaluation, while facing performance and stability challenges, so they cannot fully realize the value the cloud can deliver.
The emergence of Serverless has brought a leap forward and created more opportunities for enterprise digital transformation. In this model, server and operating system management, deployment, operations, resource allocation, and scaling are all handled by the cloud vendor, and computing power is truly delivered as a utility, like water, electricity, and gas. Capabilities that used to live in the traditional application environment become cloud services that customers can consume at low cost and with high efficiency.
The most important value of Serverless can be summarized into three points:
- Provide always-on services through infrastructure decoupling, extreme elasticity, and automatic fault handling, without worrying about downtime.
- Enable rapid response to the market through an efficient R&D framework and a new form of DevOps.
- Level the gap in technological competitiveness between leading Internet companies and traditional enterprises, so that traditional enterprises facing large-scale technology upgrades and rebuilds are no longer held back by time or talent shortages, and can even leapfrog their competitors.
The original intention of Alibaba Cloud Serverless Application Engine (SAE for short) is to let customers enjoy the complete microservices + K8s + Serverless experience without changing any code or deployment method, out of the box and free of operations burden. As the industry's first application-oriented Serverless PaaS, it has been well received by users since its launch in 2018, and the production environments of enterprise customers across many industries run stably on SAE.
SAE product positioning: a fully managed, O&M-free, highly elastic general-purpose PaaS platform. It supports full hosting of open-source microservices, open-source scheduled-task frameworks, and web applications, and provides open-source enhancements and enterprise-grade features. SAE covers the complete range of application scenarios on the cloud and is the best choice for moving applications to the cloud.
Serverless microservices, a very popular term in the industry today, can be broadly defined as: a CI/CD pipeline plus a built-in, high-efficiency R&D framework, with the underlying IaaS layer or K8s base shielded from the developer, together with end-to-end observability, automatic elasticity, and traffic governance services.
Alibaba Cloud's SAE + MSE can be regarded as a best practice for serverless microservices. SAE is application-centric, and the MSE agent is injected during application startup on SAE, which provides a complete set of microservice capabilities; underneath, the K8s base is shielded and a serverless architecture is provided. The combination also fully embraces and gives back to open source: the MSE team has done extensive open-source evangelism and built many enhancements on top of open-source projects. Based on this set of serverless microservice best practices, development efficiency can be increased by up to 70% and costs reduced by up to 60%.
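For context, the sketch below is a minimal Spring Cloud application of the kind SAE + MSE can host without code changes. It assumes the Spring Cloud Alibaba Nacos discovery starter is on the classpath and the usual application properties (service name, registry address) are configured; all names in it are illustrative.

```java
// A minimal Spring Cloud service of the kind SAE + MSE hosts unchanged.
// Assumes spring-cloud-starter-alibaba-nacos-discovery is on the classpath and
// spring.application.name / spring.cloud.nacos.discovery.server-addr are set in
// application.properties; all names here are illustrative.
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
@RestController
public class DemoServiceApplication {

    // A trivial endpoint; when deployed on SAE, the MSE agent injected at startup
    // handles registration, discovery, and traffic governance for this service.
    @GetMapping("/hello")
    public String hello() {
        return "hello from a microservice running on SAE";
    }

    public static void main(String[] args) {
        SpringApplication.run(DemoServiceApplication.class, args);
    }
}
```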
Compared with ECS or K8s, SAE offers more elasticity metrics and more flexible elasticity policies. It mainly provides three:
- Metric-based elasticity: on top of open-source K8s, SAE adds business-oriented elasticity metrics such as QPS, RT, and the number of TCP connections. Scaling on these business metrics is more accurate, and overall elastic capacity can be estimated more precisely. This policy generally suits scenarios with bursty or pulse-like traffic.
- Scheduled elasticity: set the scale-out/scale-in times and the number of instances to add or remove in advance. SAE provides this as a graphical console operation, which is simpler than open-source K8s, where you would need to implement an HPA controller yourself.
- Hybrid elasticity (an industry first): an elasticity policy that combines scheduled elasticity and metric-based elasticity. Many customers' businesses have tidal traffic patterns accompanied by bursts, for example live streaming. Using metric-based elasticity as the baseline and layering scheduled elasticity on top for traffic peaks in fixed time windows, a single policy can meet the fine-grained elasticity requirements that would otherwise need separate scheduled or metric-based policies in different time periods; see the sketch below.
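To make the hybrid policy concrete, here is a conceptual sketch of the decision rule in Java: metric-based scaling acts as the baseline, and a scheduled window raises the instance floor during a known peak. This only illustrates the policy described above, not SAE's implementation; every threshold and name in it is a placeholder.

```java
import java.time.LocalTime;

// Conceptual sketch of hybrid elasticity: metric-based scaling is the baseline,
// and a scheduled window raises the instance floor during a known traffic peak.
// Not SAE's implementation; every number here is a placeholder.
public class HybridElasticitySketch {

    static final int MIN_INSTANCES = 2;
    static final int MAX_INSTANCES = 50;
    static final double TARGET_QPS_PER_INSTANCE = 500.0;     // e.g. derived from load testing
    static final LocalTime PEAK_START = LocalTime.of(20, 0);  // e.g. evening live-stream window
    static final LocalTime PEAK_END = LocalTime.of(23, 0);
    static final int SCHEDULED_FLOOR = 10;                     // minimum instances during the window

    static int desiredInstances(double currentQps, LocalTime now) {
        // Metric-based target: enough instances to keep per-instance QPS at the target.
        int metricTarget = (int) Math.ceil(currentQps / TARGET_QPS_PER_INSTANCE);

        // Scheduled elasticity: inside the peak window, never drop below the scheduled floor.
        int floor = MIN_INSTANCES;
        if (!now.isBefore(PEAK_START) && now.isBefore(PEAK_END)) {
            floor = SCHEDULED_FLOOR;
        }

        return Math.max(floor, Math.min(MAX_INSTANCES, Math.max(metricTarget, MIN_INSTANCES)));
    }

    public static void main(String[] args) {
        System.out.println(desiredInstances(800.0, LocalTime.of(14, 0)));  // off-peak: metric-driven
        System.out.println(desiredInstances(800.0, LocalTime.of(21, 0)));  // peak window: floor applies
    }
}
```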
E-commerce, new retail, and interactive entertainment businesses often experience unexpected traffic bursts. In the past, peaks were estimated in advance and fixed ECS resources were reserved to match them, but capacity estimates were often inaccurate, resources were wasted or fell short, and, more importantly, the system's SLA suffered.
With a load-testing tool plus SAE, elasticity thresholds can be set accurately based on load-test results. Combined with ARMS real-time monitoring metrics, the system scales out and in automatically, capacity planning is no longer needed, and hardware costs drop significantly. Second-level elasticity makes traffic peaks easy to handle, and in an emergency, throttling and degradation can be used as a last resort to prevent an application avalanche.
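As a rough illustration of deriving a threshold from load-test results (the numbers below are placeholders, not recommendations): take the per-instance QPS measured within the acceptable response time and keep a safety margin.

```java
// Sketch of deriving a metric-elasticity threshold from load-test results: take
// the per-instance QPS measured at the acceptable response time and keep a safety
// margin. All numbers are placeholders; real values come from your own tests.
public class ThresholdFromLoadTest {
    public static void main(String[] args) {
        double measuredQpsPerInstance = 700.0; // QPS one instance sustained within the RT target
        double safetyMargin = 0.7;             // scale out before an instance is saturated
        double scaleOutThreshold = measuredQpsPerInstance * safetyMargin;
        System.out.printf("scale out when per-instance QPS exceeds %.0f%n", scaleOutThreshold);
    }
}
```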
SAE provides an efficient, closed-loop DevOps system that covers the entire process from development to deployment to operations. It offers three enterprise-grade CI/CD solutions:
- Seamless connection to the open-source CI/CD tool Jenkins: through the built-in Maven plug-in, the complete flow from source code to build to deployment can be completed. WAR package, JAR package, and image deployment are all supported.
- The most comprehensive CI/CD solution on the cloud: unlike Jenkins, code can be hosted directly on the cloud, with Yunxiao (Alibaba Cloud DevOps) providing code hosting, code-level security management, customizable pipelines, and a complete, consistent environment for building and running. It is relatively full-featured and generally suits mid-sized enterprises.
- The lightest and easiest-to-use CI/CD solution: deploy to SAE through Container Registry. Its lightness lies in connecting the code repository via webhook, configuring image build rules and triggers in Container Registry, and automatically building and deploying whenever code is pushed. With the enterprise edition of Container Registry, you also get image security scanning, vulnerability protection, and global multi-region distribution.
The SAE and ECS co-location solution is mainly applicable to two scenarios:
Scenario 1: An intermediate transition solution for migrating from ECS to SAE, which can improve the stability of the migration process.
Scenario 2: Using SAE entirely as a standby elastic resource pool.
This solution requires that the ECS instances and SAE instances of the same application can be mounted behind the same SLB, with an appropriate weight ratio set; for microservice applications, they must also register with the same registry. Some adaptation is needed on the client side as well: to reuse a customer-built release system, each release must keep the SAE instances and ECS instances on the same version; to reuse a customer-built monitoring system, the SAE instances' monitoring data must be integrated with that of the ECS instances. When a traffic peak arrives, the elasticity module scales the elastic instances out onto SAE, which greatly improves scale-out efficiency and reduces cost.
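To illustrate what the weight ratio does, the sketch below shows weighted random selection between the two backend pools. It is only a conceptual illustration of how traffic is split during co-location, not SLB's actual implementation; the pool names and weights are placeholders.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.ThreadLocalRandom;

// Illustration of how an SLB-style weight ratio splits traffic between the
// existing ECS backend pool and the SAE backend pool during co-location.
// Not SLB's actual implementation; pool names and weights are placeholders.
public class WeightedBackendSketch {

    static String pickBackend(Map<String, Integer> weights) {
        int total = weights.values().stream().mapToInt(Integer::intValue).sum();
        int roll = ThreadLocalRandom.current().nextInt(total);
        for (Map.Entry<String, Integer> e : weights.entrySet()) {
            roll -= e.getValue();
            if (roll < 0) {
                return e.getKey();
            }
        }
        throw new IllegalStateException("weights must be positive");
    }

    public static void main(String[] args) {
        // e.g. keep 80% of traffic on the ECS pool and shift 20% to SAE during migration
        Map<String, Integer> weights = new LinkedHashMap<>();
        weights.put("ecs-pool", 80);
        weights.put("sae-pool", 20);
        for (int i = 0; i < 5; i++) {
            System.out.println(pickBackend(weights));
        }
    }
}
```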
Four major new features extend the frontier of Serverless
Terraform support
As the preferred infrastructure-as-code tool of large customers at home and abroad, Terraform's value lies in infrastructure as code: it configures infrastructure automatically and helps enterprises develop, deploy, and scale cloud applications faster, with lower risk and lower cost, greatly improving automated O&M efficiency. With SAE integrated into Terraform, developers no longer need to understand each API; declarative IaC makes operating SAE resources safer and makes it easier to connect with CI/CD and GitOps. More importantly, it provides resource orchestration, enabling one-click deployment of SAE together with the cloud resources it depends on and greatly improving the efficiency of building a site from 0 to 1. Several Internet customers are already using it in production environments.
One-stop application hosting for PHP
When it comes to PHP operations, everyone is familiar with the various commercial server O&M panels. However, these panels only support single-machine operations, lack monitoring and second-level automatic elasticity, do not support incremental updates of static files, and are unfriendly to larger PHP applications.
In response to these pain points, SAE provides a fully managed, O&M-free, highly elastic service for PHP applications that integrates seamlessly with APM monitoring. On the framework side it supports popular frameworks such as Laravel, ThinkPHP, Swoole, and WordPress. On the runtime side it supports the common LNMP architecture, provides PHP-FPM + Nginx by default, and supports both Docker image and PHP ZIP package deployment, greatly lowering the barrier to entry. The feature matrix for PHP application hosting is rich: for development and debugging there are file upload/download and built-in Xdebug; at runtime there is elastic scaling; and static files and directories can be managed independently through NAS and OSS. These capabilities support several typical PHP scenarios well, such as static site deployment, remote debugging, multi-site deployment, and migrating applications from existing ECS machines or server O&M panels.
SAE Job officially opens for invitational testing
SAE adds support for task-type workloads, and open-source task frameworks such as XXL-Job can be migrated over. Based on business data-processing needs, a large number of computing tasks can be created quickly in a short time, and computing resources are released promptly once tasks complete. It supports single-machine, broadcast, parallel computing, sharded execution, scheduling, automatic retry on failure, and monitoring and alerting, and provides a fully managed, O&M-free user experience.
Unlike traditional task frameworks, SAE Job is more convenient (no code intrusion), more economical (resources are released as soon as a task finishes), more stable (independent of online business, with automatic retry on task failure), more transparent (visual monitoring and alerting), and more worry-free (no need to manage underlying resources). More importantly, SAE Job integrates deeply with the microservice ecosystem and is compatible with open-source K8s.
SAE Job can be widely used in scenarios such as scheduled tasks, batch data processing, offline computing, asynchronous task decoupling, and microservice-ecosystem integration. You are welcome to be among the first to try it.
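As a rough illustration of a sharded batch task of the kind SAE Job can host, the sketch below lets each task instance process only its own slice of the work and then exit so resources can be released. The environment variable names and the record source are assumptions for illustration, not SAE's documented contract.

```java
// Minimal sketch of a sharded batch task: each task instance processes only its
// own slice of the work and then exits so resources can be released. The
// environment variable names and the record source are illustrative assumptions,
// not SAE's documented contract.
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class ShardedBatchJob {

    public static void main(String[] args) {
        // Hypothetical shard parameters; a real task framework (e.g. XXL-Job)
        // would inject the shard index and total shard count for each instance.
        int shardIndex = Integer.parseInt(System.getenv().getOrDefault("SHARD_INDEX", "0"));
        int shardTotal = Integer.parseInt(System.getenv().getOrDefault("SHARD_TOTAL", "1"));

        // Placeholder workload: pretend these are record IDs to process.
        List<Integer> records = IntStream.range(0, 100).boxed().collect(Collectors.toList());

        // Each shard handles only the records assigned to it.
        List<Integer> mine = records.stream()
                .filter(id -> id % shardTotal == shardIndex)
                .collect(Collectors.toList());

        System.out.printf("shard %d/%d processing %d records%n", shardIndex, shardTotal, mine.size());
        // ... process `mine`, then return; the job instance is released when main exits.
    }
}
```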
SAE supports the Event Center
SAE provides many enterprise-grade enhancements that customers can use directly, such as one-click start/stop of development and test environments and permission isolation and control. Recently the Event Center capability was added: abnormal events during application runtime and changes can be pushed to subscribed users via DingTalk, SMS, or email according to subscription rules, laying a solid foundation for timely response and automated operations. This is also one of the differences in experience between SAE and self-built open-source K8s, truly anticipating what customers need and worry about.
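For reference, the sketch below shows the kind of DingTalk custom-robot text message such an event subscription can produce. SAE's Event Center delivers notifications itself, so this is only an illustration of the standard custom-robot message format; the access token and event text are placeholders.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Illustration of the DingTalk custom-robot text message an event subscription
// can produce. SAE's Event Center delivers notifications itself; this sketch only
// shows the message format. The access token and event text are placeholders.
public class EventNotificationSketch {

    public static void main(String[] args) throws Exception {
        String webhook = "https://oapi.dingtalk.com/robot/send?access_token=YOUR_TOKEN";
        String body = "{\"msgtype\":\"text\",\"text\":{\"content\":"
                + "\"[SAE event] app demo-app: instance restarted after a failed health check\"}}";

        HttpRequest request = HttpRequest.newBuilder(URI.create(webhook))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}
```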
SAE: the perfect fusion of advanced cloud-native technologies
SAE is the perfect fusion of advanced cloud-native technologies: containerization + microservices + serverless best practices. Its emergence helps Serverless move from special-purpose to general-purpose and breaks the boundaries of Serverless adoption: Serverless is no longer reserved for front-end full-stack work and mini programs; back-end microservices, batch tasks, SaaS services, IoT applications, and more can all be built on Serverless, making it naturally suited to running enterprises' core business at scale. It truly delivers the experience of "use it right away, with complete features, and walk away when done," helping enterprises easily enter the fast lane of cloud-native practice.
For more content, follow the Serverless WeChat official account (ID: serverlessdevs), which brings together the most comprehensive serverless technical content and regularly hosts serverless events, live streams, and user best practices.