The infrastructure landscape has changed dramatically over the past decade, with more and more enterprises distributing their workloads across multiple platforms, both on-premises and in the cloud. This has driven a significant shift in how workloads are managed, with an accompanying increase in complexity and risk. Workloads can be distributed across platforms in many ways, with multi-cloud and hybrid cloud being the most common.
Simply put, multi-cloud is the practice of deploying applications on two or more cloud infrastructure platforms. These platforms can be two public cloud service providers, two private clouds, or some combination of the two. Hybrid cloud is similar, but refers specifically to a combination of public and private clouds.
Multi-cloud and hybrid cloud application design patterns can take many forms, but two stand out:
1) Components hosted on different clouds - The most common and simplest model separates components along application layers, so that each component is deployed on a single provider while the application as a whole spans multiple clouds. For example, the front end of an application might reside on a public cloud, its middleware on a private cloud, and its database on an on-premises bare-metal cluster.
This example might involve a heavily trafficked, front-end-centric web application, possibly updated frequently, that minimizes calls to back-end resources. Hosting the front end on a public cloud lets that resource scale quickly and dynamically with traffic and can simplify ad-hoc (but resource-intensive) procedures.
Placing middleware on a private cloud enables similar but more limited flexibility, along with tighter security. Running the database on bare metal delivers the highest scalability and performance while offering the greatest protection for critical and regulated data.
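The tier-to-platform mapping described above can be captured as a simple placement policy. A minimal sketch in Python, where the tier and platform names are illustrative assumptions rather than anything prescribed by a particular tool:

```python
# Hypothetical placement policy mapping application tiers to hosting
# platforms, following the trade-offs discussed above. All names here
# are illustrative.
PLACEMENT_POLICY = {
    "frontend":   "public-cloud",    # elastic scaling for bursty traffic
    "middleware": "private-cloud",   # tighter security, moderate elasticity
    "database":   "bare-metal",      # top performance, strongest data protection
}

def placement_for(tier: str) -> str:
    """Return the platform a given application tier should be deployed to."""
    try:
        return PLACEMENT_POLICY[tier]
    except KeyError:
        raise ValueError(f"unknown tier: {tier}")
```

In a real deployment this mapping would live in infrastructure-as-code or a cluster-management tool; the point is that the placement decision is explicit and centralized rather than scattered across deployment scripts.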
2) A single component distributed across multiple clouds - Less often, a single application component is itself distributed across multiple clouds. The challenge with this model is that latency and other risks are now introduced inside individual application components.
For example, as enterprises expand their use of public cloud services and seek cost optimization, they often encounter situations where the required resources are not available from a single provider. In that case, technologies like Kubernetes federation can be used to let container workloads, particularly microservices that scale horizontally to perform a single application function, "cross" the chasm between public clouds. However, writing microservices and applications that thrive on this architecture means expecting a range of latencies and failure conditions that applications running on a single infrastructure typically don't encounter.
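One concrete consequence of "expecting a range of latencies" is that cross-cloud calls need explicit deadlines and retries. A minimal sketch of that idea, assuming a hypothetical remote call that accepts a timeout (real services would use a client library's deadline and retry support):

```python
import time

def call_with_retries(fn, attempts=3, timeout_s=2.0, backoff_s=0.1):
    """Call a remote function, tolerating the wider range of latencies and
    transient failures expected when a single component spans clouds.
    `fn` is a hypothetical callable taking a `timeout` keyword argument."""
    last_err = None
    for attempt in range(attempts):
        try:
            return fn(timeout=timeout_s)
        except TimeoutError as err:
            last_err = err
            # Exponential backoff before retrying the slow cross-cloud hop.
            time.sleep(backoff_s * (2 ** attempt))
    raise last_err
```

Applications built for a single infrastructure rarely need this kind of defensive wrapper on internal calls; components spanning clouds need it on nearly every hop.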
Multi-Cloud Advantages
Helping developers more easily use resources and services from multiple cloud providers offers many advantages, including the following.
1) Leverage - Access to multiple suppliers gives a business leverage to negotiate the best price and ensure the best level of service. If a business is locked into a single supplier (or if there is a monopoly), it loses this leverage and is vulnerable to rising costs and falling service levels.
2) Price/performance efficiency - The ability to access multiple public clouds allows businesses to continuously optimize price/performance - not just workload hosting costs, but all the other performance factors and costs associated with serving applications (e.g., network egress costs, interconnectivity, latency).
However, maximizing the freedom to optimize cost and performance by moving components and workloads between providers and infrastructures means limiting an enterprise's reliance on the highly differentiated features and services of any one platform or provider. Kubernetes and containers can play an important role here, forming a consistent foundation across multiple clouds and infrastructures.
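The price/performance comparison above has to account for more than hosting: egress and other serving costs can flip the ranking between providers. A toy sketch, with entirely made-up rates (real provider pricing has many more terms, such as storage, interconnect, tiered egress rates, and discounts):

```python
def monthly_cost(compute_per_hour, hours, egress_gb, egress_per_gb):
    """Total monthly serving cost: compute plus network egress.
    Illustrative formula only; not any provider's real pricing model."""
    return compute_per_hour * hours + egress_gb * egress_per_gb

# Hypothetical rates for two providers (not real prices):
provider_a = monthly_cost(0.10, 730, 500, 0.09)  # pricier compute, cheaper egress
provider_b = monthly_cost(0.08, 730, 500, 0.12)  # cheaper compute, pricier egress
cheapest = "A" if provider_a < provider_b else "B"
```

In this made-up example the provider with the cheaper compute rate ends up costing more once egress is included, which is exactly why the freedom to re-run this comparison and move workloads matters.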
3) Reduced risk - Cloud provider pricing is complex, difficult to observe and predict, and can change with little notice. Services can be discontinued and provider policies can change - providers can be fickle in their enforcement, and terms-of-service agreements leave customers with little recourse in the event of disputes.
Therefore, it makes a lot of sense to plan ahead, provide redundancy, and ensure that critical databases and other hard-to-move components are not locked into a specific provider.
4) Location - A key service provided by public clouds is the ability to place workloads and data in specific regions. Leveraging location can open access to lucrative markets, and it is critical to application performance (e.g., minimizing latency), storage and transfer costs, and (in some cases) the availability and scale of specific services.
5) Regulatory compliance options - Controlling where workloads and data reside (data at rest and data in motion) is also critical to enforcing the jurisdictional policies that enable compliance, data sovereignty, and data protection. The ability to meet the jurisdictional and customer requirements of GDPR, privacy protection, and other regulations is essential for enterprises operating globally.
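Points 4 and 5 interact: region selection should satisfy the data-residency constraint first and only then optimize for latency or cost. A minimal sketch of that ordering, with hypothetical region names and latency figures not tied to any real provider:

```python
# Hypothetical region catalog; names, jurisdictions, and latencies are
# illustrative assumptions for this sketch.
REGIONS = [
    {"name": "eu-west",  "jurisdiction": "EU", "latency_ms": 40},
    {"name": "us-east",  "jurisdiction": "US", "latency_ms": 25},
    {"name": "eu-north", "jurisdiction": "EU", "latency_ms": 55},
]

def choose_region(required_jurisdiction: str) -> str:
    """Pick the lowest-latency region that keeps data in the required
    jurisdiction; the compliance filter runs before the latency optimizer."""
    allowed = [r for r in REGIONS if r["jurisdiction"] == required_jurisdiction]
    if not allowed:
        raise ValueError(f"no region satisfies jurisdiction {required_jurisdiction}")
    return min(allowed, key=lambda r: r["latency_ms"])["name"]
```

Note that the globally fastest region is rejected when it sits in the wrong jurisdiction; compliance is a hard constraint, latency a soft one.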
Multi-Cloud Challenges
Businesses need to develop a strategy to ensure that multi-cloud delivers these benefits without creating additional friction for developers, DevOps, and operations teams.
1) Consistency is critical. Ensuring application platform consistency across private and public clouds helps applications run anywhere without changes, and lets you maintain configuration, operational automation, CI/CD, and other ancillary codebases in a single stream.
Kubernetes is currently the best available platform for unifying public and private cloud infrastructures as well as bare metal. It provides many abstractions that isolate workloads from the underlying infrastructure and keep them running despite infrastructure problems, and it enables fast, efficient, low-impact application updates, scaling, and lifecycle management.
2) Kubernetes alone is not enough - Organizations need to provide a consistent Kubernetes across all infrastructures: an application environment that is easy to customize, easy to extend, fully observable, batteries-included, secure, universally compatible, and operations-friendly, backed by a single central source. A uniform cluster model accelerates operations and enables portability of containers, configuration, and automation, while also improving security (removing unknowns and variation, thereby reducing the attack surface), facilitating policy management, and simplifying regulatory compliance.
3) Using a centrally managed system to deliver, update, and manage clusters across multiple clouds can dramatically increase productivity: a single pane of glass for observability and manual operations, fully automated and non-disruptive updates, and a set of APIs for self-service, on-demand delivery of clusters. Operating the various public and private cloud infrastructures through a central command-and-control facility's "provider" middleware helps ensure that enterprises benefit from platform- and public-cloud-specific services while also enforcing consistent configuration and behavior in the Kubernetes clusters that run their applications.
4) Freedom of choice fits this model. A centrally managed multi-cloud infrastructure gives operators and developers the freedom to choose between public and private cloud alternatives, while also enabling the use of a range of operating systems and a host of automation, CI/CD, security, and other tools.
5) Centralized monitoring and capacity management are also important to ensure that businesses have a clear picture of how systems are performing and the resources they are consuming so they can make the right decisions about where applications should run.
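The capacity-management point above implies a concrete decision procedure: given centrally collected utilization data, pick the cluster where a new workload should run. A minimal sketch, with hypothetical cluster names and figures (a real scheduler would weigh memory, cost, locality, and policy as well):

```python
# Hypothetical sketch: choose a target cluster from centrally collected
# CPU utilization data. Cluster names and numbers are illustrative.
def pick_cluster(clusters, required_cpu):
    """Return the cluster with the most free CPU that fits the workload,
    or None if no cluster has enough headroom (i.e., capacity must grow)."""
    candidates = [
        (name, capacity - used)
        for name, (capacity, used) in clusters.items()
        if capacity - used >= required_cpu
    ]
    if not candidates:
        return None
    return max(candidates, key=lambda c: c[1])[0]

fleet = {
    "aws-prod":  (64, 50),    # (total CPU cores, cores in use)
    "gcp-prod":  (64, 30),
    "onprem-dc": (128, 120),
}
```

The `None` case is the signal this paragraph is about: without a fleet-wide view, an enterprise cannot tell that every cluster is full until workloads start failing to schedule.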
6) High on the list of core requirements should be ease of use. If the system is too complex to use, or requires developers to learn to deal with new or unfamiliar systems, this will greatly hinder multi-cloud adoption.
Of course, there are some downsides to choosing a multi-cloud strategy and ensuring a common platform is used to deploy and manage consistent Kubernetes (and potentially applications running on top of Kubernetes) across multiple platforms. Chief among them is that enterprises may not be able to (directly) take advantage of additional services offered by public (and private) cloud providers, including their "one-click Kubernetes" versions.
Services with low barriers to entry, including managed Kubernetes offerings, present little initial resistance for newcomers. However, the more a business invests in a vendor's service portfolio without the abstractions and intermediaries provided by a centralized solution, the deeper it becomes locked in.
Going multi-cloud this way means repeating (differently) the same enterprise-scale onboarding work on each provider and maintaining all the parallel tool pipelines created for each one. As a result, "lifting and shifting" any part of an enterprise's operations and business from one supplier to another becomes a multi-layered challenge.