Why are Kubernetes and containers inseparable from machine learning?

The original article comes from Infosecurity Magazine

Author: Rebecca James

Compiled by JD Cloud Developer Community

Digital transformation is in full swing across the IT field: more and more enterprises are getting involved, and the adoption of modern technologies such as machine learning and artificial intelligence is steadily spreading through their organizations.

As the technologies that make up an enterprise's complex IT infrastructure mature, deploying cloud-native environments and running containers within them has become a standard part of the enterprise technology roadmap.

Fortunately for business owners, Kubernetes and container deployment technologies not only go hand in hand with machine learning, they can also bring it into a cloud-native model, offering many benefits to the business, from enforcing effective business policies to cultivating stronger security.

When we talk about machine learning, what comes to mind? Its use cases are diverse: from fraud and cybercrime detection, to tailor-made customer experiences, to complex operations like supply chain optimization, all of which testify to the substantial returns machine learning can bring to a business.

Gartner's forecast further underscores the many advantages machine learning offers: it predicts that by 2021, 70% of enterprises will rely on some form of artificial intelligence.

The application of artificial intelligence in business

For businesses to take full advantage of artificial intelligence and machine learning and apply them to newer practices such as DevOps and DevSecOps, they must have a solid IT infrastructure.

A robust IT environment gives data scientists a place to experiment with various datasets, computational models, and algorithms without affecting other operations or burdening IT staff.

To implement machine learning effectively, enterprises need a way to deploy code repeatably across both on-premises and cloud environments, and to establish connections to all required data sources.

For modern businesses, time is essential to achieving their goals, so they urgently need an IT environment that supports rapid code development.

Containers speed up the deployment of enterprise applications by wrapping code together with its specific runtime requirements into a "package". This feature makes containers ideal for enterprises, and therefore an ideal partner for machine learning and artificial intelligence.
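As a concrete illustration of such a "package", here is a minimal, hypothetical Dockerfile for an ML workload; the base image, file names, and entry point are assumptions, but the idea is that the code and its pinned dependencies always travel together, so the container runs the same on-premises and in the cloud:

```dockerfile
# Hypothetical container "package" for an ML job (names are illustrative)
FROM python:3.11-slim
WORKDIR /app
# Pinned ML libraries travel with the code
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# The training/inference code itself
COPY train.py .
CMD ["python", "train.py"]
```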

In short, the three phases of an AI project in a container-based environment, namely exploration, model training, and deployment, are all very promising. What does each stage involve? The three stages are explained below.

01 Exploration

When building AI models, data scientists routinely try different datasets and a variety of ML algorithms to determine which combination yields the best predictive efficiency and accuracy.

Typically, data scientists rely on a large number of libraries and frameworks to create ML models for situations and problems across different industries. They also need to be able to run tests quickly as they try to discover new revenue streams and work toward the enterprise's business goals.
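The exploration loop described above can be sketched in a few lines. This toy example is stdlib-only so it stays self-contained; in practice a team would compare real libraries (e.g. scikit-learn models) inside a containerized environment, and the model names and dataset here are purely illustrative:

```python
# Toy "exploration" loop: score several candidate models on a small
# dataset and keep the one with the lowest error.

# Toy dataset: y is roughly 2*x + 1 with a little noise.
data = [(0, 1.1), (1, 2.9), (2, 5.2), (3, 6.8), (4, 9.1)]

def mean_model(train):
    """Baseline: always predict the mean of the training targets."""
    mean_y = sum(y for _, y in train) / len(train)
    return lambda x: mean_y

def linear_model(train):
    """Least-squares fit of y = a*x + b."""
    n = len(train)
    sx = sum(x for x, _ in train)
    sy = sum(y for _, y in train)
    sxx = sum(x * x for x, _ in train)
    sxy = sum(x * y for x, y in train)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return lambda x: a * x + b

def mse(model, points):
    """Mean squared error of a fitted model on a dataset."""
    return sum((model(x) - y) ** 2 for x, y in points) / len(points)

candidates = {"mean": mean_model, "linear": linear_model}
scores = {name: mse(fit(data), data) for name, fit in candidates.items()}
best = min(scores, key=scores.get)
print(best)  # the linear fit should beat the mean baseline on this data
```

In a container-based setup, each candidate (with its own library stack) can run as its own job, so experiments never interfere with one another.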

Although AI technology changes by the day, data already shows that companies that enable their data scientists and engineers to develop with containerization hold an advantage over their competitors.

Canadian web hosting provider HostPapa outperformed other leading web hosting providers thanks to its early adoption of Kubernetes, according to a report by Ottawa DevOps engineer Gary Stevens.

Incorporating containers in the exploratory phase of an AI or ML project enables data teams to freely package libraries for their specific domain, deploy algorithms accordingly, and identify the right data sources for their needs.

With the successful implementation of container-based programs such as Kubernetes, data scientists have access to isolated environments. This allows them to customize the exploration process without having to manage multiple libraries and frameworks in a shared environment.
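One common way to provide such isolated environments on Kubernetes is a per-scientist namespace with a resource quota; this fragment is a hedged sketch, and the namespace name and limits are assumptions, not values from the original article:

```yaml
# Illustrative per-data-scientist sandbox: an isolated namespace
# bounded by a quota, so experiments cannot starve shared resources.
apiVersion: v1
kind: Namespace
metadata:
  name: ds-alice-sandbox   # hypothetical name
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: sandbox-quota
  namespace: ds-alice-sandbox
spec:
  hard:
    requests.cpu: "8"       # example limits, tune per team
    requests.memory: 32Gi
```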

02 Model Training

After designing a model, data scientists must train the AI program on large amounts of data, across platforms, to maximize its accuracy while minimizing manual effort.

Because training AI models is a highly compute-intensive operation, containers prove very beneficial for scaling workloads and for fast communication between nodes. Typically, a member of the IT team, or a scheduler, determines the best nodes to use.
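On Kubernetes, "letting the scheduler determine the best nodes" comes down to declaring resource requests on the training Pod; the manifest below is a minimal sketch, with the Pod name and image being assumptions (`nvidia.com/gpu` is the standard resource name exposed by the NVIDIA device plugin):

```yaml
# Illustrative training Pod: resource requests let the Kubernetes
# scheduler pick a suitable node, and the GPU limit steers the Pod
# onto GPU-equipped hardware.
apiVersion: v1
kind: Pod
metadata:
  name: train-fraud-model          # hypothetical name
spec:
  restartPolicy: Never
  containers:
    - name: trainer
      image: registry.example.com/ml/trainer:1.0   # hypothetical image
      resources:
        requests:
          cpu: "4"
          memory: 16Gi
        limits:
          nvidia.com/gpu: 1        # request one GPU from the device plugin
```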

Moreover, feeding training data through containers into modern data management platforms greatly simplifies data management for AI models. Data scientists also gain the ability to run AI or ML projects on many different types of hardware, such as GPUs, letting them always use the hardware platform that delivers the best results.

03 Deployment

Deployment is the trickiest part of an AI project: the production stage of a machine learning application often combines multiple ML models, each serving a different purpose.

By incorporating containers in ML applications, IT teams can deploy each specific model as a separate microservice. So, what are microservices? A microservice is a self-contained, lightweight program that developers can reuse in other applications.
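A minimal sketch of "one model per microservice" is below, using only Python's standard library so it stays self-contained; the hard-coded linear `predict` function is a stand-in for a trained model, and the port and payload shape are assumptions. Each such service would be packaged in its own container and deployed independently:

```python
# Sketch of a single-model microservice: each container runs one tiny
# HTTP service exposing one model's predictions.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def predict(x: float) -> float:
    """Stand-in for a trained model, e.g. a fraud-score regressor."""
    return 2.0 * x + 1.0

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON request body, e.g. {"x": 2.0}
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        body = json.dumps({"prediction": predict(payload["x"])}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# To serve this model (one container per model), one would run:
#   HTTPServer(("0.0.0.0", 8080), PredictHandler).serve_forever()
```

Because each model lives behind its own service, IT teams can scale, update, or roll back one model without touching the others.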

Not only do containers provide a portable, isolated, and consistent environment for rapidly deploying ML and AI models, they also have the potential to change today's IT landscape by enabling businesses to achieve their goals faster and better.

Original link: https://www.infosecurity-magazine.com/opinions/kubernetes-containers-machine/

