This article is based on a talk given by eBay software engineer and Apache Kyuubi PPMC member Wang Fei at the joint Apache SeaTunnel & Kyuubi Meetup. It introduces the basic architecture and usage scenarios of Apache Kyuubi (Incubating), the enhancements eBay has made to Kyuubi based on its own needs, and how eBay built a Unified & Serverless Spark Gateway on top of Kyuubi.

What is Kyuubi

First, let me introduce Kyuubi. Kyuubi is a distributed, multi-tenant Thrift JDBC/ODBC server that can serve enterprise big data scenarios such as ETL and BI reporting. The project was initiated by NetEase Shufan and is now incubating at the Apache Software Foundation. Its current main direction is to build a Serverless SQL on Lakehouse service around mainstream computing engines; the engines supported so far include Spark, Flink, and Trino (formerly PrestoSQL). My topic today centers on Kyuubi and Spark, so I will not expand on the other engines here.

For Spark, Kyuubi exposes the HiveServer2 API, supports multi-tenancy, and runs Spark in a serverless manner. HiveServer2 is a classic JDBC service, and the Spark community has a similar service called Spark Thrift Server. Here is a comparison of Spark Thrift Server and Kyuubi.

Spark Thrift Server can be understood as a single, independently running Spark app that receives SQL requests from users; SQL compilation and execution both happen inside this app. Once the number of users reaches a certain scale, it can become a single-point bottleneck.

For Kyuubi, look at the picture on the right. There is a red user and a blue user, each with their own Spark app. When their SQL requests come in, compilation and execution are carried out on the corresponding app; that is to say, the Kyuubi Server only forwards the SQL request once, sending it directly to the Spark app behind it.

Spark Thrift Server has to keep results and state information, so it is stateful and cannot support HA and load balancing. Kyuubi, by contrast, does not keep results and is almost stateless, so it does support HA and load balancing: we can simply increase the number of Kyuubi Servers to meet enterprise needs. In this sense Kyuubi is the better Spark SQL gateway.
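Because the API is HiveServer2-compatible, any Hive JDBC client can talk to Kyuubi directly. A minimal sketch (host name is a placeholder; 10009 is Kyuubi's default Thrift port):

    beeline -u 'jdbc:hive2://kyuubi-server:10009/' -n tom -e 'SELECT 1'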

The architecture of Kyuubi has two layers: the Server layer and the Engine layer, and each has its own service discovery layer. The service discovery layer of the Server layer is used to randomly select a Kyuubi Server, which is shared by all users. The service discovery layer of the Engine layer is invisible to users; the Kyuubi Server uses it to locate the Spark engine belonging to the requesting user. When a request comes in, a Kyuubi Server is randomly selected, and that server looks up the Engine service discovery layer to find an engine for the user. If no engine exists, it creates a Spark engine; once the engine starts, it registers itself with the Engine service discovery layer, and an internal connection is then established between the Kyuubi Server and the engine. Thus the Kyuubi Server is shared by all users, while Kyuubi engines are resource-isolated between users.

Kyuubi supports several engine sharing levels, which trade off isolation against resource usage. At eBay we mainly use the USER and CONNECTION levels. At the CONNECTION level, a new Spark app, i.e. a Kyuubi engine, is created for each connection of the user; this suits ETL scenarios, whose workloads are relatively heavy and deserve an independent app. At the USER level, consider two users, Tom and Jerry: both of Tom's clients connect to the Kyuubi Server and land on the same Kyuubi engine belonging to Tom, while all of Jerry's requests go to Jerry's engine. The USER level suits ad-hoc scenarios, where all connections of the same user execute on that user's single Kyuubi engine. A sketch of how a client picks a level follows this paragraph.
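The sharing level can be set per connection in the JDBC URL (host and port are placeholders):

    # CONNECTION level: every JDBC connection gets its own Spark app
    jdbc:hive2://kyuubi-server:10009/;#kyuubi.engine.share.level=CONNECTION

    # USER level: all connections of the same user share one Spark app
    jdbc:hive2://kyuubi-server:10009/;#kyuubi.engine.share.level=USER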

We have also enhanced the USER sharing level by introducing the concept of an engine pool. Just like a thread pool in programming, we can create a pool of engines of a given size. For example, Tom creates two pools here, called pool-a and pool-b, whose engines are numbered pool-a-0, pool-a-1, and so on. If the client request specifies only the pool name, the Kyuubi Server randomly selects an engine from that pool to execute it; if Tom specifies both the pool name and the engine index, for example pool-b-0, the Kyuubi Server picks engine number 0 from pool-b to do the computation. The corresponding parameter is kyuubi.engine.share.level.subdomain; see the sketch below.
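A sketch of the two request styles (parameter names follow the Kyuubi documentation of that period; host, port, and pool names are placeholders):

    # pick a random engine from pool-b, which holds two engines
    jdbc:hive2://kyuubi-server:10009/;#kyuubi.engine.pool.name=pool-b;kyuubi.engine.pool.size=2

    # pin the request to engine 0 of pool-b via the subdomain
    jdbc:hive2://kyuubi-server:10009/;#kyuubi.engine.share.level.subdomain=pool-b-0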

This greatly simplifies BI tool integration at eBay. Each analyst team at eBay may share one account for data analysis, and different analysts may need different parameter configurations, for example different memory settings. The BI tool can create an engine pool, keep a mapping from each user's IP to the index of the engine created for them, and, when a user's request arrives, use that mapping to route the request to that user's engine. Alternatively, a whole team can share one pool of pre-created Spark apps and let team members pick engines at random, which increases concurrency. A pool name can also serve as a label under the USER sharing level to mark an engine's purpose: for example, we can create separate pools for beeline usage and for Java JDBC applications, so that different usage scenarios are isolated from each other.

Different sharing levels were described above: some create a Spark app per connection, others one or more Kyuubi engines per user, so you may worry about wasted resources. This is where Kyuubi's dynamic resource management comes in. A Kyuubi engine, i.e. a Spark app, consists of a Spark driver and Spark executors. For executors, Spark itself has a dynamic allocation mechanism that decides, based on the current job load, whether to request more resources from the cluster or return currently held resources to it. On top of that, we add constraints at the Kyuubi Server layer, such as forcing executor dynamic allocation on and setting the minimum number of executors to 0, so that a very idle app keeps only its driver running and wastes no executor resources. Beyond executor-level recycling, Kyuubi also recycles resources at the driver level. At the CONNECTION sharing level, an engine serves only the current connection, so the Spark driver is recycled as soon as the connection closes. At the USER level, the parameter kyuubi.session.engine.idle.timeout controls the engine's maximum idle time: if we set it to 12 hours and no connection reaches the Spark app within 12 hours, the app terminates automatically to avoid wasting resources. A configuration sketch follows.
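A minimal sketch of the relevant settings (the 12-hour value is the example from the talk, not eBay's production configuration):

    # executor-level elasticity: Spark dynamic allocation, scaling down to zero executors when idle
    spark.dynamicAllocation.enabled=true
    spark.dynamicAllocation.minExecutors=0

    # driver-level recycling: terminate a USER-level engine after 12 idle hours
    kyuubi.session.engine.idle.timeout=PT12H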

Kyuubi usage scenarios

Let's talk about use cases. Kyuubi currently supports executing both SQL and Scala, and the two can even be mixed in a single run. SQL is a very user-friendly language that lets you query data with simple statements without understanding Spark internals, but it has its limits; Scala has a higher barrier to entry but is very flexible, letting us write code and work directly with the Spark DataFrame API.

For example, we can mix the two languages in a SQL file or a notebook. First a training-data table is created with SQL statements; then the language mode is switched to Scala with a SET statement and the processing continues in Scala code, here running KMeans over the training data and storing the output in a table; then the mode is switched back to SQL and processing continues in SQL. This is very convenient: combining the strengths of SQL and Scala covers most cases in data analysis. Kyuubi's JDBC driver also offers a very friendly interface, KyuubiStatement::executeScala, for running Scala statements directly. A sketch of such a mixed script follows.
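A minimal sketch of such a mixed script (table names and columns are illustrative; the language switch uses kyuubi.operation.language as in the Kyuubi docs, and the exact switch-back syntax may vary by version):

    -- SQL mode: prepare the training data
    CREATE TABLE IF NOT EXISTS training_data AS
    SELECT x, y FROM raw_points;

    SET kyuubi.operation.language=SCALA;

    // Scala mode: cluster the points with Spark MLlib KMeans
    import org.apache.spark.ml.clustering.KMeans
    import org.apache.spark.ml.feature.VectorAssembler
    val assembled = new VectorAssembler()
      .setInputCols(Array("x", "y")).setOutputCol("features")
      .transform(spark.table("training_data"))
    val model = new KMeans().setK(3).fit(assembled)
    model.transform(assembled)
      .select("x", "y", "prediction")
      .write.mode("overwrite").saveAsTable("kmeans_output")

    SET kyuubi.operation.language=SQL;

    -- SQL mode again: inspect the clustering result
    SELECT prediction, count(*) FROM kmeans_output GROUP BY prediction;

From a Java or Scala program, a Scala snippet can be run through the JDBC driver in the same way, assuming the Kyuubi Hive JDBC driver is on the classpath and a connection is already open:

    import org.apache.kyuubi.jdbc.hive.KyuubiStatement
    val stmt = connection.createStatement().asInstanceOf[KyuubiStatement]
    stmt.executeScala("""spark.table("training_data").count()""")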

Kyuubi's practice on eBay

Background: eBay's requirements

Our Hadoop team manages many Hadoop clusters. These clusters sit in different data centers, serve different purposes, and share a unified authentication layer based on KDC and LDAP.

When we first introduced Kyuubi, the question was whether to deploy one Kyuubi service per cluster. That would have meant deploying three or four Kyuubi services and repeating every upgrade across all of them, which is very inconvenient to manage. So we asked whether a single Kyuubi deployment could serve multiple Hadoop clusters.

eBay's enhancements to Kyuubi

The image below shows some of the enhancements we made for this requirement. First, because we must support both KDC and LDAP authentication, we made Kyuubi support Kerberos and Plain authentication at the same time, and we optimized Kyuubi engine startup and Kyuubi's beeline. We then extended Kyuubi's Thrift API to support uploading and downloading data. For the goal of serving multiple Hadoop clusters from one Kyuubi, we added the concept of a cluster selector: a connection can carry a parameter that routes the request to the corresponding cluster. We are also improving the RESTful API, having already added SPNEGO and BASIC authentication to it, and we are working on RESTful APIs for running SQL queries and batch jobs. The numbered items in the picture are PRs that have already been contributed back to the community.

Let me talk about items 2, 3, and 4 here: some optimizations around Kyuubi's Thrift RPC. Thrift RPC was originally designed for HiveServer2, where establishing a connection is very fast. Establishing a connection in Kyuubi is different: the client first connects to the Kyuubi Server, and the Kyuubi Server does not respond to the client until it has finished establishing a connection with the remote Kyuubi engine.

If the Kyuubi engine does not yet exist, and its startup is slow due to resource pressure or fails due to a bad parameter, for example an invalid spark.yarn.queue, there can be a delay of tens of seconds or even minutes, during which the client just waits with no log returned to it. We therefore made OpenSession asynchronous, splitting it into two parts. The first part connects to the Kyuubi Server; the server asynchronously starts a LaunchEngine operation and then immediately completes the connection to the client, so the client can connect to the Kyuubi Server within a second. The next statement, and everything after it, waits until the engine has finished initializing before running. This was in fact a requirement from our PM: it does not matter if the first statement runs a little longer, but the connection must be fast, so that is the behavior we implemented.

Because Hive Thrift RPC is widely used and very user-friendly, we extended it without breaking compatibility. Normally, an ExecuteStatement request returns an OperationHandle in the API, and the client uses that handle to fetch the status and log of the operation. Since we had split OpenSession into OpenSession plus a LaunchEngine operation, we needed to return the LaunchEngine operation's information through the configuration map of the OpenSession response. We split its OperationHandle into two parts, a guid and a secret, and placed them in the configuration map of OpenSessionResp.

After receiving OpenSessionResp, the client can reassemble the OperationHandle of the LaunchEngine operation from this configuration and use it to fetch the LaunchEngine log and status. A sketch of that reconstruction follows.
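A minimal sketch in Scala of rebuilding the handle on the client side (the configuration key names and the Base64 encoding are assumptions for illustration, not the confirmed wire format):

    import java.nio.ByteBuffer
    import java.util.Base64
    import org.apache.hive.service.rpc.thrift.{THandleIdentifier, TOperationHandle, TOperationType}

    // conf is the configuration map carried back in OpenSessionResp
    val guid   = Base64.getDecoder.decode(conf.get("kyuubi.session.engine.launch.handle.guid"))   // hypothetical key
    val secret = Base64.getDecoder.decode(conf.get("kyuubi.session.engine.launch.handle.secret")) // hypothetical key
    val launchHandle = new TOperationHandle(
      new THandleIdentifier(ByteBuffer.wrap(guid), ByteBuffer.wrap(secret)),
      TOperationType.UNKNOWN, false)
    // launchHandle can now be passed to GetOperationStatus / FetchResults (log fetch)
    // to stream the engine launch status and logs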

The effect is shown below: while a Kyuubi connection is being established, the user can see in real time what is happening inside spark-submit. If the user sets spark.yarn.queue incorrectly, or is kept waiting by resource shortages, they can see exactly what is going on without asking the platform maintainers to read logs. This is not only much friendlier for users but also saves effort for the maintainers.

Building a Unified & Serverless Spark Gateway

As mentioned earlier, we needed one Kyuubi service to serve multiple clusters, so we built a Unified & Serverless Spark Gateway on top of Kyuubi. Unified means there is a single endpoint: Kyuubi Server is deployed on Kubernetes, with the Kubernetes load balancer acting as the Kyuubi Server's service discovery. The endpoint takes the form of the Kubernetes LB address plus a port, for example kyuubi.k8s-lb.ebay.com:10009. To serve multiple clusters, all a user needs to do is add the parameter kyuubi.session.cluster to the JDBC URL and specify a cluster name, and the request executes on that cluster. Authentication is also unified, supporting both Kerberos and LDAP. Functionality is unified as well: Spark SQL, Spark Scala, and ETL Spark job submission are all supported. A URL sketch follows.
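A minimal sketch of such a connection (the cluster name is a placeholder):

    # one endpoint for everything; the session parameter routes to a specific Hadoop cluster
    jdbc:hive2://kyuubi.k8s-lb.ebay.com:10009/;#kyuubi.session.cluster=cluster-a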

As for Serverless: Kyuubi Server is deployed on Kubernetes and is cloud-native, Kyuubi Server supports HA and load balancing, and Kyuubi engines support multi-tenancy, so the cost for platform maintainers is very low.

This is roughly our deployment. For multi-cluster support we introduced the Cluster Selector concept and allocated one Kubernetes ConfigMap per cluster. The file holds configuration unique to that cluster, such as its ZooKeeper configuration and its environment variables, which are injected into the process when a Kyuubi engine for that cluster is started. A sketch of what such a file might contain follows.
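A sketch of per-cluster ConfigMap contents (key names and values are illustrative, not eBay's actual files):

    # ZooKeeper used for engine service discovery on this cluster
    kyuubi.ha.zookeeper.quorum=zk1.cluster-a.example.com:2181,zk2.cluster-a.example.com:2181
    # environment injected into engines launched against this cluster
    HADOOP_CONF_DIR=/etc/hadoop/cluster-a/conf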

Each cluster also has its own superuser configuration, so we support per-cluster superuser verification. Kyuubi currently supports refreshing the Hadoop filesystem delegation token and the Hive delegation token, which lets a Spark engine run long-term without a keytab and without worrying about token expiration; we extended this feature to work across multiple clusters as well.

A user request flows as follows: the user specifies a Cluster Selector; the Kyuubi Server (on Kubernetes) finds the corresponding cluster from the selector, connects to that cluster's ZooKeeper, and checks whether a matching Spark app is registered there. If not, it submits an app to YARN (Spark on YARN); after startup the engine registers itself with ZooKeeper, and the Kyuubi Server discovers the engine's host and port through ZooKeeper and creates the connection.

Kyuubi has supported the Thrift/JDBC/ODBC API from the very beginning, and the community is improving the RESTful API; eBay is contributing to that work too. We added authentication to the RESTful API, supporting both SPNEGO (Kerberos) and BASIC (password-based) checks. We plan to add more RESTful APIs to Kyuubi. At present there are session APIs, for example for closing sessions, which are mainly used to manage Kyuubi sessions; an example is shown after this paragraph. We are going to add APIs for SQL and batch jobs. For SQL, you will be able to submit a query directly through the RESTful API and then fetch its results and log. For batch jobs, you will be able to submit an ordinary JAR-based Spark app through the RESTful API and then obtain the app's ApplicationId and the spark-submit log, making it easier for users to perform common Spark operations through Kyuubi.
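A sketch of the session-management API (the path and port follow Kyuubi's documented REST frontend defaults; credentials and the session handle are placeholders):

    # close an open Kyuubi session over REST with BASIC authentication
    curl -u user:password -X DELETE \
      http://kyuubi-server:10099/api/v1/sessions/<session-handle>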

Benefits for eBay

For users, Spark becomes very convenient to consume, through Thrift, JDBC, ODBC, and RESTful interfaces. It is also very lightweight: there is no need to install Hadoop or Spark binaries, or to manage Hadoop and Spark configuration files; a RESTful client or a beeline/JDBC connection is all it takes.

For our platform development team, we now have a centralized Spark service offering SQL, Scala, and spark-submit capabilities. We can manage Spark versions centrally without distributing Spark installation packages to users, which enables smoother gradual (grayscale) upgrades, keeps the Spark version transparent to users, lets the whole cluster run the latest Spark, and saves cluster resources and company cost. Maintenance is also convenient and cheap, because we only need to maintain one Kyuubi service to serve multiple Hadoop clusters.

Author: Wang Fei, eBay Software Engineer, Apache Kyuubi PPMC Member

Video playback and PPT download:

The practice of Apache Kyuubi on eBay-Wang

Further reading:

In-depth practice of Apache Kyuubi in T3

Who is using Apache Kyuubi (Incubating)?

Kyuubi project homepage

Kyuubi code repository

