For enterprise-grade and cloud databases, an important dimension alongside conventional ones such as performance, availability, and functionality is manageability, which deeply affects the hidden cost of actually operating the database. In its latest version, TiDB introduced the data placement framework (Placement Rules in SQL), added the enterprise-grade cluster management component TiDB Enterprise Manager, opened a preview of the intelligent diagnosis service PingCAP Clinic, and greatly enhanced the manageability of its enterprise offering. It also added much of the infrastructure that cloud-native databases require.
This article focuses on one of the important pieces of TiDB's manageability story: TiUP, the TiDB deployment tool that has been in use since TiDB 4.0.
TiUP is an everyday tool for TiDBers, so this article belongs to the "Reviewing the Old to Learn the New" series. If you are new to TiDB, please read this article first: "From Horse Carriage to Electric Vehicle: The Metamorphosis of TiDB Deployment Tools".
Environment description
The environment and component version information involved in this article are as follows:
TiDB v5.4.0
TiUP v1.9.3 (2022-03-24 Release)
CentOS 7.9
Introduction to TiUP
Package managers are widely used for installing and managing system and application software, and their appearance greatly simplified software installation, upgrades, and maintenance. For example, almost all RPM-based Linux distributions use Yum for package management, and Anaconda makes it easy to manage Python environments and their packages.
Starting from TiDB 4.0, TiUP took on the role of package manager for the TiDB ecosystem, managing components such as TiDB, PD, and TiKV. To run any component in the ecosystem, users only need to execute a single TiUP command, which greatly reduces the difficulty of management compared to before.
Figure 1 - TiUP GitHub Commits Trend
Figure 2 - TiUP source code line count statistics (2022-03-24)
TiUP has been available for more than two years; the version has iterated many times and the total amount of code has doubled. As the figure above shows, the pace of code change has slowed, and TiUP can now be used in production environments with confidence.
Revisiting the TiUP components
As a sharp tool in the Ti toolbox, TiUP is indispensable for daily work. Let's revisit TiUP's classic components and common commands. The important tiup commands are listed below, and we will then discuss the key ones in turn.
tiup (main/cmd/root)
- tiup env
- tiup status
- tiup mirror
- tiup list --all --verbose
- tiup install hello
- tiup update
- tiup playground
- tiup client
- tiup cluster
- tiup bench ch (CH-benCHmark) / tpcc (TPC-C) / tpch (TPC-H) / ycsb (Yahoo! Cloud Serving Benchmark)
- tiup dm
- tiup clean
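To get a feel for the package-manager workflow behind the list above, here is a hedged sketch of typical first commands; the component and version are examples, and flag details may vary by version (check tiup --help):

```shell
# Sketch of the everyday package-manager workflow (versions are examples)
tiup list --all            # list every component available in the mirror
tiup install tidb:v5.4.0   # install a specific version of a component
tiup status                # show components currently running via tiup
tiup update --self         # update the tiup binary itself
tiup update --all          # update all installed components
```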
tiup mirror
Not every company puts its database tier on the public cloud, and even on the public cloud, most teams choose to build their own repository to manage the baseline versions used in production, all the more so for financial businesses. So how do you build and maintain a repository on an intranet quickly, concisely, and effectively? Here is a simple example.
First, you need to install TiUP on a machine that can connect to the external network, and clone the official TiUP repository:
- Download the TiUP package and add it to the PATH environment variable
mkdir -pv ~/.tiup/bin
wget https://tiup-mirrors.pingcap.com/tiup-linux-amd64.tar.gz
tar zxf tiup-linux-amd64.tar.gz -C ~/.tiup/bin/
echo 'export PATH=~/.tiup/bin:$PATH' >> ~/.bash_profile
source ~/.bash_profile
tiup -v
Output TiUP version information:
1.9.3 tiup
Go Version: go1.17.7
Git Ref: v1.9.3
GitHash: f523cd5e051d0001e25d5a8b2c0d5d3ff058a5d5
- Clone the official repository
First, point the mirror at the official repository:
tiup mirror set https://tiup-mirrors.pingcap.com
# on-screen log output > Successfully set mirror to https://tiup-mirrors.pingcap.com
Clone only the latest versions suitable for the current operating system: just specify TiDB v5.4.0, and the latest versions of the other components will be detected and downloaded automatically.
tiup mirror clone ~/.tiup/package -a amd64 -o linux v5.4.0
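If you need tighter control than "latest of everything", tiup mirror clone also accepts per-component version flags. A hedged sketch; confirm the exact flag names against tiup mirror clone --help for your version:

```shell
# Sketch: clone only selected components/versions for linux/amd64
tiup mirror clone ~/.tiup/package v5.4.0 \
  -a amd64 -o linux \
  --tidb v5.4.0 --tikv v5.4.0 --pd v5.4.0
```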
- Copy the package folder to the intranet machine:
# current server
tar zcf package.tgz package/
# new server
cd ~/.tiup
tar zxvf package.tgz
./package/local_install.sh
source ~/.bash_profile
tiup list
At this point, the new local repository is built. Create a hello component for testing:
# test mirror
CMP_TMP_DIR=`mktemp -d -p ~/.tiup`
cat > $CMP_TMP_DIR/hello.sh << EOF
#! /bin/sh
echo -e "\033[0;36m<<< Hello, TiDB! >>>\033[0m"
EOF
chmod 755 $CMP_TMP_DIR/hello.sh
tar -C $CMP_TMP_DIR -czf $CMP_TMP_DIR/hello.tar.gz hello.sh
- Publish the hello component to the local repository:
tiup mirror genkey
tiup mirror grant admin
tiup mirror publish hello v0.0.1 $CMP_TMP_DIR/hello.tar.gz hello.sh --desc 'Hello, TiDB'
View the published component, then run it:
tiup list hello
tiup hello
Figure 3 - hello component running output
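Publishing an update later is the same command with a bumped version number. A hedged sketch reusing the tarball built above (the v0.0.2 packaging is assumed to have been rebuilt the same way):

```shell
# Sketch: publish a follow-up version of the same component
tiup mirror publish hello v0.0.2 $CMP_TMP_DIR/hello.tar.gz hello.sh --desc 'Hello, TiDB (updated)'
tiup list hello     # both v0.0.1 and v0.0.2 should now appear
tiup hello:v0.0.1   # an older version can still be run explicitly
```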
At this point, the local repository can manage self-published components, but it cannot yet serve other machines. Use tiup server to build a private mirror with one command:
# run tiup server
tiup server ~/.tiup/package --addr 0.0.0.0:8000 --upstream=""
# point the mirror at it
tiup mirror set 'http://127.0.0.1:8000'
Note: due to version differences, the environment variable TIUP_MIRRORS is deprecated in the current version; use the command tiup mirror set <mirror-addr> instead.
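For a longer-lived intranet mirror, you would typically keep tiup server running in the background. A minimal sketch assuming the same path and port as above; the log file location and client IP are assumptions:

```shell
# Sketch: run the mirror server in the background so it survives logout
nohup tiup server ~/.tiup/package --addr 0.0.0.0:8000 --upstream="" \
  > ~/.tiup/tiup-server.log 2>&1 &

# on intranet clients, point at the mirror host (IP is an example)
tiup mirror set http://192.168.1.10:8000
```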
tiup playground
For a distributed database, quickly standing up a local prototype for basic functional verification and testing is a fundamental DBA skill. tiup playground was born for exactly this: it builds a minimal usable cluster with one command, lets you specify the initial number of each component, and supports scaling in and out.
For example, start a cluster tagged mydb1, with one TiDB instance, one TiKV instance, one PD instance, and one TiFlash instance, without starting the monitoring components:
$ tiup playground v5.4.0 --host 127.0.0.1 --tag mydb1 --db 1 --kv 1 --pd 1 --tiflash 1 --without-monitor
127.0.0.1:4000 ... Done
127.0.0.1:3930 ... Done
CLUSTER START SUCCESSFULLY, Enjoy it ^-^
To connect TiDB: mysql --comments --host 127.0.0.1 --port 4000 -u root -p (no password)
To view the dashboard: http://127.0.0.1:2379/dashboard
PD client endpoints: [127.0.0.1:2379]
View the process id of each component:
$ tiup playground display
Pid Role Uptime
--- ---- ------
4321 pd 10m39.092616075s
4322 tikv 10m39.087748551s
4353 tidb 10m37.765844216s
4527 tiflash 9m50.16054123s
Connect to the tidb server and query the version:
$ mysql -uroot -h127.1 -P4000 -e 'select version()\G'
*************************** 1. row ***************************
version(): 5.7.25-TiDB-v5.4.0
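The playground can also be scaled while it is running. A hedged sketch, run from a second terminal against the mydb1 playground; the pid comes from the tiup playground display output above:

```shell
# Sketch: scale the running playground out and back in
tiup playground scale-out --db 1      # add one more TiDB instance
tiup playground scale-in --pid 4353   # remove the instance with this pid
tiup playground display               # confirm the new topology
```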
As another example, start a cluster tagged mydb2 in tikv-slim mode (no TiDB), with one TiKV instance and 3 PD instances:
$ tiup playground v5.4.0 --host 127.0.0.1 --tag mydb2 --mode tikv-slim --pd 3 --without-monitor
Playground Bootstrapping...
Start pd instance:v5.4.0
Start pd instance:v5.4.0
Start pd instance:v5.4.0
Start tikv instance:v5.4.0
PD client endpoints: [127.0.0.1:2379 127.0.0.1:2382 127.0.0.1:2384]
Use the PD API to view the current PD members and the TiKV store information:
$ curl -s http://127.0.0.1:2379/pd/api/v1/members | jq .members[].name
"pd-1"
"pd-0"
"pd-2"
$ curl -s http://127.0.0.1:2379/pd/api/v1/stores | jq .stores[].store
{
  "id": 1,
  "address": "127.0.0.1:20160",
  "version": "5.4.0",
  "status_address": "127.0.0.1:20180",
  "git_hash": "b5262299604df88711d9ed4b84d43e9c507749a2",
  "start_timestamp": 1648110516,
  "deploy_path": "/data/tiup/components/tikv/v5.4.0",
  "last_heartbeat": 1648112496884914000,
  "state_name": "Up"
}
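The jq filters above can be composed further, for example to list only the stores that are up. The sketch below is an offline demo piping a canned response (shaped like the output above) through the filter; against a live cluster you would pipe curl -s http://127.0.0.1:2379/pd/api/v1/stores instead:

```shell
# Sketch: print only the addresses of stores in the "Up" state
cat <<'EOF' | jq -r '.stores[] | select(.store.state_name == "Up") | .store.address'
{"stores":[{"store":{"id":1,"address":"127.0.0.1:20160","state_name":"Up"}},
           {"store":{"id":2,"address":"127.0.0.1:20161","state_name":"Offline"}}]}
EOF
# prints: 127.0.0.1:20160
```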
Misc
Performance testing is also a necessary part of the job, so TiUP integrates four benchmark tool sets: tpcc, tpch, ycsb, and ch. A test can be launched with a single command:
tiup bench ch/tpcc/tpch/ycsb
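For example, a minimal TPC-C round against the playground above might look like the following sketch; the host, port, database name, warehouse count, and duration are all assumptions, so check tiup bench tpcc --help for the exact flags in your version:

```shell
# Sketch: load 4 warehouses, run TPC-C for 10 minutes, then clean up
tiup bench tpcc -H 127.0.0.1 -P 4000 -D test --warehouses 4 prepare
tiup bench tpcc -H 127.0.0.1 -P 4000 -D test --warehouses 4 --time 10m run
tiup bench tpcc -H 127.0.0.1 -P 4000 -D test --warehouses 4 cleanup
```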
The command to clean data with one click is as follows:
tiup clean --all
It must be emphasized that the following commands should be executed with great caution in a production environment, unless you know exactly what you are doing:
tiup cluster clean mydb3 --all
tiup cluster destroy mydb3
tiup cluster prune mydb3
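A habit that pairs well with those destructive commands is double-checking what you are pointing at first; a hedged sketch:

```shell
# Sketch: verify the target cluster before any destructive operation
tiup cluster list            # all clusters managed by this tiup
tiup cluster display mydb3   # topology and status of the target cluster
```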
Figure 4 - View all available components
The other components deserve their own discussion; for now, please refer to the official documentation.
TiUP v1.9.3 Release
On 2022-03-24, TiUP released v1.9.3. The changelog shows that this update fixed 5 bugs and made 2 improvements.
Fixes:
- Fixed a bug where tiup-cluster exec could not be used when the hostname contains "-". (#1794, @nexustar)
- Fixed conflict detection for the ports of TiFlash instances (service port, proxy port, proxy status port) in tiup-cluster. (#1805, @AstroProfundis)
- Fixed next-gen monitoring (ng-monitor) not being available in Prometheus. (#1806, @nexustar)
- Fixed an issue where node_exporter metrics could not be collected if the host only had Prometheus deployed (fixed together with the previous issue). (#1806, @nexustar)
- Fixed --host 0.0.0.0 not working when specified for tiup-playground. (#1811, @nexustar)
Improvements:
- Added the tiup cluster audit cleanup and tiup dm audit cleanup commands for cleaning up audit logs. (Make sure the DM component is v1.9.0 or later; check with tiup dm -v.)
- Added an anonymous login example to the Grafana configuration template. (#1785, @sunzhaoyang)
Extended thinking
In the era of cloud databases, or distributed databases, how should the DBA role adjust? Do we still need a DBA who can only operate and maintain a particular database, such as a traditional relational database like DB2, Oracle, MySQL, or PostgreSQL? Or, one step up, do we need a business DBA who understands the business and has development skills? None of these skills are outdated, and they should not be discarded; rather, they should serve as basic skills, the underlying modules of a DBA's knowledge structure. On that basis, the DBA needs to evolve to a higher stage, just as our predecessors did: with the database source code in hand and a real understanding of what front-end customers need, one can develop and tune a high-performance database suited to the business scenario, along with a set of easy-to-use, easy-to-manage database ecosystem tools.
TiUP clearly fits this profile: one command to build a private mirror, one to run a minimal cluster, one to manage an entire TiDB cluster, and one to scale that cluster in or out. Behind this apparent simplicity, however, lie functional trade-offs. For example, tiup mirror can only be operated from the command line; there is no interface like Nexus for publishing and deleting packages in a browser. Likewise, the most common scenario for TiUP is still running on ordinary machines; for Kubernetes environments there is the TiDB Operator, but there are very few features or cases for batch operation on ECS. In short, I hope TiUP becomes ever more powerful while staying practical.
Further reading: To learn more about TiUP, please refer to the TiUP documentation map