Amazon Timestream is a fast, scalable, serverless time-series database service for IoT and operational applications. It can store and analyze trillions of events per day, up to 1,000 times faster than relational databases and at as little as one-tenth the cost. By keeping recent data in memory and moving historical data to a cost-optimized storage tier according to user-defined policies, Timestream saves customers the time and expense of managing the lifecycle of time-series data. Its purpose-built query engine lets you access and analyze recent and historical data together, without specifying in the query whether the data resides in memory or in the cost-optimized tier. Built-in time-series analysis functions enable near-real-time identification of trends and patterns in the data. Because Timestream is serverless and automatically scales capacity and performance, you can focus on building your applications instead of managing the underlying infrastructure.
This article shows how to collect, store, and analyze IoT time-series data in real time (using a PM2.5 monitoring scenario as the example) with the managed services Amazon Timestream and Kinesis Data Streams, together with the open-source tools Grafana and the Flink connector. It covers the deployment architecture, environment setup, data collection, data storage, and data analysis. If you have similar IoT time-series storage and analysis requirements, we hope it gives you a useful starting point for your own business.
Architecture
Amazon Timestream enables rapid analysis of time-series data generated by IoT applications through built-in analytical functions such as smoothing, approximation, and interpolation. For example, a smart-home device manufacturer can use Timestream to collect motion or temperature data from device sensors, interpolate to identify time windows without motion, and alert consumers to take action (such as turning down the heat) to save energy.
In this article, the IoT scenario (using PM2.5 monitoring as the example) covers real-time PM2.5 data collection, time-series data storage, and real-time analysis. The architecture has three main parts:
- Real-time data collection: a Python data-collection program simulates PM2.5 monitoring devices and, through Kinesis Data Streams and the Kinesis Data Analytics for Apache Flink connector, delivers readings to Timestream in real time.
- Time-series data storage: Amazon Timestream stores the data; by setting retention periods for the memory store and the magnetic store (the cost-optimized tier), recent data is kept in memory while historical data moves to cost-optimized storage according to user-defined policies.
- Real-time analysis: Grafana (with the Timestream data source plugin installed) queries Timestream directly; combining Grafana's rich chart types with Timestream's built-in time-series analysis functions enables near-real-time identification of trends and patterns in the IoT data.
The specific architecture diagram is as follows:
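The collection path described above can be sketched as a small Kinesis producer. The following is a minimal illustration, not the actual program from the Timestream-pm25 repository: the record fields (city, location, measurement name and value, epoch-millisecond timestamp) are an assumption based on the queries used later in this article.

```python
import json
import time

STREAM_NAME = "Timestreampm25Stream"  # created in section 3.2 below

def build_record(city, location, pm25, ts=None):
    """Build one simulated PM2.5 reading (field names are assumptions)."""
    if ts is None:
        ts = time.time()
    return {
        "city": city,
        "location": location,
        "measure_name": "pm2.5",
        "measure_value": pm25,
        "time": int(ts * 1000),  # epoch milliseconds
    }

def send_record(record):
    """Send one reading to Kinesis. boto3 is imported lazily so the pure
    helper above remains usable without AWS credentials configured."""
    import boto3  # installed in section 1.3 via `pip3 install boto3`
    kinesis = boto3.client("kinesis", region_name="us-east-1")
    kinesis.put_record(
        StreamName=STREAM_NAME,
        Data=json.dumps(record),
        PartitionKey=record["location"],  # spread sites across shards
    )

# Example usage (one reading for a single Beijing site):
#   send_record(build_record("Beijing", "haidian", 55))
```

The Flink connector job set up in section 3.3 reads these JSON records from the stream and writes them into the Timestream table.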
Deployment environment
1.1 Create Cloudformation
Please use your own account (select the us-east-1 region).
Download the CloudFormation YAML file from GitHub:
git clone https://github.com/bingbingliu18/Timestream-pm25
The Timestream-pm25 directory contains timestream-short-new.yaml, the template used by CloudFormation below.
Accept the defaults for everything else and click the Create Stack button.
The CloudFormation stack is created successfully.
1.2 Connect to the newly created EC2 bastion host:
Modify the certificate file permissions:
chmod 0600 [path to downloaded .pem file]
ssh -i [path to downloaded .pem file] ec2-user@[bastionEndpoint]
Execute aws configure:
aws configure
For Default region name, enter us-east-1; accept the defaults for the other settings.
1.3 Connect to the EC2 bastion host and install the required software
Set the time zone:
TZ='Asia/Shanghai'; export TZ
Install python3
sudo yum install -y python3
Install python3 pip
sudo yum install -y python3-pip
pip3 install boto3
sudo pip3 install boto3
pip3 install numpy
sudo pip3 install numpy
Install git:
sudo yum install -y git
1.4 Download the GitHub Timestream sample repository
git clone https://github.com/awslabs/amazon-timestream-tools amazon-timestream-tools
1.5 Install Grafana Server
Connect to the EC2 bastion host:
sudo vi /etc/yum.repos.d/grafana.repo
For OSS releases, copy the following into grafana.repo:
[grafana]
name=grafana
baseurl=https://packages.grafana.com/oss/rpm
repo_gpgcheck=1
enabled=1
gpgcheck=1
gpgkey=https://packages.grafana.com/gpg.key
sslverify=1
sslcacert=/etc/pki/tls/certs/ca-bundle.crt
Install grafana server:
sudo yum install -y grafana
Start grafana server:
sudo service grafana-server start
sudo service grafana-server status
Configure the grafana server to start automatically when the operating system starts:
sudo /sbin/chkconfig --add grafana-server
1.6 Install the Timestream plugin
sudo grafana-cli plugins install grafana-timestream-datasource
Restart the grafana server:
sudo service grafana-server restart
1.7 Configure the IAM role that Grafana uses to access the Timestream service
Get the IAM role name.
In the IAM console, select the role to be modified; the role name is:
timestream-iot-grafanaEC2rolelabview-us-east-1
Modify the role's trust relationship:
Select the entire policy document and replace it with the following:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid":"",
"Effect": "Allow",
"Principal": {
"Service": "ec2.amazonaws.com"
},
"Action": "sts:AssumeRole"
},
{
"Sid":"",
"Effect": "Allow",
"Principal": {
"AWS": "[请替换成CloudFormation output中的role arn]"
},
"Action": "sts:AssumeRole"
}
]
}
Modified trust relationship:
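The same trust-relationship edit can also be scripted rather than done in the console. A hedged sketch using the IAM update_assume_role_policy API; the role name and the Grafana role ARN still come from the CloudFormation output, exactly as in the manual steps above.

```python
import json

def build_trust_policy(grafana_role_arn):
    """Trust policy allowing both EC2 and the Grafana data source role
    to assume this role (same document as shown above)."""
    doc = {
        "Version": "2012-10-17",
        "Statement": [
            {"Sid": "", "Effect": "Allow",
             "Principal": {"Service": "ec2.amazonaws.com"},
             "Action": "sts:AssumeRole"},
            {"Sid": "", "Effect": "Allow",
             "Principal": {"AWS": grafana_role_arn},
             "Action": "sts:AssumeRole"},
        ],
    }
    return json.dumps(doc)

def apply_trust_policy(role_name, grafana_role_arn):
    """Replace the role's trust relationship in place; boto3 imported lazily."""
    import boto3
    iam = boto3.client("iam")
    iam.update_assume_role_policy(
        RoleName=role_name,
        PolicyDocument=build_trust_policy(grafana_role_arn),
    )
```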
1.8 Log in to Grafana server
Log in to Grafana Server for the first time:
- Open a browser and visit http://[Grafana server public IP]:3000
- The Grafana server listens on port 3000 by default.
You can obtain the EC2 public IP address from the CloudFormation Outputs tab, as shown in the following figure:
- On the login screen, enter username admin and password admin.
- Click Log In. After logging in, you will be prompted to change the password.
1.9 Add Timestream data source to Grafana server
Add Timestream data source
1.10 Configure Timestream data source in Grafana server
Copy the role ARN required for the configuration (from the CloudFormation Outputs tab). Default Region: us-east-1
IoT data storage
2.1 Create Timestream database iot
2.2 Create Timestream table pm25
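Steps 2.1 and 2.2 can also be performed with the Timestream write API instead of the console. A sketch; the retention values below (24 hours in the memory store, 7 days in the magnetic store) are illustrative only, so pick values that match your own data-lifecycle policy.

```python
def retention_properties(memory_hours, magnetic_days):
    """Retention settings: recent data in memory, history on magnetic store."""
    return {
        "MemoryStoreRetentionPeriodInHours": memory_hours,
        "MagneticStoreRetentionPeriodInDays": magnetic_days,
    }

def create_iot_table():
    """Create the iot database and pm25 table; boto3 imported lazily."""
    import boto3
    ts = boto3.client("timestream-write", region_name="us-east-1")
    ts.create_database(DatabaseName="iot")
    ts.create_table(
        DatabaseName="iot",
        TableName="pm25",
        RetentionProperties=retention_properties(24, 7),
    )
```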
IoT data import
3.1 Install Flink connector to Timestream
Install Java 8:
sudo yum install -y java-1.8.0-openjdk*
java -version
Install the debug info package, otherwise jmap will throw an exception:
sudo yum --enablerepo='*-debug*' install -y java-1.8.0-openjdk-debuginfo
Install maven
sudo wget https://repos.fedorapeople.org/repos/dchen/apache-maven/epel-apache-maven.repo -O /etc/yum.repos.d/epel-apache-maven.repo
sudo sed -i s/\$releasever/6/g /etc/yum.repos.d/epel-apache-maven.repo
sudo yum install -y apache-maven
mvn --version
Change the java version from 1.7 to 1.8:
sudo update-alternatives --config java
sudo update-alternatives --config javac
Install Apache Flink
The latest Apache Flink version that supports Kinesis Data Analytics is 1.8.2.
- Create flink folder
cd
mkdir flink
cd flink
- Download Apache Flink version 1.8.2 source code:
wget https://archive.apache.org/dist/flink/flink-1.8.2/flink-1.8.2-src.tgz
- Unzip the Apache Flink source code:
tar -xvf flink-1.8.2-src.tgz
- Go to the Apache Flink source code directory:
cd flink-1.8.2
- Compile and install Apache Flink (compilation is slow and takes about 20 minutes):
mvn clean install -Pinclude-kinesis -DskipTests
3.2 Create Kinesis Data Stream Timestreampm25Stream
aws kinesis create-stream --stream-name Timestreampm25Stream --shard-count 1
3.3 Run the Flink connector to stream data from Kinesis into Timestream:
cd
cd amazon-timestream-tools/integrations/flink_connector
mvn clean compile
Keep the following command running for the duration of the data collection:
mvn exec:java -Dexec.mainClass="com.amazonaws.services.kinesisanalytics.StreamingJob" -Dexec.args="--InputStreamName Timestreampm25Stream --Region us-east-1 --TimestreamDbName iot --TimestreamTableName pm25"
3.4 Prepare the PM2.5 demo data:
Connect to the EC2 bastion host.
Download the 5 demo data generation programs:
cd
mkdir pm25
cd pm25
- Download the data acquisition Python program on Github:
git clone https://github.com/bingbingliu18/Timestream-pm25
cd Timestream-pm25
- Run the 5 demo data generation programs (the Python program takes 2 arguments: --region, default us-east-1; --stream, default Timestreampm25Stream).
Keep the following command running for the duration of the data collection:
python3 pm25_new_kinisis_test.py
IoT data analytics
4.1 Log in to Grafana Server to create Dashboard and Panel
When creating a dashboard query, set the time zone to the local browser time zone:
Create a new Panel:
Select the data source to query, and paste the SQL statement for the analysis into the new panel:
4.2 Create Time Data Analysis Dashboard PM2.5 Analysis 1 (Save as PM2.5 Analysis 1)
4.2.1 Query the average value of PM2.5 at each monitoring site in Beijing
New Panel
SELECT CASE WHEN location = 'fengtai_xiaotun' THEN avg_pm25 ELSE NULL END AS fengtai_xiaotou,
CASE WHEN location = 'fengtai_yungang' THEN avg_pm25 ELSE NULL END AS fengtai_yungang,
CASE WHEN location = 'daxing' THEN avg_pm25 ELSE NULL END AS daxing,
CASE WHEN location = 'wanshou' THEN avg_pm25 ELSE NULL END AS wanshou,
CASE WHEN location = 'gucheng' THEN avg_pm25 ELSE NULL END AS gucheng,
CASE WHEN location = 'tiantan' THEN avg_pm25 ELSE NULL END AS tiantan,
CASE WHEN location = 'yanshan' THEN avg_pm25 ELSE NULL END AS yanshan,
CASE WHEN location = 'miyun' THEN avg_pm25 ELSE NULL END AS miyun,
CASE WHEN location = 'changping' THEN avg_pm25 ELSE NULL END AS changping,
CASE WHEN location = 'aoti' THEN avg_pm25 ELSE NULL END AS aoti,
CASE WHEN location = 'mengtougou' THEN avg_pm25 ELSE NULL END AS mentougou,
CASE WHEN location = 'huairou' THEN avg_pm25 ELSE NULL END AS huairou,
CASE WHEN location = 'haidian' THEN avg_pm25 ELSE NULL END AS haidian,
CASE WHEN location = 'nongzhan' THEN avg_pm25 ELSE NULL END AS nongzhan,
CASE WHEN location = 'tongzhou' THEN avg_pm25 ELSE NULL END AS tongzhou,
CASE WHEN location = 'dingling' THEN avg_pm25 ELSE NULL END AS dingling,
CASE WHEN location = 'yanqing' THEN avg_pm25 ELSE NULL END AS yanqing,
CASE WHEN location = 'guanyuan' THEN avg_pm25 ELSE NULL END AS guanyuan,
CASE WHEN location = 'dongsi' THEN avg_pm25 ELSE NULL END AS dongsi,
CASE WHEN location = 'shunyi' THEN avg_pm25 ELSE NULL END AS shunyi
FROM
(SELECT location, round(avg(measure_value::bigint),0) as avg_pm25
FROM "iot"."pm25"
where measure_name='pm2.5'
and city='Beijing'
and time >= ago(30s)
group by location,bin(time,30s)
order by avg_pm25 desc)
For the graph type, select Gauge.
Save Panel as Beijing PM2.5 analysis
Edit Panel Title: Beijing PM2.5 analysis
Save Dashboard PM2.5 analysis 1:
4.2.2 Query the average value of PM2.5 at each monitoring site in Shanghai in one day
New Panel
SELECT CASE WHEN location = 'songjiang' THEN avg_pm25 ELSE NULL END AS songjiang,
CASE WHEN location = 'fengxian' THEN avg_pm25 ELSE NULL END AS fengxian,
CASE WHEN location = 'no 15 factory' THEN avg_pm25 ELSE NULL END AS No15_factory,
CASE WHEN location = 'xujing' THEN avg_pm25 ELSE NULL END AS xujing,
CASE WHEN location = 'pujiang' THEN avg_pm25 ELSE NULL END AS pujiang,
CASE WHEN location = 'putuo' THEN avg_pm25 ELSE NULL END AS putuo,
CASE WHEN location = 'shangshida' THEN avg_pm25 ELSE NULL END AS shangshida,
CASE WHEN location = 'jingan' THEN avg_pm25 ELSE NULL END AS jingan,
CASE WHEN location = 'xianxia' THEN avg_pm25 ELSE NULL END AS xianxia,
CASE WHEN location = 'hongkou' THEN avg_pm25 ELSE NULL END AS hongkou,
CASE WHEN location = 'jiading' THEN avg_pm25 ELSE NULL END AS jiading,
CASE WHEN location = 'zhangjiang' THEN avg_pm25 ELSE NULL END AS zhangjiang,
CASE WHEN location = 'miaohang' THEN avg_pm25 ELSE NULL END AS miaohang,
CASE WHEN location = 'yangpu' THEN avg_pm25 ELSE NULL END AS yangpu,
CASE WHEN location = 'huinan' THEN avg_pm25 ELSE NULL END AS huinan,
CASE WHEN location = 'chongming' THEN avg_pm25 ELSE NULL END AS chongming
from(
SELECT location, round(avg(measure_value::bigint),0) as avg_pm25
FROM "iot"."pm25"
where measure_name='pm2.5'
and city='Shanghai'
and time >= ago(30s)
group by location,bin(time,30s)
order by avg_pm25 desc)
Save Panel as Shanghai PM2.5 analysis
Edit Panel Title: Shanghai PM2.5 analysis
Save Dashboard PM2.5 analysis 1
4.2.3 Query the average value of PM2.5 at each monitoring site in Guangzhou
New Panel
SELECT CASE WHEN location = 'panyu' THEN avg_pm25 ELSE NULL END AS panyu,
CASE WHEN location = 'commercial school' THEN avg_pm25 ELSE NULL END AS commercial_school,
CASE WHEN location = 'No 5 middle school' THEN avg_pm25 ELSE NULL END AS No_5_middle_school,
CASE WHEN location = 'guangzhou monitor station' THEN avg_pm25 ELSE NULL END AS Guangzhou_monitor_station,
CASE WHEN location = 'nansha street' THEN avg_pm25 ELSE NULL END AS Nansha_street,
CASE WHEN location = 'No 86 middle school' THEN avg_pm25 ELSE NULL END AS No_86_middle_school,
CASE WHEN location = 'luhu' THEN avg_pm25 ELSE NULL END AS luhu,
CASE WHEN location = 'nansha' THEN avg_pm25 ELSE NULL END AS nansha,
CASE WHEN location = 'tiyu west' THEN avg_pm25 ELSE NULL END AS tiyu_west,
CASE WHEN location = 'jiulong town' THEN avg_pm25 ELSE NULL END AS jiulong_town,
CASE WHEN location = 'huangpu' THEN avg_pm25 ELSE NULL END AS Huangpu,
CASE WHEN location = 'baiyun' THEN avg_pm25 ELSE NULL END AS Baiyun,
CASE WHEN location = 'maofeng mountain' THEN avg_pm25 ELSE NULL END AS Maofeng_mountain,
CASE WHEN location = 'chong hua' THEN avg_pm25 ELSE NULL END AS Chonghua,
CASE WHEN location = 'huadu' THEN avg_pm25 ELSE NULL END AS huadu
from(
SELECT location, round(avg(measure_value::bigint),0) as avg_pm25
FROM "iot"."pm25"
where measure_name='pm2.5'
and city='Guangzhou'
and time >= ago(30s)
group by location,bin(time,30s)
order by avg_pm25 desc)
Save Panel as Guangzhou PM2.5 analysis
Edit Panel Title: Guangzhou PM2.5 analysis
Save Dashboard PM2.5 analysis 1
4.2.4 Query the average value of PM2.5 at each monitoring site in Shenzhen
New Panel
SELECT CASE WHEN location = 'huaqiao city' THEN avg_pm25 ELSE NULL END AS Huaqiao_city,
CASE WHEN location = 'xixiang' THEN avg_pm25 ELSE NULL END AS xixiang,
CASE WHEN location = 'guanlan' THEN avg_pm25 ELSE NULL END AS guanlan,
CASE WHEN location = 'longgang' THEN avg_pm25 ELSE NULL END AS Longgang,
CASE WHEN location = 'honghu' THEN avg_pm25 ELSE NULL END AS Honghu,
CASE WHEN location = 'pingshan' THEN avg_pm25 ELSE NULL END AS Pingshan,
CASE WHEN location = 'henggang' THEN avg_pm25 ELSE NULL END AS Henggang,
CASE WHEN location = 'minzhi' THEN avg_pm25 ELSE NULL END AS Minzhi,
CASE WHEN location = 'lianhua' THEN avg_pm25 ELSE NULL END AS Lianhua,
CASE WHEN location = 'yantian' THEN avg_pm25 ELSE NULL END AS Yantian,
CASE WHEN location = 'nanou' THEN avg_pm25 ELSE NULL END AS Nanou,
CASE WHEN location = 'meisha' THEN avg_pm25 ELSE NULL END AS Meisha
from(
SELECT location, round(avg(measure_value::bigint),0) as avg_pm25
FROM "iot"."pm25"
where measure_name='pm2.5'
and city='Shenzhen'
and time >= ago(30s)
group by location,bin(time,30s)
order by avg_pm25 desc)
Save Panel as Shenzhen PM2.5 analysis
Edit Panel Title: Shenzhen PM2.5 analysis
Save Dashboard PM2.5 analysis 1
4.2.5 Time series analysis of Shenzhen Overseas Chinese Town (PM2.5 analysis in the last 5 minutes)
New Panel
select location, CREATE_TIME_SERIES(time, measure_value::bigint) as PM25 FROM iot.pm25
where measure_name='pm2.5'
and location='huaqiao city'
and time >= ago(5m)
GROUP BY location
For the graph type, select Lines and Points:
Save Panel as Shen Zhen Huaqiao City PM2.5 analysis
Edit Panel Title: Analysis of PM2.5 in the last 5 minutes of Shenzhen Overseas Chinese Town
Save Dashboard PM2.5 analysis 1
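The same panel query can be run outside Grafana through the Timestream query API, which is handy for checking what a panel should show. A sketch; the query-building helper is illustrative and simply reproduces the SQL above.

```python
def pm25_series_query(location, window="5m"):
    """Build the CREATE_TIME_SERIES query used by the panel above."""
    return (
        f"SELECT location, CREATE_TIME_SERIES(time, measure_value::bigint) AS PM25 "
        f'FROM "iot"."pm25" '
        f"WHERE measure_name = 'pm2.5' "
        f"AND location = '{location}' "
        f"AND time >= ago({window}) "
        f"GROUP BY location"
    )

def run_query(query_string):
    """Yield result rows, handling pagination; boto3 imported lazily."""
    import boto3
    client = boto3.client("timestream-query", region_name="us-east-1")
    for page in client.get_paginator("query").paginate(QueryString=query_string):
        for row in page["Rows"]:
            yield row

# Example usage:
#   for row in run_query(pm25_series_query("huaqiao city")):
#       print(row)
```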
4.2.6 Find out the average PM2.5 value at 30-second intervals in OCT Shenzhen in the past 2 hours (using linear interpolation to fill in missing values)
New Panel
WITH binned_timeseries AS (
SELECT location, BIN(time, 30s) AS binned_timestamp, ROUND(AVG(measure_value::bigint), 2) AS avg_PM25
FROM "iot".pm25
WHERE measure_name = 'pm2.5'
AND location='huaqiao city'
AND time > ago(2h)
GROUP BY location, BIN(time, 30s)
), interpolated_timeseries AS (
SELECT location,
INTERPOLATE_LINEAR(
CREATE_TIME_SERIES(binned_timestamp, avg_PM25),
SEQUENCE(min(binned_timestamp), max(binned_timestamp), 30s)) AS interpolated_avg_PM25
FROM binned_timeseries
GROUP BY location
)
SELECT time, ROUND(value, 2) AS interpolated_avg_PM25
FROM interpolated_timeseries
CROSS JOIN UNNEST(interpolated_avg_PM25)
For the graph type, select Lines:
Save Panel as Shen Zhen Huaqiao City PM2.5 analysis 1
Edit Panel Title: Average PM2.5 value in Shenzhen OCT in the past 2 hours (using linear interpolation to fill in missing values)
Save Dashboard PM2.5 analysis 1
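INTERPOLATE_LINEAR in the query above fills 30-second bins that received no readings by drawing a straight line between the neighbouring bins. A pure-Python mirror of that behaviour can be useful for sanity-checking panel output (a sketch only; Timestream's exact edge-case handling may differ):

```python
def interpolate_linear(points, step):
    """Linearly interpolate sorted (timestamp, value) pairs onto a regular
    grid, mirroring INTERPOLATE_LINEAR over SEQUENCE(min, max, step).
    `points` needs at least two entries with strictly increasing timestamps.
    """
    out = []
    i = 0
    t = points[0][0]
    while t <= points[-1][0]:
        # Advance to the segment [points[i], points[i+1]] that contains t.
        while points[i + 1][0] < t:
            i += 1
        (t0, v0), (t1, v1) = points[i], points[i + 1]
        out.append((t, v0 + (v1 - v0) * (t - t0) / (t1 - t0)))
        t += step
    return out
```

For example, two bins at t=0 (value 10) and t=60 (value 40) with a 30-second step yield an interpolated value of 25 for the missing middle bin.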
4.2.7 PM2.5 average ranking of all cities in the past 5 minutes (linear interpolation)
New Panel
SELECT CASE WHEN city = 'Shanghai' THEN inter_avg_PM25 ELSE NULL END AS Shanghai,
CASE WHEN city = 'Beijing' THEN inter_avg_PM25 ELSE NULL END AS Beijing,
CASE WHEN city = 'Guangzhou' THEN inter_avg_PM25 ELSE NULL END AS Guangzhou,
CASE WHEN city = 'Shenzhen' THEN inter_avg_PM25 ELSE NULL END AS Shenzhen,
CASE WHEN city = 'Hangzhou' THEN inter_avg_PM25 ELSE NULL END AS Hangzhou,
CASE WHEN city = 'Nanjing' THEN inter_avg_PM25 ELSE NULL END AS Nanjing,
CASE WHEN city = 'Chengdu' THEN inter_avg_PM25 ELSE NULL END AS Chengdu,
CASE WHEN city = 'Chongqing' THEN inter_avg_PM25 ELSE NULL END AS Chongqing,
CASE WHEN city = 'Tianjin' THEN inter_avg_PM25 ELSE NULL END AS Tianjin,
CASE WHEN city = 'Shenyang' THEN inter_avg_PM25 ELSE NULL END AS Shenyang,
CASE WHEN city = 'Sanya' THEN inter_avg_PM25 ELSE NULL END AS Sanya,
CASE WHEN city = 'Lasa' THEN inter_avg_PM25 ELSE NULL END AS Lasa
from(
WITH binned_timeseries AS (
SELECT city,location, BIN(time, 30s) AS binned_timestamp, ROUND(AVG(measure_value::bigint), 2) AS avg_PM25
FROM "iot".pm25
WHERE measure_name = 'pm2.5'
AND time > ago(5m)
GROUP BY city,location, BIN(time, 30s)
), interpolated_timeseries AS (
SELECT city,location,
INTERPOLATE_LINEAR(
CREATE_TIME_SERIES(binned_timestamp, avg_PM25),
SEQUENCE(min(binned_timestamp), max(binned_timestamp), 30s)) AS interpolated_avg_PM25
FROM binned_timeseries
GROUP BY city,location
), all_location_interpolated as (
SELECT city,location,time, ROUND(value, 2) AS interpolated_avg_PM25
FROM interpolated_timeseries
CROSS JOIN UNNEST(interpolated_avg_PM25))
select city,avg(interpolated_avg_PM25) AS inter_avg_PM25
from all_location_interpolated
group by city
order by avg(interpolated_avg_PM25) desc)
Select the panel graph type:
Save Panel as all city analysis 1
Edit Panel Title: Average PM2.5 for all cities in the past 5 minutes
Save Dashboard PM2.5 analysis 1
4.2.8 The ten highest PM2.5 collection points in the past 5 minutes (linear interpolation)
New Panel
WITH binned_timeseries AS (
SELECT city,location, BIN(time, 30s) AS binned_timestamp, ROUND(AVG(measure_value::bigint), 2) AS avg_PM25
FROM "iot".pm25
WHERE measure_name = 'pm2.5'
AND time > ago(5m)
GROUP BY city,location, BIN(time, 30s)
), interpolated_timeseries AS (
SELECT city,location,
INTERPOLATE_LINEAR(
CREATE_TIME_SERIES(binned_timestamp, avg_PM25),
SEQUENCE(min(binned_timestamp), max(binned_timestamp), 30s))
AS interpolated_avg_PM25
FROM binned_timeseries
GROUP BY city,location
), interpolated_cross_join as (
SELECT city,location,time, ROUND(value, 2) AS interpolated_avg_PM25
FROM interpolated_timeseries
CROSS JOIN UNNEST(interpolated_avg_PM25))
select city,location, avg(interpolated_avg_PM25) as avg_PM25_loc
from interpolated_cross_join
group by city,location
order by avg_PM25_loc desc
limit 10
For the graph type, select Table.
Save Panel as all city analysis 2
Edit Panel Title: Top ten collection points with the highest PM2.5 in the past 5 minutes (linear interpolation)
Save Dashboard PM2.5 analysis 1
4.2.9 The ten collection points with the lowest PM2.5 in the past 5 minutes (linear interpolation)
New Panel
WITH binned_timeseries AS (
SELECT city,location, BIN(time, 30s) AS binned_timestamp, ROUND(AVG(measure_value::bigint), 2) AS avg_PM25
FROM "iot".pm25
WHERE measure_name = 'pm2.5'
AND time > ago(5m)
GROUP BY city,location, BIN(time, 30s)
), interpolated_timeseries AS (
SELECT city,location,
INTERPOLATE_LINEAR(
CREATE_TIME_SERIES(binned_timestamp, avg_PM25),
SEQUENCE(min(binned_timestamp), max(binned_timestamp), 30s))
AS interpolated_avg_PM25
FROM binned_timeseries
GROUP BY city,location
), interpolated_cross_join as (
SELECT city,location,time, ROUND(value, 2) AS interpolated_avg_PM25
FROM interpolated_timeseries
CROSS JOIN UNNEST(interpolated_avg_PM25))
select city,location, avg(interpolated_avg_PM25) as avg_PM25_loc
from interpolated_cross_join
group by city,location
order by avg_PM25_loc asc
limit 10
For the graph type, select Table.
Save Panel as all city analysis 3
Edit Panel Title: The ten collection points with the lowest PM2.5 in the past 5 minutes (linear interpolation)
Save Dashboard PM2.5 analysis 1
Set the dashboard to refresh every 5 seconds:
This post walked through the real-time collection, storage, and analysis of time-series data with Timestream, the Kinesis Data Streams managed service, and Grafana (using a PM2.5 scenario as the example), covering the deployment architecture, environment setup, data collection, data storage, and analysis. If you have similar IoT time-series storage and analysis requirements, we hope it helps you manage massive IoT time-series data efficiently, mine the patterns and value hidden in that data, and support your business development.
Appendix
Amazon Timestream Developer Guide
Amazon Timestream developer samples
Amazon Timestream and Grafana integration example
Try the Amazon Web Services database products
About the author
Liu Bingbing
Database solutions architect at Amazon Web Services, responsible for consulting on and designing database solutions on Amazon Web Services, and committed to the research and promotion of big data technology. Before joining Amazon Web Services, he worked at Oracle for many years and has deep experience in database cloud planning and design, operations and tuning, DR solutions, big data and data warehousing, and enterprise applications.