


Amazon Timestream is a fast, scalable, serverless time series database service suited to IoT and operational applications. It can easily store and analyze trillions of events per day, at up to 1,000 times the speed of a relational database and as little as one-tenth the cost. By keeping recent data in memory and moving historical data to a cost-optimized storage tier according to user-defined policies, Amazon Timestream saves customers the time and expense of managing the life cycle of time series data. Its purpose-built query engine can access and analyze recent and historical data together, without requiring the query to specify whether the data resides in memory or in the cost-optimized tier. Amazon Timestream has built-in time series analysis functions that can identify trends and patterns in data in near real time. Because the service is serverless and scales automatically to adjust capacity and performance, there is no underlying infrastructure to manage and you can focus on building applications.

This article introduces real-time collection, storage, and analysis of time series data using the Timestream and Kinesis Data Streams managed services together with the open source Grafana and Flink connector, taking a PM2.5 monitoring scenario as an example. It covers the deployment architecture, environment setup, data collection, data storage, and data analysis. When you have similar IoT time series data storage and analysis needs, I hope it offers inspiration and helps your business grow.

Architecture

Amazon Timestream can use built-in analysis functions (such as smoothing, approximation, and interpolation) to quickly analyze time series data generated by IoT applications. For example, smart home device manufacturers can use Amazon Timestream to collect motion or temperature data from device sensors, perform interpolation to identify the time frame of no motion, and remind consumers to take measures (such as reducing heat) to save energy.

This article uses an IoT scenario (PM2.5 monitoring) to implement real-time PM2.5 data collection, time series data storage, and real-time analysis. The architecture consists of three main parts:

  • Real-time data collection: a Python data collection program, combined with Kinesis Data Streams and the Kinesis Data Analytics for Apache Flink connector, simulates PM2.5 monitoring devices and collects their data into Timestream in real time.
  • Time series data storage: Amazon Timestream stores the time series data. By setting retention periods for the memory store and the magnetic store (the cost-optimized tier), recent data is kept in memory while historical data is moved to the cost-optimized storage tier according to user-defined policies.
  • Real-time data analysis: Grafana (with the Timestream for Grafana plugin installed) accesses Timestream data in real time. Grafana's rich charting, combined with Amazon Timestream's built-in time series analysis functions, enables near real-time identification of trends and patterns in IoT data.
    The specific architecture diagram is as follows:

Deployment environment

1.1 Create the CloudFormation stack

Log in to your account (select us-east-1 as the region) and download the CloudFormation YAML file:
https://bigdata-bingbing.s3-ap-northeast-1.amazonaws.com/timestream-short-new.yaml

Keep the defaults for everything else and click the Create Stack button.

The CloudFormation stack is created successfully.

1.2 Connect to the newly created EC2 bastion host:

Modify the certificate file permissions, then connect:

chmod 0600 [path to downloaded .pem file]
ssh -i [path to downloaded .pem file] ec2-user@[bastionEndpoint]

Run aws configure:

aws configure

For Default region name, enter "us-east-1"; keep the defaults for the other settings.

1.3 Connect to the EC2 bastion host and install the required software

Set the time zone

TZ='Asia/Shanghai'; export TZ

Install Python 3

sudo yum install -y python3

Install pip for Python 3

sudo yum install -y python3-pip

Install boto3

sudo pip3 install boto3

Install numpy

sudo pip3 install numpy

Install git

sudo yum install -y git

1.4 Download the GitHub Timestream sample library

git clone https://github.com/awslabs/amazon-timestream-tools amazon-timestream-tools

1.5 Install Grafana Server

While connected to the EC2 bastion host:

sudo vi /etc/yum.repos.d/grafana.repo

For OSS releases, copy the following content into grafana.repo:

[grafana]
name=grafana
baseurl=https://packages.grafana.com/oss/rpm
repo_gpgcheck=1
enabled=1
gpgcheck=1
gpgkey=https://packages.grafana.com/gpg.key
sslverify=1
sslcacert=/etc/pki/tls/certs/ca-bundle.crt

Install Grafana server:

sudo yum install -y grafana

Start the Grafana server and check its status:

sudo service grafana-server start
sudo service grafana-server status

Configure the grafana server to automatically start when the operating system starts:

sudo /sbin/chkconfig --add grafana-server

1.6 Install the Timestream plugin

sudo grafana-cli plugins install grafana-timestream-datasource

Restart Grafana:

sudo service grafana-server restart

1.7 Configure the IAM role that Grafana uses to access the Timestream service

Get the IAM role name:

In the IAM service, select the role to be modified. The role name is:

timestream-iot-grafanaEC2rolelabview-us-east-1

Modify role trust relationship:

Select the entire policy document and replace it with the following content:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid":"",
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    },
    {
      "Sid":"",
      "Effect": "Allow",
      "Principal": {
        "AWS": "[replace with the role ARN from the CloudFormation output]"
      },
      "Action": "sts:AssumeRole"
    } 
  ]
}

The modified trust relationship:

1.8 Log in to Grafana server

Log in to Grafana Server for the first time:

  1. Open a browser and visit http://[Grafana server public ip]:3000 (the public IP is shown in the CloudFormation output; the default Grafana Server listening port is 3000).
  2. On the login screen, enter admin as both the username and the password.
  3. Click Log In. After logging in successfully, you will be prompted to change the password.

1.9 Add Timestream data source

Add Timestream data source

1.10 Configure Timestream data source

Copy the ARN information required for the configuration from the CloudFormation output tab. Default Region: us-east-1

IoT data storage

2.1 Create Timestream database iot

2.2 Create Timestream table pm25
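Sections 2.1 and 2.2 create the database and table in the console, but the same resources can also be created programmatically. Below is a minimal sketch using the boto3 timestream-write client; the retention values (24 hours in the memory store, 365 days in the magnetic store) are illustrative assumptions, not values mandated by this walkthrough.

```python
# Sketch: create the Timestream database "iot" and table "pm25" with boto3.
# Retention values below are assumptions for illustration.

def retention_properties(memory_hours: int, magnetic_days: int) -> dict:
    """Retention settings: recent data stays in the memory store, then
    moves to the cost-optimized magnetic store."""
    return {
        "MemoryStoreRetentionPeriodInHours": memory_hours,
        "MagneticStoreRetentionPeriodInDays": magnetic_days,
    }

def create_iot_resources(memory_hours: int = 24, magnetic_days: int = 365) -> None:
    import boto3  # imported lazily; requires configured AWS credentials
    client = boto3.client("timestream-write", region_name="us-east-1")
    client.create_database(DatabaseName="iot")
    client.create_table(
        DatabaseName="iot",
        TableName="pm25",
        RetentionProperties=retention_properties(memory_hours, magnetic_days),
    )
```

Calling create_iot_resources() with credentials configured (see 1.2) produces the same database and table as the console steps above.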

IoT data import

3.1 Install Flink connector to Timestream

Install Java 8:

sudo yum install -y java-1.8.0-openjdk*

java -version

Install the debug info package; otherwise jmap will throw an exception:

sudo yum  --enablerepo='*-debug*' install -y java-1.8.0-openjdk-debuginfo

Install maven

sudo wget https://repos.fedorapeople.org/repos/dchen/apache-maven/epel-apache-maven.repo -O /etc/yum.repos.d/epel-apache-maven.repo 
sudo sed -i s/\$releasever/6/g /etc/yum.repos.d/epel-apache-maven.repo 
sudo yum install -y apache-maven 
mvn --version 

Switch the default Java version from 1.7 to 1.8:

sudo update-alternatives --config java

sudo update-alternatives --config javac

Install Apache Flink

The latest Apache Flink version that supports Kinesis Data Analytics is 1.8.2.

1. Create flink folder

cd

mkdir flink

cd flink

2. Download Apache Flink version 1.8.2 source code:

wget https://archive.apache.org/dist/flink/flink-1.8.2/flink-1.8.2-src.tgz

3. Unzip the Apache Flink source code:

tar -xvf flink-1.8.2-src.tgz

4. Enter the Apache Flink source code directory:

cd flink-1.8.2

5. Compile and install Apache Flink (the compilation takes about 20 minutes):

mvn clean install -Pinclude-kinesis -DskipTests

3.2 Create Kinesis Data Stream Timestreampm25Stream

aws kinesis create-stream --stream-name Timestreampm25Stream --shard-count 1
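The CLI call above can equally be made from Python. A sketch with boto3 follows; note that create_stream is asynchronous, so the helper waits for the stream to become ACTIVE before returning.

```python
# Sketch: create the Kinesis data stream programmatically, mirroring the
# CLI command above, and block until the stream is ACTIVE.

def stream_params(name: str = "Timestreampm25Stream", shards: int = 1) -> dict:
    """Parameters for kinesis.create_stream (one shard is enough here)."""
    if shards < 1:
        raise ValueError("a stream needs at least one shard")
    return {"StreamName": name, "ShardCount": shards}

def create_stream_and_wait() -> None:
    import boto3  # lazy import; requires configured AWS credentials
    client = boto3.client("kinesis", region_name="us-east-1")
    params = stream_params()
    client.create_stream(**params)
    # create_stream returns before the stream is usable; wait for ACTIVE
    client.get_waiter("stream_exists").wait(StreamName=params["StreamName"])
```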

3.3 Run the Flink connector to stream data from Kinesis into Timestream:

cd
cd amazon-timestream-tools/integrations/flink_connector
mvn clean compile

Keep the following command running for the duration of data collection:

mvn exec:java -Dexec.mainClass="com.amazonaws.services.kinesisanalytics.StreamingJob" -Dexec.args="--InputStreamName Timestreampm25Stream --Region us-east-1 --TimestreamDbName iot --TimestreamTableName pm25"

3.4 Prepare PM2.5 demo data:

While connected to the EC2 bastion host:

1. Download the PM2.5 demo data generation program:

cd
mkdir pm25
cd pm25
wget https://bigdata-bingbing.s3-ap-northeast-1.amazonaws.com/pm25_new_kinisis_test.py

2. Run the PM2.5 demo data generation program (the Python program accepts 2 parameters: --region, default us-east-1; --stream, default Timestreampm25Stream).

Keep the following command running for the duration of data collection:

python3 pm25_new_kinisis_test.py
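To make the data flow concrete, here is a minimal, hypothetical sketch of what such a generator does: build a PM2.5 reading as JSON and push it to the Kinesis stream. The exact record schema expected by the downstream Flink connector may differ; the field names below are assumptions for illustration, not the actual schema of pm25_new_kinisis_test.py.

```python
# Hypothetical sketch of the demo generator: one simulated PM2.5 reading
# serialized as JSON and sent to Kinesis. Field names are illustrative.
import json
import random
import time

def make_record(city: str, location: str, value: int) -> dict:
    """Build one simulated PM2.5 reading (field names are assumptions)."""
    return {
        "city": city,
        "location": location,
        "measure_name": "pm2.5",
        "measure_value": value,
        "time": int(time.time() * 1000),  # epoch milliseconds
    }

def send_reading(stream: str = "Timestreampm25Stream") -> None:
    import boto3  # lazy import; requires configured AWS credentials
    client = boto3.client("kinesis", region_name="us-east-1")
    record = make_record("Beijing", "dongsi", random.randint(10, 300))
    client.put_record(
        StreamName=stream,
        Data=json.dumps(record).encode("utf-8"),
        PartitionKey=record["location"],  # spread locations across shards
    )
```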

IoT data analysis

4.1 Log in to Grafana Server to create dashboard and Panel

When creating a Dashboard query, please set the time zone to the local browser time zone:

Create a new Panel:

Select the data source to access, and paste the SQL query to be executed into the new panel:

4.2 Create the time series analysis dashboard PM2.5 Analysis 1 (save as PM2.5 Analysis 1)

4.2.1 Query the average PM2.5 value of each monitoring station in Beijing

New Panel

SELECT CASE WHEN location = 'fengtai_xiaotun' THEN avg_pm25 ELSE NULL END AS fengtai_xiaotun,
CASE WHEN location = 'fengtai_yungang' THEN avg_pm25 ELSE NULL END AS fengtai_yungang,
CASE WHEN location = 'daxing' THEN avg_pm25 ELSE NULL END AS daxing,
CASE WHEN location = 'wanshou' THEN avg_pm25 ELSE NULL END AS wanshou,
CASE WHEN location = 'gucheng' THEN avg_pm25 ELSE NULL END AS gucheng,
CASE WHEN location = 'tiantan' THEN avg_pm25 ELSE NULL END AS tiantan,
CASE WHEN location = 'yanshan' THEN avg_pm25 ELSE NULL END AS yanshan,
CASE WHEN location = 'miyun' THEN avg_pm25 ELSE NULL END AS miyun,
CASE WHEN location = 'changping' THEN avg_pm25 ELSE NULL END AS changping,
CASE WHEN location = 'aoti' THEN avg_pm25 ELSE NULL END AS aoti,
CASE WHEN location = 'mengtougou' THEN avg_pm25 ELSE NULL END AS mentougou,
CASE WHEN location = 'huairou' THEN avg_pm25 ELSE NULL END AS huairou,
CASE WHEN location = 'haidian' THEN avg_pm25 ELSE NULL END AS haidian,
CASE WHEN location = 'nongzhan' THEN avg_pm25 ELSE NULL END AS nongzhan,
CASE WHEN location = 'tongzhou' THEN avg_pm25 ELSE NULL END AS tongzhou,
CASE WHEN location = 'dingling' THEN avg_pm25 ELSE NULL END AS dingling,
CASE WHEN location = 'yanqing' THEN avg_pm25 ELSE NULL END AS yanqing,
CASE WHEN location = 'guanyuan' THEN avg_pm25 ELSE NULL END AS guanyuan,
CASE WHEN location = 'dongsi' THEN avg_pm25 ELSE NULL END AS dongsi,
CASE WHEN location = 'shunyi' THEN avg_pm25 ELSE NULL END AS shunyi
FROM
(SELECT location, round(avg(measure_value::bigint),0) as avg_pm25
FROM "iot"."pm25" 
where measure_name='pm2.5' 
and city='Beijing'
and time >= ago(30s)
group by location,bin(time,30s)
order by avg_pm25 desc)

For the graph display type, select Gauge

Save Panel as Beijing PM2.5 analysis

Edit Panel Title:Beijing PM2.5 analysis

Save Dashboard PM2.5 analysis 1:

4.2.2 Query the average PM2.5 value of each monitoring station in Shanghai

New Panel

SELECT CASE WHEN location = 'songjiang' THEN avg_pm25 ELSE NULL END AS songjiang,
CASE WHEN location = 'fengxian' THEN avg_pm25 ELSE NULL END AS fengxian, 
CASE WHEN location = 'no 15 factory' THEN avg_pm25 ELSE NULL END AS No15_factory, 
CASE WHEN location = 'xujing' THEN avg_pm25 ELSE NULL END AS xujing,
 CASE WHEN location = 'pujiang' THEN avg_pm25 ELSE NULL END AS pujiang, 
 CASE WHEN location = 'putuo' THEN avg_pm25 ELSE NULL END AS putuo, 
 CASE WHEN location = 'shangshida' THEN avg_pm25 ELSE NULL END AS shangshida,
CASE WHEN location = 'jingan' THEN avg_pm25 ELSE NULL END AS jingan, 
CASE WHEN location = 'xianxia' THEN avg_pm25 ELSE NULL END AS xianxia, 
CASE WHEN location = 'hongkou' THEN avg_pm25 ELSE NULL END AS hongkou, 
CASE WHEN location = 'jiading' THEN avg_pm25 ELSE NULL END AS jiading, 
CASE WHEN location = 'zhangjiang' THEN avg_pm25 ELSE NULL END AS zhangjiang, 
CASE WHEN location = 'miaohang' THEN avg_pm25 ELSE NULL END AS miaohang, 
CASE WHEN location = 'yangpu' THEN avg_pm25 ELSE NULL END AS yangpu, 
CASE WHEN location = 'huinan' THEN avg_pm25 ELSE NULL END AS huinan, 
CASE WHEN location = 'chongming' THEN avg_pm25 ELSE NULL END AS chongming
From(
SELECT location, round(avg(measure_value::bigint),0) as avg_pm25
FROM "iot"."pm25" 
where measure_name='pm2.5' 
and city='Shanghai'
and time >= ago(30s)
group by location,bin(time,30s)
order by avg_pm25 desc)

Save Panel as Shanghai PM2.5 analysis

Edit Panel Title:Shanghai PM2.5 analysis

Save Dashboard PM2.5 analysis 1

4.2.3 Query the PM2.5 average value of each monitoring station in Guangzhou

New Panel

SELECT CASE WHEN location = 'panyu' THEN avg_pm25 ELSE NULL END AS panyu,
CASE WHEN location = 'commercial school' THEN avg_pm25 ELSE NULL END AS commercial_school, 
CASE WHEN location = 'No 5 middle school' THEN avg_pm25 ELSE NULL END AS No_5_middle_school,
CASE WHEN location = 'guangzhou monitor station' THEN avg_pm25 ELSE NULL END AS Guangzhou_monitor_station, 
CASE WHEN location = 'nansha street' THEN avg_pm25 ELSE NULL END AS Nansha_street, 
CASE WHEN location = 'No 86 middle school' THEN avg_pm25 ELSE NULL END AS No_86_middle_school, 
CASE WHEN location = 'luhu' THEN avg_pm25 ELSE NULL END AS luhu, 
CASE WHEN location = 'nansha' THEN avg_pm25 ELSE NULL END AS nansha, 
CASE WHEN location = 'tiyu west' THEN avg_pm25 ELSE NULL END AS tiyu_west, 
CASE WHEN location = 'jiulong town' THEN avg_pm25 ELSE NULL END AS jiulong_town, 
CASE WHEN location = 'huangpu' THEN avg_pm25 ELSE NULL END AS Huangpu, 
CASE WHEN location = 'baiyun' THEN avg_pm25 ELSE NULL END AS Baiyun, 
CASE WHEN location = 'maofeng mountain' THEN avg_pm25 ELSE NULL END AS Maofeng_mountain, 
CASE WHEN location = 'chong hua' THEN avg_pm25 ELSE NULL END AS Chonghua, 
CASE WHEN location = 'huadu' THEN avg_pm25 ELSE NULL END AS huadu
from(
    SELECT location, round(avg(measure_value::bigint),0) as avg_pm25
FROM "iot"."pm25" 
where measure_name='pm2.5' 
and city='Guangzhou'
and time >= ago(30s)
group by location,bin(time,30s)
order by avg_pm25 desc)

Save Panel as Guangzhou PM2.5 analysis

Edit Panel Title:Guangzhou PM2.5 analysis

Save Dashboard PM2.5 analysis 1

4.2.4 Query the average PM2.5 value of each monitoring station in Shenzhen

New Panel

SELECT CASE WHEN location = 'huaqiao city' THEN avg_pm25 ELSE NULL END AS Huaqiao_city,
 CASE WHEN location = 'xixiang' THEN avg_pm25 ELSE NULL END AS xixiang,
CASE WHEN location = 'guanlan' THEN avg_pm25 ELSE NULL END AS guanlan,
CASE WHEN location = 'longgang' THEN avg_pm25 ELSE NULL END AS Longgang,
CASE WHEN location = 'honghu' THEN avg_pm25 ELSE NULL END AS Honghu,
CASE WHEN location = 'pingshan' THEN avg_pm25 ELSE NULL END AS Pingshan,
CASE WHEN location = 'henggang' THEN avg_pm25 ELSE NULL END AS Henggang,
CASE WHEN location = 'minzhi' THEN avg_pm25 ELSE NULL END AS Minzhi,
CASE WHEN location = 'lianhua' THEN avg_pm25 ELSE NULL END AS Lianhua,
CASE WHEN location = 'yantian' THEN avg_pm25 ELSE NULL END AS Yantian,
CASE WHEN location = 'nanou' THEN avg_pm25 ELSE NULL END AS Nanou,
CASE WHEN location = 'meisha' THEN avg_pm25 ELSE NULL END AS Meisha
From(
SELECT location, round(avg(measure_value::bigint),0) as avg_pm25
FROM "iot"."pm25" 
where measure_name='pm2.5' 
and city='Shenzhen'
and time >= ago(30s)
group by location,bin(time,30s)
order by avg_pm25 desc)

Save Panel as Shenzhen PM2.5 analysis

Edit Panel Title:Shenzhen PM2.5 analysis

Save Dashboard PM2.5 analysis 1

4.2.5 Shenzhen OCT Time Series Analysis (PM2.5 analysis in the last 5 minutes)

New Panel

select location, CREATE_TIME_SERIES(time, measure_value::bigint) as PM25 FROM iot.pm25
where measure_name='pm2.5' 
and location='huaqiao city'
and time >= ago(5m)
GROUP BY location

For the graph display, enable Lines and Points:

Save Panel as Shen Zhen Huaqiao City PM2.5 analysis

Edit Panel Title: PM2.5 analysis of Shenzhen Overseas Chinese Town in the last 5 minutes

Save Dashboard PM2.5 analysis 1

4.2.6 Find the average PM2.5 value of Shenzhen OCT in the past 2 hours at 30-second intervals (using linear interpolation to fill in missing values)

New Panel

WITH binned_timeseries AS (
    SELECT location, BIN(time, 30s) AS binned_timestamp, ROUND(AVG(measure_value::bigint), 2) AS avg_PM25
    FROM "iot".pm25
    WHERE measure_name = 'pm2.5'
        AND location='huaqiao city'
        AND time > ago(2h)
    GROUP BY location, BIN(time, 30s)
), interpolated_timeseries AS (
    SELECT location,
        INTERPOLATE_LINEAR(
            CREATE_TIME_SERIES(binned_timestamp, avg_PM25),
                SEQUENCE(min(binned_timestamp), max(binned_timestamp), 30s)) AS interpolated_avg_PM25
    FROM binned_timeseries
    GROUP BY location
)
SELECT time, ROUND(value, 2) AS interpolated_avg_PM25
FROM interpolated_timeseries
CROSS JOIN UNNEST(interpolated_avg_PM25)

For the graph display, enable Lines:

Save Panel as Shen Zhen Huaqiao City PM2.5 analysis 1

Edit Panel Title: Average PM2.5 value of Shenzhen OCT in the past 2 hours (using linear interpolation to fill in missing values)

Save Dashboard PM2.5 analysis 1
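The interpolation query in 4.2.6 can also be run outside Grafana. Below is a sketch using the boto3 timestream-query client; the SQL builder parameterizes the location and time window, and result parsing is simplified to scalar values.

```python
# Sketch: run the 4.2.6 interpolation query programmatically with the
# boto3 "timestream-query" client, parameterized on location and window.

def interpolation_sql(location: str, window: str = "2h", bin_size: str = "30s") -> str:
    """Rebuild the 4.2.6 query for an arbitrary location and time window."""
    return f"""
    WITH binned_timeseries AS (
        SELECT location, BIN(time, {bin_size}) AS binned_timestamp,
               ROUND(AVG(measure_value::bigint), 2) AS avg_PM25
        FROM "iot".pm25
        WHERE measure_name = 'pm2.5'
          AND location = '{location}'
          AND time > ago({window})
        GROUP BY location, BIN(time, {bin_size})
    ), interpolated_timeseries AS (
        SELECT location,
               INTERPOLATE_LINEAR(
                   CREATE_TIME_SERIES(binned_timestamp, avg_PM25),
                   SEQUENCE(min(binned_timestamp), max(binned_timestamp), {bin_size})
               ) AS interpolated_avg_PM25
        FROM binned_timeseries
        GROUP BY location
    )
    SELECT time, ROUND(value, 2) AS interpolated_avg_PM25
    FROM interpolated_timeseries
    CROSS JOIN UNNEST(interpolated_avg_PM25)
    """

def run_query(sql: str) -> list:
    import boto3  # lazy import; requires configured AWS credentials
    client = boto3.client("timestream-query", region_name="us-east-1")
    rows = []
    # the Query API is paginated; the paginator handles NextToken for us
    for page in client.get_paginator("query").paginate(QueryString=sql):
        for row in page["Rows"]:
            rows.append([d.get("ScalarValue") for d in row["Data"]])
    return rows
```

run_query(interpolation_sql("huaqiao city")) returns the same interpolated series that the Grafana panel charts.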

4.2.7 Ranking of the average PM2.5 of all cities in the past 5 minutes (linear interpolation)

New Panel

SELECT CASE WHEN city = 'Shanghai' THEN inter_avg_PM25 ELSE NULL END AS Shanghai,
CASE WHEN city = 'Beijing' THEN inter_avg_PM25 ELSE NULL END AS Beijing,
CASE WHEN city = 'Guangzhou' THEN inter_avg_PM25 ELSE NULL END AS Guangzhou,
CASE WHEN city = 'Shenzhen' THEN inter_avg_PM25 ELSE NULL END AS Shenzhen,
CASE WHEN city = 'Hangzhou' THEN inter_avg_PM25 ELSE NULL END AS Hangzhou,
CASE WHEN city = 'Nanjing' THEN inter_avg_PM25 ELSE NULL END AS Nanjing,
CASE WHEN city = 'Chengdu' THEN inter_avg_PM25 ELSE NULL END AS Chengdu,
CASE WHEN city = 'Chongqing' THEN inter_avg_PM25 ELSE NULL END AS Chongqing,
CASE WHEN city = 'Tianjin' THEN inter_avg_PM25 ELSE NULL END AS Tianjin,
CASE WHEN city = 'Shenyang' THEN inter_avg_PM25 ELSE NULL END AS Shenyang,
CASE WHEN city = 'Sanya' THEN inter_avg_PM25 ELSE NULL END AS Sanya,
CASE WHEN city = 'Lasa' THEN inter_avg_PM25 ELSE NULL END AS Lasa
from(
WITH binned_timeseries AS (
    SELECT city,location, BIN(time, 30s) AS binned_timestamp, ROUND(AVG(measure_value::bigint), 2) AS avg_PM25
    FROM "iot".pm25
    WHERE measure_name = 'pm2.5'
        AND time > ago(5m)
    GROUP BY city,location, BIN(time, 30s)
), interpolated_timeseries AS (
    SELECT city,location,
        INTERPOLATE_LINEAR(
            CREATE_TIME_SERIES(binned_timestamp, avg_PM25),
                SEQUENCE(min(binned_timestamp), max(binned_timestamp), 30s)) AS interpolated_avg_PM25
    FROM binned_timeseries
    GROUP BY city,location
), all_location_interpolated as (
SELECT city,location,time, ROUND(value, 2) AS interpolated_avg_PM25
FROM interpolated_timeseries
CROSS JOIN UNNEST(interpolated_avg_PM25))
select city,avg(interpolated_avg_PM25) AS inter_avg_PM25
from all_location_interpolated
group by city
order by avg(interpolated_avg_PM25) desc)

Select the panel graphics type:

Save Panel as all city analysis 1

Edit Panel Title: PM2.5 average of all cities in the past 5 minutes

Save Dashboard PM2.5 analysis 1

4.2.8 The ten collection points with the highest PM2.5 in the past 5 minutes (linear interpolation)

New Panel

WITH binned_timeseries AS (
    SELECT city,location, BIN(time, 30s) AS binned_timestamp, ROUND(AVG(measure_value::bigint), 2) AS avg_PM25
    FROM "iot".pm25
    WHERE measure_name = 'pm2.5'
        AND time > ago(5m)
    GROUP BY city,location, BIN(time, 30s)
), interpolated_timeseries AS (
    SELECT city,location,
        INTERPOLATE_LINEAR(
            CREATE_TIME_SERIES(binned_timestamp, avg_PM25),
                SEQUENCE(min(binned_timestamp), max(binned_timestamp), 30s)) 
                AS interpolated_avg_PM25
    FROM binned_timeseries
    GROUP BY city,location
), interpolated_cross_join as (
SELECT city,location,time, ROUND(value, 2) AS interpolated_avg_PM25
FROM interpolated_timeseries
CROSS JOIN UNNEST(interpolated_avg_PM25))
select city,location, avg(interpolated_avg_PM25) as avg_PM25_loc
from interpolated_cross_join
group by city,location
order by avg_PM25_loc desc
limit 10

Select Table

Save Panel as all city analysis 2

Edit Panel Title: Ten acquisition points with the highest PM2.5 in the past 5 minutes (linear interpolation)

Save Dashboard PM2.5 analysis 1

4.2.9 The ten collection points with the lowest PM2.5 in the past 5 minutes (linear interpolation)

New Panel

WITH binned_timeseries AS (
    SELECT city,location, BIN(time, 30s) AS binned_timestamp, ROUND(AVG(measure_value::bigint), 2) AS avg_PM25
    FROM "iot".pm25
    WHERE measure_name = 'pm2.5'
        AND time > ago(5m)
    GROUP BY city,location, BIN(time, 30s)
), interpolated_timeseries AS (
    SELECT city,location,
        INTERPOLATE_LINEAR(
            CREATE_TIME_SERIES(binned_timestamp, avg_PM25),
                SEQUENCE(min(binned_timestamp), max(binned_timestamp), 30s)) 
                AS interpolated_avg_PM25
    FROM binned_timeseries
    GROUP BY city,location
), interpolated_cross_join as (
SELECT city,location,time, ROUND(value, 2) AS interpolated_avg_PM25
FROM interpolated_timeseries
CROSS JOIN UNNEST(interpolated_avg_PM25))
select city,location, avg(interpolated_avg_PM25) as avg_PM25_loc
from interpolated_cross_join
group by city,location
order by avg_PM25_loc asc
limit 10

Select Table

Save Panel as all city analysis 3

Edit Panel Title: Ten collection points with the lowest PM2.5 in the past 5 minutes (linear interpolation)

Save Dashboard PM2.5 analysis 1

Set the dashboard to refresh every 5 seconds:

This blog has demonstrated real-time collection, storage, and analysis of time series data using the Timestream and Kinesis Data Streams managed services together with Grafana, taking a PM2.5 scenario as an example, and covering deployment architecture, environment setup, data collection, data storage, and analysis. When you have similar IoT time series data storage and analysis needs, I hope it inspires you to manage massive IoT time series data efficiently, uncover the laws, patterns, and value contained in the data, and help your business grow.

Appendix:

"Amazon Timestream Developer Guide"

https://docs.aws.amazon.com/zh_cn/timestream/latest/developerguide/what-is-timestream.html

"Amazon Timestream Development Program Example"

https://github.com/awslabs/amazon-timestream-tools/tree/master/sample_apps

"Amazon Timestream and Grafana Integration Example"

https://docs.aws.amazon.com/zh_cn/timestream/latest/developerguide/Grafana.html#Grafana.sample-app

Author of this article: Liu Bingbing

Amazon Cloud Technology Database Solution Architect, responsible for consulting and architecture design of database solutions based on Amazon Cloud Technology, and dedicated to the research and promotion of big data. Before joining Amazon Cloud Technology, he worked for Oracle for many years and has extensive experience in database cloud planning, design, operation and maintenance tuning, DR solutions, big data and data warehouses, and enterprise applications.

