Application and challenges of tens of billions of graph data in Kuaishou security intelligence

This article was first published on Nebula Graph's official WeChat account NebulaGraphCommunity . Follow it for more technical practice with large-scale graph databases.


[ Author introduction ]

  • Qi : Kuaishou Security - Mobile Security Group, mainly responsible for building the Kuaishou security intelligence platform
  • Ni Wen : Kuaishou Data Platform - Distributed Storage Group, mainly responsible for building the Kuaishou graph database
  • Jingyi : Kuaishou Data Platform - Distributed Storage Group, mainly responsible for building the Kuaishou graph database

[ company profile ]

Kuaishou is a world-leading content community and social platform that aims to help people discover what they need and showcase what they are good at through short videos, continually improving everyone's unique sense of happiness.

1. Why do you need a graph database

Traditional relational databases perform poorly on complex relational operations over data. As data volume and association depth grow, a relational database cannot compute results within an acceptable time.

Therefore, to better express the connections between data, enterprises need a database technology that treats relationship information as a first-class citizen and allows the data model to be flexibly extended. That technology is the graph database.

Compared with traditional relational databases, graph databases have the following two advantages:

First, a graph database can directly express the relationships between data.


As the graph model above shows, the goal of a graph database is to display these relationships in an intuitive way based on the graph model. Because its model expresses the relationships between things directly, a graph database is naturally interpretable.

Second, a graph database can efficiently process the relationships between data:

  • High performance : When processing relational data, traditional relational databases rely mainly on JOIN operations. As data volume and association depth grow, multi-table joins and foreign-key constraints introduce large additional overhead and cause serious performance problems. A graph database adapts its underlying data structures to the graph model, making data queries and analysis much faster.
  • Flexible : A graph database has a very flexible data model. Users can adjust the graph schema at any time as the business changes: adding or deleting vertices and edges, expanding or shrinking the graph model, and so on. Such frequent schema changes are well supported by a graph database.
  • Agile : The graph model is very intuitive and supports test-driven development, allowing functional and performance testing on each build. It meets today's popular agile development requirements and helps improve production and delivery efficiency.

Based on these two advantages, graph databases are in huge demand in fields such as financial anti-fraud, criminal investigation, social networks, knowledge graphs, data lineage, IT assets and operations, and threat intelligence.

Kuaishou security intelligence integrates full-chain security data from the mobile side, the PC web side, the cloud, alliances, and mini programs, forming a unified set of basic security capabilities that empower the company's businesses.

Since security intelligence is characterized by diverse data entities, complex association relationships, and rich data labels, a graph database is the most appropriate choice for it.

2. Why choose Nebula Graph

After collecting requirements and preliminary research, Kuaishou Security Intelligence finally chose Nebula Graph as the graph database for its production environment.

2.1 Requirements collection

For graph database selection, the main requirements cover two aspects: data writing and data querying.

  1. Data writing method: offline + online

    • Day-level offline batch imports must be supported: tens of billions of new records are written each day, and the association data generated that day must be written within hours
    • Real-time writes must be supported: Flink consumes data from Kafka and, after logical processing, writes directly to the graph database in real time, at a QPS on the order of 100,000 (10W)
  2. Data query: millisecond-level online real-time queries, at a QPS on the order of 50,000 (5W)

    • Vertex and edge attribute filtering and querying
    • Multi-hop relationship queries
  3. Some basic graph data analysis capabilities

    • Graph shortest path algorithm, etc.
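For reference, the shortest-path capability in the last item corresponds to a classic breadth-first search. Nebula Graph exposes this natively (e.g. via its FIND SHORTEST PATH statement), so the following is only an illustrative sketch of the semantics over an in-memory adjacency list; the vertex names are made up:

```python
from collections import deque

def shortest_path(adj, src, dst):
    """Breadth-first search: returns one shortest path from src to dst,
    or None if dst is unreachable. adj maps a vertex to its neighbors."""
    if src == dst:
        return [src]
    prev = {src: None}               # visited set doubling as a predecessor map
    queue = deque([src])
    while queue:
        v = queue.popleft()
        for nxt in adj.get(v, ()):
            if nxt not in prev:
                prev[nxt] = v
                if nxt == dst:       # reconstruct the path via predecessors
                    path = [dst]
                    while prev[path[-1]] is not None:
                        path.append(prev[path[-1]])
                    return path[::-1]
                queue.append(nxt)
    return None

# Made-up entities: a device linked to accounts, accounts linked to users
graph = {"D1": ["A1", "A2"], "A1": ["U1"], "A2": ["U2"], "U2": ["U3"]}
print(shortest_path(graph, "D1", "U3"))  # → ['D1', 'A2', 'U2', 'U3']
```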

In summary, the selected graph database must fit a big data architecture and provide three basic capabilities: real-time and offline data writing , basic online graph queries , and simple graph-based OLAP analysis . Its corresponding positioning is: an online, high-concurrency, low-latency OLTP graph query service plus simple OLAP graph query capability .

2.2 Selection

Based on the above deterministic requirements, in the selection of the graph database, we mainly considered the following points:

  • The supported data volume must be large enough, because enterprise-level graph data often reaches tens of billions or even hundreds of billions of records
  • The cluster must scale linearly, because machines need to be added online in the production environment without stopping the service
  • Query performance must reach the millisecond level to meet online service requirements, and must not degrade as the volume of graph data grows
  • It must connect easily with big data platforms such as HDFS and Spark, as a foundation for building a graph computing platform later

2.3 Features of Nebula Graph


  1. High performance: millisecond-level reads and writes
  2. Scalable: horizontally expandable, supporting ultra-large-scale graph storage
  3. Engine architecture: separation of storage and computing
  4. Graph data model: vertices and edges, with support for modeling properties on both
  5. Query language: nGQL, a SQL-like query language that is easy to learn and use and meets complex business needs
  6. Rich and complete data import and export tools
  7. As an open source graph database product, Nebula Graph has an active open source community
  8. Compared with JanusGraph and HugeGraph, Nebula Graph's query performance is greatly improved

Nebula Graph fits our usage scenarios and needs well, so we finally chose Nebula Graph as the graph database for our production environment.

3. Graphical Data Modeling of Security Intelligence

As shown in the figure below, from the perspective of intelligence, security confrontation and defense are layered; from bottom to top, the difficulty of confrontation gradually increases:


Previously, on each plane, attackers and defenders confronted each other in isolation. Now, with a graph database, the entity IDs at each level can be connected through association relationships to form a three-dimensional network. Through this network, the company can quickly grasp more comprehensive information about attackers' attack methods, cheating tools, and gang characteristics.

Therefore, modeling security data as graph-structured data upgrades recognition from the original planar level to a three-dimensional network level, helping the enterprise identify attacks and risks more clearly and accurately.

3.1 Basic graph structure

The main purpose of graph modeling of security intelligence is to judge the risk of any given dimension not only from that dimension's own state and attributes, but by expanding from the individual to the network level: through the graph-structured data relationships, the dimension's risk can be observed stereoscopically through upper and lower levels (heterogeneous graphs) and sibling levels (homogeneous graphs).

Take device risk as an example. A device is divided into four levels: the network layer, the device layer, the account layer, and the user layer, each expressed by its representative entity ID. Through the graph database, three-dimensional, multi-level risk recognition can be achieved for a device, which is very helpful for risk identification.


As shown in the figure above, this is the basic graph-structure modeling of security intelligence; together, these elements constitute a knowledge graph built on security intelligence.

3.2 Dynamic graph structure

On top of the basic graph structure, we must also consider that every association relationship is time-sensitive: an association may exist during time period A but no longer exist during time period B. Therefore, we want security intelligence to truly reflect, on the graph database, the relationships of objective reality across different time periods.

This means the data must be presented under different graph-structure models according to different query time intervals, which we call a dynamic graph structure.

A question arises in the design of the dynamic graph structure: for a queried time interval, which edges should be returned?


As shown in the figure above, when the query time interval is B, C, or D, this edge should be returned; when the query time interval is A or E, it should not.
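The rule illustrated above reduces to a standard interval-overlap test on the edge's validity window. A minimal sketch, assuming each edge carries start/end timestamps (as the create_time/update_time filters in the later query examples suggest):

```python
def edge_visible(edge_start, edge_end, query_start, query_end):
    """An edge whose association held during [edge_start, edge_end] is
    returned iff that validity window overlaps the query window."""
    return edge_start <= query_end and query_start <= edge_end

# An edge valid during [10, 20]; query intervals named as in the figure
assert not edge_visible(10, 20, 0, 5)    # A: ends before the edge exists
assert edge_visible(10, 20, 5, 12)       # B: overlaps the edge's start
assert edge_visible(10, 20, 12, 18)      # C: lies inside the validity window
assert edge_visible(10, 20, 18, 25)      # D: overlaps the edge's end
assert not edge_visible(10, 20, 25, 30)  # E: starts after the edge ended
```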

3.3 Weight graph structure

When facing black and gray market operators, or real people acting maliciously, this situation often occurs: one device corresponds to many accounts. Some of these accounts are the bad actors' commonly used accounts, while others were bought specifically for illegal activities. To cooperate with the police or legal affairs, we need to distinguish which accounts are really the bad actors' everyday accounts, and which were merely purchased for wrongdoing.

This involves the weight of the edge between an account and a device: if the account is commonly used on the device, the relationship between the two is strong, and the edge weight is higher; if the account is used only for malicious activity, the relationship is weaker, and the corresponding weight is lower.

Therefore, in addition to the time dimension, we also added the weight dimension to the edge attributes.

In summary, the final graph model established for security intelligence is a weighted, dynamic, temporal graph structure.

4. Architecture and optimization of security intelligence service based on graph database

The overall security intelligence service architecture diagram is as follows:


Overall architecture diagram of security intelligence service

Among them, the information integrated query platform based on the graph database, the software architecture is shown in the following figure:


Software Architecture Diagram of Information Integrated Query Platform

Note: AccessProxy supports access from the office network to the IDC, and kngx supports direct calls within the IDC

4.1 Offline data writing optimization

The constructed relationship data is updated at the level of billions of records per day. Ensuring that these billions of records are written within hours, that data anomalies are detected, and that no data is lost is very challenging work.

The optimization here mainly covers: failure retry, dirty-data discovery, and import-failure alerting strategies .

During data import, factors such as dirty data, server jitter, database process crashes, and overly fast writes can all cause batch writes to fail. We use a synchronous client API, a multi-level retry mechanism, and a failure-exit strategy to solve write failures and incomplete batch writes caused by server jitter and restarts.
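The multi-level retry with a failure exit can be sketched as follows; write_batch is a hypothetical callable standing in for the actual batch-write API, and the exponential backoff policy is an assumption, since the article does not specify one:

```python
import time

class ImportFailure(Exception):
    """Raised when every retry level is exhausted, triggering an alert."""

def write_batch_with_retry(write_batch, batch, max_retries=3, base_delay=1.0):
    """Retry a failed batch write with exponentially growing waits to ride
    out server jitter; after max_retries failures, exit instead of looping
    forever so the failed batch can be alerted on and inspected."""
    for attempt in range(max_retries + 1):
        try:
            return write_batch(batch)
        except Exception as err:
            if attempt == max_retries:
                # failure exit: surface the batch rather than silently drop it
                raise ImportFailure(
                    f"batch of {len(batch)} rows failed after {max_retries} retries"
                ) from err
            time.sleep(base_delay * (2 ** attempt))
```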

4.2 Dual-cluster HA guarantee and switching mechanism

For the graph database itself, Kuaishou deployed two clusters, online and offline, kept in sync through dual writes. The online cluster serves online RPC traffic, while the offline cluster serves CASE analysis and web queries; the two clusters do not affect each other.

Meanwhile, cluster status monitoring is connected to a dynamic configuration delivery module. When slow queries or failures occur in a cluster, the dynamic configuration module automatically switches traffic, so upper-layer businesses are unaware of the failure.
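The switching logic can be sketched as a small router fed by the dynamic configuration delivery module; the cluster names and health-push interface here are illustrative, not the actual module's API:

```python
class ClusterRouter:
    """Routes queries to the online cluster, failing over to the offline
    (backup) cluster when monitoring marks the primary unhealthy. Cluster
    names and the config-push interface are illustrative."""

    def __init__(self):
        self.health = {"online": True, "offline": True}

    def on_config_push(self, cluster, healthy):
        # invoked by the dynamic configuration delivery module
        self.health[cluster] = healthy

    def pick(self):
        # the upper layer always calls pick(), so a switch is invisible to it
        if self.health["online"]:
            return "online"
        if self.health["offline"]:
            return "offline"
        raise RuntimeError("no healthy cluster available")
```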

4.3 Construction of cluster stability

The data architecture team conducted overall research, maintenance and improvement on the open source version of Nebula Graph.

A Nebula cluster adopts a compute-storage separation model. Architecturally, it is divided into three roles, Meta, Graph, and Storage, responsible for metadata management, computation, and storage respectively:


Nebula overall architecture diagram

The storage layer of Nebula serves as the base of the graph database engine and supports multiple storage types. We use Nebula in the classic mode : RocksDB, implemented in C++, serves as the underlying KV storage, and the Raft algorithm solves the consistency problem, so the entire cluster supports horizontal dynamic scaling.


Storage layer architecture diagram

We fully tested the storage layer and made code improvements and parameter optimizations, including: optimizing the Raft heartbeat logic, improving leader election and log-offset logic, and tuning Raft parameters to shorten the failure recovery time of a single cluster. Combined with optimization of the client retry mechanism, the user-perceived behavior of the Nebula engine improved from going straight offline on a failure to recovering from a failure within milliseconds.

In the monitoring and alarm system, we have built monitoring of multiple levels of the cluster. The overall monitoring architecture is shown in the following figure:


Cluster monitoring architecture diagram

Including the following aspects:

  1. Machine hardware level: CPU busy, disk util, memory, network, etc.
  2. Interface monitoring of the Meta, Storage, and Graph services for each role in the cluster, plus monitoring of partition leader liveness and distribution
  3. Evaluation and monitoring of overall cluster availability from the user's perspective
  4. Metric collection and monitoring for meta, storage, RocksDB, and graph in each role of the cluster
  5. Slow query monitoring

4.4 Optimization of Super Node Query

Since vertex degrees in real-world network graphs often follow a power-law distribution, graph traversal can encounter super vertices (with out-degrees in the millions or tens of millions), causing significant database-level slow queries . Keeping online query latency stable and avoiding extreme slow queries are problems we had to solve.

The engineering approach to the super-vertex problem in graph traversal is to reduce the query scale, provided the business can accept it. The specific methods are:

  1. Truncate each hop of the query with a qualified limit
  2. Sample query edges according to a certain proportion

The specific optimization strategies are described below:

4.4.1 Limit truncation optimization


At the business level, limit truncation per hop is acceptable, as in the following two queries:

```ngql
# Final limit truncation
go from hash('x.x.x.x') over GID_IP REVERSELY where (GID_IP.update_time >= xxx and GID_IP.create_time <= xxx) yield GID_IP.create_time as create_time, GID_IP.update_time as update_time, $^.IP.ip as ip, $$.GID.gid | limit 100
# Intermediate results are truncated before proceeding to the next hop
go from hash('x.x.x.x') over GID_IP REVERSELY where (GID_IP.update_time >= xxx and GID_IP.create_time <= xxx) yield GID_IP._dst as dst | limit 100 | go from $-.dst ..... | limit 100
```

[Before optimization]

For the second query statement, before optimization the storage layer would traverse all out-edges of the vertex, and the graph layer would truncate to limit n before returning to the client; most of that traversal time is wasted.

In addition, although Nebula supports the storage-cluster (process) level parameter max_edge_returned_per_vertex (the maximum number of edges scanned per vertex), it cannot honor a limit specified flexibly at the query-statement level, and in multi-hop, multi-vertex statements it cannot apply a precise per-statement limit.

[Optimization ideas]

A one-hop go traversal query is divided into two steps:

  • step 1: scan all destVertex reachable from srcVertex (obtaining edge attributes at the same time)
  • step 2: fetch all attribute values of each destVertex

Each hop in a multi-hop go traversal then falls into one of two cases:

  • case 1: execute only step 1, scanning the out-edges
  • case 2: execute step 1 + step 2

Step 2 is the time-consuming part (fetching each destVertex's attributes requires a RocksDB iterator, which takes about 500 µs on a cache miss). The key is to move the limit truncation before step 2 for vertices with high out-degree; pushing the limit further down into the step 1 scan stage in storage also brings a large benefit for super vertices.
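The benefit can be made concrete by counting step 2 attribute fetches with and without the pushdown: for out-degree N and limit n, the pre-optimization path pays for N fetches while the pushdown pays for only n. A toy sketch (all names are illustrative):

```python
class FetchCounter:
    """Stand-in for the ~500 us per-vertex RocksDB attribute read."""
    def __init__(self):
        self.count = 0
    def __call__(self, dst):
        self.count += 1
        return {"id": dst}

def traverse_naive(out_edges, fetch_attrs, n):
    """Pre-optimization: fetch attributes of every destination vertex,
    then truncate to the limit at the graph layer."""
    results = [fetch_attrs(dst) for dst in out_edges]   # N expensive fetches
    return results[:n]

def traverse_pushdown(out_edges, fetch_attrs, n):
    """Post-optimization: push the limit below step 2 (and into the step 1
    scan), so at most n destinations are ever fetched."""
    return [fetch_attrs(dst) for dst in out_edges[:n]]  # n expensive fetches

edges = list(range(5000))      # a super vertex with out-degree 5000
counter = FetchCounter()
traverse_pushdown(edges, counter, 100)
print(counter.count)           # → 100 (the naive path would pay for 5000)
```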


Here we summarize the conditions under which this limit truncation optimization can be applied, and its benefits:


Table note: N is the vertex out-degree, n is the limit n, scan is the cost of scanning out-edges, and get is the cost of fetching vertex attributes

[Test results]

For case 1 and case 2 above, the limit truncation optimization can be applied with obvious benefits. The security business queries belong to case 2. Below are the test results for case 2 with limit 100, on a three-machine cluster storing 900 GB of data per machine on a single disk (without hitting the RocksDB cache):


The test results above show that after our optimization, query latency on graph super vertices is excellent.

4.4.2 Edge sampling optimization

For scenarios where limit truncation alone is not enough, we adopt edge sampling. Building on the Nebula community's native support for a process-level configurable maximum number of returned edges per vertex and an edge-sampling switch, our optimization further supports the following capabilities:

  1. After the sampling function is enabled in Storage, max_iter_edge_for_sample can be configured so that only that many edges are scanned, instead of all edges (the default)
  2. The Graph layer supports per-hop sampling in go statements
  3. The Storage and Graph parameters enable_reservoir_sampling (whether sampling is enabled) and max_edge_returned_per_vertex (maximum edges returned per vertex) can be configured at the session level

With the above functions, the business can flexibly adjust the query sampling ratio and control the traversal query scale, keeping online services smooth.
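The parameter name enable_reservoir_sampling suggests reservoir sampling, which draws a uniform fixed-size sample from an edge stream of unknown length in a single pass. A minimal sketch of the classic Algorithm R follows; the engine's actual implementation may differ:

```python
import random

def reservoir_sample(edges, k, rng=random):
    """Single-pass uniform sample of up to k items from an edge stream
    whose total length need not be known in advance (Algorithm R)."""
    sample = []
    for i, edge in enumerate(edges):
        if i < k:
            sample.append(edge)       # fill the reservoir first
        else:
            j = rng.randint(0, i)     # keep edge with probability k / (i + 1)
            if j < k:
                sample[j] = edge
    return sample

# Sampling 100 of a super vertex's 1,000,000 out-edges bounds the query cost
sampled = reservoir_sample(range(1_000_000), 100, random.Random(42))
print(len(sampled))  # → 100
```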

4.5 Renovation and optimization of query client

The open source Nebula Graph comes with its own set of clients. To integrate these clients with Kuaishou's projects, we made some transformations and optimizations, mainly solving the following two problems:

  • connection pooling : the official Nebula Graph client exposes a low-level interface, so each query requires establishing and initializing a connection, executing the query, and then closing the connection. Frequently creating and closing connections in high-frequency query scenarios greatly hurts system performance and stability. In practice, we re-encapsulated the official client with connection-pooling technology and monitored each stage of the connection life cycle, achieving connection reuse and sharing and improving business stability.
  • automatic failover : through exception monitoring and periodic probing of connection establishment, initialization, query, and destruction, faulty nodes in the database cluster are discovered and removed automatically in real time. If the entire cluster becomes unavailable, traffic can be migrated to a standby cluster within seconds, reducing the potential impact of cluster failures on online service availability.
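The connection-pooling idea can be sketched as a bounded pool with a context manager; the connect factory below is a placeholder, not the real Nebula client API:

```python
import queue
from contextlib import contextmanager

class ConnectionPool:
    """Bounded pool: connections are created up front, borrowed per query,
    and returned instead of closed, avoiding per-query setup and teardown."""

    def __init__(self, connect, size=4):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            self._pool.put(connect())

    @contextmanager
    def session(self, timeout=1.0):
        conn = self._pool.get(timeout=timeout)  # block if the pool is exhausted
        try:
            yield conn
        finally:
            self._pool.put(conn)                # return for reuse, don't close

# Placeholder factory; a real pool would perform the client handshake here
pool = ConnectionPool(connect=lambda: object(), size=2)
with pool.session() as conn:
    pass  # e.g. conn.execute("GO FROM ...") with a real client
```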

4.6 Visualization and download of query results

For fixed-relationship queries (nGQL), the front end renders a customized graphical display from the returned results, as shown in the following figure:


The front end uses ECharts , and we made some optimizations for loading and displaying graph-structured data on the front end.

Question one : the relationship graph needs to display detailed information for each node, but the graph provided by ECharts can only show a simple value.

Solution : modify the original code to add a click event to each node, popping up a modal box that displays more detailed information.

Question two : after a click event is triggered, the relationship graph keeps animating for a long time, making it impossible to tell which node was clicked.

Solution : record each node's window position when the graph is first rendered, and fix the node positions once a click event is triggered.

Question three : when the graph has many nodes, the relationship graph becomes crowded.

Solution : enable mouse zoom and pan (roaming) functions.

For flexible relationship queries (ad-hoc nGQL), we use Nebula Graph Studio , as shown in the following figure:


5. Practice of Graph Database in Security Intelligence

Based on the graph database architecture and optimizations above, we provide two access methods, web query and RPC query, which mainly support the following Kuaishou services:

  • Supporting Kuaishou security attack tracing, and offline analysis of attacks and black/gray market activity
  • Supporting business security risk control and anti-cheating

For example, group-controlled devices and normal devices behave in obviously different ways in the graph data:


For identifying group-controlled devices:


6. Summary and Outlook


  • stability building : cluster HA capability, with real-time synchronization across AZ clusters and automatic access switching, to guarantee a 99.99% SLA
  • performance improvement : consider reworking the RPC and storage schemes for new AEP hardware, and optimize query execution plans
  • connecting the graph computing platform and graph queries : build an integrated platform for graph computing / graph learning / graph queries
  • real-time judgment : real-time relationship writes and real-time comprehensive risk judgment

7. Acknowledgements

Thanks to the open source community Nebula Graph for supporting Kuaishou.

Want to discuss graph database technology? To join the Nebula exchange group, please first fill in your Nebula card, and the Nebula assistant will add you to the group ~
