
> This article was first published on the WeChat public account Geek Barracks (original address).

ElasticSearch is a very powerful open-source search engine that can help us quickly find the information we want in massive amounts of data. For example, when you shop in an online mall, ElasticSearch helps you quickly find the goods you are looking for; when you search on GitHub, ElasticSearch not only finds the matching code repositories but also supports code-level search and highlights the matching code snippets.

ElasticSearch is also a big data analysis platform with very strong analytical capabilities. Compared with the "T+1" (next-day) timeliness of Hadoop batch jobs, ElasticSearch offers higher performance and can return analysis results in near real time.

As the saying goes, a craftsman who wants to do his work well must first sharpen his tools.

Before we really start learning ElasticSearch, let's prepare the experimental environment.

Install ElasticSearch and Kibana

ElasticSearch is a search engine with native support for distribution. You can deploy just a single ElasticSearch node or easily deploy a cluster composed of multiple nodes; the number of nodes is transparent to application developers.

In addition to ElasticSearch itself, we will also install Kibana. Kibana is a platform for managing and operating ElasticSearch with many powerful features; through it we can conveniently work with ElasticSearch.

You can download the latest version of ElasticSearch and Kibana at the following address; as of December 7, 2021, the latest version of ElasticSearch is 7.15.2.

https://www.elastic.co/cn/downloads/

The author's desktop PC runs Ubuntu Linux. The downloaded ElasticSearch and Kibana archives and their extracted folders are shown in the figure below:

[figure: the downloaded ElasticSearch and Kibana archives and their extracted folders]

First, enter the ElasticSearch folder and run the following command to start ElasticSearch (to run it in the background as a daemon, add the -d parameter):

./bin/elasticsearch

You can use the following curl command to determine whether ElasticSearch has started successfully:

curl http://localhost:9200

If you receive a Response similar to the following, it proves that ElasticSearch has been successfully installed and started:

{
  "name" : "poype",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "QqnV6yVtQte10Dw3IN6eEQ",
  "version" : {
    "number" : "7.15.2",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "93d5a7f6192e8a1a12e154a2b81bf6fa7309da0c",
    "build_date" : "2021-11-04T14:04:42.515624022Z",
    "build_snapshot" : false,
    "lucene_version" : "8.9.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
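Instead of eyeballing the curl output, you can also verify the node programmatically. The sketch below parses an abbreviated copy of the response shown above; in a real check you would fetch the JSON from http://localhost:9200 yourself (e.g. with urllib), so the embedded sample here is just for illustration.

```python
import json

# Abbreviated sample of the response from GET http://localhost:9200
# (copied from the output above); in practice, fetch this from the node.
sample = '''
{
  "name" : "poype",
  "cluster_name" : "elasticsearch",
  "version" : { "number" : "7.15.2", "lucene_version" : "8.9.0" },
  "tagline" : "You Know, for Search"
}
'''

info = json.loads(sample)

# A healthy node reports its version and the familiar tagline.
assert info["tagline"] == "You Know, for Search"
print(f'node "{info["name"]}" is running ElasticSearch {info["version"]["number"]}')
```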

Next, let's start Kibana: enter Kibana's directory and execute the following command:

./bin/kibana

After the process starts, open the following address in your browser:

http://localhost:5601

If you see a welcome page similar to the figure below, Kibana has started successfully. Once started, Kibana automatically connects to the ElasticSearch instance we just launched.
[figure: Kibana welcome page]

Run ElasticSearch in Docker

Compared to installing ElasticSearch and Kibana directly on the operating system, I recommend using Docker to build the learning environment.

Certain aspects of the operating system environment may prevent ElasticSearch from starting. For example, if the Java version installed on your machine does not meet ElasticSearch's requirements, startup will fail and you will have to reset the JAVA_HOME environment variable before ElasticSearch can start. Docker provides a clean Linux sandbox, which effectively shields ElasticSearch from such environmental interference.

In addition, docker-compose can deploy multiple containers at once, so an ElasticSearch cluster containing multiple nodes can be brought up with one click, saving a lot of tedious steps and making every deployment more convenient.

The author has prepared the following docker-compose.yml file, which defines a cluster consisting of three ElasticSearch nodes plus a Kibana node; you can use it directly.

version: '2.2'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.15.2
    container_name: es01
    environment:
      - node.name=es01
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es02,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - elastic
  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.15.2
    container_name: es02
    environment:
      - node.name=es02
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data02:/usr/share/elasticsearch/data
    networks:
      - elastic
  es03:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.15.2
    container_name: es03
    environment:
      - node.name=es03
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es02
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data03:/usr/share/elasticsearch/data
    networks:
      - elastic
  kibana:
    image: docker.elastic.co/kibana/kibana:7.15.2
    container_name: kibana
    environment:
      ELASTICSEARCH_HOSTS: '["http://es01:9200","http://es02:9200","http://es03:9200"]'
      SERVER_NAME: kibana.example.org
    ports:
      - "5601:5601"
    networks:
      - elastic

volumes:
  data01:
    driver: local
  data02:
    driver: local
  data03:
    driver: local

networks:
  elastic:
    driver: bridge

Copy this YAML configuration into a file named docker-compose.yml, then execute the docker-compose up command in the same directory to start the ElasticSearch cluster with one click.
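After docker-compose up, the nodes need some time to boot and form a cluster. Below is a small helper sketch (not part of any official tooling; the URL and polling intervals are just reasonable defaults) that polls the cluster health endpoint until the cluster is ready:

```python
import json
import time
import urllib.request
import urllib.error

def wait_for_cluster(url="http://localhost:9200/_cluster/health",
                     timeout=60, interval=2):
    """Poll the cluster health endpoint until status is green or yellow.

    Returns the parsed health document, or None if the deadline passes.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                health = json.load(resp)
                if health.get("status") in ("green", "yellow"):
                    return health
        except (urllib.error.URLError, OSError):
            pass  # the nodes are still starting up
        time.sleep(interval)
    return None
```

For example, calling `wait_for_cluster()` right after `docker-compose up -d` blocks until the cluster answers; a None result means it did not come up within the timeout.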

The following command displays the cluster's health status:

curl http://localhost:9200/_cluster/health?pretty

If you receive a Response similar to the following, it proves that the ElasticSearch cluster has been started successfully:

{
  "cluster_name" : "es-docker-cluster",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 7,
  "active_shards" : 14,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}

Through the number_of_nodes field, we can confirm that we have successfully built an ElasticSearch cluster consisting of 3 nodes.
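The same check can be scripted. The sketch below parses an abbreviated copy of the health response shown above (in practice you would fetch it from /_cluster/health yourself) and verifies the two things we care about: all shards are allocated and all three nodes have joined.

```python
import json

# Abbreviated sample of GET /_cluster/health?pretty (from the output above);
# in practice, fetch this JSON from the cluster instead of embedding it.
health = json.loads('''
{
  "cluster_name" : "es-docker-cluster",
  "status" : "green",
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_shards_percent_as_number" : 100.0
}
''')

# "green" means every primary and replica shard is allocated;
# "yellow" would mean some replica shards are still unassigned.
assert health["status"] == "green"
assert health["number_of_nodes"] == 3
print(f'cluster "{health["cluster_name"]}" is {health["status"]} '
      f'with {health["number_of_nodes"]} nodes')
```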

Summary

At this point, we have an ElasticSearch environment ready for learning and experimentation. In the next article, I will use an "online bookstore" example to walk you through the features of ElasticSearch, show what sets it apart from traditional databases, and teach you how to operate ElasticSearch through Kibana.


