1. Online installation

      Enter the container and run the install command directly.

# Enter the container; alternatively, run elasticsearch-plugin from the bin directory of your Elasticsearch installation
docker exec -it elasticsearch bash

# Install the IK analyzer plugin from the official GitHub release (the plugin version must match your Elasticsearch version, 6.7.0 here)
elasticsearch-plugin install https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v6.7.0/elasticsearch-analysis-ik-6.7.0.zip

# Or download through a third-party mirror for speed
elasticsearch-plugin install https://github.91chifun.workers.dev//https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v6.7.0/elasticsearch-analysis-ik-6.7.0.zip

      Wait for the download to finish, then cd into the plugins directory and check that the IK analyzer is present:

# In the official image the working directory is /usr/share/elasticsearch, so this enters its plugins directory
cd plugins/
ls
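
      Alternatively, the plugin CLI can report what is installed. A quick check, assuming you are still inside the container:

# Should include analysis-ik after a successful install
elasticsearch-plugin list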

      If the ik directory (or analysis-ik in the plugin list) shows up, the installation is complete. Restart Elasticsearch and check that it responds.
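
      For example, assuming the container is named elasticsearch and port 9200 is published to the host (both assumptions based on the commands above):

# Restart the node so the newly installed plugin is loaded
docker restart elasticsearch

# Verify the node is reachable again
curl http://localhost:9200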

2. Offline installation

      First download the release archive and unzip it. If Elasticsearch was not installed as a container, you can simply unzip it straight into the plugins directory.

# Download the release archive
wget https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v6.7.0/elasticsearch-analysis-ik-6.7.0.zip

# Or download through a third-party mirror for speed
wget https://github.91chifun.workers.dev//https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v6.7.0/elasticsearch-analysis-ik-6.7.0.zip

# Unzip into a directory named ik
mkdir ./ik
unzip elasticsearch-analysis-ik-6.7.0.zip -d ./ik/

# Copy the directory into the container's plugins directory
docker cp ik elasticsearch:/usr/share/elasticsearch/plugins/

# Restart the Elasticsearch node so the plugin is picked up
docker restart elasticsearch
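
      Once the node is back up, confirm the plugin was actually loaded. A quick check, assuming port 9200 is published to the host:

# The response should list analysis-ik for the node
curl "http://localhost:9200/_cat/plugins?v"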

3. Testing

      Test from Kibana's Dev Tools console, or send the request directly.

# ik_max_word: fine-grained segmentation; produces more tokens, suited to precise search
GET _analyze
{
  "analyzer":"ik_max_word",
  "text":"我是中国人"
}

# ik_smart: coarse-grained segmentation; splits the text into fewer, longer tokens
GET _analyze
{
  "analyzer":"ik_smart",
  "text":"我是中国人"
}
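
      If you are not using Kibana, the same request can be sent with curl. A sketch, assuming Elasticsearch is reachable at localhost:9200:

# Run the ik_max_word analysis request over HTTP
curl -X GET "http://localhost:9200/_analyze" -H 'Content-Type: application/json' -d '{"analyzer": "ik_max_word", "text": "我是中国人"}'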

      If a response like the following comes back (this one is from ik_max_word), the installation succeeded:

{
  "tokens" : [
    {
      "token" : "我",
      "start_offset" : 0,
      "end_offset" : 1,
      "type" : "CN_CHAR",
      "position" : 0
    },
    {
      "token" : "是",
      "start_offset" : 1,
      "end_offset" : 2,
      "type" : "CN_CHAR",
      "position" : 1
    },
    {
      "token" : "中国人",
      "start_offset" : 2,
      "end_offset" : 5,
      "type" : "CN_WORD",
      "position" : 2
    },
    {
      "token" : "中国",
      "start_offset" : 2,
      "end_offset" : 4,
      "type" : "CN_WORD",
      "position" : 3
    },
    {
      "token" : "国人",
      "start_offset" : 3,
      "end_offset" : 5,
      "type" : "CN_WORD",
      "position" : 4
    }
  ]
}
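
      Once analysis works, the analyzers can be referenced from an index mapping. A minimal sketch for Elasticsearch 6.x, where my_index and the content field are hypothetical names; a common pattern is ik_max_word at index time and ik_smart at search time:

# Hypothetical index using IK analyzers for a text field
PUT my_index
{
  "mappings": {
    "_doc": {
      "properties": {
        "content": {
          "type": "text",
          "analyzer": "ik_max_word",
          "search_analyzer": "ik_smart"
        }
      }
    }
  }
}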

