1、Online installation (networked)

      Attach to the container and run the install command inside it:

# Enter the container
docker exec -it elasticsearch bash

# Install the pinyin analyzer plugin (official GitHub release)
elasticsearch-plugin install https://github.com/medcl/elasticsearch-analysis-pinyin/releases/download/v6.7.0/elasticsearch-analysis-pinyin-6.7.0.zip


# Install the pinyin analyzer plugin (third-party mirror, faster in some regions)
elasticsearch-plugin install https://github.91chifun.workers.dev//https://github.com/medcl/elasticsearch-analysis-pinyin/releases/download/v6.7.0/elasticsearch-analysis-pinyin-6.7.0.zip

      Wait for the download to finish, then cd into the plugins directory and check whether the pinyin plugin is there:

cd plugins/
ls

      If the pinyin directory is present, the installation succeeded; restart Elasticsearch and it is ready to use.
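
      After the restart, you can also confirm the plugin was actually loaded by asking the cluster itself (Kibana console; analysis-pinyin should appear in the output for every node):

# List the plugins installed on each node
GET /_cat/plugins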

2、Offline installation

      Download the release archive, unpack it, and copy it into the container:

# Download the release archive
wget https://github.com/medcl/elasticsearch-analysis-pinyin/releases/download/v6.7.0/elasticsearch-analysis-pinyin-6.7.0.zip
# Unpack it into its own directory
mkdir ./pinyin
unzip elasticsearch-analysis-pinyin-6.7.0.zip -d ./pinyin/

# Copy it into the container's plugins directory
docker cp pinyin elasticsearch:/usr/share/elasticsearch/plugins/

# Restart the es node
docker restart elasticsearch

3、Testing

      You can test either with curl or from the Kibana console:

# With curl
curl -X POST -H "Content-Type:application/json" -d "{\"analyzer\":\"pinyin\",\"text\":\"刘德华\"}" http://139.9.70.155:10092/_analyze
# From the Kibana console
GET _analyze
{
  "text":"刘德华",
  "analyzer":"pinyin"
}

      If pinyin tokens come back like the following, the plugin is working:

{
  "tokens" : [
    {
      "token" : "liu",
      "start_offset" : 0,
      "end_offset" : 0,
      "type" : "word",
      "position" : 0
    },
    {
      "token" : "de",
      "start_offset" : 0,
      "end_offset" : 0,
      "type" : "word",
      "position" : 1
    },
    {
      "token" : "hua",
      "start_offset" : 0,
      "end_offset" : 0,
      "type" : "word",
      "position" : 2
    },
    {
      "token" : "ldh",
      "start_offset" : 0,
      "end_offset" : 0,
      "type" : "word",
      "position" : 2
    }
  ]
}

      Now let's combine pinyin and Chinese word segmentation in one search setup. First we create an index with custom analyzers: both the ik_smart and the ik_max_word tokenizer are chained with a pinyin token filter, and the index gets 3 primary shards with 1 replica per shard.

PUT /test_pinyin
{
  "settings": {
        "analysis": {
            "analyzer": {
                "ik_smart_pinyin": {
                    "type": "custom",
                    "tokenizer": "ik_smart",
                    "filter": ["my_pinyin", "word_delimiter"]
                },
                "ik_max_word_pinyin": {
                    "type": "custom",
                    "tokenizer": "ik_max_word",
                    "filter": ["my_pinyin", "word_delimiter"]
                }
            },
            "filter": {
                "my_pinyin": {
                    "type" : "pinyin",
                    "keep_separate_first_letter" : true,
                    "keep_full_pinyin" : true,
                    "keep_original" : true,
                    "first_letter": "prefix",
                    "limit_first_letter_length" : 16,
                    "lowercase" : true,
                    "remove_duplicated_term" : true 
                }
            }
        },
        "number_of_shards": 3,
        "number_of_replicas": 1
  }
}
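
      Before indexing any data, you can sanity-check the custom analyzer directly against the index we just created (Kibana console):

# Run the custom analyzer on sample text and inspect the tokens
GET /test_pinyin/_analyze
{
  "analyzer": "ik_smart_pinyin",
  "text": "刘德华"
}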

      Next we create a mapping for the type test (Elasticsearch 6.x still uses mapping types), which specifies the analyzer each field should use:

PUT /test_pinyin/test/_mapping
{
    "properties": {
        "content": {
            "type": "text",
                        "analyzer": "ik_smart_pinyin",
                        "search_analyzer": "ik_smart_pinyin",
            "fields": {
                "keyword": {
                    "type": "keyword",
                    "ignore_above": 256
                }
            }
        },
        "age": {
            "type": "long"
        }
    }
}
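
      To confirm the mapping was applied as intended, you can read it back:

# Fetch the index's mapping
GET /test_pinyin/_mapping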

      With the mapping in place, let's index a few documents:

POST /test_pinyin/test/
{
  "content":"小米手机有点东西",
  "age":18
}


POST /test_pinyin/test/
{
  "content":"中华人民共和国有个刘德华",
  "age":18
}
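
      Note that newly indexed documents only become searchable after a refresh. Elasticsearch refreshes automatically (once per second by default), but you can force one to make the documents visible immediately:

# Force a refresh so the documents above are searchable right away
POST /test_pinyin/_refresh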

      Now we can start querying. Both plain Chinese text and full pinyin should match:

# Searching 刘德华 returns the document
POST /test_pinyin/test/_search
{
  "query":{
    "match":{
      "content":"刘德华"
    }
  }
}
# Searching liudehua returns the same document
POST /test_pinyin/test/_search
{
  "query":{
    "match":{
      "content":"liudehua"
    }
  }
}
# Searching 小米 returns the Xiaomi document
POST /test_pinyin/test/_search
{
  "query":{
    "match":{
      "content":"小米"
    }
  }
}

# Searching xiaomi returns it as well
POST /test_pinyin/test/_search
{
  "query":{
    "match":{
      "content":"xiaomi"
    }
  }
}
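
      Since the my_pinyin filter keeps first-letter abbreviations (keep_separate_first_letter is true and first_letter is set to prefix), a query using only the pinyin initials should match too. This follows from the filter settings above rather than from a tested result, so verify it against your own data:

# Searching by pinyin initials (l-d-h for 刘德华)
POST /test_pinyin/test/_search
{
  "query":{
    "match":{
      "content":"ldh"
    }
  }
}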




诗人总诉梦