Elasticsearch: how do I make `operator: and` apply to the analyzed terms?

When running a related-term search, I set the operator to `and`:

{
    "query" : {
        "match" : {
            "name" : {
                # "name" is split by the analyzer into terms such as `bao` `bao`
                "query" : "baobaobashi",
                "operator" : "and"
            }
        }
    }
}

What I want is for the `name` query string to go through the analyzer first, and only then have the `and` condition applied to the resulting terms.

Right now the operator seems to be applied to `baobaobashi` as a single word.

Is there any way to achieve what I need?
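To make the expected behavior concrete: what I want is something equivalent to a `bool` query that requires every analyzed term to be present. The terms below are only my guess at what the analyzer would produce from `baobaobashi`:

```json
{
    "query" : {
        "bool" : {
            "must" : [
                { "term" : { "name" : "bao" } },
                { "term" : { "name" : "bao" } },
                { "term" : { "name" : "ba" } },
                { "term" : { "name" : "shi" } }
            ]
        }
    }
}
```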

My code:

Query:

{
    "query" : {
        "match" : {
            "name" : {
                "query" : "baobaodong",
                "operator" : "and",
                # pinyin tokenization
                "analyzer" : "pinyin_analyzer"
            }
        }
    }
}

Result:

"hits": [
      {
        "_index": "py_v2",
        "_type": "t1",
        "_id": "AVV78g7MQFB5It8MqOdk",
        "_score": 20.431112,
        "_source": {
          "name": "宝宝懂礼貌"
        }
      },
      {
        "_index": "py_v2",
        "_type": "t1",
        "_id": "AVV78g-PQFB5It8MqOeB",
        "_score": 16.614202,
        "_source": {
          "name": "动物牙医-宝宝巴士"
        }
      },
      {
        "_index": "py_v2",
        "_type": "t1",
        "_id": "AVV78hAIQFB5It8MqOeO",
        "_score": 15.143915,
        "_source": {
          "name": "宝宝学交通工具"
        }
      },
      {
        "_index": "py_v2",
        "_type": "t1",
        "_id": "AVV78g5yQFB5It8MqOdX",
        "_score": 14.770354,
        "_source": {
          "name": "宝宝学日用品"
        }
      }
]
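Every hit above contains `宝宝` (pinyin `bao bao`), which is presumably why they all match even with `operator: and`. To inspect exactly which tokens a particular document matched on, I believe the search body's `explain` flag can be set, e.g.:

```json
{
    "explain" : true,
    "query" : {
        "match" : {
            "name" : {
                "query" : "baobaodong",
                "operator" : "and",
                "analyzer" : "pinyin_analyzer"
            }
        }
    }
}
```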

Using the `_analyze` API, I tested how `baobaodong` is tokenized:

{
  "tokens": [
    {
      "token": "ba",
      "start_offset": 0,
      "end_offset": 9,
      "type": "word",
      "position": 0
    },
    {
      "token": "ab",
      "start_offset": 0,
      "end_offset": 9,
      "type": "word",
      "position": 0
    },
    {
      "token": "ba",
      "start_offset": 0,
      "end_offset": 9,
      "type": "word",
      "position": 0
    },
    {
      "token": "ao",
      "start_offset": 0,
      "end_offset": 9,
      "type": "word",
      "position": 0
    },
    {
      "token": "od",
      "start_offset": 0,
      "end_offset": 9,
      "type": "word",
      "position": 0
    },
    {
      "token": "do",
      "start_offset": 0,
      "end_offset": 9,
      "type": "word",
      "position": 0
    },
    {
      "token": "on",
      "start_offset": 0,
      "end_offset": 9,
      "type": "word",
      "position": 0
    },
    {
      "token": "ng",
      "start_offset": 0,
      "end_offset": 9,
      "type": "word",
      "position": 0
    }
  ]
}
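For completeness, the `_analyze` request that produced the output above was roughly the following (the exact endpoint syntax varies between Elasticsearch versions; `py_v2` and `pinyin_analyzer` are the index and analyzer from my query):

```json
GET /py_v2/_analyze
{
    "analyzer" : "pinyin_analyzer",
    "text" : "baobaodong"
}
```

Note that every token in the output shares `start_offset: 0`, `end_offset: 9`, and `position: 0`, which suggests the analyzer is emitting n-grams over the whole pinyin string rather than word-level terms.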