
Elasticsearch (ES): Kibana Commands vs. Java API



Contents

I. Connecting to ES

II. Index operations

1) Create an index

2) Create the index structure (mapping)

3) Query the index structure

4) Delete an index

III. Document operations

1) Insert

1.1) Single insert

1.2) Bulk insert

2) Query

2.1) Basic query

2.2) match query

2.3) term query

3) Update

3.1) Single update

3.2) Bulk update

4) Delete

4.1) Single delete

4.2) Bulk delete


        The core structure of ES operations is the same whether you issue commands through Kibana or call the Java API, so showing the two side by side makes them easier to understand and remember. For learning ES in depth, the official documentation (in English) is still the recommended resource.

I. Connecting to ES

        To connect to ES from Java, I use the Java High Level REST Client.

import java.io.IOException;

import org.apache.http.Header;
import org.apache.http.HttpHost;
import org.apache.http.message.BasicHeader;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestClientBuilder;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.client.sniff.Sniffer;

public class ESClient {

    private String host = "127.0.0.1:9200";

    private static volatile ESClient esClient;

    private RestClientBuilder restClientBuilder;

    private static Sniffer sniffer;

    private static RestHighLevelClient highClient;

    public static ESClient getInstance(){
        // Eager (hungry-style) initialization would create the client before it is needed,
        // so lazy double-checked locking is used instead
        // return esClient;
        if (esClient == null) {
            synchronized (ESClient.class) {
                if (esClient == null) {
                    esClient = new ESClient();
                    esClient.initBuilder();
                }
            }
        }
        return esClient;
    }

    public RestClientBuilder initBuilder() {
        String[] hosts = host.split(",");
        HttpHost[] httpHost = new HttpHost[hosts.length];
        for (int i = 0; i < hosts.length; i++) {
            String[] hostIp = hosts[i].split(":");
            httpHost[i] = new HttpHost(hostIp[0], Integer.parseInt(hostIp[1]), "http");
        }
        restClientBuilder = RestClient.builder(httpHost);
        Header[] defaultHeaders = new Header[]{
                new BasicHeader("Content-Type", "application/json")
        };
        restClientBuilder.setDefaultHeaders(defaultHeaders);

        return restClientBuilder;
    }

    
    public RestHighLevelClient getHighLevelClient() {
        if (highClient == null) {
            synchronized (ESClient.class) {
                if (highClient == null) {
                    highClient = new RestHighLevelClient(restClientBuilder);
                    // sniff and refresh the node list every 15 seconds
                    sniffer = Sniffer.builder(highClient.getLowLevelClient())
                            .setSniffIntervalMillis(15000)
                            .setSniffAfterFailureDelayMillis(15000)
                            .build();
                }
            }
        }
        return highClient;
    }

    
    public static void closeClient() {
        if (null != highClient) {
            try {
                sniffer.close();    // must be closed before highClient
                highClient.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}

II. Index operations

1) Create an index

        When creating an index, the two most important settings are the number of shards and the number of replicas. You can set both yourself, but Elasticsearch ships with well-tuned defaults; do not change them unless you understand what they do and why you want to change them.

        In practice there is no need to create a new index every day; for flexible management, one index per month is a reasonable lower bound, named by year and month.

number_of_shards

        The number of primary shards per index; the default is 5 (reduced to 1 from Elasticsearch 7.x onward). This setting cannot be changed after the index is created.

number_of_replicas

        The number of replicas per primary shard; the default is 1. For an active index this setting can be changed at any time.

        Creating an index in Kibana:

PUT /zcm_test
{
  "settings": {
    "number_of_shards": 3,
    "number_of_replicas": 2
  }
}

        Creating an index with the Java API:

    @Test
    public void getHighLevelClientAndTestCreateIndex() throws IOException {
        RestHighLevelClient client = ESClient.getInstance().getHighLevelClient();
        CreateIndexRequest createIndexRequest = new CreateIndexRequest("zcm_test");
        // set the shard and replica counts (you can observe the shard states afterwards)
        createIndexRequest.settings(Settings.builder()
                .put("index.number_of_shards", 3)
                .put("index.number_of_replicas", 2)
        );

        CreateIndexResponse createIndexResponse = client.indices().create(createIndexRequest, RequestOptions.DEFAULT);
        if (createIndexResponse.isAcknowledged()) {
            System.out.println("index created successfully!");
        } else {
            System.out.println("index creation failed!");
        }
        // always remember to close the client
        ESClient.getInstance().closeClient();
    }

2) Create the index structure (mapping)

        ES is built on Lucene underneath and does not support modifying an existing mapping. To change a mapping, rebuild the index: create a new index, copy the data from the old index into it, delete the old index, and rename (or alias) the new one.

        The important mapping fields will each be tried out later and then summarized.

        Creating a mapping in Kibana:

PUT /zcm_test01
{
  "mappings": {
    "properties": {
      "name":{
        "type": "text"
      },
      "age":{
        "type": "integer"
      }
    }
  }
}

        Setting the index structure with the Java API:

    @Test
    public void createMappingTest() throws IOException {
        RestHighLevelClient highLevelClient = ESClient.getInstance().getHighLevelClient();
        // a commented-out variant did not work for some reason; revisit later

        XContentBuilder builder = XContentFactory.jsonBuilder();
        builder.startObject();
        {
            builder.startObject("mappings");
            {
                builder.startObject("properties");
                {
                    builder.startObject("title");
                    {
                        builder.field("type", "keyword");
                    }
                    builder.endObject();
                    builder.startObject("content");
                    {
                        builder.field("type", "text");
                    }
                    builder.endObject();
                    builder.startObject("score");
                    {
                        builder.field("type", "double");
                    }
                    builder.endObject();

                }
                builder.endObject();
            }
            builder.endObject();
        }
        builder.endObject();
        CreateIndexRequest index = new CreateIndexRequest("zcm_test03");
        index.source(builder);
        highLevelClient.indices().create(index, RequestOptions.DEFAULT);
    }

3) Query the index structure

        Querying index information in Kibana:

# view the mapping of all fields in zcm_test
GET /zcm_test/_mappings

# view the mapping of a single field (field name here is an example)
GET /zcm_test/_mapping/field/name

        Querying index information with the Java API:

    @Test
    public void getHighLevelClientAndTestGetIndex() throws IOException {
        RestHighLevelClient client = ESClient.getInstance().getHighLevelClient();
        GetIndexRequest getIndexRequest = new GetIndexRequest("zcm_test");
        // the response contains the mapping structure and other index metadata
        GetIndexResponse getIndexResponse = client.indices().get(getIndexRequest, RequestOptions.DEFAULT);
        Map map = getIndexResponse.getMappings();
        System.out.println("index mapping : " + JSON.toJSONString(map));
        for (String index : getIndexResponse.getIndices()) {
            System.out.println("index name : " + index);
        }
        ESClient.getInstance().closeClient();
    }

4) Delete an index

        Deleting an index that does not exist produces an error, so in production code add an existence check first.

        Deleting an index in Kibana:

DELETE /zcm_test02

        Deleting an index with the Java API:

    @Test
    public void deleteIndexTest() throws IOException{
        RestHighLevelClient highLevelClient = ESClient.getInstance().getHighLevelClient();
        DeleteIndexRequest deleteIndexRequest = new DeleteIndexRequest("zcm_test02");
        AcknowledgedResponse response = highLevelClient.indices().delete(deleteIndexRequest, RequestOptions.DEFAULT);
        if (response.isAcknowledged()){
            System.out.println("index deleted");
        }
    }
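The existence check suggested above can be sketched with `GetIndexRequest` and `indices().exists(...)` from the High Level REST Client. This is a minimal sketch, not code from the original article; the class name is illustrative, and it assumes the same client setup and index names as the examples here (it needs a live cluster to actually run):

```java
import java.io.IOException;

import org.elasticsearch.action.admin.indices.delete.DeleteIndexRequest;
import org.elasticsearch.action.support.master.AcknowledgedResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.client.indices.GetIndexRequest;

public class SafeDeleteIndex {

    // Delete the index only if it exists, avoiding an "index_not_found" error
    public static boolean deleteIfExists(RestHighLevelClient client, String indexName) throws IOException {
        boolean exists = client.indices().exists(new GetIndexRequest(indexName), RequestOptions.DEFAULT);
        if (!exists) {
            System.out.println("index " + indexName + " does not exist; nothing to delete");
            return false;
        }
        AcknowledgedResponse response = client.indices()
                .delete(new DeleteIndexRequest(indexName), RequestOptions.DEFAULT);
        return response.isAcknowledged();
    }
}
```

Called as `deleteIfExists(ESClient.getInstance().getHighLevelClient(), "zcm_test02")`, it replaces the bare delete above without ever triggering the missing-index exception.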

III. Document operations

1) Insert

1.1) Single insert

        When inserting data, avoid supplying your own document id; custom ids slow down indexing.

        Single insert in Kibana:

POST /zcm_test01/_doc/1
{
  "name":"我爱罗",
  "age":16,
  "desc":"风影"
}

        Single insert with the Java API:

    @Test
    public void insertDocumentTest() throws IOException{
        RestHighLevelClient highLevelClient = ESClient.getInstance().getHighLevelClient();
        // if the index does not exist, it is created automatically from any matching template
        IndexRequest indexRequest = new IndexRequest("zcm_test");
        // avoid supplying a custom id when inserting
        //indexRequest.id("0");
        Map<String, Object> map = new HashMap<>();
        map.put("name", "zcm");
        map.put("age", 18);
        map.put("desc", "火影");
        indexRequest.source(JSON.toJSONString(map), XContentType.JSON);
        IndexResponse indexResponse = highLevelClient.index(indexRequest, RequestOptions.DEFAULT);
        System.out.println(indexResponse);
        ESClient.getInstance().closeClient();
    }

1.2) Bulk insert

        Bulk operations in Kibana use the _bulk endpoint. (Note: it is the _type field shown in older bulk examples, not the _bulk endpoint itself, that is deprecated and removed in newer versions.) Bulk delete and bulk update follow the same pattern. The request body works as follows:

        The first line names the action and its target: the action is one of "create", "index", "delete", or "update"; "_index" gives the index name, "_type" the type, and "_id" the document id.

        The second line carries the request body.

        Note: (1) for index and create, the second line is the source document; (2) delete has no second line; (3) for update, the second line can be a partial doc, an upsert, or a script.

        Bulk insert in Kibana:

POST _bulk
  {"index":{"_index":"zcm_test01","_type":"_doc","_id":"2"}}
  {"id": 2, "location": "南京市栖霞区马群街道29号", "money": 3000, "area":80, "type": "三居室"}
  {"index":{"_index":"zcm_test01","_type":"_doc","_id":"3"}}
  {"id": 3, "location": "南京市玄武区山西路门路29号", "money": 400, "area":15, "type": "四居室", "style": "合租"}
  {"index":{"_index":"zcm_test01","_type":"_doc","_id":"4"}}
  {"id": 4, "location": "南京市秦淮区山北京东路29号", "money": 500, "area":14, "type": "三居室", "style": "合租"}
  {"index":{"_index":"zcm_test01","_type":"_doc","_id":"5"}}
  {"id": 5, "location": "南京市秦淮区新街口29号", "money": 450, "area":16, "type": "四居室", "style": "合租"}

        Bulk insert with the Java API:

    @Test
    public void batchInsertTest() throws IOException{
        RestHighLevelClient highLevelClient = ESClient.getInstance().getHighLevelClient();
        BulkRequest bulkRequest = new BulkRequest("zcm_test");
        // insert 3 documents
        for (int i = 1; i < 4; i++) {
            Map<String, Object> map = new HashMap<>();
            map.put("name", "zcm");
            map.put("age", 18 + i);
            map.put("desc", "火影" + i);
            IndexRequest indexRequest = new IndexRequest();
            indexRequest.source(JSON.toJSONString(map), XContentType.JSON);
            // multiple requests can also be chained in a single call:
            //bulkRequest.add(indexRequest).add(indexRequest);
            bulkRequest.add(indexRequest);
        }
        BulkResponse response = highLevelClient.bulk(bulkRequest, RequestOptions.DEFAULT);
        System.out.println(response.getItems().length);
        ESClient.getInstance().closeClient();
    }

2) Query

        I divide queries into three kinds: the basic GET query, the match query, and the term (exact) query. The basic query fetches by id and is rarely used in practice.

        A match query analyzes the query text before searching.

        A term query performs an exact match.

2.1) Basic query

        Kibana query:

# multi-get (bulk) query
GET /_mget
{
  "docs":[
    {
      "_index":"zcm_test",
      "_type":"_doc",
      "_id":"Jq7GQn4BDhScTU2klmvB"
    },
      {
      "_index":"zcm_test",
      "_type":"_doc",
      "_id":"9L57Mn4Boab7MgHCtVce"
    }
    ]
}

        Querying with the Java API:

    // single-document query
    @Test
    public void selectdocumentTest() throws IOException{
        RestHighLevelClient restHighLevelClient = ESClient.getInstance().getHighLevelClient();
        // the _id (the document's primary key) is required; ids are usually auto-generated, and the type defaults to _doc
        GetRequest getRequest = new GetRequest("zcm_test", "_doc", "Jq7GQn4BDhScTU2klmvB");
        // choose which fields to include in and exclude from the result
        String[] includes = {"name","age"};
        String[] excludes = {"desc"};
        FetchSourceContext fetchSourceContext = new FetchSourceContext(true, includes, excludes);
        getRequest.fetchSourceContext(fetchSourceContext);
        GetResponse getResponse = restHighLevelClient.get(getRequest, RequestOptions.DEFAULT);
        System.out.println(getResponse);
        ESClient.getInstance().closeClient();
    }

    
    // bulk fetch uses mget
    @Test
    public void selectBatchDocTest() throws IOException{
        RestHighLevelClient highLevelClient = ESClient.getInstance().getHighLevelClient();
        // query by multiple ids
        MultiGetRequest getRequest = new MultiGetRequest();
        getRequest.add("zcm_test", "Jq7GQn4BDhScTU2klmvB");
        getRequest.add("zcm_test", "9L57Mn4Boab7MgHCtVce");
        MultiGetResponse getResponse = highLevelClient.mget(getRequest, RequestOptions.DEFAULT);
        for (MultiGetItemResponse itemResponse : getResponse) {
            System.out.println(itemResponse.getResponse().getSourceAsString());
        }
        ESClient.getInstance().closeClient();
    }
    

2.2) match query

        A match query first runs the query text through the analyzer, splitting it into terms, and then searches; a single query can therefore match on several terms.

        Kibana match query:

#The simplest match query; the default analyzer splits "山玄武" into single characters: 山, 玄, 武
#analyzers are covered later, with an example to follow
GET /zcm_test01/_search
{
  "query":{
    "match":{
      "location":"山玄武"
    }
  }
}

#Phrase matching: match_phrase
#slop: a tolerance factor for phrase matching; its behavior has some subtleties, to be studied further
GET /zcm_test01/_search
{
  "query": {
    "match_phrase": {
      "location": {
            "query" : "北京市栖霞区马群街道29号",
            "slop" : 1
        }
    }
  }
}

#Query several fields at once with multi_match. Its "type" takes three values: best_fields, most_fields, and cross_fields; they differ mainly in how the score is computed.
#best_fields (the default) scores by field: for each document it takes the single best-matching field, so a field that matches more terms of the same query ranks the document higher.
GET /zcm_test01/_search
{
  "query": {
    "multi_match": {
      "query": "我南京",
      "type": "best_fields",
      "fields": ["name","location"]
    }
  }
}

#most_fields: the more matching fields a document has, the higher its score; scores from all matching fields are combined, so for the same query a document whose terms appear in more fields is ranked higher.
GET /zcm_test01/_search
{
  "query": {
    "multi_match": {
      "query": "我南京",
      "type": "most_fields",
      "fields": ["name","location"]
    }
  }
}

#cross_fields: treats the listed fields as one combined field, looking for each query term in any of them; any document matching any field is returned.
GET /zcm_test01/_search
{
  "query": {
    "multi_match": {
      "query": "我南京",
      "type": "cross_fields",
      "fields": ["name","location"]
    }
  }
}

        Java API match query:

    // match query steps:
    // 1. Build a SearchRequest and specify the index
    // 2. Build a SearchSourceBuilder
    // 3. Build a QueryBuilder with the query type and conditions
    // 4. Set the QueryBuilder on the SearchSourceBuilder
    // 5. Set the SearchSourceBuilder on the SearchRequest
    // 6. Execute the search
    // 7. Parse the response
    @Test
    public void matchTest() throws IOException{
        RestHighLevelClient highLevelClient = ESClient.getInstance().getHighLevelClient();
        // 1. Build a SearchRequest and specify the index
        SearchRequest searchRequest = new SearchRequest("zcm_test01");
        // 2. Build a SearchSourceBuilder
        SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
        // 3. Build a QueryBuilder with the query type and conditions
        MatchQueryBuilder matchQueryBuilder = QueryBuilders.matchQuery("location", "我南京");
        // 4. Set the QueryBuilder on the SearchSourceBuilder
        searchSourceBuilder.query(matchQueryBuilder);
        // 5. Set the SearchSourceBuilder on the SearchRequest
        searchRequest.source(searchSourceBuilder);
        // 6. Execute the search
        SearchResponse search = highLevelClient.search(searchRequest, RequestOptions.DEFAULT);
        // 7. Parse the response
        SearchHit[] hits = search.getHits().getHits();
        for (int i = 0; i < hits.length; i++) {
            System.out.println(hits[i].getSourceAsString());
        }
    }

    // match_all: query all documents
    @Test
    public void matchAllTest() throws IOException{
        RestHighLevelClient highLevelClient = ESClient.getInstance().getHighLevelClient();
        SearchRequest searchRequest = new SearchRequest("zcm_test01");
        SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
        // specify the query type
        MatchAllQueryBuilder matchQueryBuilder = QueryBuilders.matchAllQuery();
        searchSourceBuilder.query(matchQueryBuilder);
        searchRequest.source(searchSourceBuilder);
        SearchResponse search = highLevelClient.search(searchRequest, RequestOptions.DEFAULT);
        SearchHit[] hits = search.getHits().getHits();
        for (int i = 0; i < hits.length; i++) {
            System.out.println(hits[i].getSourceAsString());
        }
    }

    // multiMatch: query across multiple fields
    @Test
    public void multiMatchTest() throws IOException{
        RestHighLevelClient highLevelClient = ESClient.getInstance().getHighLevelClient();
        SearchRequest searchRequest = new SearchRequest("zcm_test01");
        SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
        // specify the query type
        MultiMatchQueryBuilder matchQueryBuilder = QueryBuilders.multiMatchQuery("我南京", "name", "location");
        searchSourceBuilder.query(matchQueryBuilder);
        searchRequest.source(searchSourceBuilder);
        SearchResponse search = highLevelClient.search(searchRequest, RequestOptions.DEFAULT);
        SearchHit[] hits = search.getHits().getHits();
        for (int i = 0; i < hits.length; i++) {
            System.out.println(hits[i].getSourceAsString());
        }
    }
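The match_phrase query with slop shown in the Kibana section also has a Java counterpart. This sketch was not in the original article; it assumes it sits in the same test class as the methods above (same imports, same ESClient) and needs a live cluster to actually return hits:

```java
    // match_phrase: the terms must appear in order; slop tolerates small gaps or swaps
    @Test
    public void matchPhraseTest() throws IOException{
        RestHighLevelClient highLevelClient = ESClient.getInstance().getHighLevelClient();
        SearchRequest searchRequest = new SearchRequest("zcm_test01");
        SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
        MatchPhraseQueryBuilder phraseQuery = QueryBuilders
                .matchPhraseQuery("location", "南京市栖霞区马群街道29号")
                .slop(1);
        searchSourceBuilder.query(phraseQuery);
        searchRequest.source(searchSourceBuilder);
        SearchResponse search = highLevelClient.search(searchRequest, RequestOptions.DEFAULT);
        for (SearchHit hit : search.getHits().getHits()) {
            System.out.println(hit.getSourceAsString());
        }
    }
```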

2.3) term query

        A term query is an exact match: the query text is not analyzed, so the document must contain the exact stored term.

        Kibana term query:

# The query value is not analyzed, but document fields are analyzed by default; to store a field unanalyzed, set it in the mapping ("index": "not_analyzed" in old versions, or the keyword type in current ones)
GET /zcm_test01/_search
{
  "query": {
    "term": {
      "name": {
        "value": "我"
      }
    }
  }
}

# compound (bool) query
GET /zcm_test01/_search
{
  "query": {
    "bool": {
      "must": [
        {
          "term": {
            "name": {
              "value": "我"
            }
          }
        }
      ],
      "must_not": [
        {
          "term": {
            "name": {
              "value": "qv"
            }
          }
        }
      ],
      "should": [
        {
          "term": {
            "name": {
              "value": "爱"
            }
          }
        }
      ]
    }
  }
}

        Java API term query:

    // term query
    @Test
    public void termTest() throws IOException{
        RestHighLevelClient highLevelClient = ESClient.getInstance().getHighLevelClient();
        SearchRequest searchRequest = new SearchRequest("zcm_test01");
        SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
        // specify the query type
        TermQueryBuilder matchQueryBuilder = QueryBuilders.termQuery("name", "我");
        searchSourceBuilder.query(matchQueryBuilder);
        searchRequest.source(searchSourceBuilder);
        SearchResponse search = highLevelClient.search(searchRequest, RequestOptions.DEFAULT);
        SearchHit[] hits = search.getHits().getHits();
        for (int i = 0; i < hits.length; i++) {
            System.out.println(hits[i].getSourceAsString());
        }
    }
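The Kibana bool query above (must / must_not / should) maps onto `BoolQueryBuilder` in the Java API. This sketch is an addition, not original article code; it assumes the same test class, imports, and ESClient as the methods above, and a live cluster to return hits:

```java
    // bool query: the Java counterpart of the compound query above,
    // combining term queries with must / mustNot / should
    @Test
    public void boolQueryTest() throws IOException{
        RestHighLevelClient highLevelClient = ESClient.getInstance().getHighLevelClient();
        SearchRequest searchRequest = new SearchRequest("zcm_test01");
        SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
        BoolQueryBuilder boolQuery = QueryBuilders.boolQuery()
                .must(QueryBuilders.termQuery("name", "我"))
                .mustNot(QueryBuilders.termQuery("name", "qv"))
                .should(QueryBuilders.termQuery("name", "爱"));
        searchSourceBuilder.query(boolQuery);
        searchRequest.source(searchSourceBuilder);
        SearchResponse search = highLevelClient.search(searchRequest, RequestOptions.DEFAULT);
        for (SearchHit hit : search.getHits().getHits()) {
            System.out.println(hit.getSourceAsString());
        }
    }
```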

3) Update

3.1) Single update

        There are two ways to modify a document: PUT replaces the whole document, while POST to the _update endpoint applies a partial update; in both cases the version number increments.

        Single update in Kibana:

PUT /product/_doc/3
{
    "name" : "xiaomi erji",
    "desc" :  "erji zhong de huangmenji",
    "price" :  999,
    "tags": [ "low", "bufangshui", "yinzhicha" ]
}

POST /product/_update/3
{
    "doc" : { "name" : "xiaomi erji new" }
}

        Single update with the Java API:

        A full replacement works the same as the insert shown earlier (indexing to an existing id overwrites the document).
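The partial-update case (POST to _update) uses `UpdateRequest` rather than `IndexRequest`. A minimal sketch, added here because the article gives no code for it; the index, id, and field values are chosen to match the examples above, and it follows the same test-method pattern (same class, imports, and live cluster assumed):

```java
    // partial update: only the fields in "doc" are changed; the rest of the document is kept
    @Test
    public void updateDocumentTest() throws IOException{
        RestHighLevelClient highLevelClient = ESClient.getInstance().getHighLevelClient();
        UpdateRequest updateRequest = new UpdateRequest("zcm_test01", "1");
        Map<String, Object> doc = new HashMap<>();
        doc.put("desc", "风影 updated");
        updateRequest.doc(doc);
        UpdateResponse updateResponse = highLevelClient.update(updateRequest, RequestOptions.DEFAULT);
        System.out.println(updateResponse.getResult());
        ESClient.getInstance().closeClient();
    }
```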

3.2) Bulk update

        This follows the same pattern as the bulk insert shown earlier.

        Bulk update in Kibana:

POST _bulk
  {"update":{"_index":"zcm_test01","_type":"_doc","_id":"2"}}
  { "doc" : {"location" : "南京市栖霞区马群街道28号"} }
  {"update":{"_index":"zcm_test01","_type":"_doc","_id":"3"}}
  { "doc" : {"location" : "南京市栖霞区马群街道28号"} }
  {"update":{"_index":"zcm_test01","_type":"_doc","_id":"4"}}
  { "doc" : {"location" : "南京市栖霞区马群街道28号"} }
  {"update":{"_index":"zcm_test01","_type":"_doc","_id":"5"}}
  { "doc" : {"location" : "南京市栖霞区马群街道28号"} }

        Bulk update with the Java API:
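The article gives no code here, so this is a sketch of the Java counterpart of the Kibana bulk update above: `UpdateRequest`s added to a `BulkRequest`, with ids and values mirroring the Kibana example (same test class, imports, and live cluster assumed):

```java
    @Test
    public void batchUpdateTest() throws IOException{
        RestHighLevelClient highLevelClient = ESClient.getInstance().getHighLevelClient();
        BulkRequest bulkRequest = new BulkRequest();
        // one partial update per document id, matching the Kibana example
        for (int id = 2; id <= 5; id++) {
            Map<String, Object> doc = new HashMap<>();
            doc.put("location", "南京市栖霞区马群街道28号");
            bulkRequest.add(new UpdateRequest("zcm_test01", String.valueOf(id)).doc(doc));
        }
        BulkResponse response = highLevelClient.bulk(bulkRequest, RequestOptions.DEFAULT);
        System.out.println("failures: " + response.hasFailures());
        ESClient.getInstance().closeClient();
    }
```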

4) Delete

4.1) Single delete

        This follows the same pattern as the operations shown earlier.

        Single delete in Kibana:

# delete a single document
DELETE /zcm_test/_doc/1
# delete the whole index
DELETE /zcm_test

        Single delete with the Java API:

    // delete operation
    @Test
    public void deleteDocTest() throws IOException{
        RestHighLevelClient highLevelClient = ESClient.getInstance().getHighLevelClient();
        // delete by id
        DeleteRequest deleteRequest = new DeleteRequest("zcm_test01", "1");
        DeleteResponse deleteResponse = highLevelClient.delete(deleteRequest, RequestOptions.DEFAULT);
        System.out.println(deleteResponse.toString());
        ESClient.getInstance().closeClient();
    }

4.2) Bulk delete

        This follows the same pattern as the bulk insert shown earlier.

        Bulk delete in Kibana:

POST _bulk
  { "delete" : { "_index" : "test", "_type" : "_doc", "_id" : "1" } }
  { "delete" : { "_index" : "test", "_type" : "_doc", "_id" : "2" } }
  { "delete" : { "_index" : "test", "_type" : "_doc", "_id" : "3" } }

        Bulk delete with the Java API:
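The article gives no code here either, so this is a sketch of the Java counterpart of the Kibana bulk delete above: `DeleteRequest`s added to a `BulkRequest`, with the index name adapted to this article's examples (same test class, imports, and live cluster assumed):

```java
    @Test
    public void batchDeleteTest() throws IOException{
        RestHighLevelClient highLevelClient = ESClient.getInstance().getHighLevelClient();
        BulkRequest bulkRequest = new BulkRequest();
        // one DeleteRequest per document id
        for (int id = 1; id <= 3; id++) {
            bulkRequest.add(new DeleteRequest("zcm_test01", String.valueOf(id)));
        }
        BulkResponse response = highLevelClient.bulk(bulkRequest, RequestOptions.DEFAULT);
        System.out.println("failures: " + response.hasFailures());
        ESClient.getInstance().closeClient();
    }
```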
