
谷粒商城 (Gulimall) - Advanced Notes - aiueo

105 Basic search 105.1 _cat
GET /_cat/nodes : list all nodes
GET /_cat/health : check ES cluster health
GET /_cat/master : show the elected master node
GET /_cat/indices : list all indices

107 Optimistic-locking fields

_seq_no : concurrency-control field; incremented on every update, used for optimistic locking.

_primary_term : used the same way; it changes whenever the primary shard is reassigned, e.g. after a restart.

107.1 Concurrent updates

When two updates are sent for the same document at the same time, add if_seq_no=1&if_primary_term=1 to control the concurrency: only the request whose values still match the document's current ones succeeds; the other fails with a version conflict.

PUT  http://xxx.xxx.xxx.xxx/index/type/1?if_seq_no=1&if_primary_term=1
{
 "name":1
}
PUT  http://xxx.xxx.xxx.xxx/index/type/1?if_seq_no=1&if_primary_term=1
{
 "name":1
}
113. match_phrase phrase matching

The value to match is treated as one whole phrase: its terms must all appear, adjacent and in the original order, instead of being matched as independent words.

GET bank/_search
{
  "query": {
    "match_phrase": {
      "address": "mill road"
    }
  }
}
128. Analysis of the SKU storage model in ES

ES DSL

PUT product
{ 
    "mappings": { 
        "properties": { 
            "skuId": { 
                "type": "long" 
            },
            "spuId": { 
                "type": "keyword" 
            },
            "skuTitle": { 
                "type": "text", 
                "analyzer": "ik_smart" 
            },
            "skuPrice": { 
                "type": "keyword" 
            },
            "skuImg": { 
                "type": "keyword",
                "index": false, 
                "doc_values": false  # cannot be aggregated or sorted on
            },
            "saleCount": { 
                "type": "long" 
            },
            "hasStock": { 
                "type": "boolean" 
            },
            "hotScore": {
                "type": "long" 
            },
            "brandId": { 
                "type": "long" 
            },
            "catalogId": { 
                "type": "long" 
            },
            "brandName": { 
                "type": "keyword", 
                "index": false, 
                "doc_values": false 
            },
            "brandImg": { 
                "type": "keyword", 
                "index": false, 
                "doc_values": false 
            },
            "catalogName": { 
                "type": "keyword", 
                "index": false,
                "doc_values": false 
            },
            "attrs": { 
                "type": "nested",  # prevents the object array from being flattened
                "properties": { 
                    "attrId": { 
                        "type": "long" 
                    },
                    "attrName": { 
                        "type": "keyword", 
                        "index": false, 
                        "doc_values": false 
                    },
                    "attrValue": { 
                        "type": "keyword" 
                    } 
                } 
            } 
        } 
    }
}

Redundantly stored fields do not need to take part in search or aggregation, so their "index" and "doc_values" settings are both set to false.
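To illustrate what `index: false` buys (a hypothetical query against the mapping above): searching on such a field is rejected outright, because no inverted index was built for it.

```json
GET product/_search
{
  "query": {
    "match": { "brandName": "Apple" }
  }
}

# ES rejects this with an error along the lines of:
# "Cannot search on field [brandName] since it is not indexed."
```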

132. R - generic result wrapper

132.1 Getting values out of a List

If the List holds reference types (objects) rather than primitives, and the object has a unique field (corresponding to the entity's primary key id in the database), you can wrap the list into a Map for fast lookups.

The object:

@Data
public class SkuHasStockVo {
    private Long skuId;        // primary key id, unique
    private Boolean hasStock;
}

Wrap the object collection into a Map:

Map<Long, Boolean> stockMap = null;
try {
    R skuHasStock = wareFeignService.getSkuHasStock(skuIdList);

    List<SkuHasStockVo> data = (List<SkuHasStockVo>) skuHasStock.get("data");

    stockMap = data.stream().collect(Collectors.toMap(
            skuHasStockVo -> skuHasStockVo.getSkuId(),   // key
            skuHasStockVo -> skuHasStockVo.getHasStock() // value
    ));
} catch (Exception e) {
    log.error("stock service query failed: {}", e);
}

Inside the outer loop there is then no need to iterate the object collection again; just look the value up in the Map by primary key id.

List<SkuEsModel> upProduct = skus.stream().map(sku -> {

    //......
    //TODO : stock
    if (finalStockMap == null) {
        esModel.setHasStock(true);
    } else {
        // Look the value up in the Map we built, instead of scanning the list each time
        esModel.setHasStock(finalStockMap.get(sku.getSkuId()));
    }
    return esModel;
}).collect(Collectors.toList());
132.2 Catching remote-call failures

A remote call can fail, so you must handle the failure case yourself.

Map<Long, Boolean> stockMap = null;
try {
    R skuHasStock = wareFeignService.getSkuHasStock(skuIdList);

    List<SkuHasStockVo> data = (List<SkuHasStockVo>) skuHasStock.get("data");

    stockMap = data.stream().collect(Collectors.toMap(
            skuHasStockVo -> skuHasStockVo.getSkuId(),
            skuHasStockVo -> skuHasStockVo.getHasStock()
    ));
} catch (Exception e) {
    log.error("stock service query failed: {}", e);
}

135. TypeReference

To get a concrete type out of the R object returned by a remote call without casting by hand, use fastjson's TypeReference.

In R:

// deserialize with fastjson
public <T> T getData(TypeReference<T> typeReference) {
    Object data = this.get("data"); // a Map by default
    String s = JSON.toJSONString(data);
    T t = JSON.parseObject(s, (Type) typeReference);
    return t;
}

Getting the concrete type from R:

R r = wareFeignService.getSkuHasStock(skuIdList);

TypeReference<List<SkuHasStockVo>> typeReference = new TypeReference<List<SkuHasStockVo>>() {};

stockMap = r.getData(typeReference).stream().collect(Collectors.toMap(
        skuHasStockVo -> skuHasStockVo.getSkuId(),
        skuHasStockVo -> skuHasStockVo.getHasStock()
));
137. Rendering level-1 category data 137.1 thymeleaf namespace

xmlns:th="http://www.thymeleaf.org"

137.2 Quickly recompiling pages

Shortcut: Ctrl+Shift+F9 (IntelliJ IDEA: recompile the current file)

139. Setting up domain-name access, part 1
server {
    listen       80;
    server_name  gulimall.com;

  
    location / {
        proxy_pass   http://192.168.31.57:7000; # LAN address of the local Windows machine
    }
}

140. Setting up domain-name access, part 2 - load balancing to the gateway 140.1 nginx conf

http block:

 upstream gulimall {
      server 192.168.31.57:88;
    }

server block:

server {
    listen       80;
    server_name  gulimall.com;
    
    location / {
        proxy_pass   http://gulimall;
    }
}
140.2 gateway-application.yml

This route must be placed after all the other routes, because its Host predicate matches very broadly.

 - id: gulimall-host-route # host-based route
            uri: lb://product-service
            predicates:
              - Host=**.gulimall.com,gulimall.com

140.3 The problem with nginx proxying to the gateway

When nginx proxies a request to the gateway, it drops the request's Host header by default, so the gateway's Host predicate can no longer match.

server {
    listen       80;
    server_name  gulimall.com;
    
    location / {
        proxy_set_header Host $host; # re-attach the Host header
        proxy_pass   http://gulimall;
    }
}
145. JVisualVM

Launch:

cmd -> jvisualvm

145.1 Thread states

Running: currently executing

Sleeping: inside sleep()

Wait: inside wait()

Park: idle threads parked in a thread pool

Monitor: blocked threads waiting to acquire a lock
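The JVisualVM labels map onto `Thread.State` values; a minimal local sketch (the label mapping in the comments is my reading of the glossary above, not JVisualVM output):

```java
public class ThreadStates {
    public static void main(String[] args) throws Exception {
        Object lock = new Object();

        // "Sleeping": a thread inside sleep()
        Thread sleeping = new Thread(() -> {
            try { Thread.sleep(5_000); } catch (InterruptedException ignored) { }
        });

        // "Wait": a thread inside wait()
        Thread waiting = new Thread(() -> {
            synchronized (lock) {
                try { lock.wait(); } catch (InterruptedException ignored) { }
            }
        });

        sleeping.start();
        waiting.start();

        // Poll until each thread has actually reached its target state
        while (sleeping.getState() != Thread.State.TIMED_WAITING) Thread.sleep(10);
        while (waiting.getState() != Thread.State.WAITING) Thread.sleep(10);

        System.out.println(sleeping.getState()); // TIMED_WAITING -> shown as "Sleeping"
        System.out.println(waiting.getState());  // WAITING       -> shown as "Wait"

        synchronized (lock) { lock.notifyAll(); } // release the waiter
        sleeping.interrupt();                     // wake the sleeper so the JVM exits promptly
    }
}
```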

146. Middleware impact on performance 146.1 Monitoring container status with docker
docker stats
150. Optimizing three-level category retrieval 150.1 Version Origin

Nested round-trip after round-trip to the database makes the IO cost huge; the frequent network exchanges make this interface's performance terrible.

 @Override
    public Map<Long, List<Catelog2Vo>> getCatalogJson() {

        //1. Find all level-1 categories
        List<CategoryEntity> level1Categorys = this.getLevel1Categorys();

        //2. Assemble the data
        Map<Long, List<Catelog2Vo>> parent_cid = level1Categorys.stream().collect(Collectors.toMap(
                k -> k.getCatId(),
                v -> {
                    List<CategoryEntity> categoryEntities
                            = baseMapper.selectList(new QueryWrapper<CategoryEntity>().eq("parent_cid", v.getCatId()));
                    List<Catelog2Vo> catelog2Vos = null;
                    if (categoryEntities != null) {
                        catelog2Vos = categoryEntities.stream().map(l2 -> {
                            Catelog2Vo catelog2Vo = new Catelog2Vo(v.getCatId().toString(), null, l2.getCatId().toString(), l2.getName());
                            // Find the level-3 categories of this level-2 category and wrap them into vos
                            List<CategoryEntity> level3Catelog =
                                    baseMapper.selectList(new QueryWrapper<CategoryEntity>().eq("parent_cid", l2.getCatId()));
                            if (level3Catelog != null) {
                                List<Catelog2Vo.Catelog3Vo> collect = level3Catelog.stream().map(l3 -> {
                                    //2. Wrap into the required format
                                    Catelog2Vo.Catelog3Vo catelog3Vo = new Catelog2Vo.Catelog3Vo(l2.getCatId().toString(), l3.getCatId().toString(), l3.getName());
                                    return catelog3Vo;
                                }).collect(Collectors.toList());
                                catelog2Vo.setCatalog3List(collect);
                            }
                            return catelog2Vo;
                        }).collect(Collectors.toList());
                    }
                    return catelog2Vos;
                }
        ));

        return parent_cid;
    }
150.2 Version X

Collapse the many database queries into a single one: fetch the whole category table once, and whenever a parent id's children are needed, look them up in the local in-memory list instead.
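The `findSonCategory` helper used below is not shown in these notes; a minimal sketch of what it presumably does (filter the pre-fetched list by parent id; `CategoryEntity` is reduced here to the two fields the lookup needs):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class FindSonCategoryDemo {
    // Minimal stand-in for CategoryEntity: just the fields the lookup needs.
    static class CategoryEntity {
        final Long catId;
        final Long parentCid;
        CategoryEntity(Long catId, Long parentCid) { this.catId = catId; this.parentCid = parentCid; }
        Long getCatId() { return catId; }
        Long getParentCid() { return parentCid; }
    }

    // Presumed shape of findSonCategory: filter the one-shot in-memory list
    // instead of issuing another SELECT ... WHERE parent_cid = ?
    static List<CategoryEntity> findSonCategory(List<CategoryEntity> all, Long parentCid) {
        return all.stream()
                .filter(e -> e.getParentCid().equals(parentCid))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<CategoryEntity> all = Arrays.asList(
                new CategoryEntity(1L, 0L),
                new CategoryEntity(2L, 1L),
                new CategoryEntity(3L, 1L),
                new CategoryEntity(4L, 2L));
        System.out.println(findSonCategory(all, 1L).size()); // number of children of catId 1
    }
}
```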

   @Override
    public Map<Long, List<Catelog2Vo>> getCatalogJson() {

        List<CategoryEntity> selectList = baseMapper.selectList(null);

        //1. Find all level-1 categories
        List<CategoryEntity> level1Categorys = selectList.stream().filter(categoryEntity -> categoryEntity.getParentCid() == 0).collect(Collectors.toList());

        //2. Assemble the data
        Map<Long, List<Catelog2Vo>> parent_cid = level1Categorys.stream().collect(Collectors.toMap(
                k -> k.getCatId(),
                v -> {
                    List<CategoryEntity> categoryEntities
                            = findSonCategory(selectList, v.getCatId());
                    List<Catelog2Vo> catelog2Vos = null;
                    if (categoryEntities != null) {
                        catelog2Vos = categoryEntities.stream().map(l2 -> {
                            Catelog2Vo catelog2Vo = new Catelog2Vo(v.getCatId().toString(), null, l2.getCatId().toString(), l2.getName());
                            // Find the level-3 categories of this level-2 category and wrap them into vos
                            List<CategoryEntity> level3Catelog =
                                    findSonCategory(selectList, l2.getCatId());
                            if (level3Catelog != null) {
                                List<Catelog2Vo.Catelog3Vo> collect = level3Catelog.stream().map(l3 -> {
                                    //2. Wrap into the required format
                                    Catelog2Vo.Catelog3Vo catelog3Vo = new Catelog2Vo.Catelog3Vo(l2.getCatId().toString(), l3.getCatId().toString(), l3.getName());
                                    return catelog3Vo;
                                }).collect(Collectors.toList());
                                catelog2Vo.setCatalog3List(collect);
                            }
                            return catelog2Vo;
                        }).collect(Collectors.toList());
                    }
                    return catelog2Vos;
                }
        ));

        return parent_cid;
    }
156. Locking - solving cache breakdown

Under high concurrency, make each service instance query the database only once.

156.1 version origin

getCatalogJSONFromDB

 public Map<String, List<Catelog2Vo>> getCatalogJsonFromDb() {

        synchronized (this) {
            // After acquiring the lock, check the cache once more; only query if it is still empty
            String catalogJSON = redisTemplate.opsForValue().get("catalogJSON");
            if (!StringUtils.isEmpty(catalogJSON)) {
                // Cache is not null, return directly
                Map<String, List<Catelog2Vo>> result = JSON.parseObject(catalogJSON, new TypeReference<Map<String, List<Catelog2Vo>>>() {
                });
                return result;
            }
            System.out.println("queried the database......");
            // Query the db
            List<CategoryEntity> selectList = baseMapper.selectList(null);

            //1. Find all level-1 categories
            List<CategoryEntity> level1Categorys = selectList.stream().filter(categoryEntity -> categoryEntity.getParentCid() == 0).collect(Collectors.toList());

            //2. Assemble the data
            Map<String, List<Catelog2Vo>> parent_cid = level1Categorys.stream().collect(Collectors.toMap(
                    k -> k.getCatId().toString(),
                    v -> {
                        List<CategoryEntity> categoryEntities
                                = findSonCategory(selectList, v.getCatId());
                        List<Catelog2Vo> catelog2Vos = null;
                        if (categoryEntities != null) {
                            catelog2Vos = categoryEntities.stream().map(l2 -> {
                                Catelog2Vo catelog2Vo = new Catelog2Vo(v.getCatId().toString(), null, l2.getCatId().toString(), l2.getName());
                                // Find the level-3 categories of this level-2 category and wrap them into vos
                                List<CategoryEntity> level3Catelog =
                                        findSonCategory(selectList, l2.getCatId());
                                if (level3Catelog != null) {
                                    List<Catelog2Vo.Catelog3Vo> collect = level3Catelog.stream().map(l3 -> {
                                        //2. Wrap into the required format
                                        Catelog2Vo.Catelog3Vo catelog3Vo = new Catelog2Vo.Catelog3Vo(l2.getCatId().toString(), l3.getCatId().toString(), l3.getName());
                                        return catelog3Vo;
                                    }).collect(Collectors.toList());
                                    catelog2Vo.setCatalog3List(collect);
                                }
                                return catelog2Vo;
                            }).collect(Collectors.toList());
                        }
                        return catelog2Vos;
                    }
            ));

            return parent_cid;
        }
    }


getCatalogJSON

@Override
    public Map<String, List<Catelog2Vo>> getCatalogJson() {

        String catalogJSON = redisTemplate.opsForValue().get("catalogJSON");

        if (StringUtils.isEmpty(catalogJSON)) {
            // Nothing in the cache, query the database
            System.out.println("cache miss..... querying the database");
            Map<String, List<Catelog2Vo>> catalogJsonFromDb = getCatalogJsonFromDb();
            // Put the queried data back into the cache: serialize the object to JSON first
            String s = JSON.toJSONString(catalogJsonFromDb);
            redisTemplate.opsForValue().set("catalogJSON", s, 1, TimeUnit.DAYS);

            return catalogJsonFromDb;
        }
        System.out.println("cache hit..... returning directly");
        Map<String, List<Catelog2Vo>> result = JSON.parseObject(catalogJSON, new TypeReference<Map<String, List<Catelog2Vo>>>() {});
        return result;
    }
156.2 version X

Put the three classic steps (check the cache, query the DB, write the cache) atomically inside the synchronized block.

Move the write-to-cache operation into the synchronized block:

// Put the queried data back into the cache: serialize the object to JSON first
String s = JSON.toJSONString(catalogJsonFromDb);
redisTemplate.opsForValue().set("catalogJSON", s, 1, TimeUnit.DAYS);

getCatalogJSONFromDb

public Map<String, List<Catelog2Vo>> getCatalogJsonFromDb() {

        synchronized (this) {
            // After acquiring the lock, check the cache once more; only query if it is still empty
            String catalogJSON = redisTemplate.opsForValue().get("catalogJSON");
            if (!StringUtils.isEmpty(catalogJSON)) {
                // Cache is not null, return directly
                Map<String, List<Catelog2Vo>> result = JSON.parseObject(catalogJSON, new TypeReference<Map<String, List<Catelog2Vo>>>() {
                });
                return result;
            }
            System.out.println("queried the database......");
            // Query the db
            List<CategoryEntity> selectList = baseMapper.selectList(null);

            //1. Find all level-1 categories
            List<CategoryEntity> level1Categorys = selectList.stream().filter(categoryEntity -> categoryEntity.getParentCid() == 0).collect(Collectors.toList());

            //2. Assemble the data
            Map<String, List<Catelog2Vo>> parent_cid = level1Categorys.stream().collect(Collectors.toMap(
                    k -> k.getCatId().toString(),
                    v -> {
                        List<CategoryEntity> categoryEntities
                                = findSonCategory(selectList, v.getCatId());
                        List<Catelog2Vo> catelog2Vos = null;
                        if (categoryEntities != null) {
                            catelog2Vos = categoryEntities.stream().map(l2 -> {
                                Catelog2Vo catelog2Vo = new Catelog2Vo(v.getCatId().toString(), null, l2.getCatId().toString(), l2.getName());
                                // Find the level-3 categories of this level-2 category and wrap them into vos
                                List<CategoryEntity> level3Catelog =
                                        findSonCategory(selectList, l2.getCatId());
                                if (level3Catelog != null) {
                                    List<Catelog2Vo.Catelog3Vo> collect = level3Catelog.stream().map(l3 -> {
                                        //2. Wrap into the required format
                                        Catelog2Vo.Catelog3Vo catelog3Vo = new Catelog2Vo.Catelog3Vo(l2.getCatId().toString(), l3.getCatId().toString(), l3.getName());
                                        return catelog3Vo;
                                    }).collect(Collectors.toList());
                                    catelog2Vo.setCatalog3List(collect);
                                }
                                return catelog2Vo;
                            }).collect(Collectors.toList());
                        }
                        return catelog2Vos;
                    }
            ));

            // Put the queried data into the cache before releasing the lock
            String s = JSON.toJSONString(parent_cid);
            redisTemplate.opsForValue().set("catalogJSON", s, 1, TimeUnit.DAYS);

            return parent_cid;
        }
    }


getCatalogJSON

@Override
    public Map<String, List<Catelog2Vo>> getCatalogJson() {

        String catalogJSON = redisTemplate.opsForValue().get("catalogJSON");

        if (StringUtils.isEmpty(catalogJSON)) {
            // Nothing in the cache, query the database
            System.out.println("cache miss..... querying the database");
            Map<String, List<Catelog2Vo>> catalogJsonFromDb = getCatalogJsonFromDb();

            return catalogJsonFromDb;
        }
        System.out.println("cache hit..... returning directly");
        Map<String, List<Catelog2Vo>> result = JSON.parseObject(catalogJSON, new TypeReference<Map<String, List<Catelog2Vo>>>() {});
        return result;
    }

158. Distributed locks - principle and usage

Summary: when locking (setnx plus set expire time), the steps must be atomic; when unlocking (look up the lock by key, then delete it), the steps must be atomic as well.

Use a Lua script to guarantee that checking the lock and deleting it happen atomically:

 
    public Map<String, List<Catelog2Vo>> getCatalogJsonFromDbWithRedisLock() {

        //1. Take the distributed lock: claim the slot in redis
        String uuid = UUID.randomUUID().toString();
        Boolean lock = redisTemplate.opsForValue().setIfAbsent("lock", uuid, 300, TimeUnit.SECONDS);

        if (Boolean.TRUE.equals(lock)) {
            // Unlock with a Lua script
            Map<String, List<Catelog2Vo>> dataFromDb;
            try {
                dataFromDb = getDataFromDb();
            } finally {
                String luascript = "if redis.call('get', KEYS[1]) == ARGV[1] then return redis.call('del', KEYS[1]) else return 0 end";
                redisTemplate.execute(new DefaultRedisScript<>(luascript, Long.class), Arrays.asList("lock"), uuid);
            }

            return dataFromDb;
        } else {
            // Failed to take the lock ... retry
            try {
                TimeUnit.MILLISECONDS.sleep(200);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            return getCatalogJsonFromDbWithRedisLock(); // spin
        }
    }
159. Distributed locks - Redisson
@GetMapping("/hello")
    public void hello() {
        //1. Get a lock; as long as the name is the same, it is the same lock
        RLock lock = redisson.getLock("my-lock");

        //2. Lock: blocking wait; as long as the lock isn't acquired, keep waiting
        lock.lock();
        try {
            System.out.println("locked, running business => " + Thread.currentThread().getId());
            TimeUnit.SECONDS.sleep(30);
        } catch (Exception e) {

        } finally {
            //3. Unlock
            System.out.println("unlocking => " + Thread.currentThread().getId());
            lock.unlock();
        }
    }

Redisson's own distributed-lock implementation solves several problems:

Automatic lock renewal: if the business logic runs long, the lock is automatically extended by another 30s while it runs, so there is no need to worry about the lock expiring and being deleted before the business finishes.

As soon as the locked business logic completes, renewal stops; even without a manual unlock, the lock is deleted automatically after the default 30s.

161. WatchDog

161.1 lock.lock(20, TimeUnit.SECONDS);

Once the lock's expiry is set manually, there is no automatic renewal, so the auto-unlock time must be set longer than the business execution time.

161.2 Best practice

Set the lock's expiry explicitly (saving the whole renewal machinery) and unlock manually.

162. Read-write lock 162.1 Write lock
@GetMapping("/write")
    public String writevalue(){

        String s = "";
        RReadWriteLock lock = redisson.getReadWriteLock("rw-lock");
        RLock wlock = lock.writeLock();
        wlock.lock();
        try {
            s = UUID.randomUUID().toString();
            TimeUnit.SECONDS.sleep(30);
            redisTemplate.opsForValue().set("writevalue", s);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }finally {
            wlock.unlock();
        }
        return s;
    }
162.2 Read lock
 @GetMapping("/read")
    public String readValue(){
        RReadWriteLock lock = redisson.getReadWriteLock("rw-lock");
        RLock rlock = lock.readLock();
        String s = "";
        rlock.lock();
        try {
            s = redisTemplate.opsForValue().get("writevalue");
        }finally {
            rlock.unlock();
        }
        return s;
    }
162.3 Notes

As long as a write lock is held, all other readers and writers must wait; concurrent read locks, by contrast, do not block each other.
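The same semantics can be observed locally with `java.util.concurrent`'s ReentrantReadWriteLock, a single-JVM analogy to the Redisson RReadWriteLock above (not part of the original notes):

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class RwLockDemo {
    public static void main(String[] args) {
        ReentrantReadWriteLock rw = new ReentrantReadWriteLock();

        rw.readLock().lock();
        // Multiple read locks may be held at once...
        System.out.println(rw.readLock().tryLock());   // true: a second read lock succeeds
        // ...but a writer cannot get in while any read lock is held
        System.out.println(rw.writeLock().tryLock());  // false: write must wait for readers
        rw.readLock().unlock();
        rw.readLock().unlock();

        rw.writeLock().lock();
        // While the write lock is held, readers and other writers would block
        System.out.println(rw.writeLock().isHeldByCurrentThread()); // true
        rw.writeLock().unlock();
    }
}
```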

163. Semaphore

Initialize the semaphore's value in redis-cli:

set park 3

acquire - take one parking spot:

@GetMapping("/park")
    public String park() throws InterruptedException {
        RSemaphore park = redisson.getSemaphore("park");
        park.acquire(); // acquire one permit - take a parking spot
        return "acquire ok";
  }

release - free one parking spot:

  @GetMapping("/go")
    public String go(){
        RSemaphore park = redisson.getSemaphore("park");
        park.release();
        return "release ok";
  }
164. CountDownLatch 164.1 set
 @GetMapping("/lockDoor")
    public String lockDoor() throws InterruptedException {
        RCountDownLatch countDownLatch = redisson.getCountDownLatch("door");
        countDownLatch.trySetCount(5);
        countDownLatch.await();
        return "school's out.....";
    }
164.2 countDown
   @GetMapping("/gogogo/{id}")
    public String gogogo(@PathVariable("id") Long id){
        RCountDownLatch countDownLatch = redisson.getCountDownLatch("door");
        countDownLatch.countDown();
        return "everyone in class " + id + " has left";
  }
166. Cache consistency 166.1 Locking with Redisson
 public Map<String, List<Catelog2Vo>> getCatalogJsonFromDbWithRedissonLock() {

        RLock lock = redisson.getLock("catalogJson-lock");
        lock.lock();
        Map<String, List<Catelog2Vo>> dataFromDb;
        try {
            dataFromDb = getDataFromDb();
        } finally {
            lock.unlock();
        }
        return dataFromDb;
    }
166.2 Cache consistency - double-write mode

Because of stalls and the like, cache write 2 can land first and cache write 1 after it, leaving the cache inconsistent with the database.

This is transient dirty data; once writes settle down and the cache entry expires, the next read fetches the latest correct value again.

166.3 Cache consistency - invalidation mode

166.4 Cache consistency - our solution

Our system's consistency solution:

All cached data gets an expiry time; once it expires, the next query triggers an active refresh.

For data that is rarely written, take a distributed read-write lock around reads and writes.

168. Integrating Spring Cache 168.1 Dependencies

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-cache</artifactId>
</dependency>

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
168.2 Configuration

spring:
  cache:
    type: redis
168.3 Enable caching

@EnableCaching

168.4 hello-springCache

Test using the cache:

    @Cacheable({"category"})
    @Override
    public List<CategoryEntity> getLevel1Categorys() {
        List<CategoryEntity> categoryEntities = baseMapper.selectList(
                new QueryWrapper<CategoryEntity>().eq("parent_cid", 0));
        return categoryEntities;
    }

169 Annotations

169.1 @Cacheable

@Cacheable : triggers saving data into the cache. If the entry is already cached, the method is not invoked; if not, the method runs and its result is put into the cache. Every piece of cached data specifies which named cache it goes into [cache partitions (split by business type) for easier management].

169.1.1 Default behavior:

① If the cache already has the entry, the method is not invoked.

② The key is generated automatically by default => cacheName::SimpleKey [] (auto-generated key value)

③ The cached value is serialized with the JDK serialization mechanism by default and stored in Redis.

④ Default TTL: -1 (never expires)

169.1.2 Custom configuration

① Specify the cache key with the key attribute, which accepts a SpEL expression:

    @Cacheable(cacheNames = {"category"}, key = "'level1Categorys'")
    @Override
    public List<CategoryEntity> getLevel1Categorys() {
        List<CategoryEntity> categoryEntities = baseMapper.selectList(
                new QueryWrapper<CategoryEntity>().eq("parent_cid", 0));
        return categoryEntities;
    }
@Cacheable(cacheNames = {"category"}, key = "#root.method.name")

② Specify the TTL of cached data in the config file:

spring:
  cache:
    type: redis
    redis:
      time-to-live: 3600000

③.① Store values as JSON. Version Origin: defining this bean makes the settings in the config file stop taking effect.

@Configuration
@EnableCaching
public class MyCacheConfig {

    
    @Bean
    RedisCacheConfiguration redisCacheConfiguration(){

        RedisCacheConfiguration config = RedisCacheConfiguration.defaultCacheConfig();

        config = config.serializeKeysWith(
                RedisSerializationContext.SerializationPair.fromSerializer(new StringRedisSerializer()));

        config = config.serializeValuesWith(
                RedisSerializationContext.SerializationPair.fromSerializer(new GenericJackson2JsonRedisSerializer()));
        return config;

    }
}

③.② Inject CacheProperties.class so that the config-file settings take effect:

@EnableConfigurationProperties(CacheProperties.class)
@Configuration
@EnableCaching
public class MyCacheConfig {

    @Autowired
    private CacheProperties cacheProperties;

    
    @Bean
    RedisCacheConfiguration redisCacheConfiguration(){

        CacheProperties.Redis redisProperties = cacheProperties.getRedis();

        RedisCacheConfiguration config = RedisCacheConfiguration.defaultCacheConfig();

        config = config.serializeKeysWith(
                RedisSerializationContext.SerializationPair.fromSerializer(new StringRedisSerializer()));

        config = config.serializeValuesWith(
                RedisSerializationContext.SerializationPair.fromSerializer(new GenericJackson2JsonRedisSerializer()));

        if (redisProperties.getTimeToLive() != null) {
            config = config.entryTtl(redisProperties.getTimeToLive());
        }

        if (redisProperties.getKeyPrefix() != null) {
            config = config.prefixKeysWith(redisProperties.getKeyPrefix());
        }

        if (!redisProperties.isCacheNullValues()) {
            config = config.disableCachingNullValues();
        }

        if (!redisProperties.isUseKeyPrefix()) {
            config = config.disableKeyPrefix();
        }

        return config;

    }
}
169.1.3 Other settings

spring:
  cache:
    type: redis
    redis:
      time-to-live: 3600000
      key-prefix: CACHE_ # key prefix
      use-key-prefix: true # whether to prepend the prefix
      cache-null-values: true # whether to cache nulls, to counter cache penetration

169.2 @CacheEvict

@CacheEvict : triggers deleting data from the cache.

@Caching : combines several of the operations above.

@CacheEvict(cacheNames = {"category"}, allEntries = true) // evict every entry in the partition

    //@CacheEvict(cacheNames = {"category"},allEntries = true)
    @Caching(evict = {
            @CacheEvict(value = "category",key = "'getLevel1Categorys'"),
            @CacheEvict(value = "category",key = "'getCatalogJson'")
    })
    @Override
    @Transactional
    public void updateCascade(CategoryEntity category) {
        categoryDao.updateById(category);
        categoryBrandRelationService.updateCategory(category.getCatId(),category.getName());
    }

@CachePut : updates the cache without affecting method execution; useful for double-write mode.

@CacheConfig : shares common cache settings at class level.

172. Spring Cache - principles and shortcomings

Read mode:

        Cache penetration: querying data that is null. Fix: cache the empty result; cache-null-values=true

        Cache breakdown: a flood of concurrent requests querying one key just as it expires. Fix: locking (not applied by default; see 172.1)

        Cache avalanche: a large number of keys expiring at the same time. Fix: add a random offset on top of the expiry time: spring.cache.redis.time-to-live

Write mode (keeping cache and database consistent):

        Lock reads and writes

        Introduce Canal to observe MySQL updates and refresh the cache accordingly

        For read-heavy and write-heavy data, just query the database directly
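The randomized-expiry fix for cache avalanche can be sketched as follows (the base TTL and jitter range are illustrative values, not from the notes):

```java
import java.util.concurrent.ThreadLocalRandom;

public class TtlJitter {
    // Spread expirations out: base TTL plus a random offset, so keys
    // written together do not all expire in the same instant.
    static long ttlWithJitter(long baseSeconds, long maxJitterSeconds) {
        return baseSeconds + ThreadLocalRandom.current().nextLong(maxJitterSeconds + 1);
    }

    public static void main(String[] args) {
        long base = 86_400;  // 1 day, like the catalogJSON cache earlier
        long jitter = 3_600; // up to 1 extra hour
        long ttl = ttlWithJitter(base, jitter);
        System.out.println(ttl >= base && ttl <= base + jitter); // always true
        // e.g. redisTemplate.opsForValue().set("catalogJSON", s, ttl, TimeUnit.SECONDS);
    }
}
```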

172.1 On adding distributed locks for cached data

Spring Cache does no locking by default, so to add locking there are two approaches.

First: hand-write the caching logic yourself.

Second: add a setting on the annotation; but what it implements is a local lock:

sync = true
@Cacheable(cacheNames = {"category"}, key = "#root.method.name", sync = true)
    @Override
    public List<CategoryEntity> getLevel1Categorys() {
        List<CategoryEntity> categoryEntities = baseMapper.selectList(
                new QueryWrapper<CategoryEntity>().eq("parent_cid", 0));
        return categoryEntities;
    }
172.2 Summary

Regular data (read-heavy, write-light, with low demands on timeliness and consistency) can simply use Spring Cache; for write mode it is enough that the cached data has an expiry time.

Special business, special design.

178. Search DSL tests - aggregation tests 178.1 Data migration
POST _reindex 
{
	"source":{
	  "index":"twitter"
	},
	"dest":{
		"index":"new_twitter"
	}
}
178.2 New mapping for product data
PUT gulimall_product
{
  "mappings": {
    "properties": {
      "attrs": {
        "type": "nested",
        "properties": {
          "attrId": {
            "type": "long"
          },
          "attrName": {
            "type": "keyword"
          },
          "attrValue": {
            "type": "keyword"
          }
        }
      },
      "brandId": {
        "type": "long"
      },
      "brandImg": {
        "type": "keyword",
        "index": false,
        "doc_values": false
      },
      "brandName": {
        "type": "keyword"
      },
      "catalogId": {
        "type": "long"
      },
      "catalogName": {
        "type": "keyword"
      },
      "hasStock": {
        "type": "boolean"
      },
      "hotScore": {
        "type": "long"
      },
      "saleCount": {
        "type": "long"
      },
      "skuId": {
        "type": "long"
      },
      "skuImg": {
        "type": "keyword"
      },
      "skuPrice": {
        "type": "keyword"
      },
      "skuTitle": {
        "type": "text",
        "analyzer": "ik_smart"
      },
      "spuId": {
        "type": "keyword"
      }
    }
  }
}
182. Wrapping the ES Response 182.1 On aggregation result types

  // get the brand name
String brandName = ((ParsedStringTerms) bucket.getAggregations().get("brand_name_agg")).getBuckets().get(0).getKeyAsString();

How do you tell the concrete type returned when reading an aggregation from a bucket?

Inspect the ES Response in debug mode to discover the parsed aggregation type.

182.2 Code
 
    private SearchResult buildSearchResult(SearchResponse response, SearchParam searchParam) {

        SearchResult result = new SearchResult();
        SearchHits hits = response.getHits();

        List<SkuEsModel> esModels = new ArrayList<>();
        if (hits.getHits() != null && hits.getHits().length > 0) {
            for (SearchHit hit : hits.getHits()) {
                String sourceAsString = hit.getSourceAsString();
                SkuEsModel esModel = JSON.parseObject(sourceAsString, SkuEsModel.class);
                if (!StringUtils.isEmpty(searchParam.getKeyword())) {
                    HighlightField skuTitle = hit.getHighlightFields().get("skuTitle");
                    String highLightField = skuTitle.getFragments()[0].string();
                    esModel.setSkuTitle(highLightField);
                }
                esModels.add(esModel);
            }
        }
        result.setProducts(esModels);

        ParsedLongTerms catalog_agg = response.getAggregations().get("catalog_agg");
        List<SearchResult.CatalogVo> catalogVos = new ArrayList<>();
        List<? extends Terms.Bucket> buckets = catalog_agg.getBuckets();
        for (Terms.Bucket bucket : buckets) {
            SearchResult.CatalogVo catalogVo = new SearchResult.CatalogVo();
            // catalog id
            String keyAsString = bucket.getKeyAsString();
            catalogVo.setCatalogId(Long.parseLong(keyAsString));
            // catalog name
            ParsedStringTerms catalog_name_agg = bucket.getAggregations().get("catalog_name_agg");
            String catalog_name = catalog_name_agg.getBuckets().get(0).getKeyAsString();
            catalogVo.setCatalogName(catalog_name);
            catalogVos.add(catalogVo);
        }
        result.setCatalogs(catalogVos);

        List<SearchResult.BrandVo> brandVos = new ArrayList<>();
        ParsedLongTerms brand_agg = response.getAggregations().get("brand_agg");
        for (Terms.Bucket bucket : brand_agg.getBuckets()) {
            SearchResult.BrandVo brandVo = new SearchResult.BrandVo();
            // brand id
            long brandId = bucket.getKeyAsNumber().longValue();
            // brand name
            String brandName = ((ParsedStringTerms) bucket.getAggregations().get("brand_name_agg")).getBuckets().get(0).getKeyAsString();
            // brand image
            String brandImg = ((ParsedStringTerms) bucket.getAggregations().get("brand_img_agg")).getBuckets().get(0).getKeyAsString();
            brandVo.setBrandId(brandId);
            brandVo.setBrandName(brandName);
            brandVo.setBrandImg(brandImg);
            brandVos.add(brandVo);
        }
        result.setBrands(brandVos);

        List<SearchResult.AttrVo> attrVos = new ArrayList<>();
        ParsedNested attr_agg = response.getAggregations().get("attr_agg");
        ParsedLongTerms attr_id_agg = attr_agg.getAggregations().get("attr_id_agg");
        for (Terms.Bucket bucket : attr_id_agg.getBuckets()) {
            SearchResult.AttrVo attrVo = new SearchResult.AttrVo();
            // attr id
            long attrId = bucket.getKeyAsNumber().longValue();
            // attr name
            String attrName = ((ParsedStringTerms) bucket.getAggregations().get("attr_name_agg")).getBuckets().get(0).getKeyAsString();
            // all values of the attr
            List<String> attrValues = ((ParsedStringTerms) bucket.getAggregations().get("attr_value_agg")).getBuckets().stream().map(item -> {
                String keyAsString = item.getKeyAsString();
                return keyAsString;
            }).collect(Collectors.toList());
            attrVo.setAttrId(attrId);
            attrVo.setAttrValue(attrValues);
            attrVo.setAttrName(attrName);
            attrVos.add(attrVo);
        }
        result.setAttrs(attrVos);

        // paging - current page number
        result.setPageNum(searchParam.getPageNum());
        // paging - total hits
        long total = hits.getTotalHits().value;
        result.setTotal(total);
        // paging - total pages
        int totalPages = (int) total % ESConstant.PRODUCT_PAGE_SIZE == 0 ?
                (int) total / ESConstant.PRODUCT_PAGE_SIZE :
                ((int) total / ESConstant.PRODUCT_PAGE_SIZE + 1);
        result.setTotalPages(totalPages);
        return result;
    }
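The total-pages arithmetic above can also be written as a single ceiling division; the two forms agree for every total. PAGE_SIZE is shown here as an assumed stand-in constant for ESConstant.PRODUCT_PAGE_SIZE:

```java
public class TotalPagesDemo {
    static final int PAGE_SIZE = 16; // assumed stand-in for ESConstant.PRODUCT_PAGE_SIZE

    // Branching version, as in buildSearchResult above
    static int totalPagesBranch(long total) {
        return (int) total % PAGE_SIZE == 0 ? (int) total / PAGE_SIZE : (int) total / PAGE_SIZE + 1;
    }

    // Equivalent single-expression ceiling division
    static int totalPagesCeil(long total) {
        return (int) ((total + PAGE_SIZE - 1) / PAGE_SIZE);
    }

    public static void main(String[] args) {
        // Both forms agree, including the exact-multiple and zero cases
        for (long total : new long[]{0, 1, 15, 16, 17, 160, 161}) {
            System.out.println(totalPagesBranch(total) == totalPagesCeil(total));
        }
    }
}
```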

Source: reposted from www.mshxw.com; original article: https://www.mshxw.com/it/746623.html