Building a high-availability Sentinel cluster flow control middleware

Continuing from the previous exploration of Sentinel cluster flow control: last time I briefly covered the principle behind it, made some small changes to the official demo, and got it running and taking effect.

This time we go a step further: build a high-availability solution for embedded-mode cluster flow control on top of Sentinel, and package it as a middleware starter that other teams can drop in.

For high availability, there are three problems to solve, and they need solving whether you run in embedded or standalone mode. Of the two, embedded mode is the simpler.

  1. Automatic election of the cluster server
  2. Automatic failover
  3. Persisting Sentinel-Dashboard rules to Apollo

Cluster flow control

First, since most services will probably never need cluster flow control, an annotation is provided so the mode can be enabled explicitly. Only when the annotation is present are the cluster flow control beans instantiated and the rule data loaded.

 @Target({ElementType.TYPE})
@Retention(RetentionPolicy.RUNTIME)
@Import({EnableClusterImportSelector.class})
@Documented
public @interface SentinelCluster {
}

public class EnableClusterImportSelector implements DeferredImportSelector {
    @Override
    public String[] selectImports(AnnotationMetadata annotationMetadata) {
        return new String[]{ClusterConfiguration.class.getName()};
    }
}
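As a usage sketch (the application class and package are placeholders, not from the original project), a service opts in simply by adding the annotation to a Spring Boot configuration class:

```java
// Hypothetical usage: enabling cluster flow control for one service.
// @SentinelCluster pulls in ClusterConfiguration via EnableClusterImportSelector;
// services without the annotation never instantiate any of the cluster beans.
@SentinelCluster
@SpringBootApplication
public class OrderServiceApplication {
    public static void main(String[] args) {
        SpringApplication.run(OrderServiceApplication.class, args);
    }
}
```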

With this in place, whenever the SentinelCluster annotation is picked up during component scanning, ClusterConfiguration is imported and instantiated.

 @Slf4j
public class ClusterConfiguration implements BeanDefinitionRegistryPostProcessor, EnvironmentAware {
    private Environment environment;

    @Override
    public void postProcessBeanDefinitionRegistry(BeanDefinitionRegistry registry) throws BeansException {
        BeanDefinitionBuilder beanDefinitionBuilder = BeanDefinitionBuilder.genericBeanDefinition(ClusterManager.class);
        beanDefinitionBuilder.addConstructorArgValue(this.environment);
        registry.registerBeanDefinition("clusterManager", beanDefinitionBuilder.getBeanDefinition());
    }

    @Override
    public void postProcessBeanFactory(ConfigurableListableBeanFactory beanFactory) throws BeansException {

    }

    @Override
    public void setEnvironment(Environment environment) {
        this.environment = environment;
    }
}

The configuration registers a ClusterManager bean that manages cluster flow control. The logic is the same as in the previous article: register ApolloDataSource instances so that Apollo changes are monitored automatically and take effect dynamically.

 @Slf4j
public class ClusterManager {
    private Environment environment;
    private String namespace;
    private static final String CLUSTER_SERVER_KEY = "sentinel.cluster.server"; // cluster server config
    private static final String DEFAULT_SERVER_VALUE = "{}"; // default (empty) server config
    private static final String DEFAULT_RULE_VALUE = "[]"; // default (empty) rule list
    private static final String FLOW_RULE_KEY = "sentinel.flow.rules"; // flow control rules
    private static final String DEGRADE_RULE_KEY = "sentinel.degrade.rules"; // degrade rules
    private static final String PARAM_FLOW_RULE_KEY = "sentinel.param.rules"; // hot-param flow rules
    private static final String CLUSTER_CLIENT_CONFIG_KEY = "sentinel.client.config"; // client config

    public ClusterManager(Environment environment) {
        this.environment = environment;
        this.namespace = "YourNamespace";
        init();
    }

    private void init() {
        initClientConfig();
        initClientServerAssign();
        registerRuleSupplier();
        initServerTransportConfig();
        initState();
    }

    private void initClientConfig() {
        ReadableDataSource<String, ClusterClientConfig> clientConfigDs = new ApolloDataSource<>(
                namespace,
                CLUSTER_CLIENT_CONFIG_KEY,
                DEFAULT_SERVER_VALUE,
                source -> JacksonUtil.from(source, ClusterClientConfig.class)
        );
        ClusterClientConfigManager.registerClientConfigProperty(clientConfigDs.getProperty());
    }

    private void initClientServerAssign() {
        ReadableDataSource<String, ClusterClientAssignConfig> clientAssignDs = new ApolloDataSource<>(
                namespace,
                CLUSTER_SERVER_KEY,
                DEFAULT_SERVER_VALUE,
                new ServerAssignConverter(environment)
        );
        ClusterClientConfigManager.registerServerAssignProperty(clientAssignDs.getProperty());
    }

    private void registerRuleSupplier() {
        ClusterFlowRuleManager.setPropertySupplier(ns -> {
            ReadableDataSource<String, List<FlowRule>> ds = new ApolloDataSource<>(
                    namespace,
                    FLOW_RULE_KEY,
                    DEFAULT_RULE_VALUE,
                    source -> JacksonUtil.fromList(source, FlowRule.class));
            return ds.getProperty();
        });
        ClusterParamFlowRuleManager.setPropertySupplier(ns -> {
            ReadableDataSource<String, List<ParamFlowRule>> ds = new ApolloDataSource<>(
                    namespace,
                    PARAM_FLOW_RULE_KEY,
                    DEFAULT_RULE_VALUE,
                    source -> JacksonUtil.fromList(source, ParamFlowRule.class)
            );
            return ds.getProperty();
        });
    }

    private void initServerTransportConfig() {
        ReadableDataSource<String, ServerTransportConfig> serverTransportDs = new ApolloDataSource<>(
                namespace,
                CLUSTER_SERVER_KEY,
                DEFAULT_SERVER_VALUE,
                new ServerTransportConverter(environment)
        );

        ClusterServerConfigManager.registerServerTransportProperty(serverTransportDs.getProperty());
    }

    private void initState() {
        ReadableDataSource<String, Integer> clusterModeDs = new ApolloDataSource<>(
                namespace,
                CLUSTER_SERVER_KEY,
                DEFAULT_SERVER_VALUE,
                new ServerStateConverter(environment)
        );

        ClusterStateManager.registerProperty(clusterModeDs.getProperty());
    }
}
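The ServerAssignConverter, ServerTransportConverter, and ServerStateConverter referenced above are not shown in full, but the idea behind the state converter can be sketched in plain Java. This is a minimal sketch under an assumed config format (the elected server stored as `"host:port"` under `sentinel.cluster.server`), not the original implementation:

```java
// Hypothetical sketch of a ServerStateConverter's core decision: compare the
// elected server's address from Apollo against the local IP to choose the mode.
// The constants mirror ClusterStateManager.CLUSTER_CLIENT / CLUSTER_SERVER.
public class ServerStateSketch {
    static final int CLUSTER_CLIENT = 0;
    static final int CLUSTER_SERVER = 1;

    static int resolveState(String serverConfig, String localIp) {
        if (serverConfig == null || serverConfig.isEmpty()) {
            return CLUSTER_CLIENT; // no server elected yet: stay in client mode
        }
        String serverIp = serverConfig.split(":")[0];
        return serverIp.equals(localIp) ? CLUSTER_SERVER : CLUSTER_CLIENT;
    }

    public static void main(String[] args) {
        System.out.println(resolveState("10.0.0.1:18730", "10.0.0.1")); // prints 1 (server)
        System.out.println(resolveState("10.0.0.1:18730", "10.0.0.2")); // prints 0 (client)
    }
}
```

Because the converter is driven by an ApolloDataSource, re-publishing the `sentinel.cluster.server` value in Apollo is enough to flip an instance between client and server mode at runtime.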

With that, the basic cluster flow control function is essentially done. The steps above are straightforward and mostly follow the official documentation. Next we need to implement the core features promised at the beginning of the article.

Automatic election & failover

How do we achieve automatic election? Keep it simple: after each machine starts up, it writes its own information straight to Apollo, and whoever writes first becomes the server node.

To guard against races in this process, we take a lock so that only one machine succeeds in writing its local information.

Since I use Eureka as the registry, and Eureka publishes a CacheRefreshedEvent whenever its local cache refreshes, we can hook into that: on every refresh, check whether the current server node still exists, and run the election according to the result.

First, register our listener in spring.factories.

 org.springframework.boot.autoconfigure.EnableAutoConfiguration=com.test.config.SentinelEurekaEventListener

The listener only takes effect when the cluster flow control annotation SentinelCluster is present.

 @Configuration
@Slf4j
@ConditionalOnBean(annotation = SentinelCluster.class)
public class SentinelEurekaEventListener implements ApplicationListener<CacheRefreshedEvent> {
    @Resource
    private DiscoveryClient discoveryClient;
    @Resource
    private Environment environment;
    @Resource
    private ApolloManager apolloManager;

    @Override
    public void onApplicationEvent(CacheRefreshedEvent event) {
        // loadEureka(): live instances from the local registry cache;
        // loadApollo(): the server node currently recorded in Apollo.
        if (!leaderAlive(loadEureka(), loadApollo())) {
            boolean tryLockResult = tryLock(); // Redis or any other distributed lock
            if (tryLockResult) {
                try {
                    flush(); // write this instance to Apollo as the new server
                } catch (Exception e) {
                    log.error("failed to flush cluster server info", e);
                } finally {
                    unlock();
                }
            }
        }
    }
  
    private boolean leaderAlive(List<ClusterGroup> eurekaList, ClusterGroup server) {
        if (Objects.isNull(server)) {
            return false;
        }
        for (ClusterGroup clusterGroup : eurekaList) {
            if (clusterGroup.getMachineId().equals(server.getMachineId())) {
                return true;
            }
        }
        return false;
    }
}

Looking at the code, you can see that failover falls out of the same logic; the reasoning is identical.

On first startup, the server information in Apollo is empty, so the first machine to grab the lock and write becomes the server node. If that server later goes offline, the next local registry cache refresh compares the instance information in Eureka against the server recorded in Apollo; if the server is no longer present, the election logic runs again.
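The election-plus-failover flow described above can be sketched in plain Java. All names here are placeholders, the in-memory field stands in for the server record in Apollo, and a ReentrantLock stands in for the distributed lock you would use in production (e.g. Redis-based):

```java
import java.util.List;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch of the election/failover flow, not the original code.
public class ElectionSketch {
    private final ReentrantLock lock = new ReentrantLock(); // stand-in for a distributed lock
    private String electedServerId; // stand-in for the server record in Apollo

    /** Called on every local registry cache refresh. Returns the current leader. */
    public String onCacheRefreshed(List<String> aliveInstanceIds, String selfId) {
        // Leader still alive: nothing to do.
        if (electedServerId != null && aliveInstanceIds.contains(electedServerId)) {
            return electedServerId;
        }
        // Leader missing or never elected: first instance to grab the lock wins.
        if (lock.tryLock()) {
            try {
                // Re-check inside the lock so two instances cannot both elect themselves.
                if (electedServerId == null || !aliveInstanceIds.contains(electedServerId)) {
                    electedServerId = selfId; // in practice: write self to Apollo and publish
                }
            } finally {
                lock.unlock();
            }
        }
        return electedServerId;
    }

    public static void main(String[] args) {
        ElectionSketch sketch = new ElectionSketch();
        System.out.println(sketch.onCacheRefreshed(List.of("a", "b"), "a")); // prints a
        System.out.println(sketch.onCacheRefreshed(List.of("b"), "b"));      // a gone, prints b
    }
}
```

The double-check inside the lock matters: between the outer liveness check and acquiring the lock, another instance may already have elected itself.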

Note that in extreme cases the local cache refresh interval can reach several minutes, which means that after the server goes offline, a new server node may not be elected for several minutes, and cluster flow control is unavailable for that whole window. This solution is therefore not suitable for very strict business requirements.

For the problem of Eureka cache synchronization delays, see my earlier article on Eureka instances going offline too slowly and the resulting flood of alert pages.

Dashboard persistence changes

So far we have the high-availability side covered. The last step: as long as the console that ships with Sentinel can write configuration to Apollo, applications will naturally pick up the configuration changes and apply them dynamically.

According to the official docs, FlowControllerV2 is provided for cluster flow control, and there are simple examples in the test directory that help wire up console persistence quickly.

Implement a DynamicRuleProvider and a DynamicRulePublisher and inject them into the controller: flowRuleApolloProvider reads rule data from Apollo, and flowRuleApolloPublisher writes flow control configuration back to Apollo.

 @RestController
@RequestMapping(value = "/v2/flow")
public class FlowControllerV2 {
    private final Logger logger = LoggerFactory.getLogger(FlowControllerV2.class);

    @Autowired
    private InMemoryRuleRepositoryAdapter<FlowRuleEntity> repository;

    @Autowired
    @Qualifier("flowRuleApolloProvider")
    private DynamicRuleProvider<List<FlowRuleEntity>> ruleProvider;
    @Autowired
    @Qualifier("flowRuleApolloPublisher")
    private DynamicRulePublisher<List<FlowRuleEntity>> rulePublisher;

    // rule query/publish endpoints omitted for brevity
}

The implementation is very simple: the provider reads the configuration from the namespace through Apollo's open-api, and the publisher writes the rules back through the same open-api.

 @Component("flowRuleApolloProvider")
public class FlowRuleApolloProvider implements DynamicRuleProvider<List<FlowRuleEntity>> {

    @Autowired
    private ApolloManager apolloManager;
    @Autowired
    private Converter<String, List<FlowRuleEntity>> converter;

    @Override
    public List<FlowRuleEntity> getRules(String appName) {
        String rules = apolloManager.loadNamespaceRuleList(appName, ApolloManager.FLOW_RULES_KEY);

        if (StringUtil.isEmpty(rules)) {
            return new ArrayList<>();
        }
        return converter.convert(rules);
    }
}

@Component("flowRuleApolloPublisher")
public class FlowRuleApolloPublisher implements DynamicRulePublisher<List<FlowRuleEntity>> {

    @Autowired
    private ApolloManager apolloManager;
    @Autowired
    private Converter<List<FlowRuleEntity>, String> converter;

    @Override
    public void publish(String app, List<FlowRuleEntity> rules) {
        AssertUtil.notEmpty(app, "app name cannot be empty");
        if (rules == null) {
            return;
        }
        apolloManager.writeAndPublish(app, ApolloManager.FLOW_RULES_KEY, converter.convert(rules));
    }
}

ApolloManager wraps the open-api calls for querying and writing configuration. You need to configure the Apollo Portal address and token yourself; see Apollo's official documentation for details.

 @Component
public class ApolloManager {
    private static final String APOLLO_USERNAME = "apollo";
    public static final String FLOW_RULES_KEY = "sentinel.flow.rules";
    public static final String DEGRADE_RULES_KEY = "sentinel.degrade.rules";
    public static final String PARAM_FLOW_RULES_KEY = "sentinel.param.rules";
    public static final String APP_NAME = "YourAppName";

    @Value("${apollo.portal.url}")
    private String portalUrl;
    @Value("${apollo.portal.token}")
    private String portalToken;
    private String apolloEnv;
    private String apolloCluster = "default";
    private ApolloOpenApiClient client;

    @PostConstruct
    public void init() {
        this.client = ApolloOpenApiClient.newBuilder()
                .withPortalUrl(portalUrl)
                .withToken(portalToken)
                .build();
        this.apolloEnv = "default";
    }

    public String loadNamespaceRuleList(String appName, String ruleKey) {
        OpenNamespaceDTO openNamespaceDTO = client.getNamespace(APP_NAME, apolloEnv, apolloCluster, "default");
        return openNamespaceDTO
                .getItems()
                .stream()
                .filter(p -> p.getKey().equals(ruleKey))
                .map(OpenItemDTO::getValue)
                .findFirst()
                .orElse("");
    }

    public void writeAndPublish(String appName, String ruleKey, String value) {
        OpenItemDTO openItemDTO = new OpenItemDTO();
        openItemDTO.setKey(ruleKey);
        openItemDTO.setValue(value);
        openItemDTO.setComment("Add Sentinel Config");
        openItemDTO.setDataChangeCreatedBy(APOLLO_USERNAME);
        openItemDTO.setDataChangeLastModifiedBy(APOLLO_USERNAME);
        client.createOrUpdateItem(APP_NAME, apolloEnv, apolloCluster, "default", openItemDTO);

        NamespaceReleaseDTO namespaceReleaseDTO = new NamespaceReleaseDTO();
        namespaceReleaseDTO.setEmergencyPublish(true);
        namespaceReleaseDTO.setReleasedBy(APOLLO_USERNAME);
        namespaceReleaseDTO.setReleaseTitle("Add Sentinel Config Release");
        client.publishNamespace(APP_NAME, apolloEnv, apolloCluster, "default", namespaceReleaseDTO);
    }

}
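The portal address and token referenced by the @Value fields above would then be supplied via application configuration, for example (the values here are placeholders, not real endpoints):

```properties
# Hypothetical values; use your own Apollo Portal address and open-api token,
# created under Apollo Portal -> Open Platform Authorization.
apollo.portal.url=http://apollo-portal.example.com
apollo.portal.token=your-open-api-token
```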

Other rules, such as degrade and hot-param flow control, can be adapted the same way. Of course, the console changes do not stop there: for example, cluster flowId generation defaults to standalone auto-increment and definitely has to change, and the page parameter passing and query routing need adjusting as well. These changes are tedious rather than hard, so I won't walk through them here; it is mostly a matter of workload.
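As one possible direction for the flowId problem mentioned above, a collision-resistant id can be derived from a timestamp plus a counter. This is only a sketch of the idea; in production a database sequence or a full snowflake-style generator shared by all console instances would be the safer choice:

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch: replace the console's in-memory auto-increment flowId
// with a timestamp-prefixed id. 44 high bits carry the millisecond timestamp,
// the low 20 bits carry a per-instance counter, so ids from one instance are
// strictly increasing and ids from different milliseconds never collide.
public class FlowIdSketch {
    private static final AtomicLong COUNTER = new AtomicLong();

    static long nextFlowId() {
        return (System.currentTimeMillis() << 20) | (COUNTER.incrementAndGet() & 0xFFFFF);
    }

    public static void main(String[] args) {
        long a = nextFlowId();
        long b = nextFlowId();
        System.out.println(a < b); // ids are strictly increasing on a single instance
    }
}
```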

OK, that's all for this installment. I'm Ai Xiaoxian; see you next time.

