Continuing from the previous exploration of Sentinel cluster flow control: last time I briefly covered the principle of cluster flow control and made some simple modifications to the official demo, which ran and took effect as expected.

This time we go a step further and implement a high-availability solution for embedded-mode cluster flow control based on Sentinel, packaged as a middleware starter for other teams to use.

For high availability, we mainly need to solve the following three problems. They need solving whether you use embedded mode or standalone server mode; by comparison, embedded mode is the simpler of the two.

  1. Automatic election of the cluster server
  2. Automatic failover
  3. Persisting rules from Sentinel-Dashboard to Apollo

Cluster flow control

First, since most services probably do not need cluster flow control, we implement an annotation so that cluster mode must be enabled explicitly. Only when the annotation is present are the cluster flow-control beans instantiated and the rule data loaded.

 @Target({ElementType.TYPE})
@Retention(RetentionPolicy.RUNTIME)
@Import({EnableClusterImportSelector.class})
@Documented
public @interface SentinelCluster {
}

public class EnableClusterImportSelector implements DeferredImportSelector {
    @Override
    public String[] selectImports(AnnotationMetadata annotationMetadata) {
        return new String[]{ClusterConfiguration.class.getName()};
    }
}

With this in place, once the SentinelCluster annotation is scanned, ClusterConfiguration is imported and instantiated.

 @Slf4j
public class ClusterConfiguration implements BeanDefinitionRegistryPostProcessor, EnvironmentAware {
    private Environment environment;

    @Override
    public void postProcessBeanDefinitionRegistry(BeanDefinitionRegistry registry) throws BeansException {
        BeanDefinitionBuilder beanDefinitionBuilder = BeanDefinitionBuilder.genericBeanDefinition(ClusterManager.class);
        beanDefinitionBuilder.addConstructorArgValue(this.environment);
        registry.registerBeanDefinition("clusterManager", beanDefinitionBuilder.getBeanDefinition());
    }

    @Override
    public void postProcessBeanFactory(ConfigurableListableBeanFactory beanFactory) throws BeansException {

    }

    @Override
    public void setEnvironment(Environment environment) {
        this.environment = environment;
    }
}

In the configuration we instantiate ClusterManager, which manages cluster flow control. The logic is the same as in the previous article: register ApolloDataSources so that changes in Apollo are monitored automatically and take effect dynamically.

 @Slf4j
public class ClusterManager {
    private Environment environment;
    private String namespace;
    private static final String CLUSTER_SERVER_KEY = "sentinel.cluster.server"; // cluster server assignment config
    private static final String DEFAULT_RULE_VALUE = "[]"; // default (empty) rule list
    private static final String DEFAULT_SERVER_VALUE = "{}"; // default (empty) server config, used below
    private static final String FLOW_RULE_KEY = "sentinel.flow.rules"; // flow rules
    private static final String DEGRADE_RULE_KEY = "sentinel.degrade.rules"; // degrade rules
    private static final String PARAM_FLOW_RULE_KEY = "sentinel.param.rules"; // hot-param flow rules
    private static final String CLUSTER_CLIENT_CONFIG_KEY = "sentinel.client.config"; // client config

    public ClusterManager(Environment environment) {
        this.environment = environment;
        this.namespace = "YourNamespace";
        init();
    }

    private void init() {
        initClientConfig();
        initClientServerAssign();
        registerRuleSupplier();
        initServerTransportConfig();
        initState();
    }

    private void initClientConfig() {
        ReadableDataSource<String, ClusterClientConfig> clientConfigDs = new ApolloDataSource<>(
                namespace,
                CLUSTER_CLIENT_CONFIG_KEY,
                DEFAULT_SERVER_VALUE,
                source -> JacksonUtil.from(source, ClusterClientConfig.class)
        );
        ClusterClientConfigManager.registerClientConfigProperty(clientConfigDs.getProperty());
    }

    private void initClientServerAssign() {
        ReadableDataSource<String, ClusterClientAssignConfig> clientAssignDs = new ApolloDataSource<>(
                namespace,
                CLUSTER_SERVER_KEY,
                DEFAULT_SERVER_VALUE,
                new ServerAssignConverter(environment)
        );
        ClusterClientConfigManager.registerServerAssignProperty(clientAssignDs.getProperty());
    }

    private void registerRuleSupplier() {
        ClusterFlowRuleManager.setPropertySupplier(ns -> {
            ReadableDataSource<String, List<FlowRule>> ds = new ApolloDataSource<>(
                    namespace,
                    FLOW_RULE_KEY,
                    DEFAULT_RULE_VALUE,
                    source -> JacksonUtil.fromList(source, FlowRule.class));
            return ds.getProperty();
        });
        ClusterParamFlowRuleManager.setPropertySupplier(ns -> {
            ReadableDataSource<String, List<ParamFlowRule>> ds = new ApolloDataSource<>(
                    namespace,
                    PARAM_FLOW_RULE_KEY,
                    DEFAULT_RULE_VALUE,
                    source -> JacksonUtil.fromList(source, ParamFlowRule.class)
            );
            return ds.getProperty();
        });
    }

    private void initServerTransportConfig() {
        ReadableDataSource<String, ServerTransportConfig> serverTransportDs = new ApolloDataSource<>(
                namespace,
                CLUSTER_SERVER_KEY,
                DEFAULT_SERVER_VALUE,
                new ServerTransportConverter(environment)
        );

        ClusterServerConfigManager.registerServerTransportProperty(serverTransportDs.getProperty());
    }

    private void initState() {
        ReadableDataSource<String, Integer> clusterModeDs = new ApolloDataSource<>(
                namespace,
                CLUSTER_SERVER_KEY,
                DEFAULT_SERVER_VALUE,
                new ServerStateConverter(environment)
        );

        ClusterStateManager.registerProperty(clusterModeDs.getProperty());
    }
}
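The ServerAssignConverter, ServerTransportConverter, and ServerStateConverter referenced above are custom converters that are not shown here. Their core job is deciding, from the server entry in Apollo, whether the local machine is the elected server or a client. A minimal plain-Java stand-in for that decision (my sketch, assuming the value carries a "host:port" entry; the real converter would build Sentinel's ClusterClientAssignConfig) could look like this:

```java
// Hypothetical stand-in for the assign decision inside ServerAssignConverter.
// Assumption: the value under sentinel.cluster.server names the elected
// server as "host:port"; a real converter returns ClusterClientAssignConfig.
public class ServerAssignSketch {

    /** Returns the server address a client should connect to, or null when
     *  this machine itself is the elected server (it runs in server mode). */
    public static String resolveServerForClient(String serverEntry, String localHost) {
        String serverHost = serverEntry.split(":")[0];
        if (serverHost.equals(localHost)) {
            return null; // we are the server, not a client
        }
        return serverEntry;
    }
}
```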

With that, the basic cluster flow-control function is essentially working. The steps above are fairly simple and mostly follow the official documentation. Next, we implement the core features mentioned at the beginning of the article.
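For reference, the value stored in Apollo under sentinel.cluster.server could be a small JSON object describing the elected server. The exact shape is whatever your converters expect; the field names below (machineId, ip, port) and the port number are assumptions of mine, not a fixed Sentinel format:

```json
{
  "machineId": "192.168.1.10@8719",
  "ip": "192.168.1.10",
  "port": 18730
}
```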

Automatic election & failover

How do we achieve automatic election? Keep it simple and don't overthink it: after each machine starts successfully, it writes its own info directly to Apollo, and the first one to write successfully becomes the server node.

To avoid races in this process, we need a distributed lock to ensure that only one machine successfully writes its own local information.
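This "first successful write wins" idea boils down to a compare-and-set on a single slot. The following is a purely local illustration of those semantics (an AtomicReference stands in for the distributed lock plus the server record in Apollo; production code would use something like Redis SETNX together with the Apollo open-api):

```java
import java.util.concurrent.atomic.AtomicReference;

// Local illustration of "first write wins" election semantics.
// Assumption: serverSlot stands in for the server record in Apollo, and
// compareAndSet stands in for lock-then-write-if-absent in a real cluster.
public class ElectionSketch {
    private final AtomicReference<String> serverSlot = new AtomicReference<>(null);

    /** Try to become the cluster server; true only for the first caller. */
    public boolean tryElect(String myAddress) {
        return serverSlot.compareAndSet(null, myAddress);
    }

    public String currentServer() {
        return serverSlot.get();
    }
}
```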

Since I use Eureka as the registry, and Eureka publishes a CacheRefreshedEvent whenever its local cache is refreshed, we can build on that: on every refresh, check whether the current server node still exists, and run the election when it does not.

First add our listener in spring.factories.

 org.springframework.boot.autoconfigure.EnableAutoConfiguration=com.test.config.SentinelEurekaEventListener

The listener only takes effect when the cluster flow-control annotation SentinelCluster is enabled.

 @Configuration
@Slf4j
@ConditionalOnBean(annotation = SentinelCluster.class)
public class SentinelEurekaEventListener implements ApplicationListener<CacheRefreshedEvent> {
    @Resource
    private DiscoveryClient discoveryClient;
    @Resource
    private Environment environment;
    @Resource
    private ApolloManager apolloManager;

    @Override
    public void onApplicationEvent(CacheRefreshedEvent event) {
        // loadEureka(): instance list from the Eureka local cache;
        // loadApollo(): the server node currently recorded in Apollo
        if (!leaderAlive(loadEureka(), loadApollo())) {
            // tryDistributedLock()/unlock() are pseudocode: use Redis or any
            // other distributed lock so only one instance writes to Apollo
            boolean tryLockResult = tryDistributedLock();
            if (tryLockResult) {
                try {
                    flush(); // write this machine's info to Apollo as the new server
                } catch (Exception e) {
                    log.error("failed to flush cluster server info to Apollo", e);
                } finally {
                    unlock();
                }
            }
        }
    }
  
    private boolean leaderAlive(List<ClusterGroup> eurekaList, ClusterGroup server) {
        if (Objects.isNull(server)) {
            return false;
        }
        for (ClusterGroup clusterGroup : eurekaList) {
            if (clusterGroup.getMachineId().equals(server.getMachineId())) {
                return true;
            }
        }
        return false;
    }
}
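The flush() call above is a helper (not shown) that writes this machine's info to Apollo as the new server record. At its core it just serializes the local address; a minimal sketch, where the method name and JSON field names are my assumptions rather than Sentinel APIs:

```java
// Hypothetical sketch of the payload flush() could write to Apollo under
// sentinel.cluster.server; field names are assumptions, not Sentinel APIs.
public class ServerRecordSketch {

    public static String buildServerRecord(String machineId, String ip, int clusterPort) {
        return String.format(
                "{\"machineId\":\"%s\",\"ip\":\"%s\",\"port\":%d}",
                machineId, ip, clusterPort);
    }
}
```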

OK, looking at the code you can see that this already implements the failover logic as well; the reasoning is the same.

When started for the first time, the server info in Apollo is empty, so the first machine to acquire the lock and write becomes the server node. If the server later goes offline, a local registry cache refresh fires; we compare Eureka's instance list with the server recorded in Apollo, and if that server no longer exists we rerun the election logic.

Note that in extreme cases the local cache refresh interval can reach several minutes, meaning a new server node may not be elected for several minutes after the old one goes offline, during which cluster flow control is unavailable. This solution is therefore not suitable for businesses with very strict requirements.

For the Eureka cache synchronization delay, you can refer to my previous article: "Eureka service deregistration is too slow, and my phone got bombarded with alerts."

Dashboard Persistence Transformation

So far we have implemented the high-availability part. The last step: as long as configuration can be written to Apollo through the console that ships with Sentinel, applications will naturally pick up the configuration changes and apply them dynamically.

According to the official description, FlowControllerV2 is provided for cluster flow control, and there are simple examples in the dashboard's test directory to help us implement console persistence quickly.

Implement a DynamicRuleProvider and a DynamicRulePublisher and inject them into the controller: flowRuleApolloProvider queries rule data from Apollo, while flowRuleApolloPublisher writes flow-control configuration back to Apollo.

 @RestController
@RequestMapping(value = "/v2/flow")
public class FlowControllerV2 {
    private final Logger logger = LoggerFactory.getLogger(FlowControllerV2.class);

    @Autowired
    private InMemoryRuleRepositoryAdapter<FlowRuleEntity> repository;

    @Autowired
    @Qualifier("flowRuleApolloProvider")
    private DynamicRuleProvider<List<FlowRuleEntity>> ruleProvider;
    @Autowired
    @Qualifier("flowRuleApolloPublisher")
    private DynamicRulePublisher<List<FlowRuleEntity>> rulePublisher;


    // ... CRUD handler methods omitted; see the official FlowControllerV2 ...
}

The implementation is very simple: the provider reads the configuration from the namespace through Apollo's open-api, and the publisher writes the rules back through the same open-api.

 @Component("flowRuleApolloProvider")
public class FlowRuleApolloProvider implements DynamicRuleProvider<List<FlowRuleEntity>> {

    @Autowired
    private ApolloManager apolloManager;
    @Autowired
    private Converter<String, List<FlowRuleEntity>> converter;

    @Override
    public List<FlowRuleEntity> getRules(String appName) {
        String rules = apolloManager.loadNamespaceRuleList(appName, ApolloManager.FLOW_RULES_KEY);

        if (StringUtil.isEmpty(rules)) {
            return new ArrayList<>();
        }
        return converter.convert(rules);
    }
}

@Component("flowRuleApolloPublisher")
public class FlowRuleApolloPublisher implements DynamicRulePublisher<List<FlowRuleEntity>> {

    @Autowired
    private ApolloManager apolloManager;
    @Autowired
    private Converter<List<FlowRuleEntity>, String> converter;

    @Override
    public void publish(String app, List<FlowRuleEntity> rules) {
        AssertUtil.notEmpty(app, "app name cannot be empty");
        if (rules == null) {
            return;
        }
        apolloManager.writeAndPublish(app, ApolloManager.FLOW_RULES_KEY, converter.convert(rules));
    }
}

ApolloManager implements querying and writing configuration through the open-api. You need to configure the Apollo Portal address and token yourself; see Apollo's official documentation for details.

 @Component
public class ApolloManager {
    private static final String APOLLO_USERNAME = "apollo";
    public static final String FLOW_RULES_KEY = "sentinel.flow.rules";
    public static final String DEGRADE_RULES_KEY = "sentinel.degrade.rules";
    public static final String PARAM_FLOW_RULES_KEY = "sentinel.param.rules";
    public static final String APP_NAME = "YourAppName";

    @Value("${apollo.portal.url}")
    private String portalUrl;
    @Value("${apollo.portal.token}")
    private String portalToken;
    private String apolloEnv;
    private String apolloCluster = "default";
    private ApolloOpenApiClient client;

    @PostConstruct
    public void init() {
        this.client = ApolloOpenApiClient.newBuilder()
                .withPortalUrl(portalUrl)
                .withToken(portalToken)
                .build();
        this.apolloEnv = "default";
    }

    // note: the dashboard's appName parameter is ignored here; all rules live
    // under the fixed Apollo appId APP_NAME in the "default" namespace
    public String loadNamespaceRuleList(String appName, String ruleKey) {
        OpenNamespaceDTO openNamespaceDTO = client.getNamespace(APP_NAME, apolloEnv, apolloCluster, "default");
        return openNamespaceDTO
                .getItems()
                .stream()
                .filter(p -> p.getKey().equals(ruleKey))
                .map(OpenItemDTO::getValue)
                .findFirst()
                .orElse("");
    }

    public void writeAndPublish(String appName, String ruleKey, String value) {
        OpenItemDTO openItemDTO = new OpenItemDTO();
        openItemDTO.setKey(ruleKey);
        openItemDTO.setValue(value);
        openItemDTO.setComment("Add Sentinel Config");
        openItemDTO.setDataChangeCreatedBy(APOLLO_USERNAME);
        openItemDTO.setDataChangeLastModifiedBy(APOLLO_USERNAME);
        client.createOrUpdateItem(APP_NAME, apolloEnv, apolloCluster, "default", openItemDTO);

        NamespaceReleaseDTO namespaceReleaseDTO = new NamespaceReleaseDTO();
        namespaceReleaseDTO.setEmergencyPublish(true);
        namespaceReleaseDTO.setReleasedBy(APOLLO_USERNAME);
        namespaceReleaseDTO.setReleaseTitle("Add Sentinel Config Release");
        client.publishNamespace(APP_NAME, apolloEnv, apolloCluster, "default", namespaceReleaseDTO);
    }

}

Other rules, such as degrade and hot-param flow control, can be adapted the same way. Of course, the console work is not limited to this: the cluster flowId defaults to stand-alone auto-increment and definitely needs changing, and there are also page parameter passing and query-route modifications to make. These are fairly tedious, more a matter of workload than difficulty, so I won't go through them here.
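For the flowId problem, one possible direction (my assumption, not the console's built-in behavior) is to replace the in-memory auto-increment with ids that stay unique across dashboard restarts, for example a millisecond timestamp combined with a sequence, or a centralized counter such as a database sequence or Redis INCR. A minimal sketch of the timestamp approach:

```java
import java.util.concurrent.atomic.AtomicLong;

// One possible cluster-safe replacement for the dashboard's in-memory
// auto-increment flow id: millisecond timestamp shifted left, OR'd with a
// 12-bit per-process sequence. A centralized sequence is another option.
public class FlowIdGenerator {
    private final AtomicLong sequence = new AtomicLong();

    public long nextId() {
        long millis = System.currentTimeMillis();
        long seq = sequence.getAndIncrement() & 0xFFF; // wraps every 4096 ids
        return (millis << 12) | seq;
    }
}
```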

Okay, that's all for this issue. I'm Ai Xiaoxian; see you in the next one.

