In the fast-moving Internet industry, systems are often built with the quickest development approach in order to launch early, such as the common MVP-style iteration. Because most business systems cannot predict how the business will evolve, they tend to hit bottlenecks over time, such as performance limits or business logic that no longer fits, and resolving these may require an upgrade of the system architecture. A system upgrade usually consists of two basic parts: interface migration/refactoring and data migration/refactoring. Throughout an architecture upgrade, the top priority is system stability, that is, users should not notice the change. The purpose of this article is therefore to present a design that supports grayscale release and rollback, enabling a stable architecture upgrade.
Scenario
In iterating on a system, we often run into scenarios such as refactoring, data-source switching, and interface migration. To keep the launch smooth, interface migration should support rollback and grayscale release. It may also involve data migration, and the ordering of the two must not affect system stability. In summary, the goals of interface migration are:
- Grayscale control: whether a request uses the old or the new interface can be controlled.
- Rollback: if the new interface misbehaves, traffic can be switched back to the old interface quickly.
- No intrusion into business logic: the original business code is left untouched, and the old path is decommissioned as a whole after migration, avoiding the irreversible effects of modifying it in place.
- Closed loop: once the system runs stably, the old interface and old data source can be taken offline smoothly.
Migration plan
This article mainly provides an approach to interface migration and data migration; the practice section below shows the core implementation code. (The code illustrates the idea and is not directly runnable.)
Overall Migration Plan
The following figure shows the idea of interface migration, modeled on the proxy pattern used by JDK dynamic proxies and CGLIB. Suppose there is an interface class (the target class) to be migrated; we then write a proxy class to serve as the migrated interface. The choice between the target class and the proxy class is controlled by switches at two levels:
- Master switch: controls whether to switch fully to the new interface; it is turned on once the interface migration is stable and the data migration (if any) is complete.
- Grayscale switch: a configurable grayscale list that controls which interfaces/data go through the proxy interface.
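As a minimal sketch of this two-level switch (the class name `MigrationSwitch` and holding the values in memory are assumptions for illustration; in practice both would come from a dynamic config center), the routing decision can look like this:

```java
import java.util.Set;

// Hypothetical two-level switch: a master switch plus a grayscale store list.
public class MigrationSwitch {
    // Master switch: when true, all traffic uses the new (proxy) path.
    private final boolean masterOn;
    // Grayscale list: store IDs routed to the new path during rollout.
    private final Set<Long> grayStoreIds;

    public MigrationSwitch(boolean masterOn, Set<Long> grayStoreIds) {
        this.masterOn = masterOn;
        this.grayStoreIds = grayStoreIds;
    }

    /** Decide whether a store's traffic should use the proxy (new) path. */
    public boolean useProxy(long storeId) {
        if (masterOn) {
            return true; // migration finished: full switch-over
        }
        return grayStoreIds.contains(storeId); // grayscale rollout
    }
}
```

Turning the master switch off and clearing the grayscale list rolls every store back to the old path, which is what makes the scheme rollback-safe.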
The implementation of the proxy interface differs for different interface logic; the specific scenarios are described below.
Single data query
For a single record, the source can be determined from the data sources. Based on the grayscale and rollback principles, the routing rules between the target class and the proxy class are:
- Check the master switch first. If it is on, migration and verification are complete; the proxy interface is used, which closes the loop on the interface and data and achieves the migration goal.
- If the record does not exist in the old table, go straight to the proxy interface regardless of whether it exists in the new table, converging on the new-data interface logic.
- If the record exists in the old table but is not in the grayscale list, use the target class (this is how rollback works) and follow the original interface, i.e. the old logic, which leaves system behavior unaffected.
- If the record exists in the old table and is in the grayscale list, the record has been migrated and needs verification; use the proxy class (this is how grayscale works) to run the new interface logic.
Multiple data queries
Unlike a single-record query, we must fetch all qualifying records from both the new table and the old table. Batch queries introduce duplication (a record may exist in both tables), so the data must be deduplicated and merged before the result is returned.
Data Update
Because there is a window between data migration and full grayscale, updates should be double-written to keep the old and new tables consistent. At the same time, to close the loop on the interface and data, first check the master switch: if it is on, an update only needs to write the new table.
Data insertion
To close the loop on the data and the interface, incremental data must be switched over: inserts go through the proxy class directly into the new table, so the old table stops growing and data migration only needs to handle existing records.
Practice
For example, in a retail scenario each store has a unique identifier, the store ID, so the grayscale list can hold store IDs and roll out by store, keeping the scope of impact granular.
Proxy dispatch logic
The dispatch logic is the core: the deduplication rules and the interface/repository-layer forwarding are all driven by it:
- Check the master switch first. If it is on, migration is complete, and all calls go through the proxy class to the new interface logic and data source.
- Check the grayscale switch. If the store is in the grayscale list, the call goes through the proxy class to the new interface; otherwise it follows the original interface's old logic. This realizes the interface switch-over.
- New data is always forwarded to the proxy class, closing the loop on the new logic and data and preventing incremental data in the old table.
- Batch query interfaces must be forwarded to the proxy class, because they involve deduplicating and merging new and old data.
/**
 * Whether to route through the proxy
 *
 * @param ctx context
 * @return true: use the proxy; false: use the original logic
 */
public Boolean enableProxy(ProxyEnableContext ctx) {
    if (ctx == null) {
        return false;
    }
    // Check the master switch first
    // (masterSwitchOn / existsInOldDataSource / inGrayscaleList stand for
    // reads of the switch config and data-source lookups)
    if (masterSwitchOn()) {
        // Data migration is complete; all interfaces are fully switched over
        return true;
    }
    if (ctx.isSingleStoreOperation()) {
        if (existsInOldDataSource(ctx)) {
            // In the grayscale list: use the proxy; otherwise use the old logic
            return inGrayscaleList(ctx);
        } else {
            // New data: always use the proxy
            return true;
        }
    } else {
        // Batch query: must go through the proxy to merge the new and old data sources
        return true;
    }
}
Interface proxy
The interface proxy is implemented with an aspect that intercepts annotated methods. The proxy annotation is defined as follows:
@Target({ElementType.METHOD})
@Retention(RetentionPolicy.RUNTIME)
public @interface EnableProxy {

    // Identifies the proxy class
    Class<?> proxyClass();

    // Method on the proxy class to forward to; defaults to the target method's name
    String methodName() default "";

    // For single-record queries, the argument index of the key, parsed before forwarding
    int keyIndex() default -1;
}
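As a self-contained illustration (the class names `StoreService` and `NewStoreRepository` are invented for this sketch), a target method might be annotated like this, and the aspect can then recover the proxy class from the annotation via reflection:

```java
import java.lang.annotation.*;
import java.lang.reflect.Method;

public class EnableProxyDemo {
    // The annotation, as defined in the article.
    @Target({ElementType.METHOD})
    @Retention(RetentionPolicy.RUNTIME)
    public @interface EnableProxy {
        Class<?> proxyClass();
        String methodName() default "";
        int keyIndex() default -1;
    }

    // Hypothetical proxy class serving the migrated interface.
    public static class NewStoreRepository {
        public String getById(Long id) { return "new:" + id; }
    }

    // Hypothetical target class: the old interface, annotated for interception.
    public static class StoreService {
        @EnableProxy(proxyClass = NewStoreRepository.class, keyIndex = 0)
        public String getById(Long id) { return "old:" + id; }
    }

    // What the aspect does at runtime: read the annotation off the intercepted method.
    public static Class<?> resolveProxyClass() throws NoSuchMethodException {
        Method m = StoreService.class.getMethod("getById", Long.class);
        return m.getAnnotation(EnableProxy.class).proxyClass();
    }
}
```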
The core logic of the aspect is to intercept the annotation and decide, according to the proxy dispatch logic, whether to use the proxy class. If so, it parses the proxy class, method name, and parameters, and forwards the call.
@Component
@Aspect
@Slf4j
public class ProxyAspect {

    // Core proxy manager
    @Resource
    private ProxyManager proxyManager;

    // Intercept the annotation (*** stands for the fully qualified name of EnableProxy)
    @Pointcut("@annotation(***)")
    private void proxy() {}

    @Around("proxy()")
    @SuppressWarnings("rawtypes")
    public Object around(ProceedingJoinPoint joinPoint) throws Throwable {
        try {
            MethodSignature methodSignature = (MethodSignature) joinPoint.getSignature();
            Class<?> clazz = joinPoint.getTarget().getClass();
            String methodName = methodSignature.getMethod().getName();
            Class[] parameterTypes = methodSignature.getParameterTypes();
            Object[] args = joinPoint.getArgs();
            // Get the annotation on the method
            EnableProxy enableProxyAnnotation = ReflectUtils
                .getMethodAnnotation(clazz, EnableProxy.class, methodName, parameterTypes);
            if (enableProxyAnnotation == null) {
                // No annotation found, let the call through
                return joinPoint.proceed();
            }
            // Decide whether to go through the proxy
            Boolean enableProxy = enableProxy(clazz, methodName, args, enableProxyAnnotation);
            if (!enableProxy) {
                // Proxy disabled, let the call through
                return joinPoint.proceed();
            }
            // Default to the target class's method name
            methodName = StringUtils.isNotBlank(enableProxyAnnotation.methodName())
                ? enableProxyAnnotation.methodName() : methodName;
            // Resolve the proxy bean and its method via reflection
            Object bean = ApplicationContextUtil.getBean(enableProxyAnnotation.proxyClass());
            Method proxyMethod = ReflectUtils.getMethod(enableProxyAnnotation.proxyClass(), methodName, parameterTypes);
            if (bean == null || proxyMethod == null) {
                // No proxy class or proxy method, fall back to the original logic
                return joinPoint.proceed();
            }
            // Forward the call to the proxy method via reflection
            return ReflectUtils.invoke(bean, proxyMethod, joinPoint.getArgs());
        } catch (BizException bizException) {
            // Business exception, rethrow as-is
            throw bizException;
        } catch (Throwable throwable) {
            // Other exceptions: log for visibility, then rethrow
            log.error("proxy forwarding failed", throwable);
            throw throwable;
        }
    }
}
Repository layer proxy
Once a call goes through the proxy class, the logic is forwarded to the ProxyManager, which is responsible for data dispatch, deduplication, merging, updates, inserts, and other operations.
Single data query
The proxy query flow is shown in the figure below. The target method of the target interface is intercepted by the aspect, which decides whether the call should go through the proxy interface.
- If the proxy interface is not needed (the data is old and not grayscaled), the call continues through the target interface.
- If the proxy interface is needed (the data is new, or it is old data that has been migrated and is in the grayscale list), the proxy interface method is called; it forwards to the repository layer, and the ProxyManager closes the loop on the interface. For a single-record query, it only needs to call the proxy repository service against the new data source, so the logic is simple.
For example, for querying a single store's information, the core ProxyManager method can be implemented like this:
public <T> T getById(Long id, Boolean enableProxy) {
    if (enableProxy) {
        // Proxy enabled: query through the proxy repository layer
        return proxyRepository.getById(id);
    } else {
        // Proxy disabled: query through the original repository layer
        return targetRepository.getById(id);
    }
}
Multiple data query + deduplication
The deduplication logic is the same for all batch queries, with the following rules:
- If a record exists in neither the new table nor the old table, it is dropped and not returned.
- If the new table does not have the record, use the old table's data.
- If the old table does not have the record, use the new table's data.
- If both tables have the record (it has been migrated), check whether the master switch is on or the record is in the grayscale list: if either holds, use the new table's data; otherwise use the old table's.
Based on these deduplication rules, all batch query interfaces can be abstracted into unified steps:
- Query the old data: business-defined, with the query logic wrapped in a supplier function.
- Query the new data: business-defined, with the query logic wrapped in a supplier function.
- Merge and deduplicate: abstracted into a unified merge utility.
The core process is shown in the figure below. The target method of the target interface is intercepted by the aspect and forwarded to the proxy interface, which forwards further to the ProxyManager for querying and merging across data sources. If the master switch is off, the full data set has not yet been migrated and verified, so the old data source must still be queried (to avoid missing records). If the switch is on, migration is complete, and the original repository service is no longer called, which closes the loop on the old data source.
For example, batch-querying the store list can be merged this way; the core implementation is as follows:
public <T> List<T> queryList(List<Long> ids, Function<T, Long> idMapping) {
    if (CollectionUtils.isEmpty(ids)) {
        return Collections.emptyList();
    }
    // 1. Query the old data
    Supplier<List<T>> oldSupplier = () -> targetRepository.queryList(ids);
    // 2. Query the new data
    Supplier<List<T>> newSupplier = () -> proxyRepository.queryList(ids);
    // 3. Merge by the dedup rules, using the merge utility (an abstraction of the merge logic)
    return ProxyHelper.mergeWithSupplier(oldSupplier, newSupplier, idMapping);
}
The merge tool class is implemented as follows:
public class ProxyHelper {

    /**
     * Core dedup logic: whether to use the new table's data
     *
     * @param existOldData whether the old table has the record
     * @param existNewData whether the new table has the record
     * @param id store id
     * @return whether to use the new table's data
     */
    public static boolean useNewData(Boolean existOldData, Boolean existNewData, Long id) {
        if (!existOldData && !existNewData) {
            // Neither table has the record
            return true;
        } else if (!existNewData) {
            // Only the old table has it
            return false;
        } else if (!existOldData) {
            // Only the new table has it
            return true;
        } else {
            // Both tables have it: check the master switch and the grayscale list
            // (masterSwitchOn / inGrayscaleList stand for reads of the switch config)
            return masterSwitchOn() || inGrayscaleList(id);
        }
    }

    /**
     * Merge the new/old table data
     *
     * @param oldSupplier supplier of the old table data
     * @param newSupplier supplier of the new table data
     * @param idMapping function mapping a record to its store id
     * @return merged, deduplicated data
     */
    public static <T> List<T> mergeWithSupplier(
        Supplier<List<T>> oldSupplier, Supplier<List<T>> newSupplier, Function<T, Long> idMapping) {
        List<T> old = Collections.emptyList();
        if (!masterSwitchOn()) {
            // Switch-over not finished: the old data source must still be queried
            old = oldSupplier.get();
        }
        return merge(idMapping, old, newSupplier.get());
    }

    /**
     * Deduplicate and merge the old and new data
     *
     * @param idMapping function mapping a record to its store id
     * @param oldData old data
     * @param newData new data
     * @return merged result
     */
    public static <T> List<T> merge(Function<T, Long> idMapping, List<T> oldData, List<T> newData) {
        if (CollectionUtils.isEmpty(oldData) && CollectionUtils.isEmpty(newData)) {
            return Collections.emptyList();
        }
        if (CollectionUtils.isEmpty(oldData)) {
            return newData;
        }
        if (CollectionUtils.isEmpty(newData)) {
            return oldData;
        }
        Map<Long/*store id*/, T> oldMap = oldData.stream().collect(
            Collectors.toMap(idMapping, Function.identity(), (a, b) -> a));
        Map<Long/*store id*/, T> newMap = newData.stream().collect(
            Collectors.toMap(idMapping, Function.identity(), (a, b) -> a));
        return ListUtils.union(oldData, newData)
            .stream()
            .map(idMapping)
            .distinct()
            .map(id -> {
                boolean existOldData = oldMap.containsKey(id);
                boolean existNewData = newMap.containsKey(id);
                boolean useNewData = useNewData(existOldData, existNewData, id);
                return useNewData ? newMap.get(id) : oldMap.get(id);
            })
            .filter(Objects::nonNull)
            .collect(Collectors.toList());
    }
}
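A quick, self-contained sketch of the merge behavior (the in-memory `masterOn` flag and `grayIds` set stand in for the article's switch config, which the real `ProxyHelper` reads internally):

```java
import java.util.*;
import java.util.function.Function;
import java.util.stream.*;

public class MergeDemo {
    /** Dedup rule from the article, with the switches passed in explicitly. */
    static boolean useNewData(boolean inOld, boolean inNew, boolean masterOn, boolean inGray) {
        if (!inOld && !inNew) return true; // neither table has it; filtered out later
        if (!inNew) return false;          // only the old table has it
        if (!inOld) return true;           // only the new table has it
        return masterOn || inGray;         // both have it: prefer new if switched or grayed
    }

    /** Merge old and new rows keyed by store id, applying the dedup rule. */
    static <T> List<T> merge(Function<T, Long> idOf, List<T> oldRows, List<T> newRows,
                             boolean masterOn, Set<Long> grayIds) {
        Map<Long, T> oldMap = oldRows.stream().collect(Collectors.toMap(idOf, r -> r, (a, b) -> a));
        Map<Long, T> newMap = newRows.stream().collect(Collectors.toMap(idOf, r -> r, (a, b) -> a));
        return Stream.concat(oldRows.stream(), newRows.stream())
                .map(idOf).distinct()
                .map(id -> useNewData(oldMap.containsKey(id), newMap.containsKey(id),
                        masterOn, grayIds.contains(id)) ? newMap.get(id) : oldMap.get(id))
                .filter(Objects::nonNull)
                .collect(Collectors.toList());
    }
}
```

With old rows for stores 1 and 2, new rows for stores 2 and 3, and store 2 in the grayscale list, the merge returns the old row for store 1, the new row for store 2, and the new row for store 3.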
Incremental data
The code is omitted; inserts simply call the proxy repository layer's insert method directly.
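For completeness, a minimal sketch of that insert path (the in-memory maps are stand-ins for the real storage layers): inserts always go to the new table, so the old table stops growing.

```java
import java.util.HashMap;
import java.util.Map;

public class InsertDemo {
    // In-memory stand-ins for the old (target) and new (proxy) storage layers.
    static final Map<Long, String> oldTable = new HashMap<>();
    static final Map<Long, String> newTable = new HashMap<>();

    /** Inserts go only through the proxy repository: no new rows ever reach the old table. */
    public static void insert(Long id, String row) {
        newTable.put(id, row);
    }
}
```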
Data update
Updating data requires double-writing. Once the master switch is on (i.e. migration is complete), writes to the old data can stop, because it will never be read again.
@Transactional(rollbackFor = Throwable.class)
public <T> Boolean update(T t) {
    if (t == null) {
        return false;
    }
    if (!masterSwitchOn()) {
        // Data migration not finished:
        // double-write the update to keep the old and new data consistent
        targetRepository.update(t);
    }
    // Update the new data
    proxyRepository.update(t);
    return true;
}
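The double-write rule can be demonstrated with the same kind of in-memory stand-ins (the `masterOn` parameter replaces the article's master-switch read):

```java
import java.util.HashMap;
import java.util.Map;

public class UpdateDemo {
    // In-memory stand-ins for the old (target) and new (proxy) tables.
    static final Map<Long, String> oldTable = new HashMap<>();
    static final Map<Long, String> newTable = new HashMap<>();

    /** Double-write while the master switch is off; write the new table only once it is on. */
    public static void update(Long id, String row, boolean masterOn) {
        if (!masterOn) {
            // Migration not finished: keep the old table consistent too.
            oldTable.put(id, row);
        }
        newTable.put(id, row);
    }
}
```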
Summary
This article only proposes one migration approach, which may not fit every scenario, but in any system upgrade the engineer's ultimate goals are the same: launch stably, and roll back safely when problems occur. The implementation here uses annotations and aspects to forward calls on the target interface to the proxy class interface, thereby switching to the new logic and new data source, while the ProxyManager applies the dispatch logic across data sources to complete the query, update, and insert logic.