This article will detail how the author refactors and optimizes the system design as the business grows.
Preface
The gather (order-collection) page has a fairly long history, and it has gone through many version changes since I took it over last year. At first it was just a simple feeds flow. To improve the user experience and better help users collect qualifying products, we added a ranking-list module and a limited-time flash-sale module while refactoring the whole page; during Double Eleven, both the add-to-cart rate and the conversion rate improved significantly. For 618 this year we added an order-collection progress shopping bar, which shows progress in real time and lets users settle and place orders directly, further improving the order-collection experience. While iterating on the business, the gather page has also accumulated some general capabilities that support rapid iteration of other businesses. In this article, I will describe in detail how I refactored and optimized the system design as the business grew.
For limited data that does not change for a period of time, we often use a multi-level cache to reduce downstream pressure and improve the performance of our own system. The most common setup is a local cache backed by a Redis cache: if the key is missing from the local cache, we read the value cached in Redis and cache it locally; if it is missing from Redis as well, we fetch it from the data source. The basic code (fetching ranking-list data) is as follows:
return LOCAL_CACHE.get(key, () -> {
String cache = rdbCommonTairCluster.get(key);
if (StringUtils.isNotBlank(cache)) {
return JSON.parseObject(cache, new TypeReference<List<ItemShow>>(){});
}
List<ItemShow> itemShows = getRankingItemOriginal(context, rankingRequest);
rdbCommonTairCluster.set(key, JSON.toJSONString(itemShows), new SetParams().ex(CommonSwitch.rankingExpireSecond));
return itemShows;
});
Gradually, problems arose: some online users occasionally could not see the ranking-list module for a period of time. A schematic of the module is shown below:
This kind of problem is hard to troubleshoot and takes some project experience; the first time I hit it, it cost me a lot of effort. In summary: when the cache expires and the downstream service happens to return an empty result, the empty result gets cached. For the lifetime of that cache entry the ranking-list module disappears, but because some machines still hold the old data in their local caches, some users can see the module while others cannot.
Let's see how I optimized it. The core is to distinguish whether an empty result from downstream is genuine. If it is genuinely empty, we do still need to cache the empty collection (outside big promotions, or when a particular list simply has no data, the data really is empty).
So I extended the TTL of the value cached in Redis, and added a separate update-time key with a short TTL (say 60 seconds). When the update-time key has expired, we re-read the data source and reassign the value. One detail deserves attention: I compare the new and old data, and if the new data is empty while the old data is not, I only refresh the update-time key without replacing the value; the value then lives out its own TTL. The modified code is as follows:
return LOCAL_CACHE.get(key, () -> {
String updateKey = getUpdateKey(key);
String value = rdbCommonTairCluster.get(key);
List<ItemShow> cache = StringUtils.isBlank(value) ? Collections.emptyList()
: JSON.parseObject(value, new TypeReference<List<ItemShow>>(){});
if (rdbCommonTairCluster.exists(updateKey)) {
return cache;
}
rdbCommonTairCluster.set(updateKey, currentTime, cacheUpdateSecond);
List<ItemShow> itemShows = getRankingItemOriginal(context, rankingRequest);
if (CollectionUtils.isNotEmpty(itemShows)) {
rdbCommonTairCluster.set(key, JSON.toJSONString(itemShows), new SetParams().ex(CommonSwitch.rankingExpireSecond));
}
return itemShows;
});
To make this code reusable, I abstracted the multi-level cache into an independent object:
public class GatherCache<V> {
@Setter
private Cache<String, List<V>> localCache;
@Setter
private CenterCache centerCache;
public List<V> get(boolean needCache, String key, @NonNull Callable<List<V>> loader, Function<String, List<V>> parse) {
try {
// whether caching is required
return needCache ? localCache.get(key, () -> getCenter(key, loader, parse)) : loader.call();
} catch (Throwable e) {
GatherContext.error(this.getClass().getSimpleName() + " get catch exception", e);
}
return Collections.emptyList();
}
private List<V> getCenter(String key, Callable<List<V>> loader, Function<String, List<V>> parse) throws Exception {
String updateKey = getUpdateKey(key);
String value = centerCache.get(key);
boolean blankValue = StringUtils.isBlank(value);
List<V> cache = blankValue ? Collections.emptyList() : parse.apply(value);
if (centerCache.exists(updateKey)) {
return cache;
}
centerCache.set(updateKey, currentTime, cacheUpdateSecond);
List<V> newCache = loader.call();
if (CollectionUtils.isNotEmpty(newCache)) {
centerCache.set(key, JSON.toJSONString(newCache), cacheExpireSecond);
}
return newCache;
}
}
Fetching data from the data source is delegated to the caller in the form of a Callable, and the data type is constrained through generics. One flaw remains: when converting the String to an object with fastjson, the generic type cannot be passed through directly, so I externalized the parsing as well; just as with the data-source loader, the caller decides how to parse the string value read from Redis. The call looks like this:
List<ItemShow> itemShowList = gatherCache.get(true, rankingRequest.getKey(),
() -> getRankingItemOriginal(rankingRequest, context.getRequestContext()),
v -> JSON.parseObject(v, new TypeReference<List<ItemShow>>() {}));
I also adopted the builder pattern to make it easy to construct a GatherCache instance:
@PostConstruct
public void init() {
this.gatherCache = GatherCacheBuilder.newBuilder()
.localMaximumSize(500)
.localExpireAfterWriteSeconds(30)
.build(rdbCenterCache);
}
The code above is close to complete, but it ignores one detail: if the local caches on multiple machines expire at the same time and the Redis update-time key happens to have expired as well, multiple requests will hit the downstream concurrently. (Since the gather page has the local cache as a backstop, the number of concurrent requests actually reaching the downstream is very limited and can basically be ignored.) Still, a problem found should be a problem solved, so in pursuit of better code I made the following change:
private List<V> getCenter(String key, Callable<List<V>> loader, Function<String, List<V>> parse) throws Exception {
String updateKey = getUpdateKey(key);
String value = centerCache.get(key);
boolean blankValue = StringUtils.isBlank(value);
List<V> cache = blankValue ? Collections.emptyList() : parse.apply(value);
// if we fail to grab the lock and the value has not expired
if (!centerCache.setNx(updateKey, currentTime) && !blankValue) {
return cache;
}
centerCache.set(updateKey, currentTime, cacheUpdateSecond);
// update the value on an async thread
CompletableFuture.runAsync(() -> updateCache(key, loader));
return cache;
}
private void updateCache(String key, Callable<List<V>> loader) {
    try {
        List<V> newCache = loader.call();
        if (CollectionUtils.isNotEmpty(newCache)) {
            centerCache.set(key, JSON.toJSONString(newCache), cacheExpireSecond);
        }
    } catch (Exception e) {
        GatherContext.error(this.getClass().getSimpleName() + " updateCache catch exception", e);
    }
}
This version uses a distributed lock plus an async thread to handle updates: under concurrency only one request grabs the update lock, and within the update window the other requests keep returning the old data. Because the Redis wrapper I use has no atomic "set if absent with expiry" operation, I grab the lock first and assign the expiration time afterwards. In an extreme scenario this can deadlock: a machine grabs the lock and then crashes before the expiration time is set, so the lock never expires and the cache is never updated again. Extreme as it is, it still needs solving. Below are the two solutions I could think of; I chose the second:
- Combine the two steps into one atomic operation with a Lua script
- Rely on the expiration time of the value itself to break the deadlock
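As a minimal illustration of the second option, here is an in-memory stand-in for Redis (all names here are hypothetical, not the article's real classes): even if a crashed machine leaves the update lock behind forever, a blank value bypasses the lock check, so the deadlock is bounded by the value key's own TTL.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// In-memory sketch only: store/expireAt stand in for Redis, and clock is a test hook
public class LockBypassDemo {
    static final Map<String, String> store = new ConcurrentHashMap<>();
    static final Map<String, Long> expireAt = new ConcurrentHashMap<>();
    static long clock = 0; // when nonzero, used instead of wall time (for deterministic tests)

    static long now() {
        return clock != 0 ? clock : System.currentTimeMillis();
    }

    static String get(String key) {
        Long exp = expireAt.get(key);
        if (exp != null && now() > exp) {
            store.remove(key);   // value TTL elapsed: behaves as if Redis expired the key
            expireAt.remove(key);
        }
        return store.get(key);
    }

    static boolean setNx(String key, String value) {
        return store.putIfAbsent(key, value) == null;
    }

    // Mirrors the guard in getCenter(): reload only when we grab the lock OR the value is
    // blank; a stuck lock therefore blocks reloads only until the value's own TTL runs out
    static boolean shouldReload(String key, String updateKey) {
        boolean gotLock = setNx(updateKey, "t");
        boolean blankValue = get(key) == null;
        return gotLock || blankValue;
    }
}
```

Once the value expires, every request falls through the `blankValue` branch and can rebuild the cache, regardless of the orphaned lock.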
PS: some general information read from ThreadLocal is unavailable on the async thread and must be reassigned there.
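A minimal sketch of that pitfall and the fix, using a hypothetical request-scoped ThreadLocal: the value must be captured on the calling thread and reassigned inside the async task.

```java
import java.util.concurrent.CompletableFuture;

// USER_ID is a hypothetical stand-in for the "general information" read from ThreadLocal
public class ThreadLocalAsyncDemo {
    static final ThreadLocal<String> USER_ID = new ThreadLocal<>();

    // The async pool thread has its own ThreadLocal map, so this reads null
    static String readInAsyncNaive() {
        return CompletableFuture.supplyAsync(USER_ID::get).join();
    }

    // Capture the value on the calling thread, then reassign it inside the task
    static String readInAsyncCaptured() {
        String captured = USER_ID.get();
        return CompletableFuture.supplyAsync(() -> {
            USER_ID.set(captured);
            try {
                return USER_ID.get();
            } finally {
                USER_ID.remove(); // avoid leaking the value into the pooled thread
            }
        }).join();
    }

    public static void main(String[] args) {
        USER_ID.set("user-123");
        System.out.println(readInAsyncNaive());    // null
        System.out.println(readInAsyncCaptured()); // user-123
    }
}
```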
Order-collection core processing flow design
Order collection has no data source of its own: it reads from other services and displays the data after various processing. Such code is both the easiest and the hardest to write. Take the simplest case, assembling product information; typical code looks like this:
// fetch recommended items
List<Map<String, String>> summaryItemList = recommend();
List<ItemShow> itemShowList = summaryItemList.stream().map(v -> {
ItemShow itemShow = new ItemShow();
// set basic item info
itemShow.setItemId(NumberUtils.createLong(v.get("itemId")));
itemShow.setItemImg(v.get("pic"));
// fetch benefit points
GuideInfoDTO guideInfoDTO = new GuideInfoDTO();
AtmosphereResult<Map<Long, List<AtmosphereFullDTO>>> atmosphereResult = guideAtmosphereClient
.extract(guideInfoDTO, "gather", "item");
List<IconText> iconTexts = parseAtmosphere(atmosphereResult);
itemShow.setItemBenefits(iconTexts);
// pre-sale handling
String preSalePrice = getPreSale(v);
if (Objects.nonNull(preSalePrice)) {
itemShow.setItemPrice(preSalePrice);
}
// ......
return itemShow;
}).collect(Collectors.toList());
The code can be written and put into use quickly, but it is a bit messy. A developer who cares about code quality might improve it like this:
// fetch recommended items
List<Map<String, String>> summaryItemList = recommend();
List<ItemShow> itemShowList = summaryItemList.stream().map(v -> {
ItemShow itemShow = new ItemShow();
// set basic item info
buildCommon(itemShow, v);
// fetch benefit points
buildAtmosphere(itemShow, v);
// pre-sale handling
buildPreSale(itemShow, v);
// ......
return itemShow;
}).collect(Collectors.toList());
This is generally considered fairly high-quality code, but only for a single business. When several businesses need the same assembly, branching appears: in the simplest case, products requested from the feeds-flow module do not need benefit points, and products from the top-N flash-sale module do not need pre-sale price handling.
// fetch recommended items
List<Map<String, String>> summaryItemList = recommend();
List<ItemShow> itemShowList = summaryItemList.stream().map(v -> {
ItemShow itemShow = new ItemShow();
// set basic item info
buildCommon(itemShow, v);
// fetch benefit points
if (!Objects.equals(solution, FiltrateFeedsSolution.class)) {
buildAtmosphere(itemShow, v);
}
// pre-sale handling
if (!Objects.equals(source, "seckill")) {
buildPreSale(itemShow, v);
}
// ......
return itemShow;
}).collect(Collectors.toList());
This version makes the branching of the main flow clearly visible, but it also clutters the main flow and reduces readability, so many people prefer to move the judgment into each method, as below. (Some people instead write a separate main flow for each module. The code above is simplified to keep the article easy to follow; the real main flow is long and mostly shared, so creating a separate main flow per module would produce a lot of duplicate code and is not recommended.)
private void buildAtmosphere(ItemShow itemShow, Map<String, String> map) {
if (Objects.equals(solution, FiltrateFeedsSolution.class)) {
return;
}
GuideInfoDTO guideInfoDTO = new GuideInfoDTO();
AtmosphereResult<Map<Long, List<AtmosphereFullDTO>>> atmosphereResult = guideAtmosphereClient
.extract(guideInfoDTO, "gather", "item");
List<IconText> iconTexts = parseAtmosphere(atmosphereResult);
itemShow.setItemBenefits(iconTexts);
}
Looking at the entire order-collection business, whether it is parameter assembly, product assembly, shopping-cart assembly, or ranking-list assembly, information assembly is always required, and it always has the following characteristics:
- The assembly of each field (or each small group of fields) does not affect other fields, and even if it throws an exception it should not affect the assembly of other fields
- On the consumer-facing link, performance requirements are relatively high: skip any assembly logic that is not needed, and avoid any downstream call that can be avoided
- If a required field turns out to be missing during assembly, terminate the process early
- The processing time of each method needs to be recorded, so developers can see clearly where time is spent and find the code worth optimizing
Each of these points is small; skipping one, or implementing one in isolation, does not affect the whole. But a single page contains a great deal of assembly logic, and writing all of the above for each piece produces a lot of redundant code. For developers with high standards, leaving these points out is a thorn in the side. Gradually, because the original design did not consider them, patches pile up; for example, to measure how long a method takes, code like the following appears:
long startTime = System.currentTimeMillis();
// main processing
buildAtmosphere(itemShow, summaryMap);
long endTime = System.currentTimeMillis();
return endTime - startTime;
This type of assembly happens in every domain of order collection: product assembly, parameter assembly, ranking-list assembly, and shopping-cart assembly. After weighing various design patterns against the characteristics of the business, I finally chose the chain of responsibility plus the command pattern.
In GoF's Design Patterns, the Chain of Responsibility pattern is defined as follows:
Decouple the sending and receiving of a request so that multiple receiving objects each have a chance to handle it. String these receivers into a chain and pass the request along the chain until one of them handles it.
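The definition can be boiled down to a few lines of Java; this is a generic sketch of the pattern, not the article's actual classes.

```java
import java.util.ArrayList;
import java.util.List;

// A handler returns true to consume the request and stop the chain
interface Handler {
    boolean handle(StringBuilder request);
}

class HandlerChain {
    private final List<Handler> handlers = new ArrayList<>();

    HandlerChain add(Handler h) {
        handlers.add(h);
        return this;
    }

    void process(StringBuilder request) {
        for (Handler h : handlers) {
            if (h.handle(request)) {
                break; // a handler on the chain consumed the request
            }
        }
    }
}

public class ChainDemo {
    public static void main(String[] args) {
        StringBuilder req = new StringBuilder("order");
        new HandlerChain()
                .add(r -> { r.append("-validated"); return false; }) // pass along
                .add(r -> { r.append("-priced"); return true; })     // consume, stop here
                .add(r -> { r.append("-never"); return false; })
                .process(req);
        System.out.println(req); // order-validated-priced
    }
}
```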
First, let's look at how the Chain of Responsibility pattern addresses the complexity of the code.
Splitting large blocks of logic into functions and large classes into small ones is a common way to manage complexity. With the chain of responsibility we go further: each piece of product assembly is split into an independent class, which keeps the assembly class from growing too large or too complex.
Next, let's look at how the chain of responsibility helps the code satisfy the open-closed principle and improves extensibility.
When we want to extend the assembly logic, say by adding hidden-price filtering, the non-chain implementation requires modifying the main class, which violates the open-closed principle (although the modification is fairly concentrated and therefore acceptable). The chain implementation is more elegant: we only need to add a new Command class (the actual processing classes use the command pattern for business-specific extensions) and register it through addCommand(); no other code needs to change.
The next step is to use this pattern to transform the whole gather page. The core architecture diagram is as follows:
Each domain needs to meet the following conditions:
- Supports single processing and batch processing
- Support early blocking
- Support pre-judgment whether it needs to be processed
The processing class diagram is as follows
- [ChainBaseHandler]: core processing class
- [CartHandler]: cart domain processing class
- [ItemSupplementHandler]: item domain processing class
- [RankingHandler]: ranking-list domain processing class
- [RequestHandler]: parameter domain processing class
Let's first look at the core processing layer:
public class ChainBaseHandler<T extends Context> {
    /**
     * The registered command chain (field implied by the surrounding text; commands are added via addCommand())
     */
    protected List<Command<T>> commands = Lists.newArrayList();
    /**
     * Execute the task chain
     * @param context
     */
    public void execute(T context) {
        List<String> executeCommands = Lists.newArrayList();
        for (Command<T> c : commands) {
            try {
                // pre-check
                if (!c.check(context)) {
                    continue;
                }
                // execute
                boolean isContinue = timeConsuming(() -> execute(context, c), c, executeCommands);
                if (!isContinue) {
                    break;
                }
            } catch (Throwable e) {
                // log the exception
                GatherContext.debug("exception", c.getClass().getSimpleName());
                GatherContext.error(c.getClass().getSimpleName() + " catch exception", e);
            }
        }
        // log each command's elapsed time
        GatherContext.debug(this.getClass().getSimpleName() + "-execute", executeCommands);
    }
}
The timeConsuming method wraps the execute call to measure how long it takes:
private boolean timeConsuming(Supplier<Boolean> supplier, Command<T> c, List<String> executeCommands) {
long startTime = System.currentTimeMillis();
boolean isContinue = supplier.get();
long endTime = System.currentTimeMillis();
long timeConsuming = endTime - startTime;
executeCommands.add(c.getClass().getSimpleName() + ":" + timeConsuming);
return isContinue;
}
The specific implementation is as follows:
/**
 * Execute a single command
 * @return whether to continue execution
 */
private <D extends ContextData> boolean execute(Context context, Command<T> c) {
if (context instanceof MuchContext) {
return execute((MuchContext<D>) context, c);
}
if (context instanceof OneContext) {
return execute((OneContext<D>) context, c);
}
return true;
}
/**
 * Single-data execution
 * @return whether to continue execution
 */
private <D extends ContextData> boolean execute(OneContext<D> oneContext, Command<T> c) {
if (Objects.isNull(oneContext.getData())) {
return false;
}
if (c instanceof CommonCommand) {
return ((CommonCommand<OneContext<D>>) c).execute(oneContext);
}
return true;
}
/**
 * Batch-data execution
 * @return whether to continue execution
 */
private <D extends ContextData> boolean execute(MuchContext<D> muchContext, Command<T> c) {
if (CollectionUtils.isEmpty(muchContext.getData())) {
return false;
}
if (c instanceof SingleCommand) {
muchContext.getData().forEach(data -> ((SingleCommand<MuchContext<D>, D>) c).execute(data, muchContext));
return true;
}
if (c instanceof CommonCommand) {
return ((CommonCommand<MuchContext<D>>) c).execute(muchContext);
}
    return true;
}
The input parameters all share a unified context, whose data field holds the data to be assembled. The class diagram is as follows:
MuchContext (multi-valued data assembly context), data is a collection
public class MuchContext<D extends ContextData> implements Context {
protected List<D> data;
public void addData(D d) {
if (CollectionUtils.isEmpty(this.data)) {
this.data = Lists.newArrayList();
}
this.data.add(d);
}
public List<D> getData() {
if (Objects.isNull(this.data)) {
this.data = Lists.newArrayList();
}
return this.data;
}
}
OneContext (single-valued data assembly context), data is an object
public class OneContext <D extends ContextData> implements Context {
protected D data;
}
Each domain implements its own context as needed, applying domain-model thinking: operations on the input parameters are encapsulated in the context to reduce the access cost for each command processor. For example, the input parameter contains a list of operations, List<HandleItem> handle, but in practice we need the operations separated by type, so the context initializes them for easy access:
private void buildHandle() {
// check operations
this.checkedHandleMap = Maps.newHashMap();
// uncheck operations
this.nonCheckedHandleMap = Maps.newHashMap();
// modify operations
this.modifyHandleMap = Maps.newHashMap();
Optional.ofNullable(requestContext.getExtParam())
.map(CartExtParam::getHandle)
.ifPresent(o -> o.forEach(v -> {
if (Objects.equals(v.getType(), CartHandleType.checked)) {
checkedHandleMap.put(v.getCartId(), v);
}
if (Objects.equals(v.getType(), CartHandleType.nonChecked)) {
nonCheckedHandleMap.put(v.getCartId(), v);
}
if (Objects.equals(v.getType(), CartHandleType.modify)) {
modifyHandleMap.put(v.getCartId(), v);
}
}));
}
Let's look at each command processor, the class diagram is as follows:
The command processors are mainly divided into SingleCommand and CommonCommand. CommonCommand is the general form: the data is handed to each command to process by itself. SingleCommand is for batch processing: the data collection is unpacked in advance. The core difference is whether the loop over the data runs in the framework layer or inside each command. Their main benefits are:
- SingleCommand reduces repetitive looping code
- CommonCommand can improve performance for downstream batch processing
Below is an example of usage:
public class CouponCustomCommand implements CommonCommand<CartContext> {
    @Override
    public boolean check(CartContext context) {
        // skip this command unless the benefit is a cross-store discount or a category coupon
        return Objects.equals(BenefitEnum.kdmj, context.getRequestContext().getCouponData().getBenefitEnum())
                || Objects.equals(BenefitEnum.plCoupon, context.getRequestContext().getCouponData().getBenefitEnum());
    }
    @Override
    public boolean execute(CartContext context) {
        CartData cartData = context.getData();
        // command processing
        return true;
    }
}
The final result is shown below; the execution order of the commands is clear at a glance.
Multi-algorithm routing design
Having covered the underlying code structure, let's talk about design at the business layer. The gather page is divided into many modules: the recommended feeds flow, the ranking-list module, the flash-sale module, and the search module. The overall effect is shown below:
Since these modules use different algorithms, the first design that comes to mind is a separate interface per module, each assembling its own logic. But during implementation you find a lot of shared logic; for example, the recommended feeds flow and the limited-time flash-sale module both use the order-collection engine with exactly the same algorithm logic, the flash-sale module just has extra logic to fetch its key. So I chose to serve them from the same interface and keep it as general as possible, using the strategy pattern with a factory. The core class diagram is as follows:
- [SeckillEngine]: flash-sale engine, encapsulating the flash-sale module's business logic
- [RecommendEngine]: recommendation engine, encapsulating the recommended feeds flow's business logic
- [SearchEngine]: search engine, encapsulating the search module's business logic
- [BaseDataEngine]: general data engine; the common layer extracted from the engines to reduce duplicate code
- [EngineFactory]: engine factory, which routes each module to the appropriate engine

With this pattern, modules can keep accumulating and still be developed and shipped quickly; it is also a fairly standard choice, so I will not elaborate on the business here. Instead, a word on my understanding of the strategy pattern. Some people think its purpose is to avoid if-else branching, but that view is one-sided. Its main purpose is to decouple the strategies and control code complexity, so that no single part grows too complex or too large. For complex code, the strategy pattern also satisfies the open-closed principle: adding a new strategy keeps the change minimal and centralized, reducing the risk of introducing bugs.
PS: design principles and ideas are more universal and more important than design patterns. Once you master them, you understand much more clearly why a given pattern is used, and you can apply it more appropriately.
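A minimal sketch of the strategy-plus-factory routing described above; the engine names follow the class diagram, but the bodies are placeholder stand-ins, not the article's real implementations.

```java
import java.util.Map;

public class EngineFactoryDemo {
    // The strategy interface every engine implements
    interface DataEngine {
        String render(String request);
    }

    static class RecommendEngine implements DataEngine {
        public String render(String request) { return "recommend:" + request; }
    }

    static class SeckillEngine implements DataEngine {
        public String render(String request) { return "seckill:" + request; }
    }

    // The factory maps a module id to its strategy, so the caller needs no if-else branching
    static final Map<String, DataEngine> ENGINES = Map.of(
            "feeds", new RecommendEngine(),
            "seckill", new SeckillEngine());

    static String dispatch(String module, String request) {
        DataEngine engine = ENGINES.get(module);
        if (engine == null) {
            throw new IllegalArgumentException("unknown module: " + module);
        }
        return engine.render(request);
    }
}
```

Adding a search module would mean adding a SearchEngine class and one registry entry; the dispatch code stays untouched, which is the open-closed property discussed above.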
Clever functional design
The gather-page shopping cart
Design background
Order collection is a core link in the use of cross-store discounts, and users have strong demands around it. But because the gather page did not support real-time order-collection progress prompts and similar features, the experience was poor, and we needed to optimize it to improve traffic conversion. For various reasons we had to build an independent shopping cart for the gather page and add order-collection progress on top, while the product data source and dynamic pricing power remain the Taobao shopping cart.
Basic structure design
The gather-page shopping cart needs to display only the products that qualify for a given cross-store discount activity (the same applies to spend-threshold promotions), so I cannot simply call the shopping-cart interface and return all product data with full discount details. Instead I split cart access into two steps. First, query all of the user's added products through the cart's data.query interface (this product data only contains the id, quantity, and add time). After the gather page filters for active products, call the cart's dynamic calculation interface on the remaining products to complete the display of all data in the cart. The process is as follows:
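The two-step flow can be sketched as follows; the real system calls the Taobao cart's data.query and dynamic-calculation interfaces, so both steps are stubbed here with hypothetical in-memory methods.

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class TwoStepCartFlow {
    record CartItem(long cartId, long itemId, int quantity, long addTime) {}

    // Step 1 stub: data.query returns only id / quantity / add-time level data
    static List<CartItem> queryAll(long userId) {
        return List.of(new CartItem(1, 101, 2, 1000),
                       new CartItem(2, 102, 1, 2000),
                       new CartItem(3, 103, 5, 3000));
    }

    // Gather-page filter: keep only the items that qualify for the activity
    static List<CartItem> filterActive(List<CartItem> all, Set<Long> activityItemIds) {
        return all.stream()
                  .filter(i -> activityItemIds.contains(i.itemId()))
                  .collect(Collectors.toList());
    }

    // Step 2 stub: the heavy dynamic-calculation call runs only on the filtered items
    static int calculate(List<CartItem> active) {
        return active.stream().mapToInt(CartItem::quantity).sum();
    }

    static int show(long userId, Set<Long> activityItemIds) {
        List<CartItem> all = queryAll(userId);               // step 1: lightweight query
        List<CartItem> active = filterActive(all, activityItemIds);
        return calculate(active);                            // step 2: dynamic calculation
    }
}
```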
Paging Sort Design
During a big promotion, most items in the cart qualify for the cross-store discount. If every request ran dynamic calculation on all items and returned them at once, performance would be very poor, so paging is necessary; and once page display involves paging, the difficulty rises sharply. First, the cart's sorting requirements:
- On first entry, the order must match the shopping cart: items from the same store are grouped together, and stores are sorted in reverse order of their latest add time
- If the user entered from a particular store, that store must be pinned to the top of the gather page and its items actively checked
- If a newly added product appears while browsing, it must be pinned to the top (without pinning the rest of its store)
- If an invalid product appears while browsing, it must sink to the bottom (placed on the last page, at the bottom)
- If an invalid product becomes valid while browsing, it must move back up
Difficulty Analysis
- Sorting is not a simple time-dimension ordering: there is also the store dimension and store pinning
- We have no data source of our own, and every query requires re-sorting
- The order on first entry differs from the order after subsequent new purchases
- Paging must be supported
Technical solution
The first idea is to store the sorted order somewhere, and the first choice is naturally Redis. But by our estimate, storing the product order per user would cost hundreds of gigabytes of cache (users times activities), and maintaining the cache life cycle would be troublesome. User-dimension data like this is best cached by the client. So how can the front end hold the cache without being aware of it? Here is my interface design:
| Field | Example | Description |
|---|---|---|
| itemList | [{"cartId": 11111, "quantity": 50, "checked": true}] | all products currently held by the front end |
| sign | {} | opaque flag: the front end does not need to inspect it, it just passes back whatever the back end returned (omitted if absent) |
| next | true | whether to continue loading |
| allChecked | true | whether all items are selected |
| handle | [{"cartId": 1111, "quantity": 5, "checked": true, "type": "modify"}] | type=modify updates quantity; checked / nonChecked toggle selection |
The server returns the sign object to the front end, and the next request must pass it back unchanged. The paging information and the product ordering are stored in the sign, which looks like this:
public class Sign {
    /**
     * The minimum weight loaded so far
     */
    private Integer weight;
    /**
     * The latest add-to-cart time among the products in this query
     */
    private Long endTime;
    /**
     * All sorted cart products from the previous query
     */
    private List<CartItemData> activityItemList;
}
Specific plan
- On first entry, do the initial sort by product add time and store dimension, assign weights (200 for the first item, 199 for the second, and so on), and save the result into the sign's activityItemList. Take the first page, record the page's minimum weight and the latest add time endTime across all products into the sign, and return the sign to the front end
- When the front end loads the next page, it passes the sign from the previous response back to the back end. The back end compares the weights against the sign, takes the next page of data in order, writes the new minimum weight into the sign, and returns it to the front end
- If, in between, a product's add time is found to be later than the sign's endTime, it is actively pinned to the top with the default maximum weight of 200
- Since we cannot know during sorting whether products are invalid or checkable, invalid products must be re-sorted after dynamic calculation (the cart's dynamic-calculation interface) completes:
  - If the current page has no invalid products, do nothing; if the page consists entirely of invalid products, also do nothing (to handle the case where the last few pages are all invalid products)
  - If there is a next page, move the invalid products onto it and sink them to the bottom; if the current page is the last page, sink them directly to the bottom
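The weight-based page-taking in the plan above can be sketched as follows; Item stands in for the article's CartItemData, 200 is the default maximum weight, and store grouping and invalid-product handling are omitted to keep the core logic visible.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

public class WeightPagingDemo {
    record Item(long cartId, int weight, long addTime) {}

    static List<Item> nextPage(List<Item> sorted, int signWeight, long endTime, int pageSize) {
        // products added after the sign's endTime are pinned to the top
        List<Item> fresh = sorted.stream()
                .filter(i -> i.addTime() > endTime)
                .collect(Collectors.toList());
        // otherwise take the items whose weight is below the minimum already loaded
        List<Item> rest = sorted.stream()
                .filter(i -> i.addTime() <= endTime && i.weight() < signWeight)
                .sorted(Comparator.comparingInt(Item::weight).reversed())
                .collect(Collectors.toList());
        List<Item> page = new ArrayList<>(fresh);
        page.addAll(rest);
        return page.subList(0, Math.min(pageSize, page.size()));
    }

    // the new sign weight is the minimum weight on the page just served
    static int nextSignWeight(List<Item> page) {
        return page.stream().mapToInt(Item::weight).min().orElse(200);
    }
}
```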
The program timing diagram is as follows:
Product check design
After items in the shopping cart are checked, the order price of the checked items and the discounts they can enjoy are displayed. Check operations mainly fall into:
- Check, uncheck, check all
- Loading the next page while all items are selected
- Changing the quantity of a checked item
The effect diagram is as follows:
Difficulties
- The more items are checked, the longer dynamic calculation takes; with 50 items checked, the page interface takes about 1.5 s to return
- When all items are selected, pull-down loading must actively check the newly loaded products
- Calls to dynamic calculation should be reduced as much as possible (e.g. when loading unchecked items, or modifying the quantity of an unchecked item)
Design
- Since all checked items may need to be calculated, the front end must tell the server the check state of all currently loaded items
- When more than 50 products are checked, dynamic calculation is no longer called: the total price is computed from local prices, and the discount details and order-collection progress are gracefully degraded
- The front end merges the results returned by the back end, reducing unnecessary computation
The overall logic is as follows:
For the check handling, I encapsulated the various ways of obtaining item information into the domain model (checked items, all items, next-page items, operated items, and so on), making them easy to reuse; this was mentioned in the code-design section above. The code for obtaining the various item collections is as follows:
// All activity items on the current page
List<CartItemData> activityItemList = cartData.getActivityItemList();
// Items already in the cart before this request
Map<Long, CartItem> alreadyMap = requestContext.getAlreadyMap();
// Items currently in the checked state
Map<Long, CartItem> checkedItemMap = requestContext.getCheckedItemMap();
// Items newly loaded with the next page, keyed by cartId
Map<Long, CartItemData> addNextItemMap = Optional.ofNullable(cartData.getAddNextItemList())
        .map(o -> o.stream().collect(Collectors.toMap(CartItemData::getCartId, Function.identity())))
        .orElse(Collections.emptyMap());
// User actions in this request: check, uncheck, quantity change
Map<Long, HandleItem> checkedHandleMap = context.getCheckedHandleMap();
Map<Long, HandleItem> nonCheckedHandleMap = context.getNonCheckedHandleMap();
Map<Long, HandleItem> modifyHandleMap = context.getModifyHandleMap();
The check-handling logic is as follows:
boolean calculateAllChecked = isCalculateAllChecked(context, activityItemList);
activityItemList.forEach(v -> {
    CartItemDetail cartItemDetail = CartItemDetail.build(v);
    // Newly added item: add to the dynamic-calculation list and check it
    if (v.getLastAddTime() > context.getEndTime()) {
        cartItemDetail.setChecked(true);
        cartData.addCalculateItem(cartItemDetail);
    // Item the user just checked: add to the dynamic-calculation list and check it
    } else if (checkedHandleMap.containsKey(v.getCartId())) {
        cartItemDetail.setChecked(true);
        cartData.addCalculateItem(cartItemDetail);
    // Item the user just unchecked: add to the dynamic-calculation list and uncheck it
    } else if (nonCheckedHandleMap.containsKey(v.getCartId())) {
        cartItemDetail.setChecked(false);
        cartData.addCalculateItem(cartItemDetail);
    // Quantity of a checked item was changed: add to the dynamic-calculation list
    } else if (modifyHandleMap.containsKey(v.getCartId())) {
        cartItemDetail.setChecked(modifyHandleMap.get(v.getCartId()).getChecked());
        cartData.addCalculateItem(cartItemDetail);
    // Next page loaded: add to the dynamic-calculation list; under "check all", also check this page's items
    } else if (addNextItemMap.containsKey(v.getCartId())) {
        if (context.isAllChecked()) {
            cartItemDetail.setChecked(true);
        }
        cartData.addCalculateItem(cartItemDetail);
    // Decide whether all previously checked items need to join the dynamic calculation
    } else if (calculateAllChecked && checkedItemMap.containsKey(v.getCartId())) {
        cartItemDetail.setChecked(true);
        cartData.addCalculateItem(cartItemDetail);
    }
});
PS: Some readers may see so many if-else branches and think this is bad code. If the branching logic is not complicated and the code is not long, there is no problem: if-else is syntax provided by almost every programming language, and it exists for a reason. Following the KISS principle, the simplest design is the best design. Forcing the strategy pattern here and spawning a pile of extra classes would be over-design.
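The >50-items downgrade described in the design above can be sketched as follows. This is a simplified illustration under assumptions: `PriceDowngrade`, `CheckedItem`, and the `dynamicCalc` function (standing in for the dynamic-calculation RPC) are hypothetical names; only the threshold of 50 comes from the text.

```java
import java.util.List;
import java.util.function.Function;

public class PriceDowngrade {

    /** Threshold from the text: beyond 50 checked items, skip the RPC. */
    static final int DYNAMIC_CALC_LIMIT = 50;

    /** Hypothetical minimal item view: unit price in cents, and quantity. */
    public record CheckedItem(long priceCent, int quantity) {}

    /**
     * When more than 50 items are checked, the dynamic-calculation call is
     * skipped and the total is computed locally as sum(price * quantity);
     * discount details and order-collection progress are degraded elsewhere.
     * Returns the total price in cents.
     */
    public static long totalPrice(List<CheckedItem> checked,
                                  Function<List<CheckedItem>, Long> dynamicCalc) {
        if (checked.size() > DYNAMIC_CALC_LIMIT) {
            return checked.stream()
                    .mapToLong(i -> i.priceCent() * i.quantity())
                    .sum(); // local downgrade, no RPC
        }
        return dynamicCalc.apply(checked); // normal path: dynamic calculation
    }
}
```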
Key design of the marketing commodity engine
Design background
Cross-store full-discount items and category-coupon items are recalled from the engine by couponTagId + couponValue, where couponTagId is the ump activity id and couponValue records the full-discount information. As requirements iterated, we needed to display items that satisfy the cross-store full discount and at the same time participate in other marketing campaigns (such as limited-time flash sales). We could already filter out the items satisfying the cross-store full discount, but how do we also filter out the flash-sale items that are currently in effect?
Detailed index design
Shopping-guide recall relies mainly on an inverted index, and the key question for flash-sale item recall is whether the sale is currently in effect. My idea was therefore to encode the time into the index key, giving the following design:
Key example: mkt_fn_t_60_08200000_60
| index | example | description |
| --- | --- | --- |
| 0 | mkt | marketing tools platform |
| 1 | fn | top N |
| 2 | t | first N minutes |
| 3 | 60 | the 60 minutes before the start are the warm-up time |
| 4 | 08200000 | start time: August 20 at 00:00 |
| 5 | 60 | the sale ends 60 minutes after the start |
The caller can traverse all current keys, compute locally which keys are currently in effect, and then recall with those. The specific details are not elaborated here.
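The local "is this key currently in effect" check can be sketched as follows, based on the key layout in the table above. This is an assumption-laden illustration: `FlashSaleKey` and `isEffective` are hypothetical names, and the year is taken from the current time for simplicity (a real implementation would need to handle year boundaries).

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class FlashSaleKey {

    // Key layout from the table: mkt_fn_t_<warmupMin>_<MMddHHmm>_<endMin>
    // Effective window: [start - warmupMin, start + endMin].
    public static boolean isEffective(String key, LocalDateTime now) {
        String[] parts = key.split("_");
        int warmupMin = Integer.parseInt(parts[3]);
        int endMin = Integer.parseInt(parts[5]);
        // Year is borrowed from `now`; parts[4] carries only MMddHHmm.
        LocalDateTime start = LocalDateTime.parse(
                now.getYear() + parts[4],
                DateTimeFormatter.ofPattern("yyyyMMddHHmm"));
        return !now.isBefore(start.minusMinutes(warmupMin))
                && !now.isAfter(start.plusMinutes(endMin));
    }
}
```

For the example key `mkt_fn_t_60_08200000_60`, any time between 23:00 on August 19 and 01:00 on August 20 is considered in effect.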
Final summary
The original intention of design is to improve code quality
We often talk about "original intention": why you are doing something in the first place. No matter how far you go, how many iterations the product has gone through, or how many times the direction has changed, the original intention generally does not change casually. The same goes for writing code. Applying design patterns is just a means; the ultimate goal is to improve the quality of the code, specifically its readability, extensibility, and maintainability. All design revolves around this original intention.
Therefore, when doing code design, first ask yourself: why design it this way, why apply this design pattern, does this really improve code quality, and in which aspects? If you find it hard to answer clearly, or the reasons you give are far-fetched, you can basically conclude that it is over-engineering, design for design's sake.
Design means finding the problem first, then the solution
In the design process, first analyze the pain points of the code, such as poor readability or poor extensibility, and then use design patterns to improve it. Do not, upon seeing a scene that merely resembles an application scenario of some design pattern you once read about, apply the pattern without considering whether it actually fits, and then, when someone asks, scrape together a few vague, unspecific pseudo-requirements as justification, such as "improving the extensibility of the code" or "satisfying the open-closed principle".
The application scenario of design is complex code
The main function of design patterns is decoupling: using a better code structure to split a large piece of code into smaller classes with more single responsibilities, achieving high cohesion and low coupling. Decoupling mainly deals with code complexity; design patterns exist to solve the problem of complex code.
Therefore, for complex code (a large project codebase, a long development cycle, many developers involved) we need to spend more time on up-front design. The more complex the code, the more time we should spend on design. Moreover, every code submission should go through enough thinking and careful design to ensure its quality and avoid accumulating bad code.
On the contrary, if you are working on a simple project with little code and few developers, then a simple solution is best for a simple problem. Don't introduce overly complex design patterns and complicate simple problems.
Continuous refactoring can effectively avoid overdesign
Applying design patterns improves the extensibility of the code, but it also reduces readability and increases complexity. Once we introduce a complex design, even if the need for extension never materializes, it is very hard to remove that design later, and we have to carry it forward indefinitely.
To avoid over-design caused by wrong predictions, I prefer continuous refactoring as a development approach. Continuous refactoring is not only an important means of ensuring code quality, but also an effective way to avoid over-engineering. The framework code for the core-flow processing above was likewise written through round after round of refactoring.