1. What else can be done to improve the efficiency of R&D?
Improving R&D efficiency has always been our pursuit. From early tooling to full engineering, engineers have done their best to write code faster to keep up with growing business needs. Later, with the rise of mini programs and other platforms, engineers began to study unified multi-terminal development, letting us write code once and run it across terminals, improving efficiency further. But personalized business is still growing explosively, so we cannot help asking: how can we keep innovating to improve R&D efficiency?
Our answer is to "seek change." At a time when intelligence is an increasingly hot topic, traditional ways of improving R&D efficiency have hit a bottleneck. Can intelligence break through it? Since the road of simply writing code faster seems nearly exhausted, can we generate code with AI so that we write less of it? Our exploration therefore turned toward "front-end + intelligence," hoping to use AI and machine learning to expand the front-end's sphere of competence, connect the design and development workflows, and achieve production at scale.
2. Smart code: the chosen road?
The Deco smart code project is our team's exploration of "front-end intelligence." We start from generating code from design drafts (Design To Code, or D2C), complementing the existing design-to-development workflow to further improve production and research efficiency.
Daily requirement development usually follows a fixed workflow: the product manager submits a PRD, the interaction designer produces an interaction draft from the PRD, the visual designer then produces the visual draft, and finally front-end development begins. For front-end engineers, the input is the visual draft plus the PRD, and the output is the code of the online page.
What Deco aims to take over is the work in this process that is relatively low-value for front-end engineers and can be handled with a reuse mindset:
- UI visual draft restoration, that is, page reconstruction, writing HTML + CSS;
- Reusable business logic binding;
- Identification and replacement of existing components in the design draft.
Taking "design draft to code" as the starting point, we explored intelligent solutions to replace traditional manual page reconstruction (analyzing layer styles, slicing images, and so on). The idea is to extract a structured data description from the raw information in the visual draft, then combine various intelligent algorithms to output maintainable page code.
Deco passed preliminary verification during the 618 promotion and has been continuously upgraded and polished since. It is now widely used in the ongoing development of the Double 11 personalized venues, covering about 90% of the promotion floor modules and bringing roughly a 48% improvement in business R&D efficiency.
3. How to implement a design-to-code scheme
3.1. Generating static code
The first step of intelligent code generation from a design draft is to generate static code. The core of this step is producing a "structured data description" from the design draft; we call this data the D2C Schema.
To intelligently generate static code from a design draft, Deco mainly does two things:
- Extract "structured data description" from the visual draft;
- Express "structured data description" as code;
In essence, Deco extracts the D2C Schema from the raw information of the visual draft through a design-tool plug-in, then processes it with a rule system, computer vision, intelligent layout, deep learning, and other techniques to produce D2C Schema JSON with a reasonable, semantic layout, and finally converts that JSON into multi-terminal code with the help of a DSL parser.
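As a rough illustration, a D2C Schema node might carry an absolute frame, a style bag, and a children array. The field names below are our own sketch, not Deco's actual protocol:

```typescript
// Hypothetical D2C Schema node: frame from the design tool, style for CSS
// generation, children filled in later by the layout algorithm.
interface D2CNode {
  id: string;
  type: "Image" | "Text" | "Shape" | "Group";
  rect: { x: number; y: number; width: number; height: number };
  style?: Record<string, string | number>;
  text?: string;
  children: D2CNode[];
}

// Count all nodes in a schema tree, e.g. as a sanity check after export.
function countNodes(node: D2CNode): number {
  return 1 + node.children.reduce((sum, c) => sum + countNodes(c), 0);
}

const sample: D2CNode = {
  id: "root",
  type: "Group",
  rect: { x: 0, y: 0, width: 750, height: 300 },
  children: [
    { id: "pic", type: "Image", rect: { x: 20, y: 20, width: 200, height: 200 }, children: [] },
    { id: "tit", type: "Text", rect: { x: 240, y: 20, width: 300, height: 40 }, text: "goods title", children: [] },
  ],
};
```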
3.1.1. Processing the design draft
A Sketch document (file suffix ".sketch") is a compressed archive composed of layer meta information (divided into Document, Pages, and so on) and resource files (mainly images). We need to process this layer meta information into data that the layout algorithm service can consume.
By developing a Sketch plug-in, we use the APIs Sketch provides to manipulate the document. Once the layer information is obtained, we can process and filter the data. This processing is divided into two layers:
- Design draft processing layer: decouple the Symbol in the design draft into actual layers, and then perform various processing on the layers, such as filtering invisible layers, merging necessary layers, processing masks, etc.;
- Layer information processing layer: extract useful information from the layer, transform the information, remove useless layers, flatten the layer information, etc.
The following figure is the processing flow of layer information:
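The layer-information processing described above can be sketched as a small filter-and-flatten pass. This is a simplified model; real Sketch layer objects carry far more fields:

```typescript
// Simplified Sketch-like layer: groups nest children via `layers`.
interface Layer {
  name: string;
  visible: boolean;
  opacity: number;
  layers?: Layer[];
}

// Drop layers a user cannot see (hidden or fully transparent) and flatten
// nested groups into a single list for the layout algorithm service.
function flattenVisible(layers: Layer[]): Layer[] {
  const out: Layer[] = [];
  for (const layer of layers) {
    if (!layer.visible || layer.opacity === 0) continue;
    if (layer.layers && layer.layers.length > 0) {
      out.push(...flattenVisible(layer.layers)); // recurse into group
    } else {
      out.push(layer);
    }
  }
  return out;
}
```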
Beyond this basic processing of layer information, we established a series of optimization rules for the exported data to improve the rationality of layout and semantics. For example, in some large promotion design drafts, a complex background may be composed of several vector graphics inside one layer group (as shown in the figure below). Exporting these layers as-is would add a great deal of complexity and uncertainty to the layout.
When merging images, if all the layers in a layer group are vector graphics, we merge the group into a single image, which greatly reduces the difficulty of layout. The merged image looks like this:
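The all-vector-group rule itself is simple to state in code. This is only a sketch of the decision; the real exporter also has to rasterize the merged group:

```typescript
type ShapeKind = "vector" | "bitmap" | "text";

interface GroupLayer {
  name: string;
  children: { kind: ShapeKind }[];
}

// If every child of a group is a vector shape, export the whole group as a
// single image instead of emitting each path as a separate element.
function shouldMergeToImage(group: GroupLayer): boolean {
  return group.children.length > 0 && group.children.every(c => c.kind === "vector");
}
```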
Of course, the optimization rules above cannot cover every situation; designers are, after all, free. To further improve layout and semantic quality, we provide a set of standard design-draft protocols for designers and developers to follow.
3.1.2. Restoring the design draft with the layout algorithm
The data produced by the design-draft plug-in must then be processed by the layout algorithm to obtain code-structure data with high visual fidelity and a reasonable layout structure.
The element data exported by the plug-in is absolutely positioned, with the upper-left corner (0, 0) as the coordinate origin, and in general (no manual grouping, no AI recognition, and so on) the elements are flat, that is, there is no parent-child relationship between them.
In front-end development, an absolutely positioned layout falls short in scalability and readability; left unresolved, it yields throwaway code. A layout algorithm is therefore needed to make the generated code extensible and readable enough for subsequent secondary development.
The operation process of the layout algorithm layer consists of three steps: data structure conversion, layout derivation, and style calculation.
Data structure conversion turns the Schema JSON data into a structure similar to a DOM tree, which supports node insertion, deletion, and search operations.
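One simple way to build such a DOM-like tree from flat, absolutely positioned elements is nesting by geometric containment: each box becomes a child of the smallest box that fully contains it. This is only a sketch of the idea, not Deco's actual spatial layout algorithm:

```typescript
interface Box { id: string; x: number; y: number; w: number; h: number }
interface TreeNode extends Box { children: TreeNode[] }

// True when box `a` fully encloses box `b`.
const contains = (a: Box, b: Box): boolean =>
  a.x <= b.x && a.y <= b.y && a.x + a.w >= b.x + b.w && a.y + a.h >= b.y + b.h;

// Nest each flat box under the smallest box that fully contains it;
// boxes contained by nothing become roots.
function buildTree(flat: Box[]): TreeNode[] {
  const nodes: TreeNode[] = flat.map(b => ({ ...b, children: [] }));
  const roots: TreeNode[] = [];
  for (const node of nodes) {
    let parent: TreeNode | null = null;
    for (const other of nodes) {
      if (other === node || !contains(other, node)) continue;
      // Prefer the smaller (more deeply nested) container.
      if (!parent || contains(parent, other)) parent = other;
    }
    (parent ? parent.children : roots).push(node);
  }
  return roots;
}
```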
After the data conversion comes layout derivation. In this step, row and column segmentation is derived, generally using: a spatial layout algorithm, a projection layout algorithm, a background-image layout algorithm, a feature-detection layout algorithm, a coordinate derivation algorithm, background-layer and redundant-layer detection algorithms, and so on. After layout derivation, the layout structure has clear hierarchical and adjacency relationships, and we obtain a well-divided, organized, structured node tree.
Once the layout structure is generated, style calculation is performed: a series of computations over the results of layout derivation. For example, based on the hierarchical relationship, the Flexbox main axis and cross axis can be derived from coordinates; based on the adjacency relationship, margins and other styles between adjacent nodes can be computed. Most Deco layouts use Flexbox, with absolute positioning reserved for special cases.
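The two style calculations just mentioned, deriving the flex main axis from coordinates and margins from adjacency, can be sketched as follows. This is simplified; real drafts need tolerance for overlap and alignment noise:

```typescript
interface Rect { x: number; y: number; width: number; height: number }

// Derive the Flexbox main axis from children's coordinates: if each child
// starts to the right of where the previous one ends, they form a row;
// otherwise treat the container as a column.
function flexDirection(children: Rect[]): "row" | "column" {
  for (let i = 1; i < children.length; i++) {
    if (children[i].x < children[i - 1].x + children[i - 1].width) return "column";
  }
  return "row";
}

// In a row, the margin between adjacent siblings is the horizontal gap.
function gapBetween(a: Rect, b: Rect): number {
  return b.x - (a.x + a.width);
}
```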
After the layout algorithm, we obtain Schema JSON data with good visual fidelity and a reasonable structure.
3.1.3. Generating semantic code
Once the design-draft data has been processed by the layout algorithm, we get reasonably well-structured code, but it still reads poorly because the node elements lack semantic class names. To finally obtain code fit for secondary development, we add a semantic-processing step after the layout algorithm to give the code good semantics.
The first problem to be solved by semantics is how to add semantic class names to element nodes.
To achieve this goal, let us first review how we add class names to element nodes during ordinary development, taking the following product-card image as an example.
The picture above shows a product card. We judge it to be a product based on factors such as the image and the price and copy below it, so we can assign the class name goods to this area. The nodes inside the area follow: the image can be given the class name goods_pic, the text below the image goods_tit, and the price price. This is the general logic for adding class names to element nodes.
As this shows, when we determine the semantics of a region or component, we rely on the combined semantics of the nodes inside it; the product component above, for instance, depends on its internal image, price, copy, and other elements before its class name can be determined. Semantic processing therefore starts from the child nodes of a container element: first determine the children's semantics, then infer the container's, layer by layer, until the semantics of the complete node tree are inferred.
In semantic processing, our main object is the JSON Schema data produced by the layout algorithm, which we call the layout tree. At this point the layout tree is well structured, so we can run semantic inference on it. Inference starts from the leaf nodes and bubbles up, layer by layer, through the branch nodes to the root.
At present our inference relies mainly on a node's position, style, size, siblings, and similar factors, combined with the node's type and some intelligent techniques for auxiliary inference. For example, the smallest leaf nodes are generally of two types: image and text. For text, we can analyze part of speech and semantics with NLP; for images, we can use image classification or recognition to determine the category, or extract key information from the image, to determine its semantics.
To determine the semantics of each node, we combine a series of rules to reason over the known facts (style, position, and other information). Applying some rules yields new facts, which must then pass through other rules before the final result is reached. This is therefore a rule-based reasoning system, and we implemented a forward-chaining inference engine to help us make these decisions.
For example, when inferring the product component above, we first find the text node with price characteristics and name it price. We then look near price in the tree, at a similar level, for an image node that meets the size requirements of a product image; a container holding both the price and such an image can then be judged, with reasonable confidence, to be a product container. Finally, based on the number of elements in the container, whether there is a piece of text near the image, and NER analysis of that text, we determine whether the text is a product name, and assign semantic class names accordingly.
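A forward-chaining engine of the kind described can be quite small: it keeps applying rules whose premises are all known facts until no rule produces a new fact. The sketch below uses made-up fact and rule names echoing the product-card example; Deco's real rule base is far larger:

```typescript
type Facts = Set<string>;
interface Rule { if: string[]; then: string }

// Forward chaining: repeatedly fire rules whose premises are all known,
// adding their conclusions as new facts, until a fixed point is reached.
function forwardChain(initial: Facts, ruleSet: Rule[]): Facts {
  const known = new Set(initial);
  let changed = true;
  while (changed) {
    changed = false;
    for (const rule of ruleSet) {
      if (!known.has(rule.then) && rule.if.every(f => known.has(f))) {
        known.add(rule.then);
        changed = true;
      }
    }
  }
  return known;
}

// Illustrative rules in the spirit of the product-card example above.
const rules: Rule[] = [
  { if: ["text-looks-like-price"], then: "node:price" },
  { if: ["image-is-square", "image-large"], then: "node:goods_pic" },
  { if: ["node:price", "node:goods_pic"], then: "container:goods" },
];
```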
Across the whole of semantic processing, the judgment rules above are only the tip of the iceberg. Working through the entire e-commerce scene, we analyzed a large number of design drafts and online cases and distilled a large set of judgment rules to make semantic naming reasonable, using AI methods such as NLP analysis and image classification and recognition along the way to obtain more accurate semantic class names.
3.1.4. Generating the DSL
After the preceding steps, we can generate code. To support generating code for different DSLs, we abstract the semantically processed data into virtual-DOM-like node description data, then process that node description into the code of each DSL, while reserving a clean interface for extending to new DSLs.
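A minimal emitter from virtual-DOM-like node data to one DSL (React-style JSX here) might look like the following; other DSLs would swap in a different emitter behind the same node shape. This is our own sketch, not Deco's parser:

```typescript
// Virtual-DOM-like node description produced by semantic processing.
interface VNode {
  tag: string;
  className?: string;
  text?: string;
  children: VNode[];
}

// Recursively emit indented JSX-style markup from a VNode tree.
function toJSX(node: VNode, indent = 0): string {
  const pad = "  ".repeat(indent);
  const cls = node.className ? ` className="${node.className}"` : "";
  if (node.children.length === 0 && !node.text) {
    return `${pad}<${node.tag}${cls} />`; // self-closing leaf
  }
  const inner = node.text
    ? pad + "  " + node.text
    : node.children.map(c => toJSX(c, indent + 1)).join("\n");
  return `${pad}<${node.tag}${cls}>\n${inner}\n${pad}</${node.tag}>`;
}
```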
3.2. Giving the code a soul
After generating static code, we find that existing components sometimes appear in the design draft. Ideally we would identify them there and reuse them when generating the code.
To solve this problem, we turned our attention to deep learning. Using object detection and classification algorithms, we can identify and locate existing components in the design draft and map them to components in the component library, achieving component reuse.
The overall flow of component recognition and mapping is: for an input design-draft image, first cut it sensibly using a vision algorithm; identify the UI blocks contained in each cut image and map the blocks into the Schema JSON; then run a classification algorithm on each block to identify the most likely component type; and finally write the identification information into the corresponding Schema JSON nodes, completing the component mapping.
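Writing detection results back into the Schema JSON can be modelled as matching each detected block to the schema node it overlaps most, for example by intersection-over-union. Field names and thresholds below are illustrative, not Deco's actual implementation:

```typescript
interface Rect { x: number; y: number; width: number; height: number }
interface SchemaNode { id: string; rect: Rect; component?: string; children: SchemaNode[] }
interface Detection { component: string; rect: Rect; score: number }

// Intersection-over-union between a detected UI block and a schema node.
function iou(a: Rect, b: Rect): number {
  const ix = Math.max(0, Math.min(a.x + a.width, b.x + b.width) - Math.max(a.x, b.x));
  const iy = Math.max(0, Math.min(a.y + a.height, b.y + b.height) - Math.max(a.y, b.y));
  const inter = ix * iy;
  const union = a.width * a.height + b.width * b.height - inter;
  return union === 0 ? 0 : inter / union;
}

// Write each confident detection onto every schema node it overlaps enough.
function annotate(root: SchemaNode, detections: Detection[], minIoU = 0.7): void {
  const walk = (n: SchemaNode): void => {
    for (const d of detections) {
      if (d.score >= 0.5 && iou(n.rect, d.rect) >= minIoU) n.component = d.component;
    }
    n.children.forEach(walk);
  };
  walk(root);
}
```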
So far we have completed training on the promotion business component library and can accurately identify and locate promotion components in design drafts. However, letting other businesses adopt the component recognition mapping scheme is difficult: other business teams lack AI processing capability, and doing their component training ourselves would greatly increase our team's workload. To share the AI capability Deco has accumulated, we designed an AI open platform.
The AI open platform gives developers zero-threshold access to AI capabilities such as image classification and object detection. Developers can create and manage custom datasets on the platform, pick a preset algorithm model, and run model training and preview evaluation directly. When a new component library needs training, the business team can now handle it themselves through the platform.
3.3. Landing in the business
At this point we have an overall scheme for generating code from design drafts, but there is still a long way to go before it lands in the business; we need a platform to carry it. When the generated static code has errors, a visual editor lets us make secondary adjustments, and alongside the static code we can add logic such as componentization, field binding, and lifecycle handling.
3.3.1. Componentization
With the "component marking" function we can mark a node as a component. Once marked, we can set the component's state and data, its inputs, its lifecycle, and so on, and the final code is generated in component form.
3.3.2. Data definition
We can define a page's global data or a component's internal data as needed, and we can define shared data by setting up a React context.
3.3.3. Asynchronous data requests
For the most common scenario, asynchronous data requests, Deco provides a visual form through which simple configuration quickly generates the request code.
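Conceptually, the visual form collects a small configuration object and a template turns it into request code. The sketch below is our own illustration of that idea, not Deco's actual template:

```typescript
// What the visual form might collect for one configured API.
interface RequestConfig {
  name: string;            // generated function name
  url: string;             // request endpoint
  method: "GET" | "POST";
}

// Emit the source of a request helper from the configuration.
function genRequestCode(cfg: RequestConfig): string {
  return [
    `async function ${cfg.name}() {`,
    `  const res = await fetch("${cfg.url}", { method: "${cfg.method}" });`,
    `  return res.json();`,
    `}`,
  ].join("\n");
}
```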
3.3.4. Event binding
Deco provides a variety of node events, including click events, as well as definitions for component lifecycles and other events; users can edit the logic code inside each event.
3.3.5. Attribute editing and data binding
On top of its component-mapping capability, Deco also opens up component attribute editing and data binding, connecting pages with dynamic data.
4. Future prospects
Today, smart code generation looks like a road well worth exploring. It is a seed that may have only just germinated, but we have high expectations for it. Through Deco we hope to explore the road of front-end intelligence and the many possibilities of combining AI with the front end; more importantly, we hope Deco can start an efficiency revolution in production and research, finding new ways to cut business costs and raise efficiency at a time when front-end engineering, platforms, and methodologies are already mature.
Meanwhile, we keep working on intelligence. Perhaps in the future we can deliver directly from design, which would be another innovation for the industry.
The road ahead is long; we technologists will keep searching, high and low.