
Author: Ygor Serpa | Editor: Xiaobai

After nearly six years working in machine learning, artificial intelligence, and software development, I've distilled seven lessons from the successful and failed projects I've been part of. The bottom line: despite having learned countless models and techniques, an effective, professional data scientist or algorithm engineer learns to avoid complexity as much as possible. After all, what really drives business value is solving pressing problems effectively, not blindly pursuing state-of-the-art technology.

More specifically, in real business settings the problems data scientists need to solve are never single and isolated; they are complex and varied. There are many things a data scientist can do, such as improving an existing model, deploying a new one, or redoing a particular step. Experience has taught me that people tend to get caught up in the models and forget everything else, yet it is often the forgotten and overlooked things that prove decisive in solving the problem.

I have been working on AI since 2015, as the lead or sole developer on several AI projects. Two aspects have shaped my career: (1) I had to figure out most of the project and development work on my own; (2) most of the time I work with people who have little idea of what AI can really do.

These experiences have led me to the following lessons:

1. The simplest models usually perform best
2. Prioritize proven methods in production
3. Everything can be improved; the key is knowing what to prioritize
4. Finding AI application scenarios and solutions is more important than optimizing existing models
5. The accuracy of the code is as important as the accuracy of the results
6. Don’t use AI for AI’s sake
7. AI needs to be tightly coupled with the business

1. The simplest models usually perform best

The optimal solution varies from problem to problem: some require neural networks, some are best handled with XGBoost, and some can be solved perfectly well with decision trees, logistic regression, or linear regression.

After learning about all these fantastic techniques, algorithm engineers are always tempted to reach for something new in the Scikit-learn library. Don't. Start with the simplest method that could possibly work; if linear interpolation does the trick, use that. Blindly chasing fancy methods usually gets you nowhere.

In practice, simpler methods may not handle as many problems as more powerful ones, but they have a core advantage: the simpler the model, the easier it is to explain how it works, how well it fits the data, and why it makes a given prediction, and the easier it is to debug. A fancy Gaussian process or a state-of-the-art network often brings no additional gain. Where possible, prefer linear interpolation first, then logistic regression and decision trees.
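To make this concrete, here is a minimal sketch of starting with the simplest explainable baselines before reaching for anything heavier. It assumes scikit-learn; the dataset and hyperparameters are purely illustrative, not a recommendation.

```python
# A minimal baseline check: try the simplest, most explainable models first.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)  # illustrative dataset

baselines = {
    "logistic_regression": make_pipeline(StandardScaler(), LogisticRegression()),
    "decision_tree": DecisionTreeClassifier(max_depth=3),
}

for name, model in baselines.items():
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {score:.3f}")

# Only reach for XGBoost or a neural network if these simple,
# debuggable baselines clearly fail to meet the requirement.
```

If a baseline like this already meets the requirement, a fancier model rarely earns its extra complexity.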

Tips: I once spent months on complex classifiers, calibration, and preprocessing, only to find in the end that the simplest method worked best.

2. Prioritize proven methods in production

Setting other factors aside and looking at neural networks plainly: newly released architectures are often too fancy, and all the gorgeous embellishments are ultimately just embellishments. What we need in production is a tried-and-true model that works well even in uncertain environments.

In my experience, ResNet is exactly that: tried and tested.

In the three years I spent training networks almost every day, I would always set aside a day in the project schedule to try new loss functions, activation functions, and so on. What I found is that 95% of the time, the results from the innovative method were about the same as what I got with cross-entropy or L2 loss, ReLU activations, a CNN backbone, and the Adam optimizer (within roughly 1%).
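As an illustration, a minimal sketch of that boring-but-proven setup might look like the following. It assumes PyTorch and torchvision; the model size, learning rate, and data loader are placeholders, not a tuned recipe.

```python
# A tried-and-true baseline: ResNet backbone, cross-entropy loss, Adam optimizer.
import torch
import torch.nn as nn
from torchvision.models import resnet18

model = resnet18(num_classes=10)                            # proven backbone
criterion = nn.CrossEntropyLoss()                           # proven loss
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)   # proven optimizer

def train_one_epoch(loader):
    """One standard epoch: no exotic losses or activations."""
    model.train()
    for images, labels in loader:   # `loader` is any (image, label) DataLoader
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

Start from a baseline like this, and only swap in new components when they demonstrably beat it.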

Tips: Does this mean we should never try new things? Of course not. But remember: sticking to tried-and-true methods is always the safest bet.

3. Everything can be improved; the key is knowing what to prioritize

Frankly, this is probably the most important piece of information in this article.

We can always improve a model and always make it run faster; the real question is what deserves our highest priority.

I've sat through countless meetings discussing scalability issues that don't exist, or planning around potential changes that will never happen. In reality, most businesses don't need a model that is 10% better; what they really need is 10% more customers.

It's delightful to spend a few days debugging the crazy beasts we call neural networks, but the impact on real business value is limited. When a business starts an AI project, the most pressing issue is building an AI model. Once you have the model, the next priority is getting more data and putting the model into production. Once those two problems are solved, the most important question becomes how to expand the scope of AI in the business. For a real business, applying AI more broadly usually yields a higher ROI than continuously optimizing a single model.

Frankly speaking, most business managers have little idea what AI is or what it can do, and may imagine it performing superhuman feats. Mastering the model and educating the people around you about it is therefore also an important part of a data scientist's or algorithm engineer's job.

Tips: Take a hard look at everything you do and have, identify the weakest link, figure out exactly which change will bring the most business value, and then keep improving and iterating on those areas.


4. Finding AI application scenarios and solutions is more important than optimizing existing models

As mentioned in the previous tip, improving models that are already in production is, in my experience, usually a low-priority task. With good AI development and production management tooling, we can keep feeding in new data and retraining the model to make it better.

If your company's AI development and production pipeline has no such feedback loop, raise it at the next meeting as a key enabler for broader AI adoption. Once an organization has efficient, smooth channels for data access and processing, model training, deployment, and application, AI developers can devote more time to exploring new AI application scenarios and solutions. Expanding where AI is applied and exploring new scenarios often brings greater business value than improving models that are already live. Exploring a new scenario means a systematic chain of work: access the data, explore it, find a suitable model, train it, and put it into production to see what happens.

Tips: The notable shift required to become a more effective AI professional is to move from focusing solely on improving existing models to actively exploring new AI application scenarios; that is usually where the higher business value lies.

5. The accuracy of the code is as important as the accuracy of the results

AI models are surprisingly tolerant of common coding mistakes such as a missing step or sloppy math, especially numerically oriented methods such as gradient boosters, support vector machines (SVMs), and neural networks.

As a simple example, suppose we forget to normalize the input data; the model will simply learn from the unnormalized data. The problem is that AI systems often have two separate code paths: training and inference. If the training code contains a mathematical error but the inference script does not, the results are disastrous: every parameter of the model is fitted to a corrupted dataset, and once it goes to production, its output for clients is bound to be wrong. Another common mistake is a bug in custom data augmentation that makes the model work twice as hard to decode inputs that will never appear in production.
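One simple way to guard against this kind of training/inference skew is to define the preprocessing once and persist it alongside the model. Below is a minimal sketch, assuming scikit-learn and joblib; the stand-in data and file names are purely illustrative.

```python
# Keep training and inference preprocessing identical by persisting the scaler.
import joblib
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Illustrative stand-in data.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(200, 5)), rng.integers(0, 2, size=200)
X_new = rng.normal(size=(10, 5))

# --- Training path: fit the scaler ONCE, on training data only. ---
scaler = StandardScaler()
model = LogisticRegression().fit(scaler.fit_transform(X_train), y_train)
joblib.dump(scaler, "scaler.joblib")   # persist preprocessing with the model
joblib.dump(model, "model.joblib")

# --- Inference path: load and reuse the SAME transform; never re-fit here. ---
scaler = joblib.load("scaler.joblib")
model = joblib.load("model.joblib")
preds = model.predict(scaler.transform(X_new))
```

Sharing one preprocessing object between both code paths removes a whole class of silent mismatches between training and production.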

Method flaws can hide in training and inference scripts in many forms: missing normalization, wrong data types, corrupted data augmentation, and so on. AI developers need to make sure everything in the code is correct, preferably with a visual way to check for and identify these errors.

Finally, reaching 90% accuracy does not mean the model is ready for production; a high accuracy figure can itself be the product of a bug. Don't trust any piece of code blindly.

Tips: There may be terrible errors in the code that go unnoticed because their impact on the training results is not severe enough to break the whole system. Putting such a model into production, however, can hurt the business, so take model testing seriously.

6. Don’t use AI for AI’s sake

AI is not a panacea for every problem. Some problems are better handled with traditional code, which is readable, deterministic, understandable, and, most importantly, exact. When we sort an array, we don't expect an "89% chance of getting the array sorted". Problems that can be solved exactly with traditional code do not need AI.

Other problems, such as object detection or image generation, cannot easily be solved with traditional code and do rely on AI. The catch with an AI model is that it is never perfect: a model that is 90% accurate still has a 10% chance of being wrong.

Tips: It is essential to judge accurately where AI is applicable; don't "use AI for AI's sake".

7. AI needs to be tightly coupled with the business

Enterprises don't apply AI and big data to show off the power of machine learning or their mastery of neural networks; they do it to drive business value by applying AI to unconventional tasks. So how do we measure business value?

Today, models are evaluated mainly with accuracy-style metrics: the number of correct predictions, the number of objects found, the human body keypoints detected, and so on. But these are local metrics that measure the model's specific impact on its own task. Business value comes from tight coupling between the model and the business, where the model effectively moves global measures such as profit, conversion rate, and customer churn. Measuring the impact of a single intervention like AI on the overall business is often extremely challenging; it relies on close, efficient cooperation between the algorithm team and the business team, and on choosing a sound comparison baseline to exclude the interference of other factors. For example, if the business metric we track is sales, which fluctuates seasonally, a naive before-and-after comparison could easily attribute a seasonal swing to the model.

Tips: More important than AI itself is making AI have a valuable impact on the product, the business, and users' lives. Model accuracy is a good local measure, but it does not represent business value. Realizing AI's business and commercial value depends on mutual trust and close cooperation between the algorithm team and the business team.

This translation is based on:
Ygor Serpa, 7 Truths To Be a Better AI Professional


From experience to practice

This Saturday (4/16) at 14:00, we will hold a hands-on IDP Meetup, sharing, from the perspective of data scientists and big data engineers, how an AI development and production platform can help solve their day-to-day problems.

Event details: https://segmentfault.com/e/1160000041670998

