Hello everyone! I love eating, playing, and learning about technology. I am the IT industry's new Internet celebrity and a good friend of developers: BitBear!
Have you watched "The Metamorphosis of Youth"? To be clear, it is not a preview of a future BitBear cosplay~
BitBear: The guests invited to the March story session have impressive backgrounds! Sitting in our [BitBear Live Room] is Mr. Li Zhe'ao, a community star I met at PyCon China's Beijing stop and a very popular teacher in the Microsoft MVP community. Joining us in the online live room is Mr. Shao Wenjian, an Intel heavyweight introduced to me by Nono of the OpenVINO Chinese Community, a good friend of BitBear!
[BitBear Story Collection] is an important column of the BitBear live broadcast room: we regularly invite technical experts and industry pioneers as guests. There are not only the "hottest" technologies, but also personal stories and interesting topics to unlock! Please keep following along, and feel free to recommend the "heroes" you would like to get to know. Welcome to the live broadcast room to chat with experts up close, together with BitBear.
Everyone is welcome to visit the MSLearn learning platform and plan a learning path with BitBear to accelerate your progress.
BitBear: Let me share an exclusive [BitBear Story Collection] revelation: the superhero in Mr. Li Zhe'ao's heart is Ultraman, and he has watched the entire Ultraman series! As for whether Mr. Shao Wenjian also has a favorite hero, let's first ask Mr. Shao to share some solid technical content; please look carefully for the Easter eggs in the text!
Shao Wenjian: Cloud computing, object computing, edge computing, edge alliances... these terms have all been very popular recently, so what exactly is edge computing? Simply put, edge computing plays a linking role: it connects the IoT devices on the device side with the core-network data centers in the cloud.
The importance of edge computing can be seen from one set of research data: 45% of data is stored, analyzed, and acted on at the edge, and 43% of AI tasks run on edge devices. The report, from 2017, predicted a 15-fold explosion in deployments of AI-enabled edge devices by 2023.
BitBear: Mr. Shao Wenjian summed up four factors driving the development of edge computing. The field he specializes in, intelligent video edge computing, illustrates these needs very well.
1. The need for low latency. Many residential communities now open and close their gates automatically based on license plate recognition. But if your license plate image had to travel from the camera over the network to the core network, then to a data center where it queues with the huge volume of data gathered there, and only then be sent back, the round-trip delay could be ten seconds or even tens of seconds, which is completely unacceptable in practice. On industrial production lines, machine vision has even stricter latency requirements, basically at the millisecond level, so low latency is very important for many video applications.
2. Bandwidth cost. Although 4G and 5G are developing rapidly, video data is still very large: bit rates range from 1 Mbps up to 4~10 Mbps, and roughly 1 billion IP cameras have already been deployed. If all of that data had to be gathered in the cloud, the resulting bandwidth cost would be unaffordable under current technology and business models (a rough back-of-envelope estimate follows this list).
3. Data security and personal privacy. People pay more and more attention to privacy protection, and images of faces and vehicles are sensitive information. The country is formulating standards that require data involving personal privacy to be processed and desensitized at the edge before being uploaded to the cloud for further steps.
4. Stable connections. I studied communications, and achieving a stable end-to-end video connection is not easy. If you understand video codecs, you know that video frames depend on one another: if the connection is unstable, losing one frame can mean losing dozens or even hundreds of consecutive frames of data.
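To make the bandwidth point concrete, here is a rough back-of-envelope estimate. The camera count and bit rates are the figures quoted above; everything else is simple arithmetic:

```python
# Rough aggregate-bandwidth estimate using the figures quoted above.
cameras = 1_000_000_000                 # ~1 billion deployed IP cameras
bitrate_low, bitrate_high = 1e6, 10e6   # 1 Mbps to 10 Mbps per stream

total_low = cameras * bitrate_low       # bits per second
total_high = cameras * bitrate_high

print(f"{total_low / 1e15:.0f} Pbps to {total_high / 1e15:.0f} Pbps aggregate")
# -> roughly 1 Pbps to 10 Pbps if every stream were backhauled to the cloud
```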
The following figure takes a security scenario as an example to show how intelligent video edge computing is implemented:
Intelligent video edge computing has many applications across fields, such as AI-based video structuring, which turns all of the information we care about in a video into records in a structured database; this places very high demands on AI processing.
Shao Wenjian: Our Intel OpenVINO community has been contributing to this field. As everyone knows, Intel provides a variety of hardware products, from Atom to Core to Xeon, as well as FPGA products and vision-acceleration products based on the VPU (vision processor). These hardware products and solutions are tied together by the software tools and software development kits Intel provides.
Here are two of them: Intel® Media SDK and OpenVINO. Media SDK is a high-performance software tool for video encoding, decoding, and image processing. After many years of development, it is winding down this year; its successor, oneVPL, launches this year with higher efficiency than Media SDK while largely keeping Media SDK's programming and interface style.
OpenVINO is Intel's software toolkit for rapidly deploying and accelerating AI inference workloads. The following figure shows OpenVINO's basic workflow; its two core modules are the Model Optimizer and the Inference Engine.
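Alongside that figure, here is a minimal Python sketch of the same two-stage flow. The model file name and input shape are placeholders, and the runtime calls reflect the 2022.1-era Python API, so check the documentation for your version:

```python
import numpy as np
from openvino.runtime import Core  # OpenVINO >= 2022.1 Python API

# Stage 1 (offline): the Model Optimizer converts a trained model into
# OpenVINO IR, e.g. on the command line:  mo --input_model model.onnx
# producing model.xml / model.bin.

# Stage 2 (runtime): load the IR and run inference.
core = Core()
model = core.read_model("model.xml")          # placeholder IR file
compiled = core.compile_model(model, "CPU")   # target device

dummy_input = np.zeros((1, 3, 224, 224), dtype=np.float32)  # example shape
results = compiled([dummy_input])             # run one inference
output = results[compiled.output(0)]
print(output.shape)
```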
OpenVINO has gone through four years of development, from 2018 to 2022. The 2022.1 release is a relatively big milestone for us, mainly with the following new features:
1. The Open Model Zoo supports more model families, with 33 new models added; dependency management is simplified; and a new API 2.0 is introduced that is consistent with mainstream AI programming styles.
2. Improved compatibility: the Model Optimizer's parameters are simplified, dynamic input shapes are supported, and PaddlePaddle models are supported directly.
3. For deployment, many preprocessing steps can be folded into the model; the Auto plugin is supported; performance-configuration hints are added, such as the ability to choose between low latency and high throughput; and first-inference latency is optimized (a small sketch of the hints follows this list).
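As an illustration of the Auto plugin and performance hints, here is a minimal sketch. The model file is a placeholder, and the string config keys reflect the 2022.1-era API, so treat them as an assumption and check the release notes for your version:

```python
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")  # placeholder IR file

# Let OpenVINO pick the device ("AUTO") and tune for throughput.
throughput_model = core.compile_model(
    model, "AUTO", {"PERFORMANCE_HINT": "THROUGHPUT"}
)

# Or compile the same model tuned for low latency instead.
latency_model = core.compile_model(
    model, "AUTO", {"PERFORMANCE_HINT": "LATENCY"}
)
```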
The 2022.1 release brings big improvements in productivity, compatibility, and performance, and is a great convenience for the Chinese community and Chinese AI users. I hope our community users and developers will take the opportunity to go to the OpenVINO website, download it, and try it out.
BitBear: I've been listening, fascinated. Some of what I just heard relates to Mr. Li Zhe'ao, who is sitting next to me, so I'll give Mr. Li Zhe'ao a special opportunity to ask Mr. Shao Wenjian questions on everyone's behalf.
Li Zhe'ao: On questions such as inference and deep learning, governance of edge devices, and the distribution and management of models: does OpenVINO provide an out-of-the-box solution for model distribution and grayscale (canary-style) model rollout?
Shao Wenjian: As far as I know, not yet. There is an application called OpenVINO Model Server, which turns a network's inference into a service that other applications can call; as for out-of-the-box distribution, it depends on whether OpenVINO plans to provide such capabilities in the future. They would also need to be integrated with edge platforms, especially platforms based on k8s or k3s.
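For context, OpenVINO Model Server exposes TensorFlow-Serving-compatible gRPC and REST endpoints, so a client can call it over HTTP. A minimal REST client sketch might look like the following; the host, port, model name, and input shape are all placeholders for a hypothetical deployment:

```python
import numpy as np
import requests

# Hypothetical deployment: an OVMS instance serving a model named "resnet"
# on localhost:9000 via its TF-Serving-compatible REST API.
url = "http://localhost:9000/v1/models/resnet:predict"

batch = np.zeros((1, 3, 224, 224), dtype=np.float32)
payload = {"instances": batch.tolist()}

response = requests.post(url, json=payload, timeout=10)
predictions = response.json()["predictions"]
print(len(predictions))
```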
Li Zhe'ao: I'd like to ask: what other development directions will OpenVINO pursue this year or even next year? Is there anything you can reveal?
Shao Wenjian: We previously launched the desktop and mobile versions of Alder Lake, the industry's first hybrid-architecture CPU. The next-generation Xeon server platform, Sapphire Rapids, will also be released in the second half of this year with the new AMX (Intel Advanced Matrix Extensions) instructions, which can accelerate matrix operations directly. I think OpenVINO's primary task is performance optimization and support for Intel hardware: bringing out the performance of the new architectures and improving compatibility.
Li Zhe'ao: I'm looking forward to seeing how OpenVINO will run on the new Xeon servers.
Shao Wenjian: To reveal just a little: there will definitely be surprises.
BitBear: Alright, alright, if you keep asking it's going to touch on trade secrets! Let's ask Mr. Li Zhe'ao to share some content with everyone instead!
Li Zhe'ao: My sharing may be more down-to-earth than Mr. Shao's; please don't laugh at me or throw things. I mainly want to talk about Python today, focusing on 3.9 and later, or rather some of the changes over the past year. Have you used Python before?
BitBear: When this bear first started live streaming, I did an online workshop with Microsoft Cloud Advocate teacher Lu (Lu Jianhui). Although it was only a few sessions, it gave me a preliminary understanding.
Li Zhe'ao: Python is used in many settings today. For example, in deep learning, the PyTorch and TensorFlow that Mr. Shao just mentioned both use Python as the front end of their DSLs. Python also has many applications in back-end development and traditional SRE scenarios: Douban, Ele.me, and Toutiao all use Python for their main services, as do sites like Instagram and Reddit abroad.
By 2022, everyone may have some complaints: for example, writing dynamic types for a while, then having to refactor the whole thing and getting scolded to death by the boss, right? Also, Python's performance isn't great, or Python lacks things that are standard in other languages. People have always said Python is a dynamic language with rich expressiveness, yet it remains imperfect in many respects. Looking at this year, Python has mainly focused on three areas of enhancement: syntax features, the standard library, and performance. The standard-library work also includes paying down some historical debt.
The syntax improvement that impressed me most comes from PEPs 634, 635, and 636 (PEP is short for Python Enhancement Proposal): Python's match statement (structural pattern matching) has finally arrived. You can return different things and do different processing depending on whether a status code is, say, 400, 404, or 418. Before, everyone could only simulate this syntactic sugar with if/else chains; after these three PEPs, that is, from 3.10 onward, Python can laugh and say it finally has something the C language has had since the 1970s: switch/case. Can you feel the excitement? We can look at a slightly more complex example:
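Here is a small illustrative sketch of such a match statement; the handler and status codes are just examples, not the exact slide from the talk:

```python
def describe_status(status: int) -> str:
    """Return a message for an HTTP status code using structural pattern matching."""
    match status:
        case 400:
            return "Bad request"
        case 404:
            return "Not found"
        case 418:
            return "I'm a teapot"
        case 500 | 502 | 503:            # several codes can share one branch
            return "Server error"
        case code if 200 <= code < 300:  # guards allow range checks
            return "Success"
        case _:
            return "Something else"

print(describe_status(418))  # -> "I'm a teapot"
```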
Compared with the earlier if/else across the various branches, this achieves semantic agreement to a large extent: we can write code in a way that is close to human language, or at least intuitive, and see at a glance what the code is doing. It also supports more complex destructuring and matching behavior... A proposal that previously split the Python community is PEP 572's walrus operator... It lets us write more flexible code, but I personally don't want to overuse it; code that is too magical will get you beaten up by colleagues and bosses at refactoring time.
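For reference, a tiny sketch of the walrus operator (:=) from PEP 572, which turns assignment into an expression; the strings and values are just illustrations:

```python
import re

line = "error: disk full"

# Bind and test in one expression instead of two separate statements.
if (match := re.search(r"error: (.+)", line)) is not None:
    print(match.group(1))  # -> "disk full"

# It also avoids recomputing a value inside a comprehension.
values = [1, 4, 9]
roots = [r for v in values if (r := v ** 0.5) > 1]
print(roots)  # -> [2.0, 3.0]
```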
PEP 612 proposes a special kind of type called a parameter specification. When we wrote decorators before, you might not have known how to annotate the decorated callable's parameters; maybe everything just got marked... PEP 612 addresses exactly this, as the sketch below shows.
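A minimal sketch of the decorator typing that PEP 612 enables; the decorator and function names are just illustrations:

```python
from typing import Callable, ParamSpec, TypeVar  # ParamSpec: Python 3.10+

P = ParamSpec("P")
R = TypeVar("R")

def log_calls(func: Callable[P, R]) -> Callable[P, R]:
    """A decorator that preserves the wrapped function's exact signature."""
    def wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
        print(f"calling {func.__name__}")
        return func(*args, **kwargs)
    return wrapper

@log_calls
def add(x: int, y: int) -> int:
    return x + y

add(1, 2)        # type checkers know the parameters are (int, int)
# add("1", 2)    # and would flag this call as a type error
```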
After 612, we have something special called ParamSpec, which makes type hints easier to use and bridges the gap between type hints and Python's dynamic features. The third point is Python's performance, which everyone has always been concerned about; it's a long-standing problem. In scenarios with high performance demands, such as the PyTorch and TensorFlow that Mr. Shao just introduced, or the Taichi graphics work at my current company, we use Python as a domain-specific language and additionally take over parts of its runtime to do extra processing... In common industrial scenarios, such as writing a web server and back-end services, you may feel the pressure in high-traffic cases: at Ele.me, Python is our core service, and Python really does require more memory resources than languages like Go. Some people think "I don't need performance if I'm using Python," while others ask "if you need performance, why use Python at all?" Both views are too extreme; we still want to strike a balance.
You can see that in most of my projects, Python is guaranteed a performance improvement of at least 20%~25%, and this is currently achieved without introducing a JIT or optimizing the GIL. The community has put a lot of effort into this, and I look forward to Python getting faster and faster while keeping its expressiveness and syntactic sweetness, reaching a balance point so that more people can use it better.
BitBear: How did Mr. Li Zhe'ao get connected with the Microsoft MVP program, and what kinds of activities have you participated in since joining?
Li Zhe'ao: I have been a Microsoft fan for a long time; the notebook of my dreams was the Surface Book 2. I learned about the Microsoft MVP program during my college days, and it felt far away. After Microsoft fully embraced open source in 2015, as a Python developer I have really felt Microsoft's strong support, for example the authors of VS Code's Python extension, the core developers of Jupyter, and Guido... Loving the house, I came to love the crow on its roof, and I became a thorough Microsoft fanboy. As you may know, Cynthia is the current manager of the Reactor and has also been an organizer of PyCon China since 2013. While organizing PyCon China, Cynthia asked me: since you like Microsoft and Python, do you want to apply for MVP? I gave it a try. The first time, my application materials were written too briefly and it was rejected; the second application passed, and I officially became an MVP in 2020.
BitBear: When you take part in various community sharing activities, you're likely to meet our Microsoft MVPs; if you want to apply, you can ask them for advice. Just be careful not to write your application materials too briefly! When Mr. Li Zhe'ao talked about his student days, I recalled an article about Mr. Shao Wenjian that mentioned he went through several different majors, such as communication engineering, computer science, and multimedia communication, from undergraduate to doctorate. Starting from that rich experience, can Mr. Shao share some of his mental journey?
Shao Wenjian: Back then we weren't as well informed as today's students, for whom all kinds of information are easy to obtain; we basically still listened to our teachers and parents. My teacher said I was good at science, could study communications, and would develop well in the future, so that's how I chose my undergraduate major. Graduate school wasn't as popular as it is now, but I was exposed to the Internet during my senior year; I was probably among the first group of people in China to use the Internet. I remember very clearly that it was a 64K dedicated line. I was immediately attracted: through one thin line I could reach the whole world, and that had a very big impact on me. It happened that our school also had a networking major, so I switched to it for graduate school, focusing mainly on network transmission and network security. Another major change was coming out to start a business - doing artificial intelligence at the edge... because whether it's communications or networking, a lot of the work is still done at the edge, and I had even done a lot of assembly-based optimization on embedded systems.
I think everyone needs to broaden their horizons. If you can drill deep into one area, keep drilling; but for most people, what matters is the courage to try new things. Take Python as an example: it has a huge variety of libraries, and it takes many attempts, doing a web-related job today and data governance tomorrow.
BitBear: Many developers that BitBear knows are starting businesses. Mr. Shao has been through it personally and has raised Series A financing, so sharing his experience is essential.
Shao Wenjian: My suggestion is to start a business while you are young, preferably before 35, when conditions permit. This is the stage of life when you have the most energy and the strongest ability to learn. Starting a business means squeezing yourself, squeezing 300% out of yourself, and your abilities grow rapidly... Of course, the same caveat applies: starting a business is risky. There's a joke that the most unsuccessful thing in recent years is selling your house to start a business, right? In addition, stop-loss and take-profit are very important: it's possible you won't be able to keep going after two or three years, and you need to make a decisive call.
Li Zhe'ao: I personally do not support selling your house to start a business.
BitBear: Mr. Li Zhe'ao joked with me before we started that he wanted to run an advertisement, and now is his chance!
Li Zhe'ao: Taichi Graphics is hiring, and our commercial products are incubating. The open-source Taichi programming language itself also needs people in compilers, graphics, and deformation computing to join; we're recruiting everyone from R&D to SRE to operations. Welcome to contact BitsBear (WeChat: BitsBear) to get an internal referral~
BitBear: A big exclusive for the [BitBear Live Room]! Will the two teachers reveal how they relax after work?
BitBear: I'll start! Mr. Shao's avatar is Dr. Eggman from "Sonic the Hedgehog." Without this bear's special visit, who would have thought the chief engineer of Intel's IoT Video Division used such a cute and lively avatar?
Li Zhe'ao: A cup of tea and a whole day sitting on the balcony... When I feel frustrated, I'll go to the community, find some projects, solve some issues, and submit a few PRs to change my mood. If I get tired of writing code, I may read papers I'm interested in - of course, that's a relatively tiring way to relax. To relax completely, I might rewatch "Ultraman" and look at the classic lines, or watch Digimon again; I like the first part the most and dislike the sixth the most. I've always been a tokusatsu fan, I like Ultraman, and I don't have any bad hobbies anyway, so I rely on these to decompress.
The upcoming "Shin Ultraman"
Shao Wenjian: Can I say that going to the community to help others solve issues sounds like playing a game? Hahaha.
Li Zhe'ao: I think it's very interesting. Sometimes when I'm tired at the company, I go look at issues. When discussing with others, you end up checking all kinds of information; while expanding your knowledge, it also diverts your attention and gives your head a break.
Shao Wenjian: I like running. There's a theoretical basis for this: exercise produces dopamine, which makes you happy. If work hasn't been going well for a while, or something isn't going smoothly, go for a run; it really does generate dopamine. But remember to stay within your limits.
Li Zhe'ao: Yes, but for a lazy person like me, when I'm in a bad mood I just want to lie in bed. My girlfriend chases me and urges me to exercise, but I can't even get up.
BitBear has something to say
March's MVP Hero Story featured not only our Microsoft MVP but also a super technologist from Intel. It was a great honor for BitBear to spend the live broadcast time with the two of you, and my experience points went up too!
Teacher Shao Wenjian's experience is rich and colorful; BitBear even finds it a bit legendary! From studying multiple majors, to successful entrepreneurship, to switching seamlessly among the roles of program developer, product engineer, and R&D architect, he has now become a core engine supporting the global video business of Intel's IoT Division. In career and technology, BitBear feels Mr. Shao is a peak to look up to; in life, Mr. Shao encourages everyone moving forward to have the courage to try and to break through, and recommends running, playing football, and other positive ways to relieve stress. Thank you, Mr. Shao, for also caring about BitBear's live broadcast equipment and setup!

Mr. Li Zhe'ao is indeed very popular in the Python community! Although he said he prepared his content under the pressure of this bear's deadline, unlike pure technical study, BitBear got to see a developer's way of thinking and broaden its own logic. Mr. Li Zhe'ao carries the distinctive mark of a contemporary developer, and sitting in the live broadcast room he brought a lot of positive, lively energy to BitBear and the audience. I hope Mr. Zhe'ao, a super Microsoft fan just like BitBear, will interact with BitBear more and stay active in the Microsoft MVP family!
Although the two guests in this issue differ greatly in technical direction and experience, that didn't detract at all from how wonderful their technology and story sharing were! I hope you all enjoy the charm of March's [BitBear Story Collection] remix as much as I do!
Did you like March's [BitBear Story Collection]? What do you think the Easter egg in this issue is? What other technical sharing or big names would you like to see? Feel free to comment below the article, or share it to your Moments, express your views, and @BitBear. Of course, this bear, who loves developers the most, has prepared gifts for everyone! Let this bear see you, and the gift will find its way to you!
Advance notice: April's [BitBear Story Collection] will stack surprise upon surprise, so don't miss the appointment! Stay tuned to my channel; great things are coming!