What can you see with the arrival of hyper-video?

Alibaba Cloud Video Cloud

Whenever human beings settle into a comfortable equilibrium, imagination arrives to break it.

In 1905, Einstein rejected absolute space-time and triggered three major revolutions in the physical world. Yang Zhenning once said, "Einstein succeeded because he had a freer vision of time and space. To have such a free vision, one must be able to observe the same subject from near and from far at the same time."

In 2021, the Alibaba Cloud Video Cloud Panorama Innovation Summit attempts to stand in both the near view and the far view, observing the hyper-video topics of this era panoramically.

image.png

What kind of era is this?

This is the era of hyper-video.

Video lets flowing text and images evolve into the language of the times; it encapsulates emotion, stance, perspective, and thought in three dimensions, constantly breaking boundaries and extending in both the time domain and the space domain.

Video is a kind of natural chronicle, encompassing text, audio and video, space, gravity, culture, and emotion. It presents a borderless picture of the world, expressing freedom and creating new freedom.

image.png

In the hyper-video era, video has derived more new forms and built a new content chain, the so-called super content. Video has gradually evolved into human-centered interaction, carrying multi-dimensional senses and even experiences beyond time and space, the so-called super interaction. And video lets everything connect: between people and people, people and things, people and nature, sensing and linking, generating a kind of super social capability and phenomenon, the so-called super link.

Video has become the new language of the times and a new cultural movement of the new century. At the far end of the ultra-future, the boundary of physical perception between the real world and the virtual world will blur, finally realizing a full-scene digital twin.

Of course, 5G is a booster for this era's evolution, connecting everything; and "cloud + video" is a catalyst for scenario innovation, integrating the virtual and the real.

Following this, all content and interactions will undergo fusion in this era.

Where is the end of content and interaction?

Let's talk about the content first.

Technology, in all its forms, first of all presents a meaningful world.

Technology makes meaning and conveys emotion. When bandwidth is limited, people focus on transmitting information; when bandwidth is abundant, what people convey through multi-dimensional information is emotion. In a 2001 interview, Jobs already hoped that more emotion could be conveyed through the Internet. Today, video cloud technology can make that real.

If technology helps content convey emotion, then looking back at the evolution of content reveals a clear thread: from a line of text and a painting, to images, to today's overflow of live streams and short videos, to the full-scene, video-based presentation of information and knowledge, until content across all scenes is gradually videoized and finally evolves into immersive content forms that are chiefly three-dimensional and interactive. Throughout this evolution, the driving forces of greater density, more dimensions, more senses, and topological space stand out.

Today we can already foresee the immersive learning field. Through the deep integration of 5G, XR, holographic projection, digital twins, and cloud networks, abstract knowledge can be visualized, creating borderless online-and-offline classrooms. News reading can evolve into experiential "spatial news": with virtual reality, ultra-high-definition, 3D, and 360-degree panoramic technologies, people gain a sense of presence and participation, confronting the news industry with great disruption. More common already is the immersive cultural museum, which combines virtual/augmented reality, holographic projection, and intelligent interaction with cultural-tourism IP to form an industrial prototype of immersive, interactive narrative.

image.png

Abroad, immersive concerts are taking the stage. Sony and Verizon will jointly launch the "Madison Beer immersive VR concert" this winter; the experience reportedly combines 3D motion capture, volumetric capture, and 3D reconstruction, and was developed with a game engine. Meanwhile, Panasonic announced a partnership with Illuminariums Entertainment to build a large-scale immersive entertainment center, with 46 4K projectors in the venue, LiDAR sensors for interaction, and integrated spatial audio, all highly customizable.

Savor this, and imagine the forms immersive content may take. Across content forms we can survey a linear growth path from physical immersion, to virtual immersion, to virtual-hybrid immersion, to ubiquitous intelligent immersion, which will reconstruct experience through fully interactive forms and content personalized to each individual.

image.png

Now look at interaction.

"A History of Science" observes that "a revolutionary change in modern thinking lies in moving from a finite closed world to an infinite universe." Looking carefully at the evolution of interaction, the same holds true.

From offline to online, every scene is trying to break free of space and create unboundedness. Driven by technology and commerce, human interaction is slowly moving fully online, and its final form will likewise be an immersive interactive relationship. It is not hard to see that multi-terminal links, multi-person sharing, the breaking of space, and the seamless integration of virtual and real are exactly this evolutionary trend. At the visible end point, human-computer interaction and brain-computer interfaces are the focus of exploration.

Looking back at 60 years of interaction development, it can be divided into three major eras. The next ten years will focus on human-computer interaction, sensors, online social communication, brain-computer interfaces, and feature recognition.

image.png

Source: International Journal of Human–Computer Interaction, "Mapping Human–Computer Interaction Research Themes and Trends from Its Existence to Today: A Topic Modeling-Based Review of Past 60 Years"

From an interaction perspective, information transfers naturally from one interacting object to another, and the digital coexists with and augments the physical. Academically, interaction can be divided into: interaction along the physical-digital continuum, implicit interaction, sensory environments and perceptual interaction, public-space interaction, and virtual/augmented reality interaction. The core of ultimate immersive interaction is exploring more natural interaction methods, hoping to release human abilities such as stereoscopic vision, touch, and proprioception, so that interaction is no longer confined to two-dimensional visual channels and visual feedback.

As for new interactive experiences, CES 2021 showed us Pollen Robotics' remote VR control solution, CareOS's smart-mirror AR beauty salon system, and holographic accessories announced by the holography company IKIN that give smartphone and computer screens a naked-eye 3D effect. And of course there is the VR social networking Facebook has long been building, an attempt to live another life in the virtual world.

The "6G Era Vision Report" recently released by Samsung notes that highly immersive XR and high-quality mobile holographic experiences will be common scenarios within ten years.

The end of content and interaction is probably a complex of immersive fields, and intelligence has gradually "immersed" us in a pan-immersive era where virtual and real merge. It is not the future; it is happening now.

Ecological supply and AI control

Traveling back from the future and the era's evolution, we level our line of sight and land on today's content ecology and technical foundations.

Deepening the video trend and surveying the full spectrum of video content, the industry chain covers content production, marketing and communication, distribution platforms, playback terminals, and technical support, with cloud computing and audio-video technology strongly underpinning the development of the entire chain.

image.png

Driven by the consumption of new video culture, new technologies are evolving and being applied, and new production methods and content forms are being born.

We know that expanding video's new cultural consumption requires, on one hand, a digital short-video supply system and, on the other, ultra-high-definition video production capability, to bring the public into the wave of digital content and into a true 8K era.

Ultra-high-definition video is a new inter-generational evolution of video technology after analog, standard definition, and high definition. Alongside 5G and artificial intelligence, it is an important direction of today's new generation of information technology. At present, content production is the weakest link in ultra-high-definition, and advancing the content service layer plays a decisive role in the commercial implementation of ultra-high definition.

AI can deliver key value here. Think of vision on two levels, biological and physical. The biological level is human visual perception; the physical level covers the various responses of light, including brightness, detail, and time-related information.

Accordingly, AI's role divides into two parts. The first and most basic is understanding videos or images: the familiar classification, tagging, detection, and segmentation, which relate to people, because people understand the world first. The second is production-related: generation, editing, processing, and erasing, plus low-level vision, which concerns enhancement. How to use AI to empower video at the low-level-vision layer is also key.

Among the ultra-high-definition capabilities AI confers, a crucial outcome for vision is a brand-new audio-visual experience, and that experience depends on several things. First, richer detail: when resolution is low or the information is poor, how do we enrich the detail, especially in today's 8K? Second, more vivid color, spanning color depth, color gamut, and brightness, also a vital part of the experience. Third, deeper immersion: large viewing angles, panoramic views, and stereo surround. Beyond these, it extends to wider applications across industries.

image.png

As AI drives high definition forward, intelligence is the foundation, and the question is whether it can work adaptively across scenarios. AI has no so-called universal ability: cartoons, news anchors, and documentaries each need a well-matched system rather than a single universal model, so adaptively adopting the best algorithm for each scenario is essential. Self-adaptive, high-quality, self-evaluating AI technology is therefore the key focus of DAMO Academy's efforts.
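The adaptive idea above can be sketched as a simple dispatch: classify the scene first, then route the frame to the enhancement pipeline tuned for that scene, with a conservative fallback when no specialized pipeline applies. This is only an illustrative sketch; all function names, scene labels, and the string-based "classifier" are hypothetical stand-ins for real trained models.

```python
# Illustrative sketch (all names hypothetical): route each clip to an
# enhancement pipeline tuned for its scene type, instead of one universal model.

def enhance_cartoon(frame):
    # Cartoons favor edge-preserving upscaling and flat-color denoising.
    return f"cartoon-enhanced({frame})"

def enhance_news(frame):
    # Talking-head news favors face-aware detail restoration.
    return f"news-enhanced({frame})"

def enhance_generic(frame):
    # Fallback: a conservative general-purpose upscaler.
    return f"generic-enhanced({frame})"

PIPELINES = {
    "cartoon": enhance_cartoon,
    "news": enhance_news,
}

def classify_scene(frame):
    # Stand-in for a real scene classifier (a trained model in practice).
    if "toon" in frame:
        return "cartoon"
    if "anchor" in frame:
        return "news"
    return "other"

def adaptive_enhance(frame):
    scene = classify_scene(frame)
    pipeline = PIPELINES.get(scene, enhance_generic)
    return pipeline(frame)
```

The design point is that the dispatch table, not the enhancement model, carries the "intelligence is the foundation" claim: adding a new scenario means registering a new pipeline, not retraining one universal model.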

Beyond ultra-high definition, AI is also powering the efficiency of super-content production and consumption.

Users' fragmented consumption time keeps growing: short-video users now exceed 773 million, and the short-video market exceeds 200 billion yuan. Yet we all understand that on the content supply side, producing a high-quality video faces difficulties in creative production and tooling, and efficient large-scale production is harder still. Here, the Alibaba Entertainment media AI platform achieves five major functions through AI R&D: dynamic material extraction, template video production, intelligent editing, intelligent material processing, and interactive special effects.

Combining its own business characteristics, Alibaba Entertainment hopes to raise efficiency and promote distribution on the platform side and create more and better products and tools for the industry; on the consumer side, to provide users with new consumption patterns and new interactive video experiences; and on the industry side, to cooperate with more B-side PGC and MCN partners.

Today, linking technology and ecology, Alibaba Cloud Video Cloud is also upgrading the entire media production model to a new era: a cloud-integrated intelligent production architecture. The architecture spans four core links, content creation, material management, editing and packaging, and rendering and compositing, with rich functions such as cloud directing, cloud editing, and AI-powered processing and production. Supported by the integrated cloud architecture and AI capabilities, content production in the media industry gains many more possibilities. This production model will greatly reshape the content industry, freeing true creators from tedious, repetitive work to create richer content, forms, and models.
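The four core links above can be pictured as a staged pipeline through which a production job flows in order. The stage names come from the text; the toy implementations below are hypothetical placeholders for what each cloud service would actually do.

```python
# Minimal sketch (stage names from the text; implementations hypothetical):
# a cloud production job flowing through the four core links in order.

def content_creation(job):
    job["footage"] = ["clip_a", "clip_b"]          # e.g. cloud directing / ingest
    return job

def material_management(job):
    job["footage"] = sorted(set(job["footage"]))   # e.g. dedupe and catalog assets
    return job

def editing_and_packaging(job):
    job["timeline"] = "+".join(job["footage"])     # e.g. cloud editing into a timeline
    return job

def rendering_and_compositing(job):
    job["output"] = f"render({job['timeline']})"   # e.g. final render in the cloud
    return job

PIPELINE = [content_creation, material_management,
            editing_and_packaging, rendering_and_compositing]

def produce(job):
    # Each link consumes the previous link's result, as in the integrated architecture.
    for stage in PIPELINE:
        job = stage(job)
    return job
```

Modeling the links as an ordered list makes the "integrated" claim concrete: inserting an AI-processing step is just adding one more stage to `PIPELINE`.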

Video power has changed business logic

The evolution of the era, the boost of technology, and the linkage of the ecosystem ultimately land on business.

In the past, when talking about the Internet's overall value, traffic was the routine measure: on mobile, the simplest metric was how many devices were covered each month and week. Now we must also look at time. In just three years, the time users spend on video products has grown from 160 billion minutes to 480 billion minutes. The numbers are astonishing.

image.png

Facing the huge commercial space behind this phenomenon, we must think about how to cooperate, drive, and innovate further.

When we talk about video dissemination, video's origin is as a carrier of information dissemination. If dissemination itself needs classifying, one dimension divides it into one-to-one, one-to-many, and many-to-many; another dimension divides it into delayed and real-time.
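The two axes above (fan-out and timeliness) form a small grid, and each cell maps naturally to a familiar product shape. The example scenarios in the sketch below are hypothetical illustrations, not a taxonomy from the source.

```python
# Illustrative sketch: the two classification axes from the text
# (fan-out and timeliness), with hypothetical example scenarios per cell.

EXAMPLES = {
    ("one-to-one", "real-time"): "video call",
    ("one-to-one", "delayed"): "video message",
    ("one-to-many", "real-time"): "live stream",
    ("one-to-many", "delayed"): "video on demand",
    ("many-to-many", "real-time"): "video conference",
    ("many-to-many", "delayed"): "community video feed",
}

def classify(fanout, timeliness):
    # Look up the cell; unknown combinations fall through to "unclassified".
    return EXAMPLES.get((fanout, timeliness), "unclassified")
```

Seen this way, "video combines with every industry" just means each industry picks the cells it needs, e.g. education sits mostly in the one-to-many real-time cell.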

Video's carrying capacity can combine with many industries. We used to speak of "the video industry" and "the video track"; at this stage, we will find every field combining with video in this way. Like cloud computing, video is no longer regarded as just another industry concept, but as a basic capability underlying the new Internet economy. With this capability, industries can innovate: on the cloud, on video, on the video cloud.

The video cloud will become an indispensable option for videoization across industries and the technical base of the big video industry.

As a kind of digital and intelligent infrastructure, the video cloud not only greatly reduces the barriers to entry for video applications, but also continuously promotes the prosperity of the big video industry by promoting the improvement of industrial efficiency.

Thinking from the demand side, the video cloud can give enterprises video capabilities or make their products video-native, adding value across production, processing, transmission, and consumption. Live e-commerce felt this first: the entire e-commerce body is changing. Where once only a few major players could do live e-commerce, videoization has given platforms the ability to become e-commerce, so many of today's content platforms and even startups command very large traffic centers. An anchor can now be the center of e-commerce, something that simply did not exist before.

image.png

In online education, the industry groped for years without finding a path to full monetization. Then live streaming solved part of the immersion problem: students could interact more with teachers, addressing some issues of learning efficiency. In essence, videoization does solve part of education's immersion and effectiveness problems, so that in recent years online education has finally found its monetization logic. On e-commerce and education, the analysis of Xu Fanlei, deputy general manager of the iResearch Institute, is very accurate.

Beyond e-commerce and education, which have the highest video penetration, broad Internet entertainment, the digital and intelligent transformation of the media industry, and mobile collaborative office for enterprises are also key areas for video cloud technology. Based on video cloud technology, new business scenarios keep opening up: new e-commerce, new education, new social networking, new finance, new healthcare, and ever more industries and transformations.

The evolution of the era, the penetration of video, and the transformation of interaction have profoundly changed the industry's monetization logic, traffic flows, and organizational forms.

To this end, Alibaba Cloud Video Cloud and iResearch jointly researched and released the "2021 China Video Cloud Scenario Application Insight White Paper". From the perspective of cloud innovation, it comprehensively demonstrates the full scenarios and links of video applications, with in-depth analysis of spaces, blind spots, opportunities, and cases, striving to establish practical value for the commercial market of the video cloud track.

Competitions and open source are magnifiers of social imagination

In the hyper-video era, the video cloud's imagination goes far beyond business scenarios; it is more about benefiting everyone and creating diverse social value.

This February, Alibaba Cloud and Intel, in strategic technical cooperation with Youku, jointly hosted the Global Video Cloud Innovation Challenge, the world's first competition focused on the application and innovation of video cloud technology across the industry. Hosted on the Tianchi platform by Alibaba Cloud Video Cloud, the preliminary round attracted 4,600 teams from universities worldwide. Throughout the competition, innovative projects full of social value and vitality kept emerging, such as safe-parking projects realized with vision algorithms and elder-care projects.

Notably, through cooperation with the Youku platform, the competition provided contestants with a large-scale, high-precision video segmentation dataset for model training, ultimately polished into an authoritative dataset in the video segmentation field, which is rare. The dataset is solid: with 180,000 frames and up to 300,000 annotations, it is the largest video object dataset, industry-leading in labeling accuracy and content breadth. Its content types closely fit real, diversified scenes, which is highly significant for the video industry.

As an important production factor in the information age, data is known as a new power source and an important foundation for the development of artificial intelligence technology.

Through cooperation with Alibaba Group business teams such as Taobao, Tmall, Aliyun, Youku, and AE, and with external authoritative research institutions such as Tsinghua University, Shanghai Jiao Tong University, the National Astronomical Observatories of the Chinese Academy of Sciences, the China Computer Federation, the Chinese Information Processing Society of China, Union Hospital, and Ruijin Hospital, the Tianchi competition platform has opened more than 60 scarce industrial datasets from real business scenarios, spanning e-commerce, finance, logistics, medicine, and energy, making an outstanding contribution to cultivating global computer-vision talent and creating wider space for developers of many technologies.

It must be said that technology innovation competitions that stimulate surging energy, together with large-scale authoritative open datasets, empower a more multi-dimensional social imagination, and the flowering of technology on this basis is exciting.

If you, too, are immersed in imagination

In the final analysis, regardless of technology, business, ecology, or resources, everything is for human emotions and experience.

Technology keeps interpenetrating many fields, and art is probably the special field we most want to touch, the nerve closest to humanity's soft emotions.

The "Imagine" Alibaba Cloud Video Cloud Panorama Innovation Summit on July 10, from the organizer's perspective, truly starts from imagination, trying to close the distance between people and space through an immersive visual channel.

image.png

image.png

Of course, from the perspective of technological cross-border art, we are deeply concerned about the realization of aesthetic creation in the digital age.

We find that contemporary art creators keep relying on imagination and interdisciplinary ability, devoting themselves to fusing technology and art. In the era of digital interaction, the artistic acts of creation and dissemination are entirely new, and profound changes are occurring in the sense, experience, and thinking of artistic aesthetics: aesthetics drives technology, and technology feeds back into aesthetics.

In the era of digital interaction, the pursuit of ultimate aesthetics is a pursuit of craft, and behind craft lie creative efficiency and creative ability. Technology is undoubtedly an important tool helping creation achieve multi-sensory, multi-dimensional realization, and deep-learning-based AI tools are assisting this process, giving wings to the creative brain.

Meanwhile, digital intelligence's reconstruction of visual interaction is also an important evolution of experience. Centered on "cross-border intelligent manufacturing", the summit tries to present new content and interaction experience installations: cartoon drawing based on generative adversarial networks and transfer learning, virtual shooting that creates real-time rendered scenes, and virtual idols driven by facial and motion capture, all exploring new technology experiences grounded in art and humanity.

This is the limited vision Alibaba Cloud Video Cloud sees in the new era; the infinite content is yet to be imagined.

image.png

In the era of hyper-video, video cloud is everywhere

The video cloud is a new interdisciplinary field

It is the cloud's integrated digital-intelligence capability

The video cloud is the imagination of the future of mankind

It is opening a new, infinite, free world

Where there is imagination, there is the video cloud.

All the speech content of this video cloud panorama innovation summit will be released one after another on the "Video Cloud Technology" public account.

"Video Cloud Technology": the audio-video technology official account most worth following. Every week it pushes practical technical articles from the front line of Alibaba Cloud and exchanges ideas with first-class engineers in the audio-video field. Reply [Technology] in the official account backend to join the Alibaba Cloud Video Cloud product technology exchange group, discuss audio-video technology with industry leaders, and get the latest industry information.