https://www.panoramaaudiovisual.com/en/2023/09/26/vvc-vs-mpeg-5-evc-lcevc-estandar-futuro-broadcast/

VVC vs. MPEG-5 EVC/LCEVC: future broadcast formats

In this tribune, Dr. David Guillermo Fernández Herrera, CTO of Ai Videx, analyzes in depth the VVC and MPEG-5 EVC/LCEVC video compression formats, strong candidates to become a standard in the broadcast industry in the short and medium term.

In the last few decades, we have witnessed remarkable advances in the field of multimedia applications. Among the various contributing factors, it is worth highlighting the role of video compression standards. Since the late 1980s, a multitude of standards have been developed, from H.261 to the recent VVC (Versatile Video Coding), always seeking to obtain high data compression rates without reducing visual quality.

This objective has meant that, roughly every 10 years, a new standard is released that halves the bandwidth required compared to its predecessor. This was the case with H.265/HEVC (High Efficiency Video Coding, released in 2013) versus H.264/AVC (Advanced Video Coding, 2003).

In 2020, the first version of the VVC standard was published, together with two new standards within the MPEG-5 family: MPEG-5 EVC (Essential Video Coding) and MPEG-5 LCEVC (Low Complexity Enhancement Video Coding).

Discovering VVC

VVC promises to reduce the generated data rate (bitstream) by 50% with respect to H.265. However, this improvement comes at the expense of considerably increased algorithmic complexity and, consequently, greater computational resources, higher power consumption and, therefore, higher economic cost to meet the requirements for real-time encoding/decoding of high definition (HD) and ultra-high definition (UHD) signals. This is something we have become accustomed to with the release of each new standard, and VVC has been no exception, as it features a large number of new techniques that add great complexity to the design of an encoder or decoder.

Both the compression rate and the visual quality of the video depend directly on the implementation of the encoder: on which techniques have been used and how they have been implemented.

It is very important to highlight that video compression standards define the syntax of the bitstream (how to build it) and the method to follow to perform decoding (how to reconstruct a video from the bitstream), but they do not describe the implementation of the encoder. This means that the standard indicates, for example, how to convert a motion vector into a sequence of bits, but it does not define how the encoder obtains the value of that vector by searching for the motion between the different images that make up the video. Both the compression rate and the visual quality of the video depend directly on the encoder implementation: on which techniques have been used and how they have been implemented. Therefore, the performance of an encoder can vary significantly from one implementation to another.
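To make the syntax/implementation split concrete, here is a minimal sketch of the "normative" side: the exponential-Golomb coding that H.264, HEVC and VVC use to serialize many signed values, such as motion-vector differences. The function names are illustrative, not taken from any specification; what the standard fixes is the exact bit pattern, while how the encoder found the motion vector in the first place is left entirely open.

```python
def signed_to_unsigned(v: int) -> int:
    """Map a signed value to the unsigned code number used by
    exp-Golomb coding: 0, 1, -1, 2, -2, ... -> 0, 1, 2, 3, 4, ..."""
    return 2 * v - 1 if v > 0 else -2 * v

def exp_golomb(n: int) -> str:
    """Order-0 exp-Golomb code for unsigned n: the binary form of
    n + 1, prefixed with one leading zero per bit after the first."""
    bits = bin(n + 1)[2:]
    return "0" * (len(bits) - 1) + bits

def encode_mvd(mvd: int) -> str:
    """Serialize one motion-vector-difference component to bits --
    the part of the pipeline a standard defines exactly."""
    return exp_golomb(signed_to_unsigned(mvd))

print(encode_mvd(0))   # '1'
print(encode_mvd(1))   # '010'
print(encode_mvd(-1))  # '011'
print(encode_mvd(3))   # '00110'
```

Two encoders that estimate motion very differently will still emit these exact bit patterns for the same vector value, which is why decoders interoperate while encoder quality varies so much.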

In the first implementations of H.265 encoders, it could be verified that, when compared against mature H.264 encoder implementations using configurations that consume the same computational resources (CPU load, memory usage, etc.), H.264 obtained better results. Now that sufficiently mature H.265 implementations exist, the situation has changed. The purpose of this comment is to warn about the encoder implementations that appear right after the release of a new standard. Some are conceived as mere marketing products that include naive implementations of the new techniques, seeking to reach the market as soon as possible, but ultimately collapse under their own weight when detailed comparisons are made against mature encoders based on previous standards. Implementations built on an exhaustive study of the standard and the state of the art, and optimized in strict testing environments, are the ones that really exploit the contributions of each new standard.


Steps forward with VVC in the UHD world

Among the contributions of VVC, it is worth highlighting the use of large coding units (128x128 pixels), which greatly reduce the bandwidth required when applied to areas of homogeneous texture at UHD resolution. Another very interesting contribution is the use of multiple partition types, which allows the image to be divided into coding units that adapt to the content, precisely delimiting the different objects in the image. This level of subdivision opens the door to artificial intelligence algorithms in video compression, such as segmentation, where each pixel of the image is classified individually. Through segmentation, regions, individuals or objects of interest can be detected and assigned to different classes within each image, and different compression techniques can be applied depending on the level of visual quality desired for each class. In other words, more bits can be allocated to regions of interest to improve their visual quality, based on a prior analysis of the image.
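The last step described above, turning a segmentation mask into an unequal bit allocation, can be sketched as a per-block QP (quantization parameter) offset map: blocks overlapping the region of interest get a negative offset (finer quantization, more bits), the background a positive one. This is a minimal illustration under assumed parameters, not the mechanism of any particular encoder; the function name, block size and offset values are hypothetical.

```python
def qp_offset_map(seg_mask, block=4, roi_class=1,
                  roi_offset=-4, bg_offset=2):
    """Derive a per-block QP offset map from a per-pixel
    segmentation mask (a 2-D list of class labels). Any block
    containing at least one ROI pixel is treated as ROI."""
    h, w = len(seg_mask), len(seg_mask[0])
    offsets = []
    for by in range(0, h, block):
        row = []
        for bx in range(0, w, block):
            pixels = [seg_mask[y][x]
                      for y in range(by, min(by + block, h))
                      for x in range(bx, min(bx + block, w))]
            # ROI blocks get more bits (lower QP), background fewer
            row.append(roi_offset if roi_class in pixels else bg_offset)
        offsets.append(row)
    return offsets

# 8x8 mask with a 4x4 "object of interest" in the top-left corner
mask = [[1 if (y < 4 and x < 4) else 0 for x in range(8)]
        for y in range(8)]
print(qp_offset_map(mask))  # [[-4, 2], [2, 2]]
```

A real encoder would feed such a map into its rate control so that, at the same total bitrate, faces or on-screen graphics are quantized more gently than the background.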

VVC provides the necessary tools to optimize the distribution of UHD content, and there are already commercial implementations that make this possible; but these solutions use a large amount of computing resources and represent a significant investment.

These tools are just a small sample of all those used by VVC and, applied correctly, can represent a very important qualitative leap. As more articles are published on how to use these techniques efficiently and greater effort is devoted to optimizing the different algorithms, we will begin to see VVC implementations that improve on the performance of their predecessors at a reasonable computational cost.


MPEG5-EVC / LCEVC: natural evolution?

MPEG-5 EVC has the advantage of a royalty-free baseline profile, which addresses a weakness of H.265, whose license management has become a very cumbersome process. But within the MPEG-5 family, the standout is MPEG-5 LCEVC.

This new standard is based on adding quality-enhancement layers on top of a base layer obtained with other compression standards. This concept is a big leap compared to its predecessors, since it allows devices to keep using hardware acceleration (for H.264, H.265, VP9, etc.) to process the base layer and, through software, improve its performance thanks to the additional layers. On the decoder side, if MPEG-5 LCEVC is not supported, only the base layer is decoded, preserving compatibility with existing infrastructures and devices.

Therefore, MPEG-5 LCEVC introduces the concept of a compression-enhancement data flow, reduces processing complexity and is backward compatible; all this without introducing additional latency into the system.
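The layered reconstruction and the backward-compatible fallback can be sketched in a few lines. This is a conceptual toy, not the normative LCEVC process: the real standard defines specific upsampling filters and residual coding, whereas here nearest-neighbour upsampling and plain integer residuals stand in for them, and the function names are invented for illustration.

```python
def upsample_2x(img):
    """Nearest-neighbour 2x upsampling of a 2-D list of samples
    (a stand-in for the standard's normative upsampler)."""
    out = []
    for row in img:
        wide = [p for p in row for _ in (0, 1)]  # duplicate columns
        out.append(wide)
        out.append(list(wide))                   # duplicate rows
    return out

def reconstruct(base, residual=None):
    """Sketch of the LCEVC idea: decode the base layer with any
    codec (often in hardware), upsample it, and add the software-
    decoded enhancement residual if present. A legacy decoder that
    ignores the enhancement simply displays the base layer."""
    up = upsample_2x(base)
    if residual is None:  # decoder without LCEVC support
        return up
    return [[up[y][x] + residual[y][x] for x in range(len(up[0]))]
            for y in range(len(up))]

base = [[10, 20], [30, 40]]          # e.g. decoded by an H.264/HEVC chip
residual = [[1, -1, 0, 0],
            [0, 0, 2, 2],
            [0, 0, 0, 0],
            [3, 3, -2, -2]]          # enhancement layer, decoded in software
print(reconstruct(base))             # backward-compatible path
print(reconstruct(base, residual))   # enhanced path
```

Note how the same base bitstream serves both paths: compatibility comes for free, and the enhancement adds detail without a second, parallel distribution chain.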


Adoption of VVC and MPEG-5 LCEVC

Although three years have passed since the release of the first version of the VVC standard, there is currently no hardware support for VVC in any device, which slows down its adoption and suggests that its expansion will not accelerate until 2026 or 2027. Even without hardware support, we are seeing a series of market indicators that point to its potential. One of these is the adoption of VVC in application standards such as DVB (which adopted VVC as a Next Generation Codec in 2022), SBTVD (for its base layer, in 2021), SCTE (which included it among its standards in 2023) and its inclusion in ATSC 3.0. The effort devoted to VVC solutions by companies such as Qualcomm, Huawei, Hikvision, MainConcept, Ateme or Bitmovin could be another sign of its future wide adoption.

MPEG-5 LCEVC is a palpable reality: its implementation is feasible today and, by adding software layers in both encoding and decoding, it provides a significant improvement in the performance of any system based on previous standards.

Due to the lack of mature open-source VVC implementations and the resources demanded by commercial encoders (8 times more than H.264 and 4 times more than HEVC), MPEG-5 LCEVC is emerging as a very interesting option to improve the performance of existing systems based on previous standards. Of course, for content or multimedia service providers for whom dedicating powerful encoding servers is not a problem, VVC should not be lost sight of, given the significant improvements it brings to compression efficiency and the indicators of its future adoption discussed above.

As implementations progress, lower-cost devices capable of running VVC will begin to appear. The same was true of its predecessors: the first implementations of H.265 could only run on powerful servers, but, as the years went by, optimized implementations ran on other types of devices such as FPGAs, GPUs and, finally, ASICs. With this, its use became democratized.

Although VVC currently has no devices that perform decoding through hardware acceleration (mobile phones, set-top boxes, computer GPUs, etc.), it is emerging as the leading candidate to reduce the bandwidth required to distribute UHD content in the coming years.

Thanks to the compatibility MPEG-5 LCEVC offers, it can be deployed on current infrastructure without the need to create a parallel data flow, which represents a significant cost reduction compared to deploying a standard that requires separate flows due to incompatibility problems.

In the end, it will be the bets of broadcasters, manufacturers and suppliers, as well as the specific needs of the increasingly broad concept of broadcast, that define the winner in a battle between two very powerful technologies that solve many of the challenges involved in the transmission of HD and UHD content.

Dr. David Guillermo Fernández Herrera

CTO of Ai Videx

Sep 26, 2023. Sections: Broadcast, Study, Tribunes
