MPEG can also be green

Introduction

MPEG has given humans the means to add significantly more effectiveness and enjoyment to their lives. This comes at a cost, though. Giving billions of people the means to stream video anywhere at any time of the day adds to global energy consumption. Enhanced experiences provided by newer features such as High Dynamic Range further add to the energy consumed in the display. More sophisticated compression algorithms consume more energy, even though this can be mitigated by more advanced circuit geometries.

In 2013 MPEG issued a Call for Proposals on “Green MPEG” requesting technologies that enable the reduction of energy consumption in video codecs. In 2016 MPEG released ISO/IEC 23001-11 Green Metadata, followed by a number of ancillary activities.

It should be clear that Green Metadata is not an attempt at solving the global problem of energy consumption. More modestly, Green Metadata seeks to reduce power consumption in the encoding, decoding and display process while preserving the user’s quality of experience (QoE). At worst, Green Metadata can be used to reduce the QoE in a controlled way.

The standard does not require changing the operation of a given encoder or decoder (i.e. changing the video coding standard). It only requires the ability to “access” and “influence” appropriate operating points of the encoder, the decoder or the display.

A system view

Green Metadata has been developed with the target of metadata suitable for influencing the video encoding, decoding and display process. The framework could easily be generalised by replacing “video” and “display” with “media” and “presentation”. Note, however, that the numerical results obtained in the video case cannot be directly extrapolated to other media.

Let’s start from the figure representing a conceptual diagram of a green encoder-decoder pair.

Figure 1 – Conceptual diagram of a green encoder-decoder pair

The Green Video Encoder (GVE) is a regular video encoder that generates a compressed video bitstream and also a stream of metadata (G-Metadata) for use by a Green Video Decoder (GVD) to reduce power consumption. When a return channel is available (e.g. on the internet), the GVD may generate feedback information (G-Feedback) that the GVE may use to generate a compressed video bitstream that demands less power for the GVD to decode.

To understand what is actually standardised by Green Metadata, it is worth digging a little into the high-level diagram below to see what new “green components” are added.

Figure 2 – Inside a green encoder-decoder pair

The GVE generates G-Metadata, packaged by the G-Metadata Generator for transmission to a GVD. The GVD G-Metadata Extractor extracts the G-Metadata payload and passes the GVE G-Metadata to the GVD Power Manager along with G-Metadata coming from the GVD itself. The GVD Power Manager, based on the two G-Metadata streams and possibly other input such as the user’s (not shown in the figure), may send

  1. Power Control data to the Video Decoder to change its operation
  2. G-Feedback data to the G-Feedback Generator, which packages it for transmission to the GVE.

At the GVE side, the G-Feedback Extractor extracts the G-Feedback data and passes it to the GVE Power Manager, which may send Power Control data to the Video Encoder to change its operation.

To examine in a bit more detail how G-Metadata can be used, it is helpful to dissect the Video Encoder and Video Decoder pair.

Figure 3 – Inside the encoder and decoder

The Video Encoder is composed of a Media Preprocessor (e.g. a video format converter) and a Media Encoder. The Video Decoder is made of a Media Decoder and a Presentation Subsystem (e.g. to drive the display). All subsystems send G-Metadata and receive Power Control, except the Presentation Subsystem, which only receives Power Control.

What is standardised in Green Metadata? As always, the minimum required for interoperability: the Encoder Green Metadata and the Decoder Green Feedback (in red in the figure), which are exchanged by systems potentially manufactured by different entities. Other data formats inside the GVE and the GVD are a matter for GVE and GVD manufacturers to decide, because they do not affect interoperability but may affect performance. In particular, the logic of the Power Manager that generates Power Control is the differentiating factor between implementations.

Achieving reduced power consumption

In the following, the three areas positively affected by the use of the Green Metadata standard – encoder, decoder and display – are illustrated.

Encoder. When a segmented delivery mechanism (e.g. DASH) is used, encoder power consumption can be reduced by encoding video segments with alternating high/low quality. Low-quality segments are generated using lower-complexity encoding (e.g. fewer encoding modes and reference pictures, smaller search ranges etc.). Green Metadata includes the quality of the last picture of each segment. The video decoder enhances a low-quality segment by using the metadata and the last high-quality video segment.
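As an illustration, here is a minimal sketch of this alternating-quality loop. Everything in it is hypothetical: encode_segment() and last_picture_quality() stand in for a real encoder and a real quality metric, and the standard does not prescribe this particular code structure.

```python
# Sketch of the alternating-quality segment encoding described above.
# encode_segment() and last_picture_quality() are hypothetical stand-ins
# for a real video encoder and a quality metric (e.g. PSNR).

def encode_segment(frames, high_quality: bool) -> bytes:
    # Stand-in: a real implementation would call an encoder configured with
    # fewer modes/reference pictures and smaller search ranges when
    # high_quality is False.
    return b""

def last_picture_quality(bitstream: bytes) -> float:
    # Stand-in: quality (e.g. PSNR in dB) of the segment's last picture.
    return 0.0

def encode_with_green_metadata(segments):
    """Encode segments with alternating quality; emit per-segment metadata."""
    out = []
    for i, frames in enumerate(segments):
        high = (i % 2 == 0)                    # alternate high/low quality
        bitstream = encode_segment(frames, high)
        metadata = {"high_quality": high,
                    "last_picture_quality": last_picture_quality(bitstream)}
        out.append((bitstream, metadata))      # the decoder uses the metadata
    return out                                 # to enhance low-quality segments
```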

Decoder. Lowering the clock frequency of a CMOS circuit implementing a video decoder reduces power consumption, because dynamic power grows roughly linearly with the clock frequency and quadratically with the applied voltage (P ≈ C·V²·f). In a software decoder, picture complexity can be used to control the CPU frequency.

One type of Green Metadata signals the duration and degree of complexity of upcoming pictures. This can be used to select the most appropriate setting and offer the best QoE for a desired power-consumption level.
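To make the idea concrete, here is a sketch of how a decoder’s power manager might map signalled picture complexity to a DVFS (dynamic voltage and frequency scaling) operating point. The frequency/voltage table, the complexity scale and the P ≈ C·V²·f model are illustrative assumptions, not part of the standard.

```python
# Illustrative mapping from signalled picture complexity to a CPU clock,
# assuming dynamic power scales roughly as P ~ C * V^2 * f. The available
# frequency/voltage pairs and the complexity scale are hypothetical.

FREQ_VOLT_STEPS = [  # (frequency in MHz, voltage in V), hypothetical DVFS table
    (500, 0.80), (1000, 0.95), (1500, 1.10), (2000, 1.25),
]

def pick_frequency(complexity: float, max_complexity: float) -> tuple:
    """Pick the lowest DVFS step able to decode pictures of this complexity."""
    fraction = min(complexity / max_complexity, 1.0)
    needed_mhz = fraction * FREQ_VOLT_STEPS[-1][0]
    for mhz, volt in FREQ_VOLT_STEPS:
        if mhz >= needed_mhz:
            return mhz, volt
    return FREQ_VOLT_STEPS[-1]

def relative_power(mhz: float, volt: float) -> float:
    """Dynamic power relative to the highest step (P ~ V^2 * f)."""
    top_mhz, top_volt = FREQ_VOLT_STEPS[-1]
    return (volt ** 2 * mhz) / (top_volt ** 2 * top_mhz)

# Example: upcoming pictures signalled at 40% of peak complexity
mhz, volt = pick_frequency(0.4, 1.0)
print(f"{mhz} MHz at {volt} V -> {relative_power(mhz, volt):.0%} of peak power")
```

Running at a lower step saves power mostly through the quadratic voltage term, which is why complexity metadata that lets the decoder pick a lower clock pays off disproportionately.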

Display. The display-adaptation technique known as backlight dimming reduces power consumption by dimming the LCD backlight while RGB values are scaled up in proportion to the dimming level (the RGB values themselves do not have a strong influence on power consumption).
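A minimal sketch of backlight dimming with compensating RGB scaling, assuming pixel values normalised to [0, 1] and a display whose perceived luminance is roughly the product of backlight level and pixel value; the clipping is where the controlled QoE trade-off appears.

```python
import numpy as np

def dim_backlight(rgb: np.ndarray, dimming: float):
    """Dim the backlight to `dimming` (0..1] and rescale RGB to compensate.

    Perceived luminance ~ backlight * rgb, so scaling RGB by 1/dimming keeps
    the image (roughly) unchanged while backlight power drops with the
    dimming level. Values that would exceed 1.0 are clipped, which is where
    a controlled loss of QoE can appear.
    """
    scaled = np.clip(rgb / dimming, 0.0, 1.0)
    return scaled, dimming  # new pixel values, new backlight level

frame = np.random.rand(720, 1280, 3)      # stand-in for a decoded picture
pixels, backlight = dim_backlight(frame, dimming=0.7)
print(f"backlight at {backlight:.0%}, {np.mean(pixels >= 1.0):.1%} pixels clipped")
```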

Green Metadata needs to be carried

ISO/IEC 23001-11 only specifies the Green Metadata. The way this information is transported depends on the specific use scenarios (some of them are described in Context, Objectives, Use Cases and Requirements for Green MPEG).

Two transports have been standardised by MPEG. In the first, Green Metadata is transported by a Supplemental Enhancement Information (SEI) message embedded in the video stream. This is a natural solution, since Green Metadata is due to be processed in a Green Video Decoder that includes a regular video decoder. In this case, however, transport is limited to decoder metadata, not display metadata. In the second, suitable for a broadcast scenario, all Green Metadata is transported in the MPEG-2 Transport Stream.

Conclusion

Power consumption is a dimension that had not been tackled by MPEG before, but the efforts that led to the Green Metadata standard have been rewarded: with the currently standardised metadata, 38% of video decoder power and 12% of video encoder power can be saved without affecting QoE, and up to 80% of power can be saved with some degradation of the QoE. Power-saving data were obtained using the Google Nexus 7 platform, the Monsoon power monitor and a selection of video test material.

Interested readers can learn more by visiting the MPEG web site and, more so, by purchasing the Green Metadata standard from the ISO website or from a National Body.


The life of an MPEG standard

Introduction

In How does MPEG actually work? I described the way MPEG develops its standards, an implementation of the ISO/IEC Directives for technical work. This article describes the life of one of MPEG’s most prestigious standards: MPEG-2 Systems, which turned 24 in November 2018 and has played a major role in creating the digital world that we know.

What is MPEG-2 Systems?

When MPEG started, standards for compressed video and, later, audio were the immediate goal. But it was clear that the industry needed more than that. So, after starting MPEG-1 video compression and audio compression, MPEG soon started to investigate “systems” aspects. Seen with today’s eyes, the interactive CD-ROM target of MPEG-1 was an easy problem, because all videos on a CD-ROM are assumed to have the same time base, and bit delivery is error-free and on time because the time interval between bytes leaving the transmitter is the same as the time interval between their arrival at the receiver.

In July 1990, even before delivering the MPEG-1 standard (November 1992), MPEG started working on the much more challenging “digital television” problem. This can be described as: the delivery of a package of digital TV programs with different time bases and associated metadata over a variety of analogue channels – terrestrial, satellite and cable. Of course, operators expected to be able to perform the same operations in the network that the television industry had been accustomed to in the several decades since TV distribution had become commonplace.

A unique group of experts from different – and competing – industries and many countries, with their different cultural backgrounds and the common experience of designing the MPEG-1 Systems standard from scratch, designed the MPEG-2 Systems standard, again starting from a blank sheet of paper.

The figure illustrates the high-level structure of an MPEG-2 decoder: waveforms are received from a physical channel (e.g. a Hertzian channel) and decoded to provide a bitstream containing multiplexed TV programs. A transport stream demultiplexer and decoder extracts audio and video streams (and typically other streams, not shown in the figure) and a clock that is used to drive the video and audio decoders.

The structure of the transport bitstream is depicted in the figure. The stream is organised in fixed-length packets of 188 bytes, of which 184 bytes are used for the payload.
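The 4-byte packet header that precedes the 184-byte payload has a fixed layout defined in ISO/IEC 13818-1, which makes it straightforward to parse. The sketch below extracts its fields; the example packet is a null packet (PID 0x1FFF) built by hand.

```python
def parse_ts_header(packet: bytes) -> dict:
    """Parse the 4-byte header of a 188-byte MPEG-2 Transport Stream packet."""
    if len(packet) != 188 or packet[0] != 0x47:   # sync_byte is always 0x47
        raise ValueError("not a valid TS packet")
    b1, b2, b3 = packet[1], packet[2], packet[3]
    return {
        "transport_error_indicator":    bool(b1 & 0x80),
        "payload_unit_start_indicator": bool(b1 & 0x40),
        "transport_priority":           bool(b1 & 0x20),
        "pid":                          ((b1 & 0x1F) << 8) | b2,  # 13-bit packet ID
        "transport_scrambling_control": (b3 >> 6) & 0x03,
        "adaptation_field_control":     (b3 >> 4) & 0x03,
        "continuity_counter":           b3 & 0x0F,
    }

# Example: a null packet (PID 0x1FFF) with 184 bytes of stuffing payload
null_packet = bytes([0x47, 0x1F, 0xFF, 0x10]) + b"\xff" * 184
print(parse_ts_header(null_packet))
```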

The impact of MPEG-2 Systems

MPEG-2 Systems is the container and adapter of digital audio and video information to the physical world. It is used every day by billions of people who receive TV programs from a variety of channels, analogue (terrestrial, satellite, cable) and, often, digital as well (e.g. IPTV).

MPEG-2 Systems was approved in November 1994, although some companies who could not wait had already made implementations before the formal release of the standard. That date, however, far from marking the “end” of the standard, as often happens, signalled the beginning of a story that continues unabated today. Indeed, in the 24 years since its release, MPEG-2 Systems has been constantly evolving while keeping complete backward compatibility with the original 1994 specification.

MPEG-2 Systems in action

So far MPEG has developed 34 amendments (ISO language for the addition of functionality to a standard), 3 further amendments are close to completion and one is planned. After a few amendments are developed, ISO requests that they be integrated in a new edition of the standard. So far, 7 MPEG-2 Systems editions have been produced, covering the transport of non-MPEG-2 native media and non-media data. This is an incomplete list of the transport functionality added:

  1. Audio: MPEG-2 AAC, MPEG-4 AAC and MPEG-H 3D Audio
  2. Video: MPEG-4 Visual, MPEG-4 AVC and its extensions (SVC and MVC), HEVC, HDR/WCG, JPEG2000, JPEG XS etc.
  3. Other data: streaming text, quality metadata, green metadata etc.
  4. Signalling: format descriptor, extensions of the transport stream format (e.g. Tables for splice parameters, DASH event signalling, virtual segment etc.), etc.

Producing an MPEG-2 Systems amendment is a serious job. You need experts with full visibility of a 24-year-old standard (i.e. don’t break what works) and the collaboration of experts of the carrier (MPEG-2 Systems) and of the data carried (audio, video etc.). MPEG can respond to the needs of the industry because it has all the component expertise available.

Conclusions

MPEG-2 Systems is probably one of the MPEG standards least “visible” to its users. Still, it is one of the most important enablers of television distribution applications, impacting the lives of billions of people and tens of thousands of professionals. Its continued support is vital for the well-being of the industry.

The importance of MPEG-2 Systems has been recognised by the Academy of Television Arts and Sciences, which has awarded MPEG an Emmy for it.

MPEG-2 Systems Amendments

The table below reports the full list of MPEG-2 Systems amendments. The 1st column gives the edition, the 2nd column the sequential number of the amendment within that edition, the 3rd the title of the amendment and the 4th the date of approval.

| E | A  | Title                                 | Date  |
|---|----|---------------------------------------|-------|
| 1 | 1  | Format descriptor registration        | 95/11 |
| 1 | 2  | Copyright descriptor registration     | 95/11 |
| 1 | 3  | Transport Stream Description          | 97/04 |
| 1 | 4  | Tables for splice parameters          | 97/07 |
| 1 | 5  | Table entries for AAC                 | 98/02 |
| 1 | 6  | 4:2:2@HL splice parameters            |       |
| 1 | 7  | Transport of MPEG-4 content           | 99/12 |
| 2 | 1  | Transport of Metadata                 | 02/10 |
| 2 | 2  | IPMP support                          | 03/03 |
| 2 | 3  | Transport of AVC                      | 03/07 |
| 2 | 4  | Metadata Application Format CP        | 04/10 |
| 2 | 5  | New Audio P&L Signaling               | 04/07 |
| 3 | 1  | Transport of Streaming Text           | 06/10 |
| 3 | 2  | Transport of Auxiliary Video Data     |       |
| 3 | 3  | Transport of SVC                      | 08/07 |
| 3 | 4  | Transport of MVC                      | 09/06 |
| 3 | 5  | Transport of JPEG2000                 | 11/01 |
| 3 | 6  | MVC operation point descriptor        | 11/01 |
| 3 | 7  | Signalling of stereoscopic video      | 12/02 |
| 3 | 8  | Simplified carriage of MPEG-4         | 12/10 |
| 4 | 1  | Simplified carriage of MPEG-4         | 12/07 |
| 4 | 2  | MVC view, MIME type etc.              | 12/10 |
| 4 | 3  | Transport of HEVC                     | 13/07 |
| 4 | 4  | DASH event signalling                 | 13/07 |
| 4 | 5  | Transport of MVC depth etc.           | 14/03 |
| 5 | 1  | Timeline for External Data            | 14/10 |
| 5 | 2  | Transport of layered HEVC             | 15/06 |
| 5 | 3  | Transport of Green Metadata           | 15/06 |
| 5 | 4  | Transport of MPEG-4 Audio P&L         | 15/10 |
| 5 | 5  | Transport of Quality Metadata         | 16/02 |
| 5 | 6  | Transport of MPEG-H 3D Audio          | 16/02 |
| 5 | 7  | Virtual segment                       | 16/10 |
| 5 | 8  | Signaling of HDR/WCG                  | 17/01 |
| 5 | 9  | Ultra-Low-Latency & JPEG 2000         | 17/07 |
| 5 | 10 | Media Orchestration & sample variants |       |
| 5 | 11 | Transport of HEVC tiles               |       |
| 6 | 1  | Transport of JPEG XS                  |       |
| 6 | 2  | Carriage of associated CMAF boxes     |       |

 


Genome is digital, and can be compressed

Introduction

The well-known double helix carries the DNA of living beings. Human DNA contains about 3.2 billion nucleotide base pairs, represented by the quaternary symbols (A, G, C, T). With today’s high-speed sequencing machines it is possible to “read” the DNA. The resulting file contains millions of “reads” – short segments of symbols, typically all of the same length – and weighs an unwieldy few Terabytes.

The upcoming MPEG-G standard, developed jointly by MPEG and ISO TC 276 Biotechnology, will reduce the size of the file, without loss of information, by exploiting the inherent redundancy of the reads, and at the same time make the information in the file more easily accessible.

This article provides some context, and explains the basic ideas of the standard and the benefits it can yield to those who need to access genomic information.

Reading the DNA

There are two main obstacles preventing a direct use of files from sequencing machines: the position of a read on the DNA sample is unknown and the value of each symbol of the read is not entirely reliable.

The picture below represents 17 reads with a read length of 15 nucleotides. These have been aligned to a reference genome (first line). Reads with a higher number start further down in the reference genome.

Reading column-wise, we see that in most cases the values are exactly those of the reference genome. A single difference (represented by an isolated red symbol) may be caused by a read error, while an almost completely different column (most symbols in red) may be explained by the fact that 1) a given DNA is unlikely to be exactly equal to a reference genome or 2) the person with this particular DNA may have a health problem.

Use of genomics today

Genomics is already used in clinical practice. An example of a genomic workflow is depicted in the figure below; it could very well represent a blood-test workflow if “DNA” were replaced by “blood”. Patients go to a hospital where a sample of their DNA is taken and read by a sequencing machine. The files are analysed by experts who produce reports, which are read and analysed by doctors who decide on actions.

Use of genomics tomorrow

Today genomic workflows take time – even months – and are costly – thousands of USD per DNA sample. While there is not much room to cut the time it takes to obtain a DNA sample, sequencing costs have been decreasing and are expected to continue doing so.

Big savings could be achieved by acting on data transport and processing. If the size of a 3 Terabyte file is reduced by, say, a factor of 100, the transport of the resulting 30 Gigabytes would be compatible with today’s internet access speeds of 1 Gbit/s (~4 min). Faster data access, a by-product of compression, would allow doctors to get the information they are searching for, locally or remotely, in a fraction of a second.
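The back-of-the-envelope numbers above can be checked with a few lines of code (assuming 1 Gigabyte = 8 Gigabits and ignoring protocol overhead; the compression ratio of 100 is the hypothetical figure used in the text):

```python
file_size_tb = 3.0            # raw sequencing output
compression_ratio = 100       # hypothetical MPEG-G-class compression
link_gbit_s = 1.0             # internet access speed

compressed_gb = file_size_tb * 1000 / compression_ratio   # -> 30 GB
transfer_s = compressed_gb * 8 / link_gbit_s              # bits / rate
print(f"{compressed_gb:.0f} GB, ~{transfer_s / 60:.0f} min at {link_gbit_s} Gbit/s")
# -> 30 GB, ~4 min at 1.0 Gbit/s
```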

The new possible scenario is depicted in the figure below.

MPEG makes genome compression real

Not much had been done to make the scenario above real (zip is the oft-used compression technology today) until April 2013, when MPEG received a proposal to develop a standard to losslessly compress files from DNA sequencing machines.

The MPEG-G standard – titled Genomic Information Representation – has 5 parts: Parts 1 and 2 are expected to be approved at MPEG 125 (January 2019) and the other parts are expected to follow shortly after.

MPEG-G is an excellent example of how MPEG can apply its expertise to a field other than media. Part 1, an adaptation of the MP4 File Format present in all smartphones/tablets/PCs, specifies how to make and transport compressed files. Part 2 specifies how to compress reads, and Part 3 the APIs to access specific compressed portions of a file. Parts 4 and 5 are Conformance and Reference Software, respectively.

The figure below depicts the very sophisticated operation specified in Part 2 in a simplified way.

An MPEG-G file can be created with the following sequence of operations:

  1. Put the reads in the input file (aligned or unaligned) in bins corresponding to segments of the reference genome
  2. Classify the reads in each bin in 6 classes: P (perfect match with the reference genome), M (reads with variants), etc.
  3. Convert the reads of each bin to a subset of 18 descriptors specific to the class: e.g., a class P descriptor is the start position of the read etc.
  4. Put the descriptors in the columns of a matrix
  5. Compress each descriptor column (MPEG-G uses the very efficient CABAC compressor already present in several video coding standards)
  6. Put compressed descriptors of a class of a bin in an Access Unit (AU) for a maximum of 6 AUs per bin

Therefore an MPEG-G file contains all AUs of all bins corresponding to all segments of the reference genome. A file may contain the compressed reads of more than one DNA sample.
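To give a flavour of steps 1 and 2 (and a fragment of step 3), here is a drastically simplified sketch. The real standard defines 6 classes and 18 descriptors; this sketch distinguishes only class P from class M, and the bin size, reads and reference genome are made up.

```python
from collections import defaultdict

BIN_SIZE = 1000  # hypothetical bin width in reference-genome positions

def classify(read: str, reference: str, pos: int) -> str:
    """Class P = perfect match with the reference, M = read with variants.
    (MPEG-G Part 2 defines 6 classes; only two are sketched here.)"""
    return "P" if reference[pos:pos + len(read)] == read else "M"

def bin_and_classify(aligned_reads, reference):
    """aligned_reads: iterable of (position, read_string) pairs."""
    bins = defaultdict(lambda: defaultdict(list))
    for pos, read in aligned_reads:
        cls = classify(read, reference, pos)
        # a class-P read is fully described by its start position
        descriptor = pos if cls == "P" else (pos, read)
        bins[pos // BIN_SIZE][cls].append(descriptor)
    return bins  # each (bin, class) pair would become one Access Unit

reference = "ACGT" * 500                  # toy reference genome
reads = [(0, "ACGTACGT"), (4, "ACGTACGA"), (1200, "CGTACGTA")]
for bin_id, classes in bin_and_classify(reads, reference).items():
    print(bin_id, {c: len(d) for c, d in classes.items()})
```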

The benefits of MPEG-G

Compression is beneficial, but it is not necessarily the only or even the primary benefit. More important is the fact that, while designing compression, MPEG has given a structure to the information. In MPEG-G the structure is provided by Part 1 (File and transport) and by Part 2 (Compression).

In MPEG-G most information relevant to applications is immediately accessible, locally and, more importantly, also remotely, without the need to download the entire file to access the information of interest. Part 3 (Application Programming Interfaces) makes this fast access even more convenient, because it facilitates the work of developers of genomics applications who may not have in-depth knowledge of the – certainly complex – MPEG-G standard.

Conclusions

In the best MPEG tradition, MPEG-G is a generic standard, i.e. a standard that can be employed in a wide variety of applications that require a small footprint of, and fast access to, genomic information.

A certainly incomplete list includes: Assistance to medical doctors’ decisions; Lifetime Genetic Testing; Personal DNA mapping on demand; Personal design of pharmaceuticals; Analysis of immune repertoire; Characterisation of micro-organisms living in the human host; Mapping of micro-organisms in the environment (e.g. biodiversity).

Standards are living beings, but MPEG standards have a DNA that allows them to grow and evolve to cope with the manifold needs of their ever-growing number of users.

I look forward to welcoming new communities in the big family of MPEG users.


Compression standards and quality go hand in hand

Introduction

When I described the MPEG workflow in How does MPEG actually work? I highlighted the role of quality assessment across the entire MPEG standard life cycle: at the time of issuing a Call for Evidence (CfE) or a Call for Proposals (CfP), carrying out Core Experiments (CE) or executing Verification Tests.

We should consider, however, that in 30 years the coverage of the word “media” has changed substantially.

Originally (1989-90) the media types tested were Standard Definition (SD) 2D rectangular video and stereo audio. Today the video data types also include High Definition (HD), Ultra High Definition (UHD), Stereoscopic Video, High Dynamic Range (HDR) and Wide Colour Gamut (WCG), and Omnidirectional (Video 360).

The video information can be 2D, but also multiview and 3D: stereoscopic, 3 degrees of freedom + (3DoF+), 6 degrees of freedom (6DoF) and various forms of light field. Audio has evolved to different forms of Multichannel Audio and 6DoF. Recently Point Clouds were added to the media types for which MPEG has applied subjective quality measurements to develop compression standards.

In this article I would like to look inside the work that goes into subjectively assessing the quality of media compression.

Preparing for the tests

Even before MPEG decides to issue a CfE or CfP for compression of some type of media content, viewing or listening to content may take place to appreciate the value of a proposal. When a Call is issued MPEG has already reached a pretty clear understanding of the use cases and requirements (at the time of a CfE) or the final version of them (at the time of a CfP).

The first step is the availability of appropriate test sequences. Sequences may already be in the MPEG Content Repository, may be spontaneously offered by members, or may be obtained from industry representatives by issuing a Call for Content.

Selection of test sequences is a critical step because we need sequences that are suitable for the media type and representative of the use cases, and in a number that allows us to carry out meaningful and realistic tests.

By the CfE or CfP time, MPEG has also decided which standard the responses to the CfE or CfP should be tested against. For example, in the case of HEVC the comparison was with AVC and, in the case of VVC, with HEVC. In the case of Internet Video Coding (IVC) the comparison was with AVC. When such a reference standard does not exist (as was the case for, e.g., all layers of MPEG-1 Audio and for Point Cloud Compression), the codec built during the exploratory phase by grouping together state-of-the-art tools is used.

Once the test sequences have been selected, the experts in charge of the reference software are asked to run the reference software encoder and produce the “anchors”, i.e. the test sequences encoded according to the “old” standard. The anchors are made available on an FTP site so that anybody intending to respond to the CfE or CfP can download them.

The setup used to generate the anchors is documented in “configuration files” for each class of submission. In the case of video, the classes correspond to format (SD/HD/UHD, HDR, 360º) and design conditions (low delay or random access). Obviously, in order to have comparable data, all proponents must use the same configuration files when they encode the test sequences using their technology.

As logistic considerations play a key role in the preparation of quality tests, would-be proponents must submit formal statements of intention to respond to the CfE or CfP to the Requirements group chair (currently Jörn Ostermann), the Test group chair (currently Vittorio Baroncini) and the chair of the relevant technology group, 2 months before the submission deadline.

At the meeting before the responses to the CfE/CfP are due, an ad hoc group (AHG) is established with the task of promoting awareness of the Call in the industry, carrying out the tests, drafting a report and submitting conclusions on the quality tests to the following MPEG meeting.

Carrying out the tests

The actual tests are entirely carried out by the AHG under the leadership of AHG chairs (typically the Test chair and a representative of the relevant technology group).

Proponents send their files containing encoded data, on hard disk drives or via an FTP site, to the Test chair by the deadline specified in the CfE/CfP.

When all the submissions are received, the Test chair performs the following tasks:

  1. Acquire special hardware and displays for the tests (if needed)
  2. Verify that the submitted files are all on disk and readable
  3. Assign submitted files to independent test labs (sometimes as many as 10 test labs are concurrently involved in a test run)
  4. Make copies and distribute the relevant files to the test labs
  5. Specify the tests or provide the scripts for the test labs to carry out.

The test labs carry out a first run of the tests and provide their results for the Test chair to verify. If necessary, the Test chair requests another test run or even visits the test labs to make sure that the tests will run properly.

When this “tuning” phase has been successfully executed, all test labs run the entire set of tests assigned to them using test subjects. Tens of “non-expert” subjects may be involved for several days.

Here is a sample of what it means to attend subjective quality tests

Test report

Test results undergo a critical review according to the following steps:

  1. The Test chair collects all results from the test labs, performs a statistical analysis of the data, and prepares and submits a final report to the AHG (a minimal example of such an analysis is sketched after this list)
  2. The report is discussed in the AHG and may be revised depending on the discussions
  3. The AHG draws and submits its conclusions to MPEG along with the report
  4. Report and conclusions are added to all the material submitted by proponents
  5. The Requirements group and the technology group in charge of the media type evaluate the material and rank the proposals. Because of the sensitivity of some of the data, this material is not made public.
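As an illustration of the statistical analysis mentioned in step 1, here is a minimal computation of a mean opinion score (MOS) with an approximate 95% confidence interval, in the spirit of BT.500-style evaluations; the subject scores below are made up.

```python
from statistics import mean, stdev

def mos_with_ci(scores, z=1.96):
    """Mean opinion score and ~95% confidence interval for one test point."""
    m = mean(scores)
    half_width = z * stdev(scores) / len(scores) ** 0.5
    return m, half_width

# Made-up scores from 20 subjects on a 1..5 scale for one coded sequence
scores = [4, 5, 4, 3, 4, 4, 5, 4, 3, 4, 4, 4, 5, 3, 4, 4, 4, 5, 4, 4]
m, ci = mos_with_ci(scores)
print(f"MOS = {m:.2f} +/- {ci:.2f}")
```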

This signals the end of the competition phase and the beginning of the collaboration phase.

Other media types

In general, the process above has been described with rectangular 2D or 360º video specifically in mind. Most of the process applies to other media types, with some specific actions for each of them, e.g.

  1. 3DTV: for the purpose of 3D HEVC tests, polarised glasses as in 3D movies and autostereoscopic displays were used;
  2. 3DoF+: a common synthesiser will be used in the upcoming 3DoF+ tests to synthesise views that are not available at the decoder;
  3. Audio: in general, subjects need to be carefully trained for the specific tests;
  4. Point Clouds: videos generated by a common presentation engine, consisting of point clouds animated by a script (rotating the object and seeing it from different viewpoints), were tested for quality as if they were natural videos (NB: there was no established method to assess the quality of point clouds before; it was demonstrated that the subjective tests converged to the same results as the objective measurements).

Verification tests

Verification tests are executed with a similar process. Test sequences are selected and compressed by experts running reference software for the “old” and the “new” standard. Subjective tests are carried out as done in CfE/CfP subjective tests. Test results are made public to provide the industry with guidance on the performance of the new standard. See as examples the Verification Test Report for HDR/WCG Video Coding Using HEVC Main 10 Profile and the MPEG-H 3D Audio Verification Test Report.

Conclusions

Quality tests play an enabling role in all phases of development of a media compression standard. For 30 years MPEG has succeeded in mobilising – on a voluntary basis – the necessary organisational and human resources to perform this critical task.

I hope that, with this post, I have opened a window onto an aspect of MPEG life that is instrumental in offering industry the best technology, so that users can have the best media experience.


Digging deeper into the MPEG work

Introduction

In How does MPEG actually work? I described the “MPEG standards work flow” from a proposed idea to the release of the corresponding standard and its verification. I also highlighted the key role played by MPEG experts as the real makers of MPEG standards.

In this article I would like to introduce the role played by the peculiar MPEG organisation and some of its members in facilitating the work of MPEG experts.

The MPEG membership

MPEG operates in the framework of the International Organization for Standardization (ISO). To be entitled to attend and actively participate in MPEG work, experts or their companies must be members of one of the 162 national standards organisations that are members of ISO. Respondents to an MPEG Call for Evidence (CfE) or Call for Proposals (CfP) are allowed to attend the meeting where their proposal is presented for the first time. To continue attending, however, proponents are required to become formal members (and they had better do so if they want to defend their proposals).

The MPEG organisation

The figure below depicts the conceptual MPEG organisation as of today. In its 30 years MPEG has created – and disbanded – several groups established for specific needs. More about the history of the MPEG groups and their chairs can be found here.

 

New ideas are typically submitted to the Requirements group. If they are considered worthy of further exploration, they are typically discussed in an ad hoc group (AHG) after the meeting. As explained in How does MPEG actually work?, the AHG reports its findings at the following MPEG meeting. After a few iterations, MPEG will produce a Call for Evidence (CfE) or Call for Proposals (CfP) with due publicity in the press release. At that time, the AHG is charged with the task of disseminating the CfE or CfP, preparing the logistics of the tests and performing a first assessment of the responses.

This does not mean that the Requirements group is no longer involved in the standard: the group typically continues the development of Use Cases and Requirements while the technical work progresses. When necessary, requirements are reviewed with the appropriate technology groups and may give rise to new CfPs, whose outcome is fed into the work of the relevant technology groups after an assessment.

Today the task of the Test group is no longer confined to assessing the quality of proposals at CfE and CfP time. Designing and executing appropriate quality tests in support of Core Experiments has become the norm, especially in the Video group.

If a Call requests Proposals, the Requirements group, in conjunction with the Test group and possibly one or more technology groups, reviews the result of the AHG and makes a final assessment. If the technologies submitted are judged sufficient to initiate the development of a standard, the activity is transferred to that/those group(s) for standard development.

The current MPEG chairs are:

| Group         | Chair(s)                              | Affiliation                            |
|---------------|---------------------------------------|----------------------------------------|
| Requirements  | Jörn Ostermann                        | Leibniz Univ. Hannover                 |
| Systems       | Youngkwon Lim                         | Samsung                                |
| Video         | Lu Yu, Jens-Rainer Ohm, Gary Sullivan | Zhejiang Univ., RWTH Aachen, Microsoft |
| Audio         | Schuyler Quackenbush                  | Audio Research Labs                    |
| 3DGC          | Marius Preda                          | Institut Mines Télécom                 |
| Test          | Vittorio Baroncini                    | GBTech                                 |
| Communication | Kyuheon Kim                           | Kyunghee Univ.                         |

The chairs come from a variety of countries (CN, DE, FR, IT, KR, US) and organisations (small/medium-size and large companies, and universities).

Joint meetings

A key MPEG feature is the immediate availability of the necessary technical expertise to discuss matters that cross organisational boundaries. The speed of development and quality of MPEG standards would hardly be possible if MPEG did not have the ability to timely deploy the necessary expertise to address multi-faceted issues.

Let’s take as an example this video, where a basketball player is represented as a compressed dynamic point cloud in a 360º video. The matter is raised at a meeting and discussed at a chairs meeting, where the need for a joint meeting is identified. This is proposed to the MPEG plenary, and the joint meeting is held with the participation of Requirements, Systems and 3D Graphics Coding experts. The outcome of such a meeting may be anything from the acknowledgement that “we don’t know well enough yet” to the identification of technologies that can be developed as a collaborative effort of MPEG experts or that require a CfP.

As a typical example let’s consider some of the joint meetings held at MPEG 124 (20 in total):

  1. Anchors for 3DoF+; Participants from Video and Test meet to select the anchors (reference sequences) to be used in the 3DoF+ CfP (Monday, 15:00 to 16:00)
  2. 3DoF+ CfP; Participants from Requirements, Video and Test meet to discuss the text of the CfP that was published at MPEG 124 (Tuesday, 09:00 to 10:00)
  3. 3DoF+ CfP & viewing; Participants from Video and Test meet to review the text of and view the content for the CfP (Wednesday, 11:30 to 12:30)
  4. 3DoF+ viewing; Participants from Video and Test meet to view 3DoF+ content for the CfP (Thursday, 12:00 to 13:00)
  5. Systems aspects of Point Cloud Compression; Participants from 3DG and Systems meet to discuss the topic introduced in this session (Tuesday, 09:00 to 10:00 and Thursday, 12:00 to 13:00)
  6. MPEG-I Scene graph; Participants from Requirements, Systems and Audio meet to discuss Audio needs for a scene description technology (Thursday, 09:00 to 10:00)
  7. Neural Network Compression CfE results; Participants from Requirements and Video meet to discuss the CfE results and decide whether there is room for a CfP (Tuesday, 16:00 to 17:00)

Meeting documents

How are documents fed into the MPEG process? This is not a marginal aspect if we consider that the number of submissions can easily exceed 1000 documents per meeting. In an age of (almost) pervasive broadband internet, it is hard to imagine how MPEG could operate in a totally “paper-based” fashion when its membership was already around 300. In October 1995, Pete Schirling (then with IBM) made available to MPEG a document management system that allowed MPEG members to upload their documents and download those uploaded by other members. Wo Chang (NIST) in 2000 and Christian Tulvan (Institut Mines Télécom) in 2005 took over the system, which ensures the high efficiency of MPEG operation.

Recently ISO has developed its own document management system for use by all entities in ISO. However, MPEG has been exempted from using it, because the traffic generated by uploading and downloading MPEG documents in the weeks around an MPEG meeting would bring the system down. Incidentally, a requirement for hosting an MPEG meeting is the availability of internet access at 1 Gbit/s.

Meeting information

For MPEG experts (and those who do not attend yet), a place with so many “hotbeds” discussing requirements and assessing, integrating and testing media technologies is exciting. But how can MPEG experts know what happens where and when at a meeting? Christian, the designer and maintainer of the MPEG document management system, has come to help again and designed a system that answers those questions. MPEG members can have a full view of which meetings are held where and when, to discuss which topics or documents. The design is responsive, so MPEG experts can get the information from their smartphones.

The figure shows how the service looked at MPEG 124 (October 2018). The page shows all meetings of all groups, but filtered views are possible. By clicking on a meeting, details (room, documents etc.) can be obtained.

Mission control

Another major development made by Christian provides the chairs with a series of functionalities to manage the MPEG workplan and timeline. Some of these, like the one below, are open to MPEG members.

The figure shows the portion of the MPEG timeline related to the MPEG-I, MPEG-CICP and MPEG-G standards. At the top there are icons to create new activities, show different table previews, provide project-wise views and more.

Another remarkable functionality is the creation of the document that collects all the results of a meeting. Group chairs enter their own results independently and the system organises them by standard for approval at the Friday plenary. This has made the review of decisions shorter and allowed MPEG to slash the time taken by plenaries.

MPEG assets

MPEG defines as assets the data associated with the development of standards: URIs, Schemas, Content, Software and Conformance testing data.

The first two are publicly available here and here.

By Content assets we mean the collection of all test data (content) used for CfEs and CfPs, and in subsequent Core Experiments. MPEG relies on the goodwill of companies interested in a standard to provide relevant test data. These are typically licensed for exclusive use in MPEG standardisation because they have high value, e.g. from the content and technology viewpoints. Content owners license their content directly to individual MPEG experts.

Software assets are the collection of all software developed, or under development, as reference software of MPEG standards. Given the approach taken by MPEG of developing most of its standards from reference software and giving it normative status, their importance can hardly be overstated.

Conformance testing data is the collection of all data that can be used to test an implementation for conformance to an MPEG standard, published or under development. The process of developing conformance testing suites is painstaking and depends on the goodwill of MPEG members and their companies. On the other hand, conformance testing suites are vital for the creation of ecosystems of interoperable implementations.

Content, Software and Conformance testing data for most data used in the development of MPEG standards in the last 30 years are stored in the Media Repository, Software Repository and Conformance Testing Repository hosted by Institut Mines Télécom. We are talking about a total of several Terabytes of data.

Conclusions

This article can only hint at the level of commitment and dedication of so many MPEG members, with the support of their companies. Of course, MPEG standards are produced by MPEG experts, but the MPEG process works because so many other people and organisations contribute, some in a visible and others in a less visible form, to make MPEG what it is.
