MPEG communicates


MPEG standards are great communication tools and MPEG itself is – or tries to be – a good communicator, of course using all available media.

This article is for those who want to be informed about MPEG, without having to digest one of its handy 500-page standards 😊

MPEG web site

Almost everything that will be mentioned in this post can be found, not necessarily in an easy way, on the MPEG web page: press releases, the MPEG column, video tutorials, white papers, investigations and technical notes, ad hoc groups, events, social networks, liaisons and meetings. The next meeting will be held in Marrakesh, MA, from 14 to 18 January 2019.

Press releases

Since its early days, MPEG has taken care of informing the world of its work. At the beginning this was done only on major occasions; now MPEG does it systematically at every meeting. The news item considered most important gets the headline and all other achievements get a note. The rule is to mention in a press release all Calls for Evidence (CfE) and Calls for Proposals (CfP), and the standards under development that reach the stage of Committee Draft (CD) or Final Draft International Standard (FDIS). The news items are prepared by the relevant group chairs, then edited and distributed to the press by Prof. Christian Timmerer of the University of Klagenfurt.

Let’s take a look at the press release from MPEG 124 (Macau SAR, CN) as an example.

  1. The headline news is “Point Cloud Compression – MPEG promotes a video-based point cloud compression technology to the Committee Draft stage” because we believe this will be a very important standard.
  2. The second news is “MPEG issues Call for Proposals on Compressed Representation of Neural Networks”.
  3. Two video compression-related CfPs: “MPEG issues Call for Proposals on Low Complexity Video Coding Enhancements” and “MPEG issues Call for Proposals for a New Video Coding Standard expected to have licensing terms timely available”.
  4. News on “Multi-Image Application Format (MIAF) promoted to Final Draft International Standard”
  5. Anticipation of a new CfP “3DoF+ Draft Call for Proposal goes Public”.

The press release page, containing all press releases since January 2006, is here.

If you want to be added to the distribution list, please send an email to Christian.

MPEG Column

When possible, the news items in the press releases have links to relevant documents. We try to avoid links to the mythical 500-page documents, but learning more about a news item typically requires clearing a higher entry level.

With the MPEG column, MPEG tries to facilitate understanding of what its standards do. At every meeting, brief notes are published that explain the purpose and, to some extent, the working of MPEG standards.

Want to know about High Dynamic Range (HDR)? The Common Media Application Format (CMAF)? The new standard to view omnidirectional video (OMAF)? By going here you will see all the articles published in the column and have the opportunity to get answers to many questions on these and other MPEG standards.

The MPEG column is managed by the professional journalist Philip Merrill who is able to understand MPEG standards and explain them to the public.

Do you think an article on a particular standard would be of interest? Please send an email to the chairman of the Communication Group, Prof. Kyuheon Kim of Kyung Hee University. We will do our best to satisfy your request.

Video tutorials

How could MPEG miss the opportunity to have its own series of “MPEG standards tutorials”, of course using its audio and video compression standards? By going to Video tutorials on MPEG standards you will be able to understand what MPEG has done to make its standards “green”, what is the Multimedia Preservation Application Format that manages multimedia content over the ages, what is the High Efficiency Video Coding (HEVC) standard, what is MPEG-H 3D Audio, what is MPEG Media Transport (MMT) used in ATSC 3.0, what is Dynamic Adaptive Streaming over HTTP (DASH) for audio-visual streaming over the internet, and much more.

The content is delivered by the best experts in the field. The videos that you see are the result of the shooting and post-processing performed by Alexis Tourapis.

White papers, investigations and technical notes

MPEG makes its best efforts to provide the smoothest entry path to its standards, and several types of publicly accessible papers serve this strategy. These papers are published with the goal to

  1. Communicate that we are investigating some promising ideas as in Investigations about parts of standards
  2. Describe the purpose of an entire suite of standards, as in the case of White papers about standards or for a single part of a standard like in White papers about parts of standards
  3. Provide specific guidance about the use of standards as in Technical notes about parts of standards.

As a rule, the purpose of white papers is not to describe how the technology works but what the standard is for, the problems it solves and the benefits that MPEG expects users will get from it.

Investigations, White papers and Technical notes can be found here.


ISO is a private association registered in Switzerland. Standards are developed pro bono by participants in the working groups, but the cost of the organisation is covered by the sale of standards and other sources. Therefore you should not expect to find ISO standards on the public MPEG web page. If you need a standard, you should go to the ISO web site, where you can easily buy online all the standards on sale. In some cases, MPEG requests ISO to make a standard public, because the standard is particularly relevant or because it is already publicly available (as is the case of all standards developed jointly with ITU-T).

MPEG posts all public documents at Standard documents from the last meeting, e.g. use cases, requirements, Calls for Evidence, Calls for Proposals, Working Drafts up to Committee Drafts, Verification Test results and more. MPEG does this because it wants to make sure that the industry is aware of, can comment on, and can contribute to the standards it develops.

Ad hoc groups

Since 1990 MPEG has created ad hoc groups (AHG). According to the rules “AHGs are established for the sole purpose of continuing work between consecutive MPEG meetings”, but they are a unique way to have work done outside MPEG meetings in a coordinated way. The scope of an AHG, however, is limited by the rule: “The task of an AHG may only cover preparation of recommendations to be submitted to MPEG”.

AHGs are not permanent organisations but are established at the end of a meeting to last until the following meeting. To get a feeling of what AHGs are about, you can see the AHGs established at the 124th MPEG meeting.

AHGs are mentioned as part of MPEG communication because anybody can join the email reflector of an AHG and even attend AHG meetings.

MPEG events

In certain cases, MPEG organises events open to the public. Some events are held during and co-located with an MPEG meeting, but events outside MPEG meetings are also held. These are some of the goals an MPEG event can have:

  1. To present a particular standard under development or just released as in Workshop on MPEG-G (Shenzhen, October 2018)
  2. To introduce the MPEG workplan such as MPEG Workshop on Immersive Services Roadmap (Gwangju, January 2018)
  3. To demonstrate what the industry is doing with an MPEG standard, such as OMAF Developers’ Day (Gwangju, January 2018)
  4. To frame a particular segment of the MPEG activity in the general context of the industry such as Workshop on 5G/ Beyond UHD Media (La Jolla, February 2016).

Reference software and conformance

MPEG standards can be made public by a decision of the ISO Central Secretariat. MPEG requests ISO to make all reference software and conformance standards publicly available. The rationale of this request is that developers who study the freely available reference software still need to buy the standard to make a conforming implementation. To create a healthy ecosystem of interoperable products, services and applications, the conformance testing suites, too, must be made freely available. This entices more users to buy the standard.

Social networks

MPEG has a Twitter account, @MPEGgroup. This is used by a group of MPEG social champions to spread information on the currently hottest MPEG topics: MPEG-I Requirements, Neural Network Compression, MPEG-G, OMAF, File format, Network Based Media Processing, MPEG-I Visual (3DoF+ and 6DoF), Audio, Point Cloud Compression, and Internet of Media Things.

Please subscribe to receive brief notes on MPEG-related news and events.


Liaisons

MPEG develops standard technologies that are used by many industries all over the world. MPEG liaises with many standards committees and industry fora for several purposes: to get use cases and requirements, to jointly develop standards, to promote adoption of a standard once it has been developed, and to receive further requests for new functionalities.

These are some of the organisations MPEG has liaisons with: the Third Generation Partnership Project (3GPP), Advanced Television Systems Committee, Inc. (ATSC), Internet Engineering Task Force (IETF), Society of Motion Picture and Television Engineers (SMPTE), Audio Engineering Society (AES), European Broadcast Union (EBU), Hybrid Broadcast Broadband TV (HbbTV), Society of Cable Telecommunications Engineers (SCTE), World Wide Web Consortium (W3C) and many more.

Administrative documents

At every meeting MPEG publishes several documents – which I call “administrative” for lack of a better name – that are very important because they include organisational information. The following documents relate to MPEG 124 (Macau SAR, October 2018):

  1. list of ad hoc groups established at the meeting
  2. call for patents related to standards under development
  3. list of MPEG standards produced since day 1 and those planned
  4. MPEG work plan with a summary description of all activities under way including explorations
  5. MPEG timeline with the planned development dates of all standards
  6. MPEG terms of reference
  7. MPEG schemas
  8. MPEG URIs.


MPEG is fully aware of the social and industrial implications of its standards. This post conveys the MPEG policy of being open to the world, to the extent that ISO rules allow.


How does MPEG actually work?


In Life inside MPEG I have described the breadth, complexity and interdependence of the work program managed by the MPEG ecosystem that has fed the digital media industry for the last 30 years. I did not mention, however, the actual amount of work done by MPEG experts during an MPEG meeting. Indeed, at every meeting, the majority of the work items listed in that post undergo a review prompted by member submissions. At the Macau meeting in October 2018 there were 1238 such documents, covering the entire work program described in the post – actually more, because the post covers only a portion of what we are actually doing.

What I presume readers cannot imagine – and I do not claim that I do much more than they do – is the amount of work that individual companies perform to produce their submissions. Some documents may be the result of months of work often based on days/weeks/months of processing of audio, video, point clouds or other types of visual test sequences carried out by expensive high performance computers.

It is clear why this is done, at least in the majority of cases: the discovery of algorithms that enable better and/or new audio-visual user experiences may trigger the development and deployment of highly rewarding products, services and applications. By joining the MPEG process, such algorithms may become part of one or more standards, helping the industry develop top-performance _interoperable_ products, services and applications.

But what is the _MPEG process_? In this post I would like to answer this question. As I may not be able to describe everything, I will likely have to revisit this issue in the future. In any case, the figure below should be used as a guide in the following.

How it all starts

The key players of the MPEG process are the MPEG members: currently ~1,500, of whom ~1/4 are from academia and ~3/4 from industry; ~500 of them attend the quarterly MPEG meetings.

Members bring ideas to MPEG for presentation and discussion at MPEG meetings. If the idea is found interesting and promising, an ad hoc group is typically established to further explore the opportunity until the next meeting. The proposer of the idea is typically appointed as chair of the ad hoc group. In this way MPEG offers the proposer the opportunity to become the entrepreneur who can convert an idea into a product – I mean, an MPEG standard.

Getting to a standard may take some time: the newer the idea, the more time it may take to make it actionable by the MPEG process. After the idea has been clarified, the first step is to understand: 1) the context for which the idea is relevant, 2) the “use cases” the idea offers advantages for, and 3) the requirements that a solution should satisfy to support the use cases.

Even if the idea, its context, use cases and requirements have been clarified, it does not mean that technologies necessarily exist out there that can be assembled to provide the needed solution. For this reason, MPEG typically produces and publishes a Call for Evidence (CfE) – sometimes more than one – with the context, use cases and requirements attached. The CfE requests companies who think they have technologies satisfying the requirements to demonstrate what they can achieve, without requesting them to describe how the results have been achieved. In many cases respondents are requested to use specific test data to facilitate comparison. If MPEG does not have such data, it will ask industry to provide them.

Testing technologies for the idea 

If the result of the CfE is positive, MPEG moves to the next step and publishes a Call for Proposals (CfP), with the context, use cases, requirements, test data and evaluation method attached. The CfP requests companies who have technologies satisfying the requirements to submit responses in which they demonstrate the performance _and_ disclose the exact nature of the technologies that achieve the demonstrated results.

Let’s see how this process has worked in the specific case of neural network compression. Note that the last entry refers to a future meeting. Links point to public documents on the MPEG web site.




Meeting 120 (Oct 2017)
  1. Presentation of the idea of compressing neural networks
  2. Approval of CNN in CDVA (document)

Meeting 121 (Jan 2018)
  1. Release of Use cases and requirements

Meeting 122 (Apr 2018)
  1. New release of Use cases and requirements
  2. Release of Call for Test Data
  3. Approval of Draft Evaluation Framework

Meeting 123 (Jul 2018)
  1. Release of Call for Evidence, Use cases and requirements
  2. New release of Call for Test Data
  3. New version of Draft Evaluation Framework

Meeting 124 (Oct 2018)
  1. Approval of Report on the CfE
  2. Release of Call for Proposals, Use cases and requirements, Evaluation Framework

Meeting 126 (Mar 2019, future)
  1. Responses to Call for Proposals
  2. Evaluation of responses based on the Evaluation Framework

We can see that the Use Cases and Requirements are updated at each meeting and made public, test data are requested from the industry, and the Evaluation Framework is developed well in advance of the CfP. In this particular case it will take 18 months just to move from the idea to the CfP responses.

MPEG gets its hands on technology

A meeting where CfP submissions are due is typically a big event for the community involved. Knowledgeable people say that such a meeting is more intellectually rewarding than attending a conference. How could it be otherwise if participants not only can understand and discuss the technologies but also see and judge their actual performance? Everybody feels like being part of an engaging process of building a “new thing”.

If MPEG comes to the conclusion that the technologies submitted and retained are sufficient to start the development of a standard, the work is moved from the Requirements subgroup, which typically handles the process of moving from idea to proposal submission and assessment, to the appropriate technical group. If not, the idea of creating a standard is – maybe temporarily – dropped or further studies are carried out or a new CfP is issued calling for the missing technologies.

What happens next? The members involved in the discussions need to decide which technologies are useful to build the standard. Results are there but questions pop up from all sides. Those meetings are good examples of the “survival of the fittest” principle, applied to technologies as well as to people.

Eventually the group identifies useful technologies from the proposals and builds an initial Test Model (TM) of the solution. This is the starting point of a cyclic process where MPEG experts

  1. Identify critical points of the TM
  2. Define which experiments – called Core Experiments (CE) – should be carried out to improve the TM performance
  3. Review members’ submissions
  4. Review CE technologies and adopt those bringing sufficient improvements.

At the right time (which may very well be the meeting where the proposals are reviewed or several meetings after), the group produces a Working Draft (WD). The WD is continually improved following the 4 steps above.

The birth of a “new baby” typically has an impact on the whole MPEG ecosystem. Members may wake up to the need to support new requirements, or realise that specific applications may require one or more “vehicles” to embed the technology in those applications, or come to the conclusion that the originally conceived solution needs to be split into more than one standard.

These and other events are handled by convening joint meetings between the group developing the technology and all technical “stakeholders” in other groups.

The approval process

Eventually MPEG is ready to “go public” with a document called Committee Draft (CD). However, this only means that the solution is submitted to the national standard organisations – National Bodies (NB) – for consideration. NB experts vote on the CD with comments. If a sufficient number of positive votes are received (this is what has happened for all MPEG CDs so far), MPEG assesses the comments received and decides on accepting or rejecting them one by one. The result is a new version of the specification – called Draft International Standard (DIS) – that is also sent to the NBs where it is assessed again by national experts, who vote and comment on it. MPEG reviews NB comments for the second time and produces the Final Draft International Standard. This, after some internal processing by ISO, is eventually published as an International Standard.

MPEG typically deals with complex technologies that companies consider “hot” because they are urgently needed for their products. Just as the development of a product in a company goes through different phases (alpha/beta releases etc., in addition to internal releases), achieving a stable specification requires many reviews. Because CD/DIS ballots may take time, experts may come to a meeting reporting bugs found or proposing improvements to the document under ballot. To take advantage of this additional information, which the group scrutinises on its merits, MPEG has introduced an unofficial “mezzanine” status called “Study on CD/DIS”, where proposals for bug fixes and improvements are added to the document under ballot. These “Studies on CD/DIS” are communicated to the NBs to facilitate their votes on the official documents under ballot.
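The stage progression described above can be sketched as a tiny state machine. The stage names come from the ISO process; the transition rule below is a deliberate simplification for illustration (real ballots produce comments to be resolved, not just a pass/fail outcome):

```python
# Simplified sketch of the ISO document stages traversed by an MPEG
# specification. Only the stage names are real; the transition logic
# is an illustrative reduction of the actual process.

STAGES = ["WD", "CD", "DIS", "FDIS", "IS"]

def advance(stage, ballot_passed=True):
    """Move a specification to the next stage; a failed ballot keeps
    the document at its current stage for further revision."""
    if not ballot_passed or stage == "IS":
        return stage
    return STAGES[STAGES.index(stage) + 1]

# A document working its way up, with one failed ballot along the way.
stage = "WD"
for passed in (True, True, False, True, True):
    stage = advance(stage, passed)
assert stage == "IS"
```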

Let’s see in the table below how this part of the process has worked in the Point Cloud Compression (PCC) case. Only the most relevant documents have been retained. The last entries refer to future meetings.




Meeting 120 (Oct 2017)
  1. Approval of Report on PCC CfP responses
  2. Approval of 7 CE-related documents

Meeting 121 (Jan 2018)
  1. Approval of 3 WDs
  2. Approval of PCC requirements
  3. Approval of 12 CE-related documents

Meeting 122 (Apr 2018)
  1. Approval of 2 WDs
  2. Approval of 22 CE-related documents

Meeting 123 (Jul 2018)
  1. Approval of 2 WDs
  2. Approval of 27 CE-related documents

Meeting 124 (Oct 2018)
  1. Approval of V-PCC CD
  2. Approval of G-PCC WD
  3. Approval of 19 CE-related documents
  4. Approval of Storage of V-PCC in ISOBMFF files

Meeting 128 (Oct 2019, future)
  1. Approval of V-PCC FDIS

Meeting 129 (Jan 2020, future)
  1. Approval of G-PCC FDIS

I would like to draw your attention to “Approval of 27 CE-related documents” in the July 2018 row. The definition of each of these CE documents requires lengthy discussions by the experts involved, because they describe the experiments that will be carried out by different parties at different locations and how the results will be compared for a decision. It should not be a surprise if some experts work from Thursday until Friday morning to get documents approved by the Friday plenary.

I would also like to draw your attention to “Approval of Storage of V-PCC in ISOBMFF files” in the October 2018 row. This is the start of a Systems standard that will enable the use of PCC in specific applications, as opposed to being just a data compression specification.

The PCC work item currently sees the involvement of some 80 experts. If you consider that there were more than 500 experts at the October 2018 meeting, you should have a pretty good idea of the complexity of the MPEG machine and the amount of energy poured in by MPEG members.

The MPEG process – more than just a standard

In most cases MPEG performs “Verification Tests” on a standard it has produced, to provide the industry with precise indications of the performance that can be expected from an MPEG compression standard. To do this the following is needed: specification of tests, collection of appropriate test material, execution of reference or proprietary software, execution of subjective tests, test analysis and reporting.

Very often, as a standard takes shape, new requirements for new functionalities are added. They become part of a standard either through the vehicle of “Amendments”, i.e. separate documents that specify how the new technologies can be added to the base standard, or “New Editions” where the technical content of the Amendment is directly integrated into the standard in a new document.

As a rule MPEG develops reference software for its standards. The reference software has the same normative value as the standard expressed in human language.

MPEG also develops conformance tests, supported by test suites, to enable a manufacturer to judge whether 1) its encoder implementation produces correct data, by feeding them into the reference decoder, or 2) its decoder implementation is capable of correctly decoding the test suites.
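As a sketch of what these two checks amount to, consider the toy harness below. The codec is a stand-in run-length coder and all the function names are invented; real MPEG conformance suites are far richer than this:

```python
# Toy sketch of the two conformance checks: an encoder conforms if the
# reference decoder can decode its output; a decoder conforms if it
# reproduces the reference output for every test-suite bitstream.
# Nothing here comes from an actual MPEG specification.

def check_encoder_conformance(encode, reference_decode, raw_inputs):
    """Encoder check: feed the encoder's output into the reference decoder."""
    for raw in raw_inputs:
        bitstream = encode(raw)
        try:
            reference_decode(bitstream)   # must decode without error
        except Exception:
            return False
    return True

def check_decoder_conformance(decode, test_suite):
    """Decoder check: decode each test bitstream and compare to the
    expected reference output."""
    return all(decode(bitstream) == expected
               for bitstream, expected in test_suite)

# A run-length coder stands in for a real media codec.
def toy_encode(data):
    out, i = [], 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i]:
            j += 1
        out.append((j - i, data[i]))
        i = j
    return out

def toy_decode(bitstream):
    result = []
    for count, value in bitstream:
        result.extend([value] * count)
    return result

suite = [(toy_encode(x), x) for x in (["a", "a", "b"], ["c"] * 4)]
assert check_encoder_conformance(toy_encode, toy_decode, [["a", "b", "b"]])
assert check_decoder_conformance(toy_decode, suite)
```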

Finally it may happen that bugs are discovered in a published standard. This is an event to be managed with great attention because industry may already have released some implementations.


MPEG is definitely a complex machine, because it needs to assess whether an idea is useful in the real world, understand whether there are technologies that can be used to make a standard supporting that idea, get the technologies, integrate them and develop the standard. Often it also has to provide integrated audio-visual solutions where a line-up of standards fits nicely together to provide a system specification. At every meeting MPEG works simultaneously on tens of intertwined activities.

MPEG needs to work like a company developing products, but it is not a company. Fortunately, one can say – but also unfortunately, because it has to operate under strict rules. ISO is a private organisation, but it is international in scope.


Life inside MPEG


In my earlier post 30 years of MPEG, and counting? I brought evidence of what MPEG has done in the last 30 years to create the broadcast, recording, web and mobile digital media world that we know. This post tries to make the picture more complete by looking at the main activities currently under way and the deliveries planned in the next few months and years.

The MPEG work plan at a glance

The figure below shows the main standards developed or under development by MPEG in the 2017-2023 period organised in 3 main sections:

  • Media Coding (e.g. MP3 and AVC)
  • Systems and Tools (e.g. MPEG-2 TS and File Format)
  • Beyond Media (currently Genome Compression and Neural Network Compression).

Navigating the MPEG work plan

  • Video Coding
  • Audio Coding
  • 3D Graphics Coding
  • Font Coding
  • Genome Coding
  • Neural Network Coding
  • Media Description
  • Systems support
  • IPMP
  • Transport
  • Application Formats
  • API
  • Media Systems
  • Conformance
  • Reference Implementation

Video coding

In the Video coding area MPEG is currently handling 4 standards (MPEG-4, -H, -I and -CICP) and several Explorations.

MPEG-I ISO/IEC 23090 Coded representation of immersive media is the container of standards needed for the development of immersive media devices, applications and services.

MPEG is currently working on Part 3 Versatile Video Coding, the new video compression standard after HEVC. VVC is developed jointly with VCEG and is expected to reach FDIS stage in October 2020.


MPEG is also carrying out the following explorations:

  1. An exploration on a new video coding standard that combines coding efficiency (similar to that of HEVC), complexity (suitable for real-time encoding and decoding) and usability (timely availability of licensing terms). In October 2018 a Call for Proposals was issued. Submissions are due in January 2019 and FDIS stage for the new standard is expected to be reached in January 2020.
  2. An exploration on a future standard that defines a data stream structure composed by two streams: a base stream decodable by a hardware decoder, and an enhancement stream suitable for software processing implementation with sustainable power consumption. This activity is supported by more than 30 major international players in the video distribution business. A Call for Proposal has been issued in October 2018. Submissions are due in March 2019 and FDIS stage is expected to be reached in April 2020.
  3. Several explorations on Immersive Video Coding
    • 3DoF+ Visual: a Call for Proposals will be issued in January 2019. Submissions are due in March 2019 and the FDIS is planned for July 2020. The result of this activity is not meant to be a video coding standard but a set of metadata that can be used to provide a more realistic user experience in OMAF v2. Indeed, 3DoF+ Visual will be part of MPEG-I part 7 Immersive Media Metadata. Note that 3 Degrees of Freedom (3DoF) means that a user can only make yaw, pitch and roll movements, while 3DoF+ means that the user can also displace the head to a limited extent.
    • Several longer-term explorations on compression of 6DoF visual (Windowed-6DoF and Omnidirectional 6DoF) and Compression of Dense Representation of Light Fields. No firm timeline for standards in these areas has been set.
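The difference between 3DoF and 3DoF+ can be sketched in a few lines of code. Everything below (the axis convention, the clamp limit, the function names) is invented for illustration and is not taken from any MPEG document:

```python
import math

# 3DoF: the viewer can rotate the head (yaw, pitch, roll) but the head
# position is fixed. 3DoF+: the head may also translate slightly within
# a limited viewing volume. Axis convention and limits are illustrative.

def rotate_yaw(point, yaw):
    """Rotate a 3D point about the vertical (y) axis by `yaw` radians."""
    x, y, z = point
    c, s = math.cos(yaw), math.sin(yaw)
    return (c * x + s * z, y, -s * x + c * z)

def view_position_3dof():
    """In 3DoF the head position never changes."""
    return (0.0, 0.0, 0.0)

def view_position_3dof_plus(offset, limit=0.1):
    """In 3DoF+ each component of the head offset is clamped to a
    small allowed volume (here +/- `limit` on each axis)."""
    return tuple(max(-limit, min(limit, o)) for o in offset)

# Looking 90 degrees to the side moves the forward point accordingly.
p = rotate_yaw((1.0, 0.0, 0.0), math.pi / 2)
assert all(abs(a - b) < 1e-9 for a, b in zip(p, (0.0, 0.0, -1.0)))
# A large requested displacement is clamped to the viewing volume.
assert view_position_3dof_plus((0.5, 0.0, -0.02)) == (0.1, 0.0, -0.02)
```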

Audio coding

In the Audio coding area MPEG is handling 3 standards (MPEG-4, -D, and -I). Of particular relevance is the MPEG-I part 3 Immersive Audio activity. This is built upon MPEG-H 3D Audio – which already supports a 3DoF user experience – and will provide a 6DoF immersive audio VR experience. A Call for Proposals will be issued in March 2019. Submissions are expected in January 2020 and FDIS stage is expected to be reached in April 2021. As for 3DoF+ Visual, this standard will not be about compression but about metadata.

3D Graphics coding

In the 3D Graphics coding area MPEG is handling two parts of MPEG-I.

  • Video-based Point Cloud Compression (V-PCC) for which FDIS stage is planned to be reached in October 2019. It must be noted that in July 2018 an activity was initiated to develop standard technology for integration of a 360 video and V-PCC objects.
  • Geometry-based Point Cloud Compression (G-PCC) for which FDIS stage is planned to be reached in January 2020.

The two PCC standards employ different technologies and target different application areas: entertainment, and automotive/unmanned aerial vehicles.

Font coding

In the Font coding area MPEG is working on MPEG-4 part 22.

MPEG-4 ISO/IEC 14496 Coding of audio-visual objects is a 34-part standard that made possible large scale use of media on the fixed and mobile web.

Amendment 1 to Open Font Format will support complex layouts and new layout features. FDAM stage will be reached in April 2020.

Genome coding

MPEG-G ISO/IEC 23092 Genomic Information Representation is the standard developed in collaboration with TC 276 Biotechnology to compress files containing DNA reads from high speed sequencing machines.

In the Genome coding area MPEG plans to achieve FDIS stage for Part 2 Genomic Information Representation in January 2019. MPEG has started investigating additional genome coding areas that would benefit from standardisation.

Neural network coding

Neural network compression is an exploration motivated by the increasing use of neural networks in many applications that require the deployment of a particular trained network instance potentially to a large number of devices, which may have limited processing power and memory.

In the Neural network coding area MPEG issued a Call for Evidence in July 2018, assessed the responses received in October 2018 and collected evidence justifying the Call for Proposals issued at the same October 2018 meeting. The goal of the Call is to make MPEG aware of technologies to reduce the size of trained neural networks. Responses are due in March 2019. As it is likely that in the future more Calls will be issued for other functionality (e.g., incremental representation), an expected time for FDIS has not been identified yet.
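To give a flavour of what such technologies might look like, the sketch below shows uniform scalar quantization of trained weights, one of the simplest techniques a response could build on. The code is purely illustrative and not taken from any submission or MPEG document; real submissions typically combine pruning, quantization and entropy coding:

```python
# Toy illustration of the size/precision trade-off in compressing a
# trained network: store 8-bit integer codes plus a scale instead of
# 32-bit floats. All names and parameters here are invented.

def quantize(weights, bits=8):
    """Map float weights to integers in [0, 2**bits - 1] plus (lo, step)."""
    lo, hi = min(weights), max(weights)
    levels = (1 << bits) - 1
    step = (hi - lo) / levels or 1.0   # guard against a constant tensor
    q = [round((w - lo) / step) for w in weights]
    return q, lo, step

def dequantize(q, lo, step):
    """Reconstruct approximate float weights from the integer codes."""
    return [lo + v * step for v in q]

weights = [0.013, -0.22, 0.171, 0.04, -0.005]
q, lo, step = quantize(weights, bits=8)
restored = dequantize(q, lo, step)

# 8-bit codes instead of 32-bit floats: roughly a 4x size reduction,
# at the cost of a bounded per-weight error of at most step / 2.
assert max(abs(w - r) for w, r in zip(weights, restored)) <= step / 2 + 1e-12
```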

Media description

Media description is the goal of the MPEG-7 standard which contains technologies for describing media, e.g. for the purpose of searching media.

In the Media description area MPEG has completed Part 15 Compact descriptors for video analysis (CDVA) in October 2018. By exploiting the temporal redundancy of video, CDVA extracts a single compact descriptor that represents a video clip rather than individual frames, which was the goal of Compact Descriptors for Visual search (CDVS).

Work in this area continues to complete reference software and conformance.

System support

In the System support area MPEG is working on MPEG-4, -B and -I. In MPEG-I MPEG is developing

  • Part 6 – Immersive Media Metrics, which specifies the metrics and measurement framework to enhance the immersive media quality and experience
  • Part 7 – Immersive Media Metadata, which specifies common immersive media metadata focusing on immersive video (including 360° video), images, audio, and timed text. 3DoF+ Visual metadata will be one component of this standard.

Both parts are planned to reach FDIS stage in July 2020.

Intellectual Property Management and Protection

In the IPMP area MPEG is developing an amendment to support multiple keys per sample. FDAM stage is planned to be reached in March 2019. Note that IPMP is not about _defining_ security technologies but about _employing_ them for digital media.


Transport

In the Transport area MPEG is working on MPEG-2, -4, -B, -H, -DASH, -I, -G and Explorations.

MPEG-2 ISO/IEC 13818 Generic coding of moving pictures and associated audio information is the standard that enabled digital television.

Part 1 Systems continues to be an extremely lively area of work. After producing Edition 7, MPEG is working on two amendments to carry two different types of content:

  • JPEG XS (a JPEG standard for low latency applications)
  • CMAF (an MPEG Application Format).


Part 12 ISO Base Media File Format is another extremely lively area of work. Worth mentioning are two amendments:

  • Compact Sample-to-Group, new capabilities for tracks, and other improvements – has reached FDAM stage in October 2018
  • Box relative data addressing – is expected to reach FDAM in March 2019.

The 7th Edition of the MP4 file format is awaiting publication.
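
The liveliness of the file format work is easier to appreciate with the box structure in mind: an ISOBMFF/MP4 file is a sequence of boxes, each starting with a 32-bit big-endian size and a 4-character type, where a size of 1 signals a 64-bit extended size and a size of 0 means the box runs to the end of the file. A minimal top-level box scanner (illustrative only, not conformance-grade):

```python
import struct

def scan_boxes(data):
    """Yield (type, size) for each top-level ISOBMFF box in `data`.
    Handles the 64-bit 'largesize' escape (size == 1); a size of 0
    (box extends to end of file) is resolved against len(data)."""
    pos = 0
    while pos + 8 <= len(data):
        size, = struct.unpack_from(">I", data, pos)
        btype = data[pos + 4:pos + 8].decode("ascii")
        if size == 1:                     # 64-bit extended size follows
            size, = struct.unpack_from(">Q", data, pos + 8)
        elif size == 0:                   # box runs to end of file
            size = len(data) - pos
        yield btype, size
        pos += size

# A tiny synthetic file: an 'ftyp' box followed by an empty 'mdat'.
ftyp = struct.pack(">I4s", 16, b"ftyp") + b"isom" + struct.pack(">I", 0)
mdat = struct.pack(">I4s", 8, b"mdat")
boxes = list(scan_boxes(ftyp + mdat))    # [('ftyp', 16), ('mdat', 8)]
```

Every amendment mentioned above – compact sample-to-group, box-relative data addressing – is ultimately a refinement of how such boxes index and address the media data they carry.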

MPEG-B ISO/IEC 23001 MPEG systems technologies is a collection of systems standards that are not specific to a single MPEG media standard.

In MPEG-B MPEG is working on two new standards

  • Part 14 Partial File Format will provide a standard mechanism to store HTTP entities and the partial file in broadcast applications for later cache population. The standard is planned to reach FDIS stage in April 2020.
  • Part 15 Carriage of Web Resources in ISOBMFF will make it possible to enrich audio/video content, as well as audio-only content, with synchronised, animated, interactive web data, including overlays. The standard is planned to reach FDIS stage in January 2019.

MPEG-H ISO/IEC 23008 High efficiency coding and media delivery in heterogeneous environments is a 15-part standard for audio-visual compression and heterogeneous delivery.

Part 10 MPEG Media Transport FEC Codes is being enhanced by the Window-based FEC code amendment. FDAM is expected to be reached in January 2020.

MPEG-DASH ISO/IEC 23009 Dynamic adaptive streaming over HTTP (DASH) is the standard for media delivery on unpredictable-bitrate delivery channels.

In MPEG-DASH MPEG is working on

  • Part 1 Media presentation description and segment formats is being enhanced by the Device information and other extensions amendment. FDAM is planned to be reached in July 2019.
  • Part 7 Delivery of CMAF content with DASH contains guidelines recommending some of the most popular delivery schemes for CMAF content using DASH. TR stage is planned to be reached in March 2019.
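
The core idea behind DASH is client-driven bitrate adaptation: segment by segment, the client picks the representation whose bitrate fits its measured throughput. The sketch below is a deliberately naive toy rule, not part of the standard (which specifies formats, not client behaviour), and the bitrate ladder is invented:

```python
def select_representation(bitrates, throughput, safety=0.8):
    """Pick the highest representation whose bitrate stays within a
    safety fraction of the measured throughput (a toy rate-adaptation
    rule; real DASH clients also consider buffer level and more)."""
    budget = throughput * safety
    candidates = [b for b in sorted(bitrates) if b <= budget]
    return candidates[-1] if candidates else min(bitrates)

# Representations advertised in a hypothetical MPD, in bit/s.
ladder = [500_000, 1_500_000, 3_000_000, 6_000_000]
choice = select_representation(ladder, throughput=4_000_000)  # -> 3_000_000
```

This is what makes DASH suited to "unpredictable-bitrate delivery channels": the adaptation decision is re-taken for every segment as conditions change.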


MPEG-I ISO/IEC 23090 Coded representation of immersive media is the suite of standards MPEG is developing for immersive media.

Part 2 Omnidirectional Media Format (OMAF), released in October 2017, is the first standard format for delivery of omnidirectional content. With the 2nd Edition of OMAF (Interactivity support for OMAF), planned to reach FDIS in January 2020, MPEG is substantially extending its functionalities. Future releases may add 3DoF+ functionalities.


MPEG-G ISO/IEC 23092 Genomic information representation is the suite of standards for compressed storage and transport of genomic data.

MPEG plans to achieve FDIS stage for Part 1 Transport and Storage of Genomic Information in January 2019.

Application Formats

MPEG-A ISO/IEC 23000 Multimedia Application Formats is a suite of standards combining MPEG and other standards (used when no single MPEG standard suits the purpose).

MPEG is working on two new standards

  • Part 21 Visual Identity Management Application Format will provide a framework for managing privacy of users appearing on pictures or videos shared among users. FDIS is expected to be reached in January 2019.
  • Part 22 Multi-Image Application Format (MIAF) will enable precise interoperability points when creating, reading, parsing and decoding images embedded in HEIF (MPEG-H part 12).

Application Programming Interfaces

The Application Programming Interfaces area comprises standards developed to enable effective use of some MPEG standards.

In the API area MPEG is working on MPEG-I, MPEG-G and MPEG-IoMT.


In MPEG-I, MPEG is working on Part 8 Network-based Media Processing (NBMP), a framework that will allow users to describe media processing operations to be performed by the network. The standard is expected to reach FDIS stage in January 2020.


In MPEG-G, MPEG is working on Part 3 Genomic Information Metadata and Application Programming Interfaces (APIs). The standard is expected to reach FDIS stage in March 2019.

MPEG-IoMT ISO/IEC 23093 Internet of Media Things is a suite of standards supporting the notion of Media Thing (MThing), i.e. a thing able to sense and/or act on physical or virtual objects. MThings can be connected to form complex distributed systems, called Internet of Media Things (IoMT), in which they interact with one another and with humans.

In MPEG-IoMT MPEG is working on

  • Part 2 IoMT Discovery and Communication API
  • Part 3 IoMT Media Data Formats and API.

Both are expected to reach FDIS stage in March 2019.

Media Systems

Media Systems includes standards or Technical Reports targeting architectures and frameworks.

In Media Systems MPEG is working on Part 1 IoMT Architecture, expected to reach FDIS stage in March 2019. The architecture used in this standard is compatible with the IoT architecture developed by JTC 1/SC 41.

Reference implementation

MPEG is working on the development of 10 standards for reference software of MPEG-4, -7, -B, -V, -H, -DASH, -G and -IoMT.


Conformance

MPEG is working on the development of 8 standards for conformance of MPEG-4, -7, -B, -V, -H, -DASH, -G and -IoMT.


Even this superficial overview should make evident the complexity of interactions in the MPEG ecosystem, which has been operating for 30 years (note that the above represents only a part of what happened at the last meeting in Macau).

In “30 years of MPEG, and counting?” I wrote: “Another thirty years await MPEG, if some mindless industry elements will not get in the way”.

It might well have been a prophecy.

Posts in this thread (in bold this post)

Data Compression Technologies – A FAQ

This post attempts to answer some of the most frequent questions received on the proposed ISO Data Compression Technologies Technical Committee (DCT TC). See here and here. If you have a question, drop an email to

Q: What is the difference between MPEG and DCT?

A: Organisation-wise, MPEG is a Working Group (WG), reporting to a Subcommittee (SC), reporting to a Technical Committee (TC), reporting to the Technical Management Board (TMB). Another important difference is that a WG makes decisions by consensus, i.e. with a high standard of agreement – not necessarily unanimity – while a TC makes decisions by voting.

Content-wise, DCT is MPEG with a much wider mandate than media compression, involving many more industries than just media.

Q: What is special about DCT that it needs a Technical Committee?

A: Data compression is already a large industry even if one looks at the media industry alone. By serving more client industries, it will become even larger and more differentiated. Therefore, the data compression industry will require a body with the right attributes of governance and representation. As an example, data compression standards are “abstract”, as they are collections of algorithms, and intellectual-property intensive (today most patent declarations submitted to ISO come from MPEG). The authority to regulate these matters resides in ISO, but the data compression industry will need an authoritative body advocating its needs, because these matters will only intensify in the future.

Q: Why do you think that a Technical Committee will work better than a Working Group?

A: With its 175 standards produced and its 1400 members, over 500 of whom attend quarterly meetings, MPEG – a working group – has demonstrated that it can deliver revolutionising standards affecting a global industry. DCT intends to apply the successful MPEG business model to many other non-homogeneous industries. Working by consensus is technically healthy and intellectually rewarding, and consensus will continue to be the practice of DCT, but policy decisions affecting the global data compression industry often require a vote. Managing a growing membership (on average one new member joins the group every day) requires more than today’s ad hoc organisation: the size of MPEG is 20-30 times the normal size of an ISO WG.

Q: Which are the industries that need data compression standards?

A: According to Forbes, 163 Zettabytes of data, i.e. 163 billion Terabytes, will be produced worldwide in 2025. Compression is required by the growing number of industries that are digitising their processes and need to store, transmit and process huge amounts of data.
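
The unit arithmetic behind that figure is easy to verify (decimal SI prefixes assumed):

```python
ZETTABYTE = 10 ** 21   # bytes, decimal SI prefix
TERABYTE = 10 ** 12    # bytes

forecast_2025 = 163 * ZETTABYTE
in_terabytes = forecast_2025 // TERABYTE   # 163_000_000_000, i.e. 163 billion TB
```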

The media industry will continue its demand for more compression and for more media. Other candidates include the industries that capture data from the real world with RADAR-like technologies (earth-bound and unmanned aerial vehicles, civil engineering, media etc.), genomics, healthcare (all sorts of data generated by machines sensing humans), automotive (in-vehicle and vehicle-to-vehicle communication, environment sensing etc.), Industry 4.0 (data generated, exchanged and processed by machines), geographic information and more.

Q: What are the concrete benefits that DCT is expected to bring?

A: MPEG standards have provided the technical means for the unrelenting growth of the media industry over the last 30 years. This has happened because MPEG standards were 1) timely (i.e. design starts in anticipation of a need), 2) technically excellent (i.e. MPEG standards are technically the best at a given time), 3) lively (i.e. new features improving the standard are continuously added) and 4) innovative (i.e. a new MPEG standard is available when technology progress enables a significant delta in performance). DCT is expected to do the same for the other industries it targets.

Q: Why should DCT solutions be better than solutions that are individually developed by industries?

A: “Better” is a multi-faceted word and should be assessed from different angles: 1) technical: compression is an extremely sophisticated domain and MPEG has proved that relying on the best experts produces the best results; 2) ecosystem: implementors are motivated to make products if the market is large, and shared technologies lower the threshold implementors face to enter; on the contrary, a small isolated market may not provide enough motivation to implementors; 3) maintenance: an industry may well succeed in assembling a group of experts to develop a standard, but a standard is a living body that needs constant attention, which can no longer be provided once the experts have left the committee; 4) interoperability: industry-specific data compression methods are dangerous when digital convergence drives data exchange across apparently unrelated industry segments, because they create non-communicating islands.


Copyright, troglodytes and digital

We can only guess the feelings of the troglodyte who drew animals on the walls of the Altamira cave, but we have a rough idea of the feelings of the Latin poet Martial towards those who passed off his works as their own. We know even more precisely the feelings of the 16th-century Italian poet Ariosto, who proposed to Duke Alfonso d’Este of Ferrara to share the proceeds of the fines imposed on those who copied the poet’s works.

In now remote analogue times, it was “easy” for the law to prosecute those who made unauthorised copies of a book. The arrival of the photocopier, however, forced the law to apply a patch – the so-called photocopy tax. The arrival of the audio cassette made it even easier to make copies of music tracks, and so another tax was levied, this time on blank cassettes. The arrival of MP3 made copying audio so easy that the world we knew (I mean, those of us who knew it) ended there.

The arrival of digital technology was truly a missed opportunity. Instead of solving the problem at the root by exploiting some digital technologies, already widely available at the time, patches were added to patches, trying to hold together a situation that had been cracking on all sides for decades.

The last patch in line is the new proposed European copyright directive.

I need to make clear where I stand. You may well dress up the topic with pompous words like “freedom of the internet”, but it is clear to me that those who make a profit by exposing thousands of snippets of newspaper articles would do well to get in touch first with newspaper publishers. I do not know about Martial, but I am sure Ariosto would have been fully with me on this. So I am not against a new copyright directive.

I am concerned by the logical foundation of the draft European directive and, in particular, by Article 13. This requires that a filter be placed to check that any digital object uploaded online in the European Union does not violate the copyright of a third party. Instead of saying that uploading somebody else’s content without authorisation should not be done – something I have just said I could not agree more with – Article 13 prescribes a particular solution, i.e. that a gateway be placed at a particular point in the value chain.

I happen to have different ideas and I will illustrate them with a personal anecdote. When I was a child and I was free from school, I helped my mother in a market close to our town where she ran a stall. Obviously our main task was to sell, but it was no less important to make sure that the goods on display did not “inadvertently” end up in someone’s pockets. We applied a principle that Martial would have called caveat venditor.

Please note that our attitude of prudence is not the custom of my family alone or of the times of my youth. Over the years I have visited markets in different parts of the world, as a buyer, and I noticed that sellers, regardless of local culture, all behave in the same cautious way.

Back to Article 13, what does this mean in practice? That if I manage a website where my clients upload content, I have the obligation to verify that such content does not violate someone else’s copyright.

But why should I do it? If my mother and I kept an eye on our goods, and millions of sellers in markets at all latitudes and longitudes pay attention to theirs, why should copyright holders be exempt from the task of controlling their – digital, but who cares – goods?

My mother and I cavimus (bewared), millions of people cavent (beware), the copyright holders caveant (ought to beware).

Some might reply that copyright holders are not able to do that in a global market. That was certainly true before the internet. In the digital age, however, there are technologies that allow copyright holders to control their content without imposing gratuitous burdens on people who are just doing their jobs.

It is very clear to me that technology alone is hardly an answer and that it must be integrated with the law. It is also clear to me that such an integration may not be easy, but it is not a good reason to continue to handle technology as if we were the troglodyte of the Altamira cave.



It worked twice and will work again

On the 30th anniversary of the first MPEG meeting I wrote a paper that extolled the virtues of MPEG in terms of productivity – in absolute and relative terms – and of actual standard adoption. In this paper I would like to expose the vision that has driven MPEG since its early days, which explains and promises to continue its success.

Today it is hard to believe the state of television 30 years ago. Each country retained the freedom to define its own baseband and transmission standards for terrestrial, cable and satellite distribution. The baseband and distribution standards of package media (video cassettes, laser discs etc.) were “international” but controlled by a handful of companies (remember Betamax and VHS).

With digital technologies coming of age, everybody was getting ready for more of the same. The only expected difference was the presence of a new player – the telecom industry – who saw digital as the way to get into the video distribution business.

Figure 1 depicts the situation envisaged in each country, region or industry: continuing the analogue state of things, each industry would, as a matter of principle, have a different digital baseband.

Figure 1 – Digital television seen with analogue eyes

The MPEG magician played its magic on the global media industry, saying: look, we give you a single standard for the digital baseband that works for telecom, terrestrial, cable, satellite and package media distribution. There would be a lot to say – and to learn – about how a group of compression experts convinced a global industry worth hundreds of billions of USD, but let’s simply say that it worked.

Figure 2 shows the effect of the MPEG magic and how different the digital television industry turned out to be from the one depicted in Figure 1: all industries shared the same media compression layer.

Figure 2 – The digital television distribution

Since 1994, when MPEG-2 was adopted, MPEG has managed the standards of the media compression layer for the 5 industries. An important side effect was the birth of a new “media compression layer industry” – global this time – partly from pieces of the old industry and partly from entirely new pieces.

This was only the beginning of the story because, in the meantime, the internet had matured into a (then rudimentary) broadband distribution infrastructure for fixed and mobile access. MPEG took notice and developed the standards that would serve these new industries while still serving the old ones. Figure 3 illustrates the new configuration of the industry, which is largely the one that exists today.

Figure 3 – The digital media distribution

So the magic worked again. Looking back some 20 years, the industries touched by the MPEG magic have no reason to regret it, as they have seen constant growth supported by the best media compression standards:

  • Digital Media revenues amount to 126.4 B$ in 2018, steadily increasing over the last few years
  • Digital TV and video industry, including e.g. Netflix and Amazon, are expected to be worth 119.2 B$ in 2022, up from 64 B$ in 2017
  • Digital ad spending overtook TV ad spending in 2017 with a record spending of 209 B$ worldwide.

In another paper I announced that the Italian ISO member body has requested ISO to establish a Data Compression Technologies (DCT) Technical Committee (TC). That proposal extends the model described above and is represented in Figure 4 (the new industries mentioned are the likely first targets of the DCT TC).

Figure 4 – The data compression industry

The DCT TC will provide data compression standards for all industries that need data compression to do their job better. The field of endeavour called “data compression” generates standard algorithms, expressed in abstract languages like mathematical formulae or code snippets, for implementation in software or silicon across a variety of application domains.

I look forward to the new MPEG magic played by the Data Compression Technologies Technical Committee to provide new records in sustained growth to new industries.


Compression standards for the data industries


In my post Compression – the technology for the digital age, I called data compression “the enabler of the evolving media-rich communication society that we value”. Indeed, data compression has freed the potential of digital technologies in facsimile, speech, photography, music, television, video on the web, on mobile, and more.

MPEG has been the main contributor to the stellar performance of the digital media industries: content, services and devices – hardware and software. Something new is brewing in MPEG: it is applying its compression toolkit to non-media data such as point clouds and DNA reads from high-speed sequencing machines, and plans to do the same for neural networks.

Recently UNI – the Italian ISO member body – has submitted to ISO the proposal to create a Technical Committee on Data Compression Technologies (DCT, in the following) with the mandate to develop standards for data compression formats and related technologies to enable compact storage as well as inter-operable and efficient data interchange among users, applications and systems. MPEG activities, standards and brand should be transferred to DCT.

With its track record, MPEG has proved that it is possible to provide standard data compression technologies that are the best in their class at a given time to serve the needs of the digital media industries. The DCT proposal is to extend the successful MPEG “horizontal standards” model to the “data industries” at large, of course retaining the media industries.

That giving all industries the means to enjoy the benefits of more data, accessed and used by systematically applying standard data compression to all data types, is not an option but a necessity is proved by Forbes: estimates indicate that by 2025 the world will produce 163 Zettabytes of data. What will we do with those data, when today only 1% of the data created is actually processed?

Why Data Compression is important to all

Handling data is important for all industries: in some cases it is their raison d’être, in other cases it is crucial to achieving their goals, and in still others data is the oil lubricating the gears.

Data appear in manifold scenarios: in some cases a few sources create huge amounts of continuous data, in other cases many sources create large amounts of data and in still others each of a very large number of sources creates small discontinuous chunks of data.

Common to all scenarios is the need to store, process and transmit data. For some industries, such as telecommunication and broadcasting, early adopters of digitisation, the need was apparent from the very beginning. For others the need is gradually becoming apparent now.
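
A concrete sense of what standard lossless compression buys can be had with Python’s stdlib zlib (DEFLATE, itself a standardised format): redundant data shrinks dramatically, while already-random data does not. The “telemetry” payload below is invented for illustration.

```python
import os
import zlib

# Highly repetitive "telemetry" compresses dramatically...
redundant = b"sensor_reading=20.1C;" * 1000
small = zlib.compress(redundant, level=9)

# ...while already-random data does not (zlib falls back to
# stored blocks, adding a few bytes of overhead instead).
random_ish = os.urandom(len(redundant))
large = zlib.compress(random_ish, level=9)

ratio = len(redundant) / len(small)   # typically far greater than 1
```

The gap between the two cases is the whole point of the sections that follow: industries whose data carries exploitable structure (video, genomes, sensor streams, neural networks) stand to gain the most from standard compression.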

Let’s see in some representative examples why industries need data compression.

Telecommunication. Because of the nature of their business, telecommunication operators (telcos) have been the first to be affected by the need to reduce the size of digital data to provide better existing services and/or attractive new services. Today telcos are eager to make their networks available to new sources of data.

Broadcasting. Because of the constraints posed by the finite wireless spectrum on their ability to expand the quality and range of their services, broadcasters have always welcomed more data compression. They have moved from Standard Definition to High Definition, then to Ultra High Definition and beyond (“8K”), but also to Virtual Reality. For each quantum step in the quality of service delivered, they have introduced new compression. More engaging future user experiences will require the ability to transmit or receive ever more data, and ever more types of data.

Public security. MPEG standards are already universally used to capture audio and video information for security or monitoring purposes. However, technology progress enables users to embed more capabilities in (audio and video) sensors, e.g. face recognition, counting  of people and vehicles etc., and the sharing of that information in a network of more and more intelligent sensors to drive actuators. New standard data compression technologies are needed to support the evolution of this domain.

Big Data. In terms of data volume, audio and video, e.g. those collected by security devices or vehicles, are probably the largest component of Big Data, as shown by the Cisco study forecasting that by 2021 video on the internet will account for more than 80% of total traffic. Moving such large amounts of information from source to the processing cloud in an economic fashion requires data compression and their processing requires standards that allow the data to be processed independently of the information source.

Artificial intelligence uses different types of neural networks, some of which are “large”, i.e. occupy many Gigabytes and entail massive computational complexity. To practically move intelligence across networks, as required by many consumer and professional use scenarios, standard data compression technologies are needed. Compression of neural networks is not only a matter of bandwidth and storage, but also of power consumption, timeliness and usability of intelligence.
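
One of the simplest techniques in the neural-network compression toolbox is linear quantization of 32-bit float weights to 8-bit integers, a 4x size reduction before any entropy coding. The sketch below illustrates the kind of tool such a standard would draw on; it is not the MPEG standard (which at the time of writing was still at the proposal stage), and the tiny weight vector is invented:

```python
def quantize(weights, bits=8):
    """Linearly map float weights onto signed integers of `bits` width.
    Returns the integer codes plus the scale needed to dequantize."""
    qmax = 2 ** (bits - 1) - 1                   # 127 for 8 bits
    scale = max(abs(w) for w in weights) / qmax
    return [round(w / scale) for w in weights], scale

def dequantize(codes, scale):
    """Reconstruct approximate float weights from integer codes."""
    return [c * scale for c in codes]

weights = [0.5, -1.27, 0.003, 1.27]
codes, scale = quantize(weights)
restored = dequantize(codes, scale)
max_err = max(abs(w - r) for w, r in zip(weights, restored))
```

The trade-off is visible even in this toy: the codes fit in one byte each, at the price of a reconstruction error bounded by half the quantization step.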

Healthcare. The healthcare world is already using genomics but many areas will greatly benefit by a 100-fold reduction of the size and the time to access the data of interest. In particular, compression will accelerate the coming-of-age of personalised medicine. As healthcare is often a public policy concern, standard data compression technologies are required.

Agriculture and Food. Almost anything related to agriculture and food has a genomic source. The ability to easily process genomic data thanks to compression opens enormous possibilities to have better agriculture and better food. To make sure that compressed data can be exchanged between users, standardised data compression technologies are required.

Automotive. Vehicles are becoming more and more intelligent devices that drive and control their movement by communicating with other vehicles and fixed entities, sensing the environment, and storing the data for future use (e.g., for assessing responsibilities in a car crash). Data compression technologies are required and, especially when security is involved, the technologies must be standard.

Industry 4.0. The 4th industrial revolution is characterised by “Connection between physical and digital systems, complex analysis through Big Data and real-time adaptation”. Collaborative robots and 3D printing, the latter also for consumer applications, are main components of Industry 4.0. Again, data compression technologies are vital to make Industry 4.0 fly and, to support multi-stakeholder scenarios, technologies should be standard.

Business documents. Business documents are becoming more diverse and include different types of media. Storage and transmission of business documents are a concern when bulky data are part of them. Standard data compression technologies are the principal way to reduce the size of business documents, now and even more so in the future.

Geographic information. Personal devices consume more and more geographic information and, to provide more engaging user experiences, the information itself is becoming “richer”, which typically means “heavier”. To manage the amount of data, data compression technologies must be applied. Global deployment to consumers requires that the technologies be standard.

Blockchains and distributed ledgers enable a host of new applications. Distributed storage of information implies that more information is distributed and stored across the network, hence the need for data compression technologies. These new global distributed scenarios require that the technologies be standard.

Which Data Compression standards?

Data compression is needed if we want to be able to access all the information produced or available anywhere in the world. However, as the amount of data and the number of people accessing it grows, new generations of compression standards are needed. In the case of video, MPEG has already produced five generations of compression standards and one more is under development. In the case of audio, five generations of compression standards have already been produced, with the fifth incorporating extensive use of metadata to support personalisation of the user experience.

MPEG compression technologies have had, and continue to have, extraordinarily positive effects on a range of industries with billions of hardware devices and software applications that use standards for compressing and streaming audio, video, 3D graphics and associated metadata. The universally recognised MP3 and MP4 acronyms demonstrate the impact that data compression technologies have had on consumer perception of digital devices and services around the world.

Non-inter-operable silos, however, are the biggest danger in this age of fast industry convergence, a danger that only international standards based on common data compression technologies can avoid. Point Clouds and Genomics show that data compression technologies can indeed be re-used for different data types from different industries. Managing different industry requirements is an art and MPEG has developed it over 30 years dealing with such industries as telecom, broadcasting, consumer electronics, IT, media content and service providers and, more recently, bioinformatics. DCT can safely take the challenge and do the same for more industries.

How to develop Data Compression standards?

As MPEG has done for the industries it already serves, DCT should only develop “generic” international standards for compression and coded representation of data and related metadata suitable for a variety of application domains so that the client communities can use them as components for integration in their systems.

The process adopted by MPEG should also be adopted by DCT, namely:

  • Identification of data compression requirements (jointly with the target industry)
  • Development of the data compression standard (in consultation with the target industry)
  • Verification that the standard satisfies the agreed requirements (jointly with the target industry)
  • Development of test suites and tools (in consultation with the target industry)
  • Maintenance of the standards (upon request of the target industry).

Data Compression is a very specialised field that many technical and business communities in specific domains are ill-equipped to master satisfactorily Even if an industry succeeds in attracting the necessary expertise, the following will likely happen

  1. The result is less than optimal compared to what could have been obtained from the best experts;
  2. The format developed is incompatible with other similar formats with unexpected inter-operability costs in an era of convergence;
  3. The implementation cost of the format is too high because an industry may be unable to offer sufficient returns to developers;
  4. Test suites and tools cannot be developed because a systematic approach cannot be improvised;
  5. The experts who have developed the standard are no longer around to ensure its maintenance.

Building the DCT work plan

The DCT work plan will be decided by the National Bodies joining it. However, the following is a reasonable estimate of what that workplan will be.

Data compression for Immersive Media. This is a major current MPEG project that comprises systems support for immersive media; video compression; metadata for immersive audio experiences; immersive media metrics; immersive media metadata and network-based media processing (NBMP). A standard for systems support (OMAF) has already been produced, a standard for NBMP is planned for 2019, a video standard in 2020 and an audio standard in 2021. After completing the existing work plan, DCT should address the promising light field and sound field compression domains to enable truly immersive user experiences.

Data compression for Point Clouds. This is a new, but already quite advanced area of work for MPEG. It makes use of established MPEG video and 3D graphics technologies to provide solutions for entertainment and other domains such as automotive. The first standard will be approved in 2019, but DCT will also work on new generations of point cloud compression standards for delivery in the early 2020s.

Data compression for Health Genomics. This is the first entirely non-media field addressed by MPEG. In October 2018 the first two parts – Transport and Compression – will be completed, and the other 3 parts – API, Software and Conformance – will be released in 2019. The work is done in collaboration with ISO TC 276 Biotechnology. Studies for a new generation of compression formats will start in 2019 and DCT will need to drive those studies to completion, along with other data types generated by the “health” domain for which data compression standards can be envisaged.

Data compression for IoT. MPEG is already developing standards for the specific “media” instance of IoT called “Internet of Media Things” (IoMT). This work partly relies on the MPEG standard MPEG-V – Sensors and Actuators Data Coding, which defines a data compression layer that can support different types of data from different types of “things”. The first generation of standards will be released in 2019. DCT will need to liaise with the relevant communities to drive the development of new generations of IoT compression standards.

Data compression for Neural Networks. Several MPEG standards employ, or will soon employ, neural network technologies to implement certain functionalities. A “Call for Evidence” was issued in July 2018 to gather evidence of the state of compression technologies for neural networks, after which a “Call for Proposals” will be issued to acquire the necessary technologies and develop a standard. The end of 2021 is the estimated time of the first neural network compression standard. However, DCT will need to investigate which other compression standards are needed in this extremely dynamic field.

Data compression for Big Data. MPEG has already adapted the ISO Big Data reference model for its “Big Media Data” needs. Specific work has not begun yet and DCT will need to get the involvement of relevant communities, not just in the media domain.

Data compression for health devices. MPEG has considered the need for compression of data generated by mobile health sensors in wearable devices and smartphones, to cope with their limited storage, computing, network connectivity and battery. DCT will need to get the involvement of relevant communities and develop data compression standards for health devices that promote their effective use.

Data compression for Automotive. One of the point cloud compression use cases – efficient storage of the environment captured by sensors on a vehicle – is already supported by the Point Cloud Compression standard under development. There are, however, many more types of data that are generated, stored and transmitted inside and outside of a vehicle for which data compression has positive effects. DCT can offer its expertise to the automotive domain to achieve new levels of efficiency, safety and comfort in vehicles.

The list above includes standards MPEG is already deeply engaged in or is already working on. However, the range of industries that can benefit from data compression standards is much broader than those mentioned above (see e.g. Why Data Compression is important to all), and the main role of DCT will be to actively investigate the data compression needs of industries, get in contact with them and jointly explore new opportunities for standards development.

Is DCT justified?

The purpose of DCT is to make data compression standards – the key enabler of devices, services and applications generating digital data – accessible to industries and communities that do not have the extremely specialised expertise needed to develop and maintain such standards on their own.

The following collects the key justifications for creating DCT:

  1. Data compression is an enabling technology for any digital data. Data compression has been a business enabler for media production and distribution, telecommunication, and Information and Communication Technologies (ICT) in general, by reducing the cost of storing, processing and transmitting digital data. Therefore, data compression will also facilitate enhanced use of digital technologies in other industries that are undergoing – or completing – the transition to digital. As happened for media, by lowering the access threshold to business, in particular for SMEs, data compression will drive industries to create new business models that will change the way companies generate, store, process, exchange and distribute data.
  2. Data compression standards trigger virtuous circles. By reducing the amount of data required for transmission, data compression will enable more industries to become digital. Being digital will generate more data and, in turn, further increase the need for data compression standards. Because compressed digital data are “liquid” and easily cross industry borders, “horizontal”, i.e. “generic”, data compression standards are required.
  3. Data compression standards remove closed ecosystems bottlenecks. In closed environments, industry-specific data compression methods are possible. However, digital convergence is driving an increase in data exchange across industry segments. Therefore industry-specific standards will result in unacceptable bottlenecks caused by a lack of interoperability. Reliable, high-performance and fully-maintained data compression standards will help industries avoid the pitfalls of closed ecosystems that limit long-term growth potential.
  4. Sophisticated technology solutions for proven industry needs. Data compression is a highly sophisticated technology field with 50+ years of history. Creating efficient data compression standards requires a body of specialists that a single industry can ill afford to establish and, more importantly, maintain. DCT will ensure that the needs for specific data compression standards can always be satisfied by a body of experts who identify requirements with the target industries, develop standards, test for satisfactory support of requirements, produce testing tools and suites, and maintain the standards over the years.
  5. Data compression standards to keep the momentum growing. The industries that have most intensely digitised their products and services prove that their growth is due to their adoption of data compression standards. DCT will offer other industries and communities the means to achieve the same goal with the best standards: compatible with other formats to avoid interoperability costs in an age of convergence, with reduced implementation costs because suppliers can serve a wide global market, and with the necessary conformance testing and maintenance support.
  6. Data compression standards with cross-domain expertise. While the nature of “data” differs depending on the source of data, the MPEG experience has shown that compression expertise transfers well across domains. A good example is MPEG’s Genome Compression standard (ISO/IEC 23092), where MPEG compression experts work with domain experts, combining their respective expertise to produce a standard that is expected to be widely used by the genomic industry. This is the model that will ensure sustainability of a body of data compression experts while meeting the requirements of different industries.
  7. Proven track record, not a leap in the dark. MPEG has 1400 accredited experts, has produced 175 digital media-related standards used daily by billions of people and collaborates with other communities (currently genomics, point clouds and artificial intelligence) to develop non-audiovisual compression standards. Thirty years of successful projects prove that the MPEG-inspired method proposed for DCT works. DCT will have adequate governance and structure to handle relationships with many disparate client industries with specific needs and to develop data compression standards for each of them. With an expanding industry support, a track record, a solid organisation and governance, DCT will have the means to accomplish the mission of serving a broad range of industries and communities with its data compression standards.
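The basic cost-reduction argument in point 1 can be made concrete with a minimal sketch using Python's standard `zlib` (DEFLATE), a generic lossless codec far simpler than the domain-specific, often lossy, tools MPEG standards employ; the sample data below is made up.

```python
import zlib

# Minimal sketch of the cost-reduction argument: lossless compression
# of redundant data with the DEFLATE codec from Python's stdlib.
# Highly repetitive data (logs, sensor streams) compresses dramatically.
log_lines = b"2018-10-12 sensor=42 temp=21.5 status=OK\n" * 10_000
compressed = zlib.compress(log_lines, level=9)

ratio = len(log_lines) / len(compressed)
assert zlib.decompress(compressed) == log_lines  # lossless round trip
print(f"{len(log_lines)} -> {len(compressed)} bytes, ratio ~{ratio:.0f}:1")
```

The same bandwidth or storage budget therefore carries many times more data, which is exactly the lever that made digital media distribution economically viable.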


According to the ISO directives, these are the steps required to establish a new Technical Committee:

  1. An ISO member body submits a proposal (done by Italy)
  2. The ISO Central Secretariat releases a ballot (end of August)
  3. All ISO member bodies vote on the proposal (during 12 weeks)
  4. If ISO member bodies accept the proposal the matter is brought to the Technical Management Board (TMB)
  5. The TMB votes on the proposal (26 January 2019)

An (optimistic?) estimate for the process to end is spring 2019.

Send your comments to

Posts in this thread (in bold this post)

Erasmus and migration

The apex of the Renaissance came around the turn of the 15th and 16th centuries. Learned men communicated freely, with the feeling of belonging to a whole that was shared by their minds and was, by definition, borderless.

No other man better symbolises the community of minds that hovered over the geographical expression called Europe than Erasmus of Rotterdam.

Then came Martin Luther and decades of wars of religion. Other wars sought to establish ever stronger national identities. The common language itself – Latin, still learnt, praised and practised until recently – was gradually replaced by national languages.

A century and a half later the other side of the Atlantic saw a grand example of nation building: the United States of America. The borders of the new entity were fuzzy at best but, in case it was not clear to the ex-colonists, the occupation of Washington during the American-British War of 1812 reminded them that they had better have a Commander-in-Chief to deal with foreign powers. I am not sure I like the idea of a single person being able to decide what to do with those who set foot in the USA “illegally”, but there is no doubt that all the facets of that power have played a major role in making the USA the power that it is today.

Another century and a half later the extreme eastern end of Europe saw another grand example of nation building: the Union of the Soviet Socialist Republics. Over the centuries the czars of Russia had tried to bring the higher classes of their empire closer to the more and more fractured community of minds that Europe had become. The czarist empire knew very well what borders were and indeed over the centuries Russia had become a huge multi-ethnic and multi-continental empire. Given the conditions of the moment, the czars’ successors took a minimalist approach to their country’s borders only to revert to expansionism when favourable (so to speak) conditions returned.

Fifty years later Europe saw another – so far – grand example of nation building: the European Union. Driven by a handful of visionaries who had learnt from 15 centuries of wars, and particularly from the world wars, they put in place a process that, starting from economic integration, aspired to achieve higher goals.

Clearly Europe has been built taking the Europe of Erasmus as a model. For decades Europe was a notion whose citizens belonged to countries with very strong roots in their territories, but shared an Erasmus-like common ideal that would eventually cover the entire geographical Europe.

This noble plan worked for a while. For decades students in Europe felt and behaved like Erasmus in the 15th century, thanks to a program that, indeed, bears his name. Given time, these young people would grow up and become European citizens, all feeling like members of a community, like the learned men of five centuries ago.

Europe could have become the first example of a nation that, unlike all grand nations that had a border, only has intellectual borders. It is not going to happen because this noble plan is being crushed by a handful of migrants – in a population of half a billion people.

It would have been great to determine that you are European if you belong to the European community of minds, but now we must be able to determine that by some physical means, i.e. that “inside” you are European and “outside” you are a foreigner. Alas, we need an old-fashioned physical “border” to save the ideal of a borderless continent-wide community.

Europeans should be able, as they do, to move freely inside the physical space called Europe, where some foreigners will always find a way to get in. If the foreigners are admitted into the physical space, we should strive to make them part of the continent-wide community.

That is a long-term endeavour that starts from the moment foreigners enter the physical space called the European Union. They should be taken in charge by the European Union, not by the national states.

Caveat venditor

When I was a kid and free from school, I used to help my mother at a market place close to our town, where she ran a stall.

Our primary task was to sell wares (of course). The second most important task was to make sure that the wares on display did not “inadvertently” end up in the pockets of some onlookers. We, the sellers, applied the caveat venditor (let the seller beware) principle and bewared.

I have since seen that this attitude is not unique to my family or to those times. Over the years I have visited market places in different parts of the world and I always saw sellers, no matter the local culture, behave in a similarly cautious way.

Now I have a question. Article 13 of the current draft (2018/06/20) of the new European Copyright Directive aimed at “adapting EU copyright rules to the digital environment” requires an “upload filter” whose function is to check that everything uploaded online in the EU does not infringe somebody’s copyright.

What does this mean? If you run a website where your customers upload content, you have to check that your customers’ content does not infringe somebody else’s copyright.

Why on earth should one do this? If my mother and I watched over our wares, and millions of people in all latitudes and longitudes watch over theirs, why should copyright holders be exempted from watching over their (digital) wares?

My mother and I cavimus, millions of people cavent, copyright holders caveant.

There are plenty of inexpensive technologies that allow copyright holders to watch over their content without putting gratuitous burdens on the shoulders of people who are just doing their own work.

30 years of MPEG, and counting?


Thirty years ago this day, in Ottawa, ON, some 29 experts from 6 countries attended the 1st meeting of the Moving Picture Experts Group, to become universally known as MPEG. Twenty-five days ago, in San Diego, CA, 20 times the MPEG experts of the 1st meeting attended the 122nd MPEG meeting.

These 30 years have been an incredible ride.

MPEG’s mission is to produce digital media standards, and MPEG has done so throughout, without exception. Here are some facts:

  • MPEG has been engaged in 21 work items (ISO language for “standardisation areas”);
  • In one case the work item produced just one standard while, at the other extreme, MPEG-4 counts 34 standards;
  • MPEG has produced a total of 174 standards or an average of ~6 standards/year and is working on a few tens more;
  • Some MPEG standards contain a few tens of pages, some others several hundred and, in a few cases, over 1000 pages;
  • MPEG has produced several hundred standards amendments (ISO language for “extensions”);
  • Some standards have been published only once, some others a few times and the Advanced Video Coding standard (AVC) 8 times (and a 9th edition is in preparation).

These numbers may look impressive, but they have to be assessed in context. The Joint ISO/IEC Technical Committee 1 (JTC 1), to which MPEG belongs, counts more than 100 working groups. MPEG, with just 1/10 of all JTC 1 experts, produces 10 times more standards than the average JTC 1 working group.

Clearly MPEG has done a lot in the past 30 years, but what about the current level of activity? In the last 30 months (i.e. in the last 10 meetings), MPEG has been working on more than 200 “tracks” (by track I mean an activity that develops working drafts, standards or amendments).

One reason for the interest aroused by MPEG standards is MPEG’s practice of communicating its plans to, collecting requirements from and sharing results with some 50 different bodies that work on related areas. It also offers – and receives – collaboration from other ISO and ITU-T groups on specific standards.

Publishing standards – like writing books – is one measure of productivity. Not unlike a book, however, a standard does not help if it stays on the shelves of the ISO Central Secretariat. Therefore, to be sure that MPEG has meaningfully accomplished its mission, we must make sure that its standards are used in products, services and applications.

Are all 174 MPEG standards widely used? No. Just as some products of a company sell like hot cakes while others stay in the company stores, some MPEG standards are widely used and some others only to some extent.

“Widely”, however, is an analogue measure. A better, digital, measure is “billion” that applies to a number of MPEG standards:

  • MPEG-1 Video was the first standard to cross the level of 1 billion users (Video CD players);
  • MPEG-1 Audio layer 2 is present even today in most TV set top boxes;
  • MPEG-1 Audio layer 3 (aka MP3) has been in use for the last 20 years, in portable audio players and now in all handsets and PCs;
  • MPEG-2 is used in all television set top boxes, DVDs and Blu-rays;
  • MPEG-4 AAC and AVC have been standard in TV set top boxes for more than 10 years, and in mobile handsets, Blu-rays and PCs;
  • The MPEG file format is used every time a video is stored on or transmitted to a mobile handset, so even “billion” may not be the right measure…

Some other MPEG standards are used more “moderately” and for these the unit of measure is just “hundred million”. This is the case for e.g. MPEG-H for new generation broadcasting and DASH for internet streaming.

Such an intense use of MPEG standards explains the many amendments and editions and the “longevity” of some MPEG standards: extensions are still made to MPEG-2 Systems (after 24 years), the MPEG file format (after 19 years), AVC (after 15 years) and so on.

Are you surprised to know that MPEG has received 5 (five) Emmy Awards?

Another thirty years await MPEG, if some mindless industry elements do not get in the way.
