Which company would dare to do it?

To get an explanation of the question in the title and an answer, the reader of this article should read the first 5 chapters. If in a hurry, the reader can jump here.

  1. MPEG is synonymous with growth
  2. Manage growth with organisation
  3. MPEG not only makes, but “sells” standards
  4. A different approach to standards
  5. Interoperability is not just for commercials
  6. Time to answer the question

MPEG is synonymous with growth

MPEG is a unique group in the way it has grown to what it is today. Thirty-one years ago it started as an experts group for video coding for interactive applications on compact disc (it became a working group only 3.4 years after its establishment). A few months after its establishment it added audio coding and systems aspects related to the handling of compressed audio and video. Two years later it moved to digital television (MPEG-2) shedding the “interactive applications on compact disc” attribute, and then it moved to audio-visual applications for the nascent fixed and mobile internet (MPEG-4), then to media delivery in heterogeneous environments (MPEG-H), then to media delivery on the unreliable internet (MPEG-DASH), then to immersive media (MPEG-I), then to genome compression (MPEG-G).

The list of MPEG projects given above is a heavily subsampled version of all MPEG projects (21 in total). However, it gives clear proof of the effectiveness of the MPEG policy of extending to and covering fields related to the compression of media. Now MPEG is even working on DNA read compression and has already approved 3 parts of the MPEG-G standard.

The MPEG story, however, is not really in line with the ISO policy on Working Groups (WG). WGs are supposed to be established and run like projects, i.e. disbanded after the task has been achieved. MPEG, however, has been in operation for 31 years because the field of standards for moving pictures and audio is far from being exhausted.

Today MPEG standards cover the following domains: Video Coding, Audio Coding, 3D Graphics Coding, Font Coding, Digital Item Coding, Sensors and Actuators Data Coding, Genome Coding, Neural Network Coding, Media Description, Media Composition, Systems support, Intellectual Property Management and Protection (IPMP), Transport, Application Formats, Application Programming Interfaces (API), Media Systems, Reference implementation and Conformance.

Manage growth with organisation

The MPEG story is also not really in line with another ISO policy on WGs. WGs are supposed to be “limited in size”. However, MPEG counts 1500 registered experts and an average of 500 experts attending its quarterly meetings. These last one week and are preceded by almost another week of joint video project and ad hoc group meetings.

The size of MPEG, however, is not the result of a deliberate will to be outside of the rules. It is the natural result of the expansion of its programme of work.

Since the early days, MPEG had a structure: first a Video subgroup, then an Audio subgroup, then a Systems subgroup. The Test subgroup was formed because there was a need to test the submissions in response to the MPEG-1 and MPEG-2 Video Calls for Proposals, the Requirements subgroup came with the need to manage the requirements of all industries interested in MPEG-2, and the 3D Graphics subgroup came with MPEG-4. The Communication subgroup came later, prompted by the need to have regular communication with the outside world.

The MPEG organisation is not the traditional hierarchical organisation. Inside most subgroups there are units – permanent or established on an as-needed basis – that address specific areas. Units report to subgroups but interact with other units, inside or outside their subgroups, coordinated by a group that includes all subgroup chairs.

This unique flat organisation allows MPEG’s 500 experts to work on multiple projects, while allowing those interested in a specific one to stay focussed. At the October 2019 meeting experts worked on 60 parallel activity tracks and produced ~200 output documents. This is also possible because, since 1995, MPEG has been using a sophisticated online document management system, now extended to make available information on other parallel meetings, to support the development of plenary results, to manage the MPEG work plan etc.

MPEG not only makes, but “sells” standards

Formally MPEG was not the result of a conscious decision by National Bodies (eventually it became so) but was prompted by a vision. Therefore, MPEG never had a “guaranteed market”: its standards had to “sell” for MPEG to continue to exist. For that to happen, MPEG standards had to be perfectly tuned to market needs. In other words, there had to be a close relationship between supplier (MPEG) and customers (industries).

Therefore, searching for and listening to customers is in the MPEG DNA:

  1. For its first standard, MPEG-1, MPEG found customers in consumer electronics (interactive video on compact disc and eventually Video CD), telcos (video distribution on ADSL-1) and radio broadcasters (digital audio broadcasting). MP3 is a different story…
  2. For its MPEG-2 standard MPEG had to convince all types of broadcasters – terrestrial, cable and satellite – all consumer electronics companies and the first IT companies
  3. For its MPEG-4 standard MPEG had to convince the first wave of IT companies and start talking to mobile communication companies
  4. And so on… Every subsequent MPEG project brought in new companies from existing industries, or entirely new industries.

Above I have used the verb “brought”, but this is not the right verb, certainly not for the early years. MPEG made frenetic efforts to talk to companies, industry fora and standards organisations, presented its plans and sought requirements for its standards. The result of those efforts is that today MPEG can boast a list of several tens of major industry fora and standards organisations, such as (in alphabetical order): 3GPP, ATSC, DVB, EBU, SCTE, SMPTE and more.

A different approach to standards

This intense customer-oriented approach is not what other JTC 1 committees do. If you want to build Internet of Things (IoT) solutions and you turn to the standards produced by JTC 1/SC 41 – Internet of Things and related technologies, you are likely to find only models and frameworks. If you want to implement a Big Data solution and you turn to JTC 1/SC 42 – Artificial intelligence, you will again find models and frameworks, but no trace of APIs, data formats or protocols.

Both MPEG and other JTC 1 subcommittees develop standards before industry has a declared need for them. However, they differ in what they deliver: implementable standards (MPEG) vs models and frameworks (other committees).

MPEG commits resources to develop specifications when implementation technologies may not be fully developed. It then assembles, and possibly improves, its specifications by adding more technologies, to the extent that they can be shown to provide measurable technical merit.

The danger is that MPEG may well spot the right standard, but develop it too early, with the risk that the standard is superseded by a better technology by the time industry really needs it. Conversely, the standard may arrive too late, when companies have already made real investments in their own solutions and are not ready to write them off.

This is one reason why, within JTC 1, MPEG (a WG) has produced more standards than any other subcommittee (note that a subcommittee includes several WGs). MPEG’s policy is to risk developing a standard for a new “business” area that may turn out not to be adopted by industry, rather than risk losing an opportunity.

By providing models and frameworks, other committees take the approach of creating an environment where industry players share some basic elements on top of which an ecosystem of solutions with certain degrees of interoperability may eventually and gradually appear.

It is very difficult to make general comparisons, but it is clear that MPEG standards create markets because industry knows how to make products and users know they buy products that provide full interoperability. A proof of this? In 2018 MPEG-enabled devices had a global market value in excess of 1 T$ and MPEG-enabled services generated global revenues in excess of 500 B$. In the same year MPEG standards had a far-reaching impact on people at the global level: there are 2.8 billion smartphones in use, of which 1.56 billion were sold in 2018 alone (see here), and there are ~1.6 billion TV sets in global use by ~1.42 billion households, serving a TV viewing audience of ~4.2 billion in 2011 (see here).

Interoperability is not just for commercials

MPEG is also unique because it has succeeded in converting what would otherwise be a naïve approach to interoperability to a practical and effective implementation of this notion. Back in 1992 MPEG was struggling with a problem created by the success of its own marketing efforts: many industries from many countries and regions had believed in the MPEG promise of a single international standard for audio and video compression, but actually industries and countries or regions had different agendas. Telcos wanted a scalable video coding solution, American broadcasters wanted digital HDTV, European broadcasters wanted digital TV scalable to higher resolutions, some industries sought a cheap solution (RAM was an important cost element at that time) while others could afford more expensive solutions.

The solution was obvious, but making a standard that included all requirements for all users was out of the question. MPEG learned from the notion of profile developed by JTC 1/SC 21 – Open systems interconnection (OSI):

“set of one or more base standards, and, where applicable, the identification of chosen classes, subsets, options and parameters of those base standards, necessary for accomplishing a particular function”

MPEG made it practically implementable for its own purposes by interpreting “base standards” and “chosen classes, subsets, options and parameters of those base standards” as “coding tools”, e.g. a type of prediction or quantisation. Starting with MPEG-2 (though the profile and level notion – at that time not yet crystal clear – was already present in MPEG-1), MPEG standards conceptually specify collections of tools and descriptions of different combinations of tools, i.e. the “profiles”.
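The mechanism can be illustrated with a toy model (a sketch only: all tool and profile names below are hypothetical, not taken from any MPEG standard):

```python
# Toy model of MPEG profiles: a profile is a named set of coding tools.
# Tool and profile names are invented here for illustration only.
PROFILES = {
    "Simple": {"dct", "intra_prediction"},
    "Main":   {"dct", "intra_prediction", "inter_prediction", "b_frames"},
}

def can_decode(decoder_profile: str, bitstream_tools: set) -> bool:
    """A decoder conforming to a profile can decode any bitstream
    that uses only tools belonging to that profile."""
    return bitstream_tools <= PROFILES[decoder_profile]

# A Main-profile decoder handles a Simple-profile bitstream,
# but not vice versa.
print(can_decode("Main", {"dct", "intra_prediction"}))    # True
print(can_decode("Simple", {"dct", "inter_prediction"}))  # False
```

The point of the construction is that interoperability is declared per profile: a conforming decoder commits to the whole tool set of its profile, so any bitstream restricted to that set is guaranteed to decode.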

Next to quality, profile is the feature that has contributed the most to the success of MPEG standards. Today, OSI profiles are nowhere to be seen, but the world is full of products and services that implement MPEG profiles.

Time to answer the question

I will try now to give a meaning to “it” in the question of the title of this article “Which company would dare to do it?”

MPEG is a standards group, not a company. Still, MPEG operates like a company: it produces standards that maximise the number of its customers by satisfying their needs. The measure of MPEG performance used here would probably not satisfy the criteria of an accountant, but I consider enabling a market of 1.5 T$ p.a. to be something akin to “profits”, while expenses can be estimated at 500 (experts) * 4 (meetings/year) * 7.5 (meeting days) * 1000 ($/day) = 15 M$.

What would you think of a board of directors who wanted to reorganise a “company” whose Operating Expense Ratio (OER) is 0.001% (or whose Revenues/Expenses ratio is 100,000)?
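The back-of-the-envelope figures above can be checked in a few lines (the 1000 $/day per-expert cost is the article's own assumption, not a measured value):

```python
# Check the article's back-of-the-envelope "company accounts" for MPEG.
experts = 500           # average attendance per meeting
meetings_per_year = 4
meeting_days = 7.5      # one-week meeting plus joint/ad-hoc days
cost_per_day = 1000     # $/day per expert (assumed in the article)

expenses = experts * meetings_per_year * meeting_days * cost_per_day
enabled_market = 1.5e12  # 1 T$ in devices + 0.5 T$ in services

oer = expenses / enabled_market
print(f"expenses: {expenses / 1e6:.0f} M$")                   # 15 M$
print(f"OER: {oer:.3%}")                                      # 0.001%
print(f"revenues/expenses: {enabled_market / expenses:,.0f}")  # 100,000
```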


The birth of an MPEG standard idea

From what I have published so far on this blog it should be clear that MPEG is an unusual ISO working group (WG). To mention a few aspects: duration (31 years), early use of ICT (online document management system in use since 1995), size (1500 experts registered and 500 attending), organisation (the way 500 experts work on multiple projects), number of standards produced (more than any other JTC 1 subcommittee), and impact on industry (1 T$ in devices and 0.5 T$ in services per annum).

In How does MPEG actually work? I have talked about the life cycle of an MPEG standard, depicted in Figure 1.

Figure 1 – The MPEG standard development process

However, that article does not say much about the initial phase of the life cycle, i.e. the moment when the new ideas that will turn into standards are generated, which is not even identified in the figure. By looking into that moment, we will again see how the way new ideas for MPEG standards are generated makes MPEG an unusual WG.

A round up of MPEG standards

The word “standard” is used to indicate both a series of standards (identified by the 5-digit ISO number) and a part of a standard (identified by 5 digits, a dash and a number). In this article we will use “standard” for the former and “part of a standard” for the latter.
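The numbering convention can be sketched as a tiny parser (a sketch only: the regular expression covers just the bare convention described above, not the full ISO designation syntax with year and amendments; 14496 is MPEG-4's real ISO number, used here as an example):

```python
import re

# "<5 digits>" names a standard series; "<5 digits>-<n>" names part n of it.
DESIGNATION = re.compile(r"^(\d{5})(?:-(\d+))?$")

def parse(designation: str):
    """Split an ISO number into (series, part); part is None for the series."""
    m = DESIGNATION.match(designation)
    if not m:
        raise ValueError(f"not an ISO designation: {designation!r}")
    series, part = m.groups()
    return int(series), (int(part) if part else None)

print(parse("14496-12"))  # (14496, 12): a part of a standard
print(parse("14496"))     # (14496, None): the standard series itself
```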

MPEG-1 and 2

The idea of the first two MPEG standards was generated in 1988 when a new “Moving Picture Experts Group” was created in JTC 1/SC 2/WG 8. The original MPEG work items were (the acronym DSM stands for Digital Storage Media):

  1. Coding of moving pictures for DSM’s having a throughput of 1-1.5 Mbit/s (1988-1990)
  2. Coding of moving pictures for DSM’s having a throughput of 1.5-5 Mbit/s (1990-1992)
  3. Coding of moving pictures for DSM’s having a throughput of 5-60 Mbit/s (to be defined).

I was the one who proposed the first work item (video coding for interactive video on compact disc), later to be named MPEG-1, while Hiroshi Yasuda added the second (later to be named MPEG-2). In MPEG-2 the word DSM was kept in order not to upset other standards groups (but the eventual title of MPEG-2 became “Generic coding of moving pictures and associated audio information”). The third work item was little more than a placeholder. The time assigned to develop the standards was definitely optimistic: it took two more years than planned for both MPEG-1 and MPEG-2 to reach FDIS (and even so it was really a rush).

The third work item was the first, and an early, case of a new standard idea that was “miscarried”, because it was combined with the second. That is the reason why there is no MPEG-3, while MP3 is just something else.


MPEG-4 was born of a similar process and was motivated by the fact that MPEG-1 and MPEG-2 targeted high bitrates. However, low bitrates, too, were important and not covered by MPEG standards. Eventually MPEG-4 became the container of foundational digital media technologies such as audio for internet distribution, the file format and the open font format.


MPEG-7 had a different story. It was proposed by Italy to SC 29 in response to the prospect of users having to navigate 500 TV channels to find what they wanted to see. The study in SC 29 went nowhere, so MPEG took it over and developed a complete standard framework for media metadata.


MPEG-21 was driven by the upheaval brought about by MP3, as exemplified by Napster. The response was to create a complete standard framework for media ecommerce. The framework included the definition of Digital Items, standards for Rights and Contract Expression Languages, and much more.


MPEG-A was the result of an investigation carried out by the MPEG plenary. As a result of the investigation, MPEG realised that, beyond standards for individual media, it was necessary to also develop standard combinations of media encoded according to MPEG standards.


MPEG-B, -C and -D were proposed by MPEG at a time when the Systems-Video-Audio trinity appeared to be no longer a sufficient response to standards needs, while individual Systems, Video and Audio standards were still in demand. All three standards include parts that can be classified as Systems, Video and Audio. As a note, the Systems, Video and Audio trinity is alive and kicking; actually, it has become a quaternity with Point Clouds.


MPEG-E was driven by the idea of providing the industry with a standard specifying the architecture and software components of a digital media consumption device.


MPEG-V was probably the first standard that was not the result of a decision by MPEG to propose a new standard, but the result of individual proposals coming from two different directions: virtual worlds (the much-touted Second Life of the mid-2000s) and the enhanced user experience made possible by existing and new sensors and actuators. MPEG succeeded in developing a comprehensive standard framework for the interaction of humans with and between virtual worlds.


MPEG-M was influenced by work done within the Digital Media Project (DMP) to address a standard middleware for digital media. MPEG-M became an extensible standard framework that specifies the architecture with High Level APIs, the middleware composed of MPEG technologies with Low Level APIs, and the aggregation of services leveraging those technologies.


MPEG-U was probably the first standard whose birth happened in a subgroup – Systems. Eventually MPEG-U became a standard for exchanging, displaying and controlling widgets, for their communication with other entities, and for advanced interaction.


MPEG-H and DASH were two standards born out of the strongly felt need to overhaul the then 15-year-old MPEG-2 Transport Stream (TS). The result was that the market 1) strongly reaffirmed its confidence in MPEG-2 TS, 2) enthusiastically embraced MPEG-H (integrated broadcast-broadband distribution over IP), and 3) widely deployed DASH (media distribution accommodating unpredictable variations of available bandwidth), inspired by and developed in collaboration with 3GPP.


MPEG-I was the result of the drive of the industry toward immersive services and the devices enabling them. This is currently the MPEG flagship project, now with 14 parts and certainly designed to compete with the number of parts (34) of MPEG-4.


MPEG-CICP was a “housekeeping” action with the goal of collecting, in a single place (actually 4 documents), code points for media formats that are not specific to a single standard.


MPEG-G resulted from the proposal of a single organisation for a framework standard for storage, compression and access of non-media data (DNA reads). The organisation proposed the activity, but of course the development of the standard was fully open and in line with the MPEG process of Figure 1.


MPEG-IoMT resulted from a single organisation proposing a framework standard for media-specific Things (as defined in the context of the Internet of Things), e.g. cameras and displays.


MPEG-5 resulted from the proposal of a group of companies who needed a video compression standard addressing the business needs of some use cases, such as video streaming, where existing ISO video coding standards have not been as widely adopted as might be expected from their purely technical characteristics. This requirement was not met by the state-of-the-art MPEG video coding standard, and the proposal was, after much debate, accepted by the MPEG plenary.

Looking at the birth of parts

From the roundup above the reader may have gotten the impression that most MPEG standards are the result of a collective awareness of the need for a standard. As I have described above, this is largely, but not exclusively, true of standards identified by the 5-digit ISO number. But if we look at the parts of MPEG standards, we get a picture that does not contradict the first statement but provides a different view.

In its 31 years of activity MPEG has produced parts of standards in the following areas: Video Coding, Audio Coding, 3D Graphics Coding, Font Coding, Digital Item Coding, Sensors and Actuators Data Coding, Genome Coding, Neural Network Coding, Media Description, Media Composition, Systems support, Intellectual Property Management and Protection (IPMP), Transport, Application Formats, Application Programming Interfaces (API), Media Systems, Reference implementation and Conformance.

In the following we will see how the nature of standard parts influences the birth of MPEG standards.

Video Coding

Video coding standards are mostly driven by the realisation that some existing MPEG standards are no longer aligned with the progress of technology. This was the case of MPEG-2 Video, which promised more compression than MPEG-1 Video; of MPEG-4 Visual, developed because there was no MPEG video coding standard for very low bitrates (much below 1 Mbit/s); of MPEG-4 AVC, developed because it promised more compression than MPEG-4 Visual; of MPEG-H HEVC, developed because it promised more compression than MPEG-4 AVC; and of MPEG-I VVC, developed because it promised more compression than MPEG-H HEVC.

Forty years of video coding and counting tells the full story of the MPEG video compression standards.

This is not the full story, though. Table 1 is a version of the table in More video with more features, slightly edited to accommodate recent evolutions. The table describes the functionalities, beyond the basic compression functionality, that have been added over the years to MPEG Video Coding standards. The birth of each of these proposals for new functionalities was the result of much wrangling between those who wanted to add the functionalities because they believed the necessary technology was available, and those who considered the technology immature and not ready for a standard.

Table 1 – Functionalities added to MPEG video coding standards

Audio Coding

The companion Audio Coding standards had a much different evolution. Unlike Video, whose bitrate was and is high enough, and growing, to justify new generations of compression standards, Audio Coding was driven to a large extent by applications and functionality, with compression always playing a role. Thirty years of audio coding and counting talks first about MP3, then about MPEG-4 AAC in all its shapes, and then about MPEG Surround, Spatial Audio Object Coding (SAOC), Unified Speech and Audio Coding (USAC), Dynamic Range Control (DRC), MPEG-H 3D Audio and the coming MPEG-I Immersive Audio. MPEG-7 Audio should not be forgotten, even though compression is applied to descriptors, not to the audio itself. The birth of each of these Audio Coding standards is a story in itself, ranging from being part of a bigger plan developed at MPEG plenary level, to specific Audio standards agreed by the Audio group, to a specific functionality added to a standard either developed or being developed.

3D Graphics Coding

The 3DG group worked on 2D/3D graphics compression, and then on the Animation Framework eXtension (AFX). In the case of 2D/3D graphics compression, the birth of the standards was the result of an MPEG plenary decision, but the standards kept on evolving by adding new technologies for new functionalities, most often at the instigation of individual experts and companies.

Talking of 3D Graphics Coding, I could quote Rudyard Kipling’s verse

Oh, East is East, and West is West, and never the twain shall meet

and associate West to Video Coding and East to 3D Graphics Coding (or vice versa). Indeed, it looked like Video and 3D Graphics Coding would comply with Kipling’s verse until Point Cloud Compression (PCC) came to the fore. Proposed by an individual expert and under exploration for quite some time, it suddenly became one of the sexiest MPEG developments, merging, in an intricate way, Video and 3D Graphics.

We can indeed say with Kipling

But there is neither East nor West, Border, nor Breed, nor Birth,

When two strong men stand face to face, though they come from the ends of the earth!

Font Coding

Font Coding is again a different story. This time the standard was proposed by the 3 companies – Adobe, Apple and Microsoft – who had developed the OpenType specification. The reason was that it had become burdensome for them to maintain and expand the specification in response to market needs. The MPEG plenary accepted the request, took over the task and developed several parts of several standards in multiple editions. As participants in the Font Coding activity do not typically attend MPEG meetings, new functionalities are mostly added at the request of experts or companies on the email reflector of the Font Coding ad hoc group.

Digital Item Coding

The initiative to start MPEG-21 was taken by the MPEG plenary, but the need to develop the 22 parts of the standard was largely identified by the subgroups – Requirements, Multimedia Description Schemes (MDS) and Systems.

Sensors and Actuators Data Coding

The birth of MPEG-V was a decision of the MPEG plenary, but the parts of the standard kept on evolving at the instigation of individual experts and companies. Four editions of the standard were produced.

Genome Coding

Development of Part 1 Storage and Transport and Part 2 Compression of a Genome Coding standard was obviously a major decision of the MPEG plenary. The need for other parts of the MPEG-G standard, namely Part 3 Metadata and API, and Part 6 Compression of Annotations, was identified by the Requirements group working on the standard.

Neural Network Coding

Neural Network Coding was proposed at the October 2017 meeting. The MPEG plenary was in doubt whether to call this “compression of another data type” (neural networks) or something in line with its “media” mandate. Eventually it opted to call it “Compression of neural networks for multimedia content description and analysis”, which is partly what it really is: an extended new version of CDVS and CDVA with embedded compression of descriptors. Neural Network Compression (NNC) is now (October 2019) at Working Draft 2 and is planned to reach FDIS in April 2021. Experts are too busy working on the current scope to have the time to think of more features, but we know there will be more, because the technologies in the current draft do not support all the requirements.

Media Description

As mentioned above, the MPEG plenary decided to develop MPEG-7 seeing the inability of SC 29 to jump on the opportunity of a standard that would describe media in a standard way to help users access the content of their interest. The need for the early parts of MPEG-7 was largely identified by the groups working on MPEG-7. The need for later standards, such as CDVS and CDVA, was identified by a company (CDVS) and by a consortium (CDVA). The need for the latest standard (Neural Network Compression) was identified by the MPEG plenary.

Media Composition

Until 1995 MPEG was really a group working mostly for Broadcasting and Consumer Electronics (but MP3 was a sign of things to come) and did not have the need for a standard Media Composition technology. MPEG-4 was the standard that extended the MPEG domain to the IT world, and the “Coding of audio-visual objects” title of MPEG-4 meant that a Media Composition technology was needed.

The MPEG plenary took the decision to extend the Virtual Reality Modelling Language (VRML), establishing contacts with that group. The MPEG plenary did the same when a company proposed a new Media Composition technology based on a W3C recommendation. A company proposed to develop the MPEG Orchestration standard. After much investigation and debate, the Requirements and Systems groups have recently come to the conclusion that the MPEG-I Scene Description should be based on an extension of Khronos’ glTF2.

Systems support

Systems support was the first need after video and audio identified by the MPEG plenary, a few months after the establishment of MPEG. Today Systems support standards are mostly, but not exclusively, in MPEG-B. This standard contains parts that were the result of a decision of the MPEG plenary (e.g. Binary MPEG format for XML and Common encryption), the proposal of the Systems group (e.g. Sample Variants and Partial File Format) or the proposal of a company (e.g. Green metadata).

Intellectual Property Management and Protection (IPMP)

IPMP parts appear in MPEG-2, MPEG-4 and MPEG-21. They were triggered by the same context that produced MPEG-21, i.e. how to combine the liquidity of digital content with the need to guarantee a return to rights holders.

The need for MPEG-4 IPMP was identified by the MPEG plenary, but MPEG-2 and MPEG-21 IPMP were proposed by the Systems and MDS groups, respectively.


Transport

Although partly already present in MPEG-1, Transport technology in MPEG flourished in MPEG-2 with the Transport Stream and the Program Stream. The development of MPEG-2 Systems was a major MPEG plenary decision, as was the case for MPEG-4 Systems, which at the time included the Scene Description technology. MPEG-2 TS has dominated the broadcasting market (and is even used by AOM). As said above, MPEG-H MPEG Media Transport (MMT) and DASH are two major transport technologies whose development was identified and decided by the MPEG plenary. All 3 standards have been published in several editions (MPEG-2 Systems 7 times) as a result of needs identified by the Systems group or by individual companies.

Application Formats

The first Application Formats were launched by the MPEG plenary and the following ones by different MPEG groups. Later, individual companies or consortia proposed several Application Formats. The Common Media Application Format (CMAF), proposed by Apple and Microsoft, is one of the most successful MPEG standards.

Application Programming Interfaces (API)

MPEG Extensible Middleware (MXM) was the first MPEG API standard. The decision to develop it was made by the MPEG plenary, but the proposal was made by a consortium. The MPEG-G Metadata and API standard was proposed by the Requirements group. The IoMT API standard was proposed by the 3D Graphics group.

Media Systems

This area collects the parts of standards that describe or specify the architecture underpinning MPEG standards. This is the case of part 1 of MPEG-7, MPEG-E, MPEG-V, MPEG-M, MPEG-I and MPEG-IoMT. These parts are typically kicked off by the MPEG plenary.

Reference implementation and Conformance

MPEG takes seriously its statement that MPEG standards should be published in two languages – one that is understood by humans and another that is understood by a machine – and that the two should be equivalent in terms of specification. The reference software – the version of the specification understood by a machine – is used to test conformance of an implementation to the specification. For these reasons the need for reference software and conformance is identified by the MPEG plenary.


With the understanding that all decisions are formally made by the MPEG plenary, the trigger of an MPEG decision happens at different levels. Very often – and more often as MPEG matures and its organisation becomes more solid – the trigger is in the hands of an individual expert, a group of experts, or an MPEG subgroup.


More MPEG Strengths, Weaknesses, Opportunities and Threats


In its MPEG and JPEG as SCs proposal, MPEG Future proposes that MPEG become a subcommittee to improve collaboration with other bodies, establish a clear reference in ISO for the digital media industry, enhance the group’s governance and more. The obvious question to MPEG Future concerns MPEG’s adequacy for the new role. A first answer to this question is that, in its original proposal, the Italian National Body UNI has already carried out a SWOT (Strengths-Weaknesses-Opportunities-Threats) analysis.

In No one is perfect, but some are more accomplished than others I have started rewording, expanding and publishing that SWOT analysis and in this article I will continue the task.

The Italian National Body has identified the following Key Performance Indicators (KPI) of a standards committee like MPEG: Context, Scope of standards, Business model, Membership, Structure, Leadership, Client industries, Collaboration, Standards development, Standards adoption, Innovation capability, Communication and Brand.

In the article mentioned above I have dealt with Context, Scope of standards and Business model; in this one I will deal with Membership, Structure and Leadership.


Membership

I would like to identify four levels of members: those who actually attend MPEG meetings, those who are officially registered as members but do not attend, those who actually work on MPEG projects without being officially members, and those who, even without being members, have their work significantly influenced by the MPEG work plan and standards.

Membership – as defined above – is the most valuable MPEG asset. It is because of this that the MPEG Future Manifesto has identified “Support and expand the academic and research community which provides the life blood of MPEG standards” as the first of its actions.


MPEG has a level 1-2 membership that is competent in all areas of its scope, large in number (level 1 counts 500 experts and level 2 counts 1500), drawn from many different industries with a growing role of academia, and global (>30 countries). Level 3 membership is estimated at a few thousand experts and level 4 at a few tens of thousands. There is a continuous flow of level 1-2 experts leaving and being replaced by new experts, from level 3 and even from level 4.

Another strength is the fact that many level 1 MPEG members are active members of other organisations. This multiple membership facilitates understanding of other committees’ work, needs and plans. Table 1 identifies customers (those MPEG provides standards to) and partners (those MPEG works with, e.g. to develop standards).

Table 1 – Main customers (C) and partners (P)

Read Standards and collaborations to know more about the way MPEG works with other committees.


The main weakness comes from the fact that the percentage of level 1 experts coming from companies directly using MPEG standards is shrinking. This is the result of a phenomenon that is entirely outside of MPEG control but alters MPEG’s traditional relationship with its industries.

Related to the same phenomenon is the fact that the percentage of experts working for Non-Practicing Entities (NPE) is growing. Of course, all experts are motivated to develop the best possible standards, but the ultimate goal of experts is changing compared to the traditional experts’ goal.

Similar to the above phenomenon is the fact that the percentage of academic members, currently at about 25%, is growing. Of course, the injection of valuable academic know-how is good, but again the ultimate goal of experts is changing compared to the traditional experts’ goal.

Of a completely different nature is the weakness generated by one of the strengths mentioned above: the large number of level 1 members. The ISO/IEC directives say that WGs should be “limited in size”. “Limited” is not defined but, when one sees ISO Technical Committees of a couple hundred members, ISO Subcommittees of a few tens and MPEG (a working group) of 500, that MPEG has exceeded the “limited size” is more than a suspicion.

A final main weakness is a consequence of the fact that MPEG attracts the best experts but, being a working group, does not attract managers who care about the organisational sustainability of MPEG in a world of standards. No level 1 MPEG member attends JTC 1 meetings where important policy decisions may be made that affect MPEG’s work plan and execution.


One of the most strategically important opportunities is to make the best use of the enormous brain power that populates MPEG meetings and activities, influences research and attracts new members.

This can be achieved by exploiting the opportunities for new standards in the MPEG traditional media field. MPEG is working in several areas such as immersive media, neural network compression and video coding for machines that will require a large number of experts making substantial contributions. Additionally MPEG can offer new perspectives in compression of data other than media, e.g. genomics.

From the organisational viewpoint MPEG can comply with the ISO/IEC directives while keeping the MPEG ecosystem intact, e.g. by achieving subcommittee status.


The biggest threat is that the MPEG membership is not an asset that is guaranteed forever. Members at all levels can leave without being replaced because the MPEG work plan may lose its attraction, or because its standards are no longer relevant or profitable. A related threat is an overshoot in attracting new members, leaving MPEG unable to reward all members at all levels.

Another set of threats is caused by the current discussions on the future of MPEG which is shaking the confidence of industry and experts. A breakup of MPEG in disconnected working groups would dramatically affect MPEG’s ability to deliver its existing work plan. Even if delivery is assured, there will be no guarantee that the quality of standards will remain the same because the glue provided by the MPEG organisation and modus operandi will be lost.


The MPEG structure is another major asset of MPEG. It is at the root of the quality, usability and ultimate success of MPEG standards.


The biggest strength of the MPEG structure is the fact that it has not been designed by committee but is the result of a 30-year long learning process. Figure 1 depicts the structure with an indication of the flow of activities.

Figure 1 – Today’s MPEG structure and workflow

MPEG can be defined as an ecosystem of interacting subgroups developing integrated standards. Over three decades, MPEG subgroups have been created and disbanded (see here for the full story) because the ecosystem shifted in nature. The subgroups in operation today are the best match to the current conditions, but may well change if the programme of work changes.

These are the main components of the MPEG ecosystem

  1. Ad hoc groups (AhG) have been created since the early days (1990) because experts needed an “official” environment to continue doing work outside MPEG meetings, with the understanding that “decisions” can only be made when MPEG is in session.
  2. Break-out groups (BoG) have existed since the early days because even a single part of a standard can be too complex, and the work has to be split into separate activities to be merged later by the subgroup in charge of that part of the standard.
  3. Joint meetings are possible because the expertise of the MPEG membership covers all areas needed by MPEG standards. Whenever an MPEG standard needs to interface with another standard or expose an interface it is possible to get the relevant people together, discuss and take action on the issues.
  4. Chairs meetings are the place where the general progress of work is reviewed and the needs for interaction between the elements of the MPEG ecosystem are identified.
  5. Finally, MPEG benefits from powerful ICT tools developed by Christian Tulvan of Institut Mines Télécom to support document management, session allocation, the work plan etc.

A more complete analysis of the MPEG ecosystem is found at MPEG: vision, execution, results and a conclusion.

An important strength is that the processes described above have taken root over many years and are now deeply ingrained in the collective mindset of MPEG members.

A final important strength is that, while MPEG does not have a formal strategic planning function, this is actually implemented by the diffuse structure described above.


The main weakness of the current MPEG structure is a reflection of the main strength as described above. MPEG has an enormous brain power with extremely high levels of technical excellence but has weak links with the market.

This does not mean that MPEG is unaware of the market. Its processes include the development of context and objectives for a new project, the development of use cases and the analysis of use cases to develop requirements. However, all this is done by technical experts who, as the case may be, occasionally wear the hat of market experts.

Because of its enormous brain power, MPEG has been able to develop many standards, some of which are extremely successful and others less so. Therefore, while there is no compelling need to address this weakness because MPEG standards are so successful, there is room for improvement.

Another significant weakness is the limitation in MPEG’s ability to initiate new collaborations with other committees because of MPEG’s inferior status in ISO.


MPEG Future’s MPEG and JPEG as SCs proposes that MPEG become a subcommittee with a new Market Needs Advisory Group (AG). MPEG is a big thing. Can it be bigger? describes how a technology-driven Technical Requirements AG can compete and collaborate with the proposed Market Needs AG to make more robust and better justified proposals for new standards.

By becoming a subcommittee MPEG would also have more freedom to initiate timely collaborations with other committees and to establish formal collaborations with other ISO and IEC committees using the Joint Working Group (JWG) mechanism. Today, as a working group, MPEG may not formally work with other committees except by liaison.


The main threat is the possibility that, in the face of a large committee of 500 level 1 experts and 1500 level 2 experts, MPEG is simply broken up into its working groups or, worse, new working groups are created by recombining MPEG activities. Of course, given sufficient time and effort, a new MPEG-like organisation may be created, but at the cost of delayed or inferior quality standards and without a guarantee that a committee-designed organisation will work as well as an organisation that is the result of a Darwinian process.


By leadership here we mean the many people who hold a leadership position in MPEG: convenor, subgroup chairs, ad hoc group chairs and break-out group rapporteurs.


Because of its oft-mentioned enormous brain power, MPEG is in the enviable position of being able to identify excellent leaders. Actually, MPEG does so for subgroup chairs, but ad hoc group chairs and break-out group rapporteurs are very much the result of a bottom-up process.

The main strength is the consolidated and experienced MPEG and subgroup leadership, which is ready to delegate significant levels of autonomy to AhGs and BoGs, within the constraints imposed by the fact that formal adoption of technology is the competence of subgroups and ratification of decisions is the competence of the MPEG plenary.


The main weakness, going hand-in-hand with its main strength, is that leadership of MPEG and subgroups is rather static and that new leaders identified in AhG-BoG activities are not sufficiently put to good use.


With the implementation of MPEG Future’s MPEG and JPEG as SCs proposal, MPEG has the opportunity to introduce accelerated cycles of leadership regeneration in WGs, JWGs and AGs, and to better manage the “unit” entity described in MPEG: vision, execution, results and a conclusion.

The MPEG Future proposal also includes suggestions on new processes for nominating candidates as chair of the proposed new subcommittee, designed to preserve the ultimate authority of the Secretariat to decide within the framework of selections made by the MPEG community.


The MPEG structure and MPEG leadership are not disconnected entities. Today’s MPEG structure with an entirely new leadership would have a hard time working smoothly and guaranteeing the delivery of the standards in the work plan with the quality that MPEG’s client industries expect. This does not mean that the leadership should stay static forever, simply that changes should be implemented in a progressive fashion.


In this article we have made a SWOT analysis of three of the most critical KPIs: membership, structure and leadership. MPEG is excellent in all three, but does have weaknesses. Opportunities for improvements are offered by MPEG Future’s MPEG and JPEG as SCs proposal, but threats are lurking.



The MPEG Future Manifesto

Communication makes us humans different. Media make communication between humans effective and enjoyable. Standards make media communication possible.

Thirty-two years ago, the MPEG vision was forming: make global standards available to allow industry to provide devices and services for the then emerging digital media so that humans could communicate seamlessly.

For thirty-two years the MPEG standards group has lived up to the MPEG vision: MPEG standards are behind the relentless growth of many industries – some of them created by MPEG standards. More than half the world population uses devices or accesses services, on a daily or hourly basis, that rely on MPEG standards.

The MPEG Future Manifesto claims that the MPEG mission is far from exhausted:

  • New media compression standards can offer more exciting user experiences to the benefit of the consumers that the service, distribution and manufacturing industries want to reach, and also to the benefit of new machine-based services;
  • Compression standards can facilitate the business or mission of other non-media industries and the MPEG standards group has already shown that this is possible.

Therefore, the MPEG Future Manifesto proposes a concerted effort to

  • Support and expand the academic and research community which provides the life blood of MPEG standards;
  • Enhance the value of the intellectual property that makes MPEG standards unique while facilitating their use;
  • Identify and promote the development of new compression-related standards benefitting from the MPEG approach to standardisation;
  • Further improve the connection between industry and users, and the MPEG standards group;
  • Preserve and enhance the organisation of MPEG, the standards group that can achieve the next goals because it brought the industry to this point.

MPEG Future is a group of people, many of whom are MPEG members, who care about the future of MPEG. MPEG Future is open to those who support the MPEG Future Manifesto’s principles and actions.

You may:

  • Participate in the MPEG Future activities, by subscribing to the LinkedIn MPEG Future group https://bit.ly/2m6r19y
  • Join the MPEG Future initiative, by sending an email to info@mpegfuture.org.



What is MPEG doing these days?

It is now a few months since I last talked about the standards being developed by MPEG. As the group’s dynamics are fast, I think it is time to give an update on the main areas of standardisation: Video, Audio, Point Clouds, Fonts, Neural Networks, Genomic data, Scene description, Transport, File Format and API. You will also find a few words on three explorations that MPEG is making:

  1. Video Coding for Machines
  2. MPEG-21 contracts to smart contracts
  3. Machine tool data.


Video continues to be a very active area of work. New SEI messages are being defined for HEVC, while there is high activity in VVC, which is due to reach FDIS in July 2020. Verification Tests for VVC have not been carried out yet, but the expectation is that VVC will bring a video compression factor of about 1000, as can be seen from the following table, where the bitrate reduction of each standard is measured with respect to that of the previous standard (MPEG-1’s bitrate reduction is with respect to uncompressed video; VVC’s bitrate reduction is an estimate).

Standard Bitrate reduction Year
MPEG-1 Video -98% 1992
MPEG-2 Video -50% 1994
MPEG-4 Visual -25% 1999
MPEG-4 AVC -30% 2003
MPEG-H HEVC -60% 2013
MPEG-I VVC -50% 2020

A compression factor of about 1000 is obtained by computing the inverse of 0.02*0.5*0.75*0.7*0.4*0.5.
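The arithmetic can be checked with a few lines of Python; the per-standard reductions are those of the table above:

```python
# Successive bitrate reductions, each relative to the previous standard
# (MPEG-1 is relative to uncompressed video; the VVC figure is an estimate).
reductions = [
    ("MPEG-1 Video", 0.98),
    ("MPEG-2 Video", 0.50),
    ("MPEG-4 Visual", 0.25),
    ("MPEG-4 AVC", 0.30),
    ("MPEG-H HEVC", 0.60),
    ("MPEG-I VVC", 0.50),
]

remaining = 1.0  # fraction of the uncompressed bitrate still needed
for name, r in reductions:
    remaining *= 1.0 - r
    print(f"{name:14s} cumulative compression factor ~ {1.0 / remaining:.0f}")
```

The last value printed is about 950, i.e. the roughly 1000 quoted above.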

SEI messages for VVC are now being collected in MPEG-C Part 7 “SEI messages for coded video bitstreams”. The specification of SEI messages is generic in the sense that the transport of SEI messages can be effected either in the video bitstream or at the Systems layer. Care is also taken to make transport of the messages possible on previous video coding standards.

MPEG CICP (Coding-Independent Code-Points) Part 4 “Usage of video signal type code points” has been released. This Technical Report provides guidance on combinations of video properties that are widely used in industry production practices by documenting the usage of colour-related code points and description data for video content production.

MPEG is also working on two more “traditional” video coding standards, both included in MPEG-5.

  1. Essential Video Coding (EVC) will be a video coding standard that addresses business needs in some use cases, such as video streaming, where existing ISO video coding standards have not been as widely adopted as might be expected from their purely technical characteristics. EVC is now being balloted as DIS. Experts working on EVC are actively preparing for the Verification Tests to see how much “addressing business needs” will cost in terms of performance.
  2. Low Complexity Enhancement Video Coding (LCEVC) will be a video coding standard that leverages other video codecs to improve video compression efficiency while maintaining or lowering the overall encoding and decoding complexity. LCEVC is now being balloted as CD.

MPEG-I OMAF has supported (since 2018) 3 Degrees of Freedom (3DoF), where a user’s head can yaw, pitch and roll while the position of the body is static. However, rendering flat 360° video, i.e. supporting head rotations only, may generate visual discomfort, especially when rendering objects close to the viewer.

6DoF enables translation movements in horizontal, vertical, and depth directions in addition to 3DoF orientations. The translation support enables interactive motion parallax giving viewers natural cues to their visual system and resulting in an enhanced perception of volume around them.

MPEG is currently working on a video compression standard (MPEG-I Part 12 Immersive Video – MIV) that enables head-scale movements within a limited space. In the article On the convergence of Video and 3D Graphics I have provided some details of the technology being used to achieve the goal, comparing it with the technology used for Video-based Point Cloud Compression (V-PCC). MIV is planned to reach FDIS in October 2020.


Audio experts are working with the goal of leveraging MPEG-H 3D Audio to provide a full 6DoF Audio experience, viz. one where the user can localise sound objects in the horizontal and vertical planes, and perceive changes in a sound object’s loudness as the user moves around it, sound reverberation as in a real room, and occlusion when a physical object is interposed between a sound source and the user.

The components of the system to be used to test proposals are

  • Coding of audio sources: using MPEG-H 3D Audio
  • Coding of meta-data: e.g. source directivity or room acoustic properties
  • Audio and visual presentations for immersive VR worlds (correctly perceiving a virtual audio space without any visual cues is very difficult)
  • Virtual Reality basketball court where the Immersive Audio renderer makes all the sounds in response to the user interaction of bouncing the ball and all “bounce sounds” are compressed and transmitted from server to client.

Evaluation of proposals will be done via

  • Full, real-time audio-visual presentation
  • Head-Mounted Display for “Unity” visual presentation
  • Headphones and “Max 8” for audio presentation
  • Proponent technology will run in real-time in Max VST3 plugin.

Currently this is the longest term MPEG-I project as FDIS is planned for January 2022.

MPEG Immersive Video and Audio share a number of features. The most important is that neither is a “compression standard” in the strict sense: both use existing compression technologies, on top of which immersive features are provided by metadata that will be defined by Immersive Video (part 12 of MPEG-I) and Immersive Audio (part 5 of MPEG-I). MPEG-I Part 7 Immersive Media Metadata will specify additional metadata coming from the different subgroups.

Point Clouds

Video-based Point Cloud Compression is progressing fast as FDIS is scheduled for January 2020. The maturity of the technology, suitable for dense point clouds (see, e.g. https://mpeg.chiariglione.org/webtv?v=802f4cd8-3ed6-4f9d-887b-76b9d73b3db4) is reflected in related Systems activities that will be reported later.

Geometry-based Point Cloud Compression, suitable for sparse point clouds (see, e.g. https://mpeg.chiariglione.org/webtv?v=eeecd349-61db-497e-8879-813d2147363d) is following with a delay of 6 months, as FDIS is expected for July 2020.


MPEG is extending MPEG-4 Part 22 Open Font Format with an amendment titled “Colour font technology and other updates”.

Neural Networks

Neural Networks are a new data type. Strictly speaking, MPEG is addressing the compression of neural networks trained for multimedia content description and analysis.

NNR, as MPEG experts call it, has taken shape very quickly. First aired and discussed at the October 2017 meeting, a Call for Evidence (CfE) was issued in July 2018 and a Call for Proposals (CfP) in October 2018. Nine responses were received at the January 2019 meeting, enabling the group to produce the first working draft in March 2019. A very active group is working to produce the FDIS in October 2020.

Read more about NNR at Moving intelligence around.

Genomic data

With MPEG-G parts 1-3 MPEG has provided a file and transport format, compression technology, metadata specifications, protection support and standard APIs for the access of sequencing data in the native compressed format. With the companion parts 4 (reference software) and 5 (conformance), due to reach FDIS level in April 2020, MPEG will provide a software implementation of a large part of the technologies in parts 1 to 3 and the means to test an implementation for conformity to MPEG-G.

January 2020 is the deadline for responding to the Call for Proposals on Coding of Genomic Annotations. The call responds to the need of most biological studies based on sequencing protocols to attach different types of annotations, all associated with one or more intervals on the reference sequences, resulting from so-called secondary analyses. The purpose of the call is to acquire technologies that will enable a compressed representation of such annotations.

Scene description

MPEG’s involvement in scene description technologies dates back to 1996 when it selected VRML as the starting point for its Binary Format for Scenes (BIFS). MPEG’s involvement continued with MPEG-4 LASeR, MPEG-B Media Orchestration and MPEG-H Composition Information.

MPEG-I, too, cannot do without a scene description technology. As in the past, MPEG will start from an existing specification – glTF2 (https://www.khronos.org/gltf/) – selected because it is open, extensible and widely supported, with many loaders and exporters, and because it enables MPEG to extend glTF2 capabilities for audio, video and point cloud objects.

The glTF2-based Scene Description will be part 14 of MPEG-I.


Transport is a fundamental function of real-time media, and MPEG continues to develop transport standards, not just for its own standards but also for JPEG standards (e.g. JPEG 2000 and JPEG XS). This is what MPEG is currently doing in this vital application area:

  1. MPEG-2 part 1 Systems: a WD of an amendment on Carriage of VVC in MPEG-2 TS. This is urgently needed because broadcasting is expected to be a good user of VVC.
  2. MPEG-H part 10 MMT FEC Codes: an amendment on Window-based Forward Error Correcting (FEC) code
  3. MPEG-H part 13 MMT Implementation Guidelines: an amendment on MMT Implementation Guidelines.

File format

The ISO-based Media File Format is an extremely fertile standards area that extends over many MPEG standards. This is what MPEG is doing in this vital application area:

  1. MPEG-4 part 12 ISO Base Media File Format: two amendments on Compact movie fragments and EventMessage Track Format
  2. MPEG-4 part 15 Carriage of NAL unit structured video in the ISO Base Media File Format: an amendment on HEVC Carriage Improvements and the start of an amendment on Carriage of VVC, a companion of Carriage of VVC in MPEG-2 TS
  3. MPEG-A part 19 Common Media Application Format: the start of an amendment on Additional media profile for CMAF. The expanding use of CMAF prompts the need to support more formats
  4. MPEG-B part 16 Derived Visual Tracks in ISOBMFF: a WD is available as a starting point
  5. MPEG-H part 12 Image File Format: an amendment on Support for predictive image coding, bursts, bracketing, and other improvements to give HEIF the possibility to store predictively encoded video
  6. MPEG-DASH part 1 Media presentation description and segment formats: start of a new edition containing CMAF support, events processing model and other extensions
  7. MPEG-DASH part 5 Server and network assisted DASH (SAND): the FDAM of Improvements on SAND messages has been released
  8. MPEG-DASH part 8 Session based DASH operations: a WD of Session based DASH operations has been initiated
  9. MPEG-I part 2 Omnidirectional Media Format: the second edition of OMAF has started
  10. MPEG-I part 10 Carriage of Video-based Point Cloud Compression Data: currently a CD.


This area is increasingly populated with MPEG standards:

  1. MPEG-I part 8 Network-based Media Processing is on track to become FDIS in January 2020
  2. MPEG-I part 11 Implementation Guidelines for NBMP is due to reach TR stage in April 2020
  3. MPEG-I part 13 Video decoding interface is a new interface standard to allow an external application to provide one or more rectangular video windows from a VVC bitstream.


Video Coding for Machines

MPEG is carrying out explorations in areas that may give rise to future standards: 6DoF, Dense Light Fields and Video Coding for Machines (VCM). VCM is motivated by the fact that, while traditional video coding aims to achieve the best video/image quality under certain bitrate constraints with humans as consumption targets, the sheer quantity of data being produced, or yet to be produced, by connected vehicles, video surveillance, smart cities etc. makes the traditional human-oriented scenario inefficient and unrealistic in terms of latency and scale.

Twenty years ago the MPEG-7 project started the development of a comprehensive set of audio, video and multimedia descriptors. Other parts of MPEG-7 have added standard descriptions of visual information for search and analysis applications. VCM may leverage that experience and frame it in the new context of expanded use of neural networks. Those interested can subscribe to the Ad hoc group on Video Coding for Machines at https://lists.aau.at/mailman/listinfo/mpeg-vcm and participate in the discussions at mpeg-vcm@lists.aau.at.

MPEG-21 Based Smart Contracts

MPEG has developed several standards in the framework of MPEG-21, the media e-commerce framework, addressing the issue of digital licences and contracts. Blockchains can execute smart contracts, but is it possible to translate an MPEG-21 contract into a smart contract?

Let’s consider the following use case, where Users A and B utilise a Transaction system that interfaces with a Blockchain system and a DRM system. If the transaction on the Blockchain system is successful, the DRM system authorises User B to use the media item.

The workflow is

  1. User A writes a CEL contract and a REL licence and sends both to User B
  2. User B sends the CEL and the REL to a Transaction system
  3. Transaction system translates CEL to smart contract, creates token and sends both to Blockchain system
  4. Blockchain system executes smart contract, records transaction and notifies Transaction system of result
  5. If notification is positive, Transaction system translates REL to native DRM licence and notifies User A
  6. User A sends media item to User B
  7. User B requests DRM system to use media item
  8. DRM system authorises User B

In this use case, Users A and B can communicate using the standard CEL and REL languages, while Transaction system is tasked to interface with Blockchain system and DRM system.
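To make the division of roles concrete, the workflow can be sketched in Python. Every class and method name below is hypothetical, since defining a standard way to translate MPEG-21 contracts to smart contracts is exactly what this exploration is about:

```python
# Hypothetical sketch of the CEL/REL-to-smart-contract workflow.
# None of these classes correspond to a published API.

class BlockchainSystem:
    def execute(self, smart_contract, token):
        # Step 4: execute the smart contract, record the transaction
        # and notify the Transaction system of the result.
        return True

class DRMSystem:
    def install_licence(self, native_licence):
        # Hold the native licence so User B can later be authorised (steps 7-8).
        self.licence = native_licence

class TransactionSystem:
    def __init__(self, blockchain, drm):
        self.blockchain, self.drm = blockchain, drm

    def process(self, cel_contract, rel_licence):
        # Step 3: translate the CEL contract to a smart contract and create a token.
        smart_contract = f"translated({cel_contract})"
        token = f"token-for({cel_contract})"
        ok = self.blockchain.execute(smart_contract, token)
        if ok:
            # Step 5: translate the REL licence to the native DRM format.
            self.drm.install_licence(f"native({rel_licence})")
        return ok

# Steps 1-2: User A writes the CEL contract and REL licence; User B submits them.
system = TransactionSystem(BlockchainSystem(), DRMSystem())
authorised = system.process("CEL-contract", "REL-licence")
```

The sketch shows the key design point of the use case: the users only ever handle the standard CEL and REL languages, while all blockchain- and DRM-specific translation is confined to the Transaction system.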

A standard way to translate MPEG-21 contracts to smart contracts will assure users that the smart contract executed by a blockchain corresponds to the human-readable MPEG-21 contract.

Those interested in exploring this topic can subscribe to the Ad hoc group on MPEG-21 Contracts to Smart Contracts at https://lists.aau.at/mailman/listinfo/smart-contracts and participate in the discussions at smart-contracts@lists.aau.at.

Machine tools data

Mechanical systems are becoming more and more sophisticated in terms of functionalities but also in terms of capability to generate data. Virgin Atlantic says that a Boeing 787 may be able to create half a terabyte of data per flight. The diversity of data generated by an aircraft makes the problem rather challenging, but machine tools are less complex machines that may still generate 1 terabyte of data per year. The data are not uniform in nature and can be classified into 3 areas: Quality control, Management and Monitoring.

Data are available to test what it means to process machine tool data.

Other data

MPEG is deeply engaged in compressing two strictly non-media data types: genomic data and neural networks, even though the latter is currently considered a compression add-on to multimedia content description and analysis. It is also exploring compression of machine tool data.

The MPEG work plan

The figure graphically illustrates the current MPEG work plan. Dimmed coloured items are not (yet) firm elements of the work plan.



MPEG is a big thing. Can it be bigger?


Having become the enabler of a market of devices and services worth 1.5 T$ p.a., MPEG is a big achievement. But is that a climax, or the starting point for new highs?

This is a natural question to ask for a group that calls itself “MPEG Future”. The future is still to be written and the success of MPEG will largely depend on the ability of those who attempt to write it.

This article will try to analyse the elements of an answer through the following steps:

  1. The MPEG machine as it has been running for several years
  2. The key success factors
  3. The situation today
  4. What’s next to make MPEG bigger.

The MPEG machine: input, processing and output

To understand whether MPEG can be bigger, the first thing to do is to understand how MPEG could reach this point. I will start by considering a simplified model of the MPEG standards ecosystem that I believe is what has made MPEG the big thing that we know (Figure 1).

Figure 1: The current MPEG standard ecosystem

The MPEG machine has three iterative phases of operation

  1. MPEG receives inputs from 3 sources:
    1. MPEG members
    2. Partners, i.e. committees who may be interested in developing a joint standard
    3. Customers, typically committees or industry associations who may need a standard in an area for which MPEG has expertise.
  2. MPEG processes inputs and may decide
    1. To start an exploration by studying use cases and requirements, or by exploring technologies
    2. To develop a new MPEG standard, if the exploration is successful
    3. To extend or correct an existing standard.
  3. MPEG generates outputs
    1. To the industry at large, by announcing its work plan, milestones of the work plan such as Calls for Proposal, events such as workshops, results of verification tests etc.
    2. To communicate to partners and customers about how MPEG is handling their inputs or to seek their opinion or to propose new initiatives
    3. To inform partners and customers of the progress of its standard and eventually making standards available.

MPEG’s key success factors

These are the main success factors of MPEG’s handling of the business of standardisation.

  1. Search for customers. MPEG started with the vision of “digital media standards for a global market” but it did not have – and still does not have – a “constituency” whose interest it was expected to further. It assembled the expertise required to implement its vision, but needed to find buyers for its standards. Finding customers is the main element of MPEG’s DNA.
  2. Customer care. Each industry has its own requirements, some shared with others. MPEG needed to find both the common denominator of requirements and the industry-specific requirements, and to design solutions where all industries could operate unencumbered by the requirements of others.
  3. Integrated standards. MPEG has been able to develop complete digital media solutions without leaving its customers struggling with the task of making different pieces from different sources work together. Still the single parts can be individually used.
  4. The role of research. MPEG has been a magnet attracting the best researchers in digital media from both industry and academia and is influencing many research programs.
  5. New customers without losing old ones. With MPEG-2, MPEG had “acquired” the broadcasting industry, but with MPEG-4 it acquired the IT and mobile industry. MPEG succeeded in providing the same standards to both industries even though they were more and more in competition. This has continued in other areas of MPEG standardisation.
  6. Strategic plans. MPEG has developed its program of work through collaboration with its client industries. MPEG does not have a centralised “strategic planning” function, but this function is part of its modus operandi.
  7. Business model. Companies participating in MPEG know that good technologies are rewarded by royalties and that they should invest in technologies for future standards.

The situation today

The MPEG operation has seen many years of successes, but the context has greatly changed.

  1. MPEG has an impressive portfolio of standards actively used by a global base of loyal customers.
  2. The media industry has greatly expanded in scope and its members are becoming more diverse.
  3. MPEG values the requirements of its customers, but there are now many technologies fighting for dominance.
  4. An increasing percentage of MPEG members come from research/academia or are non-practicing entities (NPEs).
  5. Acquiring new customers in new areas is getting more and more onerous.
  6. Many work items in the strategic plan depend heavily on technologies whose development path is unclear.
  7. The MPEG business model is still an asset, but may no longer serve the needs of a significant part of MPEG customers.

And now, what?

MPEG Future strives to facilitate the creation of a new environment that will enable the development of standards for media compression and distribution and their adoption for ever more pervasive media-related user experiences.

What should be the principal axes of the new MPEG age advocated by MPEG Future?

Technology? Sure, mastering technology to produce top-performing standards remains important to MPEG, but commonality and synergies of technologies are not the issue. MPEG has a large and dedicated group of experts who explore the implications of new technologies (more would be better, but this is not the issue).

Market? Sure, the market is important. Companies may be reluctant to talk too much about the market, but making standards that are driven only by research and academia is not the way to go.

The principal axis of the next phase of MPEG work should focus on how market players want to package technologies – that MPEG Future obviously advocates to be standard – to serve market needs.

MPEG Future envisages that a new group called Market Needs be created in MPEG in its new status of subcommittee, next to the existing Technical Requirements group. The latter should continue to explore the technology side of new ideas, while the former should monitor the relevance to market reality of new ideas enabled by technology. The new form of the MPEG standards ecosystem is depicted in Figure 2.

Figure 2: The new MPEG standards ecosystem

There are two main challenges for the new MPEG standards ecosystem:

  1. A Market Needs group populated with industry leaders
  2. A modus operandi where inputs from Market Needs to Technical Requirements enrich the technical exploration, and results from Technical Requirements to Market Needs are used to strengthen the market value of a new idea.

MPEG is well placed to create an effective Market Needs group because of its network of partners and customers, and is well placed to extend its modus operandi with an effective Market Needs – Technical Requirements interaction. After all, MPEG has spent the last 30 years incorporating new communities. The Market Needs community is of a new type, but this makes the challenge all the more enticing…

Posts in this thread

MPEG: vision, execution, results and a conclusion


In 1987, a few months before MPEG was established, ISO TC 97 Data Processing became the ISO/IEC Joint Technical Committee 1 (JTC 1) on Information Technology. With this operation the data processing industry, renamed information technology industry on the occasion, was able to concentrate in a single Technical Committee (TC) all the standardisation activities it needed, including the “electrical” ones, until then the exclusive purview of IEC.

Thirty-two years later, JTC 1 has become a very large TC, but inside IEC things have been all but static. A major achievement, dating back a couple of decades, has been the creation of Technical Committee 100 “Audio, Video and Multimedia systems and equipment”, which grouped activities until then scattered in different parts of IEC.

On the occasion of the IEC General Meeting that included meetings of many IEC TCs, a joint TC 100 and JTC 1 workshop was held in Shanghai on 2019/10/19. MPEG was invited to give a talk about how it develops standards and its most promising current projects. The title selected was an unambiguous “The world is going digital – MPEG did so 30 years ago”.

This article covers what I said in the first part of my speech: what drove the establishment of MPEG, how MPEG is organised, and the main results produced by MPEG. There is a very short conclusion worth reading.

Digitising the audio-visual distribution

The MPEG story is a successful case of digitisation. Thirty years ago, the audio-visual distribution industry engaged in a process that, unlike what often happens today, was based on international standards. But why did digitisation of the audio-visual distribution industry succeed? For a number of reasons.

At the end of the 1980s

  • Audio-visual data had been widely used for decades in analogue form
  • Everybody understood that digitising analogue data was technically convenient but costly, and that the use of compression could reduce the size of digital information and even multiply available capacity
  • Compression technology research was beginning to provide exploitable results
  • Just about everybody was waiting for an accessible audio-visual compression technology
    • Telcos: new interactive services and video distribution
    • Broadcasters: more efficient distribution of old and new services
    • CE Companies: new products for new distribution channels
    • IT Companies: hardware and software for digital audio-visual distribution services.

At that time standards were needed, as they are needed today. However, the prevailing attitude of the audio-visual sector was that every industry, country, company etc. should have its own “standard”. The MPEG way of standardisation prevailed over the “old way”. Today international standards are used to compress audio and moving pictures (including 3D graphics), and to deliver and consume audio-visual data.

MPEG standards were excellent in terms of quality, and standardisation made audio-visual digitisation technology accessible to all users from a plurality of sources. MPEG’s ability to attract all industries made it the melting pot of the new global audio-visual distribution that eventually became the famed “industry convergence”.

This is well represented by Figure 1, where the many independent analogue basebands of the analogue world, instead of becoming many independent digital basebands, became the single MPEG “digital baseband”.

Figure 1: Audio-visual distribution before and after MPEG

The figure applies specifically to MPEG-2 but it is also the conceptual foundation of the large majority of MPEG standards that followed MPEG-2. The repetition of the first success was made possible by the “MPEG business model”:

  1. When developing a standard, MPEG requests companies to provide their best technologies
  2. MPEG develops high-performance standards using the best technologies available at a given time
  3. Patent holders receive royalties which they may re-invest in new technologies
  4. When the time comes, MPEG can develop a new generation of MPEG standards because it can draw on new technologies, some resulting from patent holders’ re-investments.

In the early MPEG days, the “MPEG industries” were those manufacturing devices (implementation industries) and those actually using the devices for their business (client industries). Both were main contributors to the MPEG standards. Today MPEG is quite different from 30 years ago because the context in which it operates has changed substantially. There is a growing role for companies who own valuable technologies that they contribute to MPEG standards but who are unlikely to be manufacturers or users of the standards (technology industries), as depicted in Figure 2.

Figure 2: MPEG standards and industries

The MPEG organisation

What is the inside of the machine that produces the MPEG standards? Judging from Figure 3 one could think that it is a very standard machine.

 Figure 3: The MPEG organisational structure

The Requirements subgroup develops requirements for the standards to be developed; 4 technical subgroups – Systems, Video (which includes two joint groups with ITU-T), Audio and 3D Graphics – develop the standards; the Test subgroup assesses the quality; and the Communication subgroup informs the world of the results.

That’s all? Well, no. One must look first at Table 1 to see the “interaction events” between the different subgroups that took place during MPEG meetings in the last 12 months. There were 68 in total, each lasting from one hour to half a day, each involving at least 2 subgroups and some involving 3 or more.

Table 1 MPEG interaction events in 2019

              Systems  Video  Audio  3DG  Test
Requirements     6       14     4     2     1
Systems                  11     3     9
Video                           1     7     4
Audio                                 3
3DG                                         1

Table 1 describes the amount of interaction taking place inside MPEG but does not describe how interaction takes place. In Figure 4 the subgroups are represented as a circle surrounding “units” whose first letter indicates the subgroup they belong to. Units are temporary or stable entities within the subgroups that get together (indicated by the arrows) as orchestrated by the subgroup chairs meeting as “Technical Coordination”.


Figure 4: MPEG, subgroups and units

It is this ability to mobilise people with the right expertise that allows MPEG to create standards that can be used to make complete audio-visual systems, but can also be used independently (Figure 5).

Figure 5: MPEG makes integrated standards

MPEG standards are technology heavy. How does MPEG decide which technologies get into a standard?

Subgroups are tasked to decide which technologies are adopted in a standard. Because standards are so intertwined, ~10% of official meeting time is used to keep members informed of what is being developed/decided in subgroups, through massive use of IT tools. MPEG members can keep themselves informed of what is being discussed and/or decided, where and when.

The purpose of MPEG plenaries is to review and approve subgroup decisions. These may be challenged at MPEG plenaries (and this has happened less than 10 times in 30 years). Challenges are addressed by applying thoroughly and conservatively the ISO/IEC definition of consensus.

There is an additional aspect that must be considered: MPEG does not have a constituency because if it had one it would be forced to consider the interests of that industry to the possible detriment of other industries.

Therefore, MPEG has partners, with which it develops standards, and customers, for which it develops standards. This is shown in Table 2.

Table 2 MPEG partners (P) and customers (C)

Committee Status Standards
AES C 2, D
ARIB C 2, 4, H, DASH
ATSC C 2, 4, H, DASH
CTA C Several
DVB C 2, 4, H, DASH
EBU C Several
IEC TC 100 C 2, 4, H
ISO TC 276 P G
ITU-T SG 16 P 2, 4, H, I
JPEG C 4, 7, H
Khronos C I
SCTE C Several
SMPTE C Several
TTA C 2, 4, H, DASH
W3C P Several

The success of MPEG standards

So far MPEG has produced ~180 standards, an average of 6 standards per year. In practice the output is much higher because a standard is a living body that typically evolves through many Amendments and is published multiple times incorporating those Amendments and Corrigenda. Figure 6 shows how productive MPEG has been: in spite of being a working group it has produced more standards than any other JTC 1 Subcommittee.

Figure 6: MPEG has published more standards than any other JTC 1 SC

Table 3 lists the 7 areas in which MPEG standards can be classified and maps each area to the corresponding MPEG standards. It is easy to see that compression and transport are the areas most populated by MPEG standards.

Table 3 MPEG areas and standards

Area Standards
Compression 1, 2, 4, 5, C, D, G, H, I
Descriptor compression 7
Content e-commerce 21
Combinations of content formats A
Systems & transport 1, 2, 4, B, DASH, H, I, G
Multimedia platforms E, M
Device & application interfaces M, IoMT, U, V

Table 4 assesses the economic value MPEG has brought to the device manufacturing and service industries. The data refer to 2018. Roughly speaking MPEG-enabled devices are worth ~1 trillion USD p.a. and MPEG-enabled services are worth 0.5 trillion USD p.a.

Table 4 The impact of MPEG standards: 1 T$ (devices), 0.5 T$ (services)

Device manufacturing B$ Services B$
Smartphones 522 Pay-TV 227
Tablets 145 TV advertising 177
Laptops 103 Games, films and music 138
TV sets 100 TV production (US) 40
Video surveillance 37 OTT TV 38
Set Top Boxes 20 Social media 34
Digital cameras 18.9 Enterprise video 13.5
In-vehicle infotainment 15 In-flight Entertainment 5
Video conferencing 5 TV subscriptions 1.5
Commercial drones 1.5 SVOD subscriptions 0.5

We should not forget that the life of a large share of the world population is constantly and pervasively affected by MPEG standards.


MPEG is a unique machine that has produced and counts on producing standards affecting the life of billions of people and wide swathes of industry. Industry and consumers have the right to expect that this machine is allowed to do its work and that no improvised apprentice tampers with it.


Who “decides” in MPEG?

If MPEG were a typical company, the answer to this question would be simple. Persons in charge of different levels of the organisation “decide”. But MPEG is not a company and there is no chain of command where A tells B to do C or else.

Decisions are made, but how? As an autocracy, an oligarchy or a democracy? To answer these questions, let’s first see how the work in MPEG is organised.

At each MPEG meeting, the convenor chairs the 3 plenary sessions.

  1. Monday morning: the results of the ad hoc groups established at the previous meeting are presented and the work of the week is organised. The meeting typically lasts 3 hours. Typically no “decisions” are made.
  2. Wednesday morning: the results of the first two days of work are presented and the work of the rest of the week is organised. Comments may be made and questions for clarification asked. Typically the meeting schedule for the next two years is approved, based on the recommendation of a group called “Convenor’s Advisors” which assesses proposals for meeting venues. This can hardly be called a “decision”. The meeting typically lasts 2 hours.
  3. Friday afternoon: the recommendations from subgroups are reviewed by the plenary. Typically they are read, possibly edited and, as a rule, accepted, unless there is an exception I will talk about later. One can say that the plenary “decides”, but actually it ratifies. The meeting typically lasts 4 hours.

So, where are decisions made?

To answer this question let’s see how the technical work is done in the subgroups: Requirements, Systems, Video (including groups in collaboration with ITU-T), Audio, 3D Graphics and Tests. This is the rough assignment of responsibilities:

  1. Requirements receives proposals for new work, manages the explorations that lead to issuing Calls for Proposals, participates in the assessment of test results and eventually in the definition of profiles. Requirements makes decisions.
  2. Technology development groups – Systems, Video, groups in collaboration with ITU-T, Audio and 3D Graphics – take the results of the test, develop draft specifications and manage the standard approval process (note that tests are not always required to start a new project, but a requirements definition phase is always present). Technology development groups make decisions.
  3. Tests carries out the growing number of tests required for the development of visual standards. Tests does not make decisions; it simply provides the results of the tests to the appropriate group.

It is clear that decisions are really made by subgroups, but how?

The main ingredients of decisions by technology development groups are input contributions by members and their assessment by the specific subgroup. Evidence must be brought that a technology does what the proponent claims it does, and the main tool to achieve this is the “Core Experiment”. This is carried out by the proponent and at least one other independent participant. The results from the two must be compatible and must prove that the technology brings gains for it to be accepted into the standard.

The decisions of the technology development groups are not easy, but getting to a decision can be achieved in a structured way because technology plays an overriding role. Definitely less structured is the process managed by the Requirements group, done just by itself or jointly with a technical group. The decisions to be made are of the type: “does this proposal for new work make sense” or “is this profile needed”?

The question “does this proposal make sense” leaves ample margins for decision because a new technology may be in competition with another existing technology, can be immature, addresses a questionable need etc. MPEG tends to be open to new proposals based on the principle that if someone needs something, why should those unconcerned prohibit the work? After all, the task of MPEG is not to make a “decision” for the new work to start, but only to make a preliminary assessment of a proposal so that a formal proposal for a new work item can be made and voted by National Bodies.

So far, all MPEG proposals for new standards have passed the ISO acceptance criterion of simple majority of P members in the committee approving the proposal.

Adoption of a profile can also be a really tricky matter. Profiles are levels of performance required by certain application domains, typically enabled by the set of technologies included in the profile. How much is the profile driven by technology and how much by the market? Discussions may drag on for a long time, but eventually a decision must be made. This one is a real decision, because the profile becomes part of the standard.

In ISO, to which MPEG belongs, decisions must be made by consensus. The ISO definition of consensus is

General agreement, characterized by the absence of sustained opposition to substantial issues by any important part of the concerned interests and by a process that involves seeking to take into account the views of all parties concerned and to reconcile any conflicting arguments.

NOTE    Consensus need not imply unanimity.

Therefore MPEG subgroups make decisions based on this definition of consensus.

What about the exception made at plenaries I was talking about before? It may happen – and it probably did happen less than 10 times in the 30 years of MPEG history – that the party whose wish was overruled by a decision made by a subgroup, based on the above definition of consensus, challenges the subgroup consensus at the Friday plenary. The plenary applies the definition of consensus in a very conservative way to determine whether the challenge has to be accepted or the subgroup decision is confirmed.

We can now see that the question “is MPEG an autocracy, an oligarchy or a democracy?” is the wrong question. The right one is the title of this article: “who decides in MPEG?”. The answer is: MPEG members, if they want to decide. If not, the chairs or the convenor have the task of finding a way to a consensus, or else declaring that no consensus was found. Then, no decision is made.


What is the difference between an image and a video frame?

The question looks innocent enough. A video is a sequence of images (called frames) captured and eventually displayed at a given frequency. Indeed, by stopping at a specific frame of the sequence, a single video frame, i.e. an image, is obtained.

If we talk of a sequence of video frames, that would always be true. It would also be true if an image compression algorithm (an “intra-frame” coding system) were applied to each individual frame. Such a coding system may not give an exciting compression ratio, but it can serve very well the needs of some applications, for instance those requiring the ability to decode an image from just one compressed image. This is the case of Motion JPEG (now largely forgotten), Motion JPEG 2000 (used for movie distribution and other applications) and some profiles of MPEG video coding standards used for studio or contribution applications.

If the application domain requires more powerful compression algorithms, the design criteria are bound to be different: interframe video compression, which exploits the redundancy between frames, must be used. In general, however, if video is compressed using an interframe coding mode, a single frame may very well not be an image, because its pixels may have been encoded using pixels of some other frames. This can be seen in the image below, dating back 30 years to MPEG-1 times.

The first image (I-picture) at the left is compressed using only the pixels in the image itself. The fourth one (P-picture) is predictively encoded starting from the I-picture. The second and third images (B-pictures) are interpolated using the first and the fourth. This continues in the following frames, where the sequence can be P-B-B-B-P: the last P-picture is predicted from the first P-picture, and 3 interpolated pictures (B-pictures) are created from the first and last P-pictures.

All MPEG interframe coding schemes – MPEG-1, MPEG-2, MPEG-4 Visual and AVC, MPEG-H (HEVC), and MPEG-I (VVC) – include intra-coded pictures. This is needed because in broadcasting applications the time it takes for a decoder to “tune in” must be as short as possible. Having an intra-coded picture, say, every half a second or every second, is a way to achieve that. Intra-coded pictures are also helpful in interactive applications where the user may wish to jump anywhere in a video.

Therefore, some specific video frames in an interframe coding scheme can be images.
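The I/P/B structure described above can be sketched in a few lines of Python. This is a hypothetical illustration, not part of any MPEG specification: the function names and the GOP string notation are my own.

```python
# Sketch of an MPEG-1-style group of pictures (GOP), given in display order
# as a string such as "IBBPBBP". I-pictures are self-contained images,
# P-pictures are predicted from the previous I/P anchor, and B-pictures are
# interpolated from the nearest anchors on both sides.

def gop_dependencies(pattern):
    """Map each frame index to the anchor frames (I/P) it is predicted from."""
    anchors = [i for i, t in enumerate(pattern) if t in "IP"]
    deps, prev = {}, None
    for i, t in enumerate(pattern):
        if t == "I":
            deps[i] = []                      # intra-coded: a stand-alone image
        elif t == "P":
            deps[i] = [prev]                  # predicted from the previous anchor
        else:                                 # "B": bidirectionally interpolated
            nxt = next((a for a in anchors if a > i), None)
            deps[i] = [prev] if nxt is None else [prev, nxt]
        if t in "IP":
            prev = i
    return deps

def decode_order(pattern):
    """Transmission/decoding order: each anchor is sent before the B-pictures
    that depend on it, so the decoder always has both references available."""
    order, pending_b = [], []
    for i, t in enumerate(pattern):
        if t == "B":
            pending_b.append(i)
        else:
            order.append(i)
            order.extend(pending_b)
            pending_b = []
    order.extend(pending_b)
    return order

# Only the I-picture is decodable on its own, which is why a decoder
# "tunes in" at an intra-coded picture.
print(gop_dependencies("IBBP"))   # {0: [], 1: [0, 3], 2: [0, 3], 3: [0]}
print(decode_order("IBBPBBP"))    # [0, 3, 1, 2, 6, 4, 5]
```

Note how the decoding order differs from the display order: the anchor a B-picture depends on must travel ahead of it in the bitstream, which is exactly why a frame extracted from an interframe stream is not, in general, a self-contained image.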

Why don’t we make the algorithms for image coding and intra-coded pictures of an interframe coding scheme the same?

We could, but this has never been done, for several reasons:

  1. The intra-coding mode is a subset of a general interframe video coding scheme. Such schemes are rather complex; over the years many coding tools have been designed, and when the intra-frame coding mode is specified some tools are used simply because “they are already there”.
  2. Most applications employing an interframe coding scheme have strict real time decoding requirements. Hence complexity of decoding tools plays a significantly more critical role in an interframe coding scheme than in a still picture coding scheme.
  3. A large number of coding tools in an interframe video coding scheme are focused on motion-related processing.
  4. Because capturing video collects far more data than capturing images, the impact of coding-efficiency improvements is different.
  5. Real time delivery requirements of coded video have led MPEG to develop significantly different System Layer technologies (e.g. DASH) and make different compromises at the system layer.
  6. Comparisons between the performance of the still picture coding mode of the various interframe coding standards with available image coding standards have not been performed in an environment based on a design of tests agreed among experts from all areas.
  7. There is no proven need or significant benefit of forcing the still picture coding mode of an MPEG scheme to be the same as any image compression standard developed by JPEG or vice-versa.

There is no reason to believe that this conclusion will not be confirmed in future video coding systems. So why are there several image compression schemes that have no relationship with video coding systems? The answer is obvious: the industry that needs compressed images is different from the industry that needs compressed video. The requirements of the two industries are different and, in spite of the commonality of some compression tools, the specifications of the image compression schemes and of the video compression schemes turn out to be different and incompatible.

One could say that the needs of traditional 2D image and video are well covered by existing standards. But what about new technologies that enable immersive visual experiences?

One could take a top-down philosophical approach. This is intellectually rewarding but technology is not necessarily progressing following a rational approach. The alternative is to take a bottom-up experiential approach. MPEG has constantly taken the latter approach and, in this particular case, it acts in two directions:

  1. Metadata for Immersive Video (MIV). This represents a dynamic immersive visual experience with 3 streams of data: Texture, Depth and Metadata. Texture information is obtained by projecting the scene on a series of suitably selected planes. Texture and Depth are currently encoded with HEVC.
  2. Point Clouds with a large number of points can efficiently represent immersive visual content. Point clouds are projected on a fixed number of planes and projections can be encoded using any video codec.

Both #1 and #2 coding schemes include the equivalent of video intra-coded pictures. As for video, these are designed using the tools that exist in the equivalent of video inter-coded pictures.
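The projection idea in #2 can be sketched as follows. This is a toy illustration under my own assumptions (one plane, integer pixel coordinates), not the actual MPEG point cloud compression algorithm: each 3D point is projected orthographically onto a plane, keeping the nearest point per pixel, which yields a depth map and a texture map that a 2D video codec could then compress.

```python
# Toy orthographic projection of a point cloud onto the z=0 plane, viewed
# along +z. Produces a depth map and a texture (colour) map, i.e. the kind
# of 2D planes that a video codec such as HEVC could then compress.

def project_to_plane(points, width, height):
    """points: iterable of (x, y, z, colour) with integer x, y.
    Keeps the nearest point per pixel, as a depth buffer would."""
    depth = [[None] * width for _ in range(height)]
    texture = [[None] * width for _ in range(height)]
    for x, y, z, colour in points:
        if 0 <= x < width and 0 <= y < height:
            # Keep only the point closest to the projection plane.
            if depth[y][x] is None or z < depth[y][x]:
                depth[y][x] = z
                texture[y][x] = colour
    return depth, texture

cloud = [(0, 0, 5, "red"), (0, 0, 2, "green"), (1, 0, 7, "blue")]
depth_map, texture_map = project_to_plane(cloud, 2, 1)
print(depth_map)    # [[2, 7]]
print(texture_map)  # [['green', 'blue']]
```

In the real standard a fixed number of such planes is used and points occluded on one plane are recovered from another; the sketch only shows why a projection turns 3D content into ordinary 2D images plus depth.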


MPEG and JPEG are grown up


A group of MPEG and JPEG members has developed a proposal seeking to leverage the impact MPEG and JPEG standards have had on thousands of companies and billions of people all over the world.

A few numbers related to 2018 tell a long story. At the device level, the installed base of MPEG-enabled devices was worth 2.8 trillion USD, and the value of devices sold in that year was in excess of 1 trillion USD. At the service level, the revenues of the PayTV industry were ~230 billion USD and the total turnover of global digital terrestrial television was ~200 billion USD.

Why we need to do something

So far MPEG and JPEG have been hosted by Subcommittee 29 (SC 29). The group thinks that it is time to revitalise the 27-year-old SC 29 structure. To achieve this goal, consider the following:

  1. MPEG has been and continues to be able to conceive strategic visions for new media user experiences, design work plans in response to industry needs, develop standards in close collaboration with client industries, demonstrate their performance and promote their use.
  2. For many years MPEG and JPEG have provided standards to operate and innovate the broadcast, broadband and mobile distribution industries, and the imaging industry, respectively;
  3. MPEG and JPEG have become the reference committee for their industries;
  4. MPEG reference industries’ needs for more standards continue to grow causing a sustained increase in MPEG members attending (currently 600);
  5. JPEG and MPEG have a track record of widely deployed standards developed for and in collaboration with other committees that require a more appropriate level of liaison;
  6. MPEG and JPEG operate as virtual SCs, each with a structure of interacting subgroups covering the required areas of expertise, including a strategic planning function;
  7. MPEG and JPEG have independent and universally recognised strong brands that must be preserved and enhanced;
  8. MPEG and JPEG are running standardisation projects whose operation must be guaranteed;

A Strengths-Weaknesses-Opportunities-Threats (SWOT) analysis has been carried out on MPEG. The results point to the need for MPEG

  1. To achieve an SC status compatible with its wide scope of work and large membership (1500 registered members and 600 attending physical meetings)
  2. To retain its scope and structure slightly amended to improve the match of standards with market needs and leverage internal talents
  3. To keep and enhance the MPEG brand.

What should be done

This is the proposal

  1. MPEG becomes a JTC 1 SC (SC 4x) with the title “MPEG compression and delivery of Moving Pictures, Audio and Other Data”;
  2. JPEG becomes SC 29 with the title “JPEG Coding of digital representations of images”;
  3. MPEG/JPEG subgroups become working groups (WG) or advisory groups (AG) of SC 4x/SC 29. MPEG adds a Market Needs AG;
  4. Both SC 4x and SC 29 retain existing collaborations with ITU-T and their collaborative stance with other committees/bodies, e.g. by setting up joint working groups (JWG);
  5. SC 4x may create, in addition to genomics, WGs/JWGs for compression of other types of data with relevant committees, building on MPEG’s common tool set;
  6. If selected as secretariat (a proposal for a new SC 4x requires that a National Body be ready to take the secretariat), the Italian National Body (ITNB) is willing to make the following steps to expedite a smooth transition:
    1. Nominate the MPEG convenor as SC 4x chair;
    2. Nominate an “SC 4x chair elect” from a country other than Italy using criteria of 1) continuity of MPEG’s vision and strategy, 2) full understanding of the scope of SC 4x and 3) record of performance in the currently held position;
    3. Call for nominations of convenors of SC 4x working groups (WG). We nominate current subgroup chairs as convenors of the respective WG

The benefits of the proposal

The proposal brings a significant number of benefits

  1. It has a positive impact on the heavy load of MPEG and JPEG work plans:
    1. It supports and enhances MPEG work plan, as MPEG is moved to SC 4x, retaining its proven structure, modus operandi and relationships with client industries in scope;
    2. It supports and enhances JPEG work plan, as SC 29 elevates JPEG SGs to WGs, retaining its proven modus operandi and relationships with client industries in scope;
  2. It preserves and builds upon the established MPEG and JPEG brands;
  3. It retains and improves all features of MPEG success, in particular its structure and modus operandi:
    1. SC 4x holds its meetings collocated with the meetings of its WGs and AGs requesting to meet;
    2. SC 4x facilitates the formation of break-out groups during meetings and of ad hoc groups in between meetings;
    3. SC 4x exploits inter-group synergies by facilitating joint meetings between different WGs and AGs during physical meetings;
    4. SC 4x promotes the use of all ICT tools that can improve its effectiveness, e.g. teleconferencing and MPEG-specific IT tools to support standards development.
  4. It enhances MPEG’s and JPEG’s collaboration stance with other committees via Joint Working Groups;
  5. It improves MPEG’s supplier-client relationship with its client industries thanks to its new status;
  6. It adds formal governance to the well-honed MPEG and JPEG structures;
  7. It balances continuity and renewal of MPEG leadership at all levels;
  8. It formalises MPEG’s and JPEG’s high-profile standard reference roles for the video and image sectors, respectively.

The title and scope of SC 4x

Upon approval by JTC 1 and ratification by the TMB, SC 4x will assume the following

  1. Title: MPEG compression and delivery of moving pictures, audio and other data;
  2. Scope: Standardisation in the area of efficient delivery of moving pictures and audio, their descriptions and other data
    • Serve as the focus and proponent for JTC 1’s standardisation program for broadcast, broadband and mobile distribution based on analysis, compression, transport and consumption of digital moving pictures and audio, including conventional and immersive, generated or captured by any technology;
    • Serve as the focus and co-proponent for JTC 1’s standardisation program on efficient storage, processing and delivery of genomic and other data, in agreement and collaboration with the relevant committees.

The SC 4x structure

  1. WG 11 subgroups become:
    1. SC 4x Advisory Groups (AG) – do not produce standards;
    2. SC 4x Working Groups (WG) – produce standards;
  2. Minor adjustments to the WG 11 subgroup structure are made to strengthen productivity:
    1. A new Market Needs AG to enhance the alignment of standards with market needs (to be installed at an appropriate time after the establishment of SC 4x);
    2. Genome Coding moves from a Requirements activity to WG level;
    3. SC 4x retains WG 11’s collaborative stance with other committees/bodies, e.g. Collaborative Teams with ITU-T on Video Coding and Joint Working Groups with ISO/IEC committees to carry out commonly agreed projects.

Joint Working Groups (JWG) may be established if the need for common standards with other ISO/IEC committees is identified.

SC 4x will constantly monitor the state of standards development and adapt its structure accordingly, including by establishing new WGs, e.g. on standards for other data types.

SC 4x meetings

  1. For the time being, to effectively pursue its standardisation goals, SC 4x will continue the practice of quarterly meetings collocated with its AGs and WGs (same time/place), organised as an “SC 4x week” virtually the same as that of MPEG. Extended plenaries are joint meetings of all WGs/AGs. SC 4x plenaries are held on the Sunday before, and for an hour after, the extended plenary on Friday. The last plenary deals with matters such as liaisons and meeting schedules that used to be handled by WG 11 plenaries.
| Day | Time | Meeting | Chaired by |
| --- | --- | --- | --- |
| Sunday | 14-16 | SC 4x plenary | Chair |
| Monday | 09-13 | Extended SC 4x plenary to review AhG reports and plan for the week | Chair elect |
| Wednesday | 09-11 | Extended SC 4x plenary to review work done so far by AGs/WGs and plan for the rest of the week | Chair elect with Tech. Coord. AG Convenor |
| Friday | 14-17 | Extended SC 4x plenary to review and approve recommendations produced by AGs/WGs | Chair |
| Friday | 17-18 | Plenary to act on matters requiring SC 4x intervention | Chair |
  2. WGs and AGs could have longer meeting durations (i.e. start before the first SC 4x meeting);
  3. A thorough review of all details of meeting sessions, agendas, document registration etc. will be carried out with the involvement of all affected experts;
  4. Institut Mines Télécom’s unique services, offered for the last 15 years, would be warmly welcomed to preserve and continually improve WG 11’s operating efficiency with the involvement of all WG/AG members.

Title and scope of SC 29

(the following is a first attempt at defining the SC 29 title and scope after creation of SC 4x)

Upon approval by JTC 1, SC 29 will change its title and scope as follows:

  1. Title: JPEG coding of digital representations of images
  2. Scope: Development of international standards for
    • Efficient digital representations, processing and interchange of conventional and immersive images;
    • Efficient digital representations of image-related sensory and digital data, such as medical and satellite images;
    • Support to digital image coding applications;
    • Maintenance of ISO/IEC 13522.

The structure of SC 29

  1. WG 1 subgroups become:
    1. SC 29 Advisory Groups (AG) – do not produce standards;
    2. SC 29 Working Groups (WG) – produce standards;
  2. SC 29 may set up Joint Working Groups, e.g. with SC 4x and TC 42, to carry out commonly agreed projects.

(the following is a first attempt at defining the SC 29 structure after creation of SC 4x, using the current SG structure of WG 1)

  1. SC 29 meetings: similar organisation as currently done by JPEG.

Why do MPEG and JPEG not work together?

This is a reasonable question, and it has a simple answer: they can and should. However, the following should be taken into consideration.

In an MPEG moving picture codec there is always a still picture coding mode: a mode of the general moving picture coding scheme whose tools are a subset of the tools of the complete moving picture coding scheme.

No need or significant benefit has ever been found that would justify adopting a JPEG image coding scheme as the still picture coding mode of an MPEG moving picture coding scheme. The same is true of other schemes.

There is no reason to believe that the same should not apply to such media types as point cloud and lightfield. The still picture coding mode of a dynamic (time dependent) point cloud or lightfield coding scheme uses coding tools from the general coding scheme, not those independently developed for images.

Image compression schemes have their own market, separate from the market for moving picture compression schemes. Often the market for images anticipates the market for moving pictures. That is why independent JPEG standards can be useful.
