An analysis of the MPAI framework licence

http://www.mpai.community/

Introduction

The main features of MPAI – Moving Picture, Audio and Data Coding by Artificial Intelligence – are its focus on the efficient representation of moving pictures, audio and data in general using Artificial Intelligence technologies, and its intention to find a point where the interests of holders of IPR in technologies essential to a standard and the interests of users of the standard are equally upheld.

In this article I will analyse the principal tool devised by MPAI to achieve the latter goal, the framework licence.

The MPEG process

The process used by MPEG to develop its standards used to be simple and effective. As there were just too many IPRs in video coding back in the late 1980s, MPEG did not even consider the possibility of developing a royalty-free standard. Instead, it assessed any technology proposed and added it to the standard if the technology provided measurable benefits. MPEG did so because it expected that the interests of industry users and IPR holders would converge to a point satisfactory to both. This did indeed happen, by reviving the institution of the patent pool.

Therefore, the MPEG process can be summarised by this sequence of steps:

Define requirements – Call for technologies – Receive patent declarations – Develop the standard – (Develop licence)

The brackets around “Develop licence” indicate that MPEG had no role in that step. On the other hand, the entire process relied on the expectation that patent holders could be remunerated.

Lately, many, including myself, have pointed out that the last step of the process has stalled. The fact that all patent holders individually declare that they are willing to license their patents on FRAND terms does not automatically translate into the only thing users need – a licence to use the standard.

A tour of existing licences

Conceptually, a licence can be separated into two components. The first describes the business model that the patent holders apply to obtain their remuneration. The second determines the levels of remuneration.

Let’s take two relevant examples: the HEVC summary licences published by the MPEG LA and HEVC Advance patent pools on their web sites, in which I have hidden the dollar values, percentages and dates and replaced them with variables. Below you will find my summary. My wording is deliberately incomplete, because my intention is to convey the essence of the licences, and probably imperfect, as I am not a lawyer. If you think I have made a serious mistake or an important omission, please send an email to Leonardo.

MPEG LA

  • Licence has worldwide coverage and includes right to make, use and sell
  • Royalty paid for products includes right to use encoders/decoders for content
  • Products are sold to end users by a licensee
  • Vendors of products that contain an encoder/decoder may pay royalties on behalf of their customers
  • Royalties of R$/unit start from a date and apply if a licensee sells more than N units/year, as long as the royalties paid are below a cap of C$
  • The royalty program is divided into terms, the first of which ends on a date
  • The percent increase from one term to another is less than x%

HEVC Advance

  • Licence applies to
    • Encoder/decoders in consumer products with royalty rates that depend on the type of device
    • Commercial content distribution (optical discs, video packs etc.)
  • Commercial products used to create or distribute content and streaming are not licensed
  • Licence covers all of licensor(s)’ essential claims of the standard practiced by a licensee
  • Royalty rates
    • Rates depend on territory in which consumer product/content is first sold (y% less in less developed countries)
    • Rates include separate, non-additive discounts of z% for licensees that are in compliance; standard rates apply if a licensee is not in compliance
    • Base rates for baseline profiles and extended rates for advanced profiles
    • Optional features (e.g. SEI messages) have a separate royalty structure
    • Rates and caps will not increase more than z% for any renewal term
    • Multiple cap categories (different devices and content) and single enterprise cap
    • All caps apply to total royalties due on worldwide sales for a single enterprise
    • Standard rates are not capped
    • Annual Credit of E$ applies to all enterprises that owe royalties and are in compliance, provided in four equal quarterly installments of E$/4 (25%) each
  • Licences
    • Licences are for n-year non-terminable increments, under the same n-year term structure
    • The initial n-year term ends yyyy/01/01 and the first n-year renewal term ends yyyy+n/01/01

What is a framework licence?

A framework licence is the business model part of a licence. The MPEG LA and HEVC Advance licences, in the form summarised above, can be taken as examples of framework licences.

Therefore, a framework licence does not, and in fact shall not (for antitrust reasons), contain any dollar values, percentages or dates.
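To make the distinction concrete, here is a minimal sketch in Python (with entirely hypothetical names, not taken from any actual licence) of how the business model part could be represented separately from the remuneration levels: the structure fixes which royalty elements exist (per-unit rates, thresholds, caps, terms), while every monetary value, percentage and date is deliberately left unset, as a framework licence requires.

from dataclasses import dataclass
from typing import List, Optional
from datetime import date

# Hypothetical, illustrative model: the structure below is the framework
# licence (the business model); the Optional fields are the remuneration
# levels, which a framework licence deliberately leaves unset.

@dataclass
class RoyaltyElement:
    description: str                              # e.g. "per-unit royalty on decoders"
    rate_per_unit: Optional[float] = None         # R$ – set only in the final licence
    annual_unit_threshold: Optional[int] = None   # N units/year before royalties apply
    annual_cap: Optional[float] = None            # C$ – set only in the final licence

@dataclass
class LicenceTerm:
    end_date: Optional[date] = None               # term boundary – set only in the final licence
    max_increase_pct: Optional[float] = None      # x% ceiling on increases between terms

@dataclass
class FrameworkLicence:
    coverage: str                                 # e.g. "worldwide; right to make, use and sell"
    elements: List[RoyaltyElement]
    terms: List[LicenceTerm]

    def is_framework_only(self) -> bool:
        """True if no monetary values, thresholds, percentages or dates are filled in."""
        no_values = all(e.rate_per_unit is None and e.annual_unit_threshold is None
                        and e.annual_cap is None for e in self.elements)
        no_dates = all(t.end_date is None and t.max_increase_pct is None for t in self.terms)
        return no_values and no_dates

# The MPEG LA summary above, stripped of its values, fits this shape.
fwl = FrameworkLicence(
    coverage="worldwide; right to make, use and sell",
    elements=[RoyaltyElement("per-unit royalty on products sold to end users")],
    terms=[LicenceTerm()],
)
assert fwl.is_framework_only()

Only when the actual licence is negotiated would the unset fields be replaced by real dollar amounts, percentages and dates.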

How does MPAI intend to use framework licences?

MPAI brings the definition of the business model part of a licence (which used to be done after an MPEG standard was developed) to the point in the process between the definition of requirements and the call for technologies. In other words, the MPAI process becomes

Define requirements – Define framework licence – Call for technologies – Receive patent declarations – Develop the standard – (Develop licence)

As was true for MPEG, MPAI has no role in the last step in brackets.

Let’s have a closer look at how a framework licence is developed and used. First of all, active MPAI members, i.e. those who will participate in the technical development, are identified. The active members develop the framework licence and adopt it by a qualified majority.

Members who make a technical contribution to the standard must make a two-fold declaration that they will:

  1. make available the terms of the licence related to their essential patents according to the framework licence, alone or jointly with other IPR holders (i.e. in a patent pool), after the approval of the standard by MPAI and in no event after the commercial implementation of the standard.
  2. take a licence for the essential patents held by other MPAI members, if used, within the term specified in the framework licence from the publication by IPR holders of their licence terms. Evaluation of essentiality shall be made by an independent chartered patent attorney who never worked for the owner of such essential patent or by a chartered patent attorney selected by a patent pool.

What problem a framework licence solves

The framework licence approach is not a complete solution to the problem of providing a timely licence for data representation standards; it is a tool that facilitates reaching that goal.

When MPAI decides to develop a standard, it must know what purpose the standard serves, in other words it must have precise requirements. These are used to call for technologies but can also be used by IPR holders to define in a timely fashion how they intend to monetise their IP, in other words to define their business model.

Of course, the values of royalties, caps, dates etc. are important, and the IPR holders in a patent pool will need a significant amount of discussion to achieve a common view. However, unlike in the HEVC case above, the potentially very significant business model differences no longer influence those discussions.

Users of the standard can know in advance how the standard can be used. The two HEVC cases presented above show that licences can have very different business models and that some users, if they knew the business model, might be discouraged from using – and therefore not wait for – the standard. Indeed, a user is not only interested in the functional requirements but also in the commercial requirements. The framework licence tells the usage conditions, not the cost.

However, some legal experts think that the framework licence could include a minimum and maximum value of the licence without violating regulatory constraints. Again, this would not tell a user the actual cost, but a bracket.

Further readings

More information on the framework licence can be found on the MPAI web site where the complete MPAI workflow is described, or from the MPAI Statutes.


MPAI – do we need it?

http://www.mpai.community/

Introduction

Sunday last week I launched the idea of MPAI – Moving Picture, Audio and Data Coding by Artificial Intelligence – an organisation with the twofold goal of 1) developing Technical Specifications of coded representation of moving pictures, audio and data, especially using artificial intelligence and 2) bridging the gap between technical specifications and their practical use, especially using “framework licences”.

The response has been overwhelming, but some have asked me: “Why do we need MPAI?”. This is indeed a basic question and, in this article, I intend to provide my answer.

The first reason

Just as VC-1, VP8/VP9 and AV1 were developed because MPEG and/or its ecosystem were not providing the solutions the market demanded, MPAI responds to the industry’s need for usable standards that allow industry and consumers to benefit from technological progress.

The second reason

The body producing standards of such industrial and social importance should be credible. MPEG is no more, and its unknown SC 29 replacement operates in ISO, an environment discredited by its lack of governance. The very fact that a determined and well-connected group of ISO entities could hijack a group as successful as MPEG, with its track record of serving the industry, is proof that, at the macro level, major decisions in ISO are made because some powers that be decide that certain things should go in a direction convenient to them. Then, at the micro level, common-sense decisions like preserving the MPEG plenaries, where the conclusions of the different groups are integrated into a single whole, are blocked because “they are not in the directives” (as if hijacking MPEG was in the directives).

The third reason

The standards produced by the body should be usable. I have already written that, about 15 years ago, at what was probably the pinnacle of MPEG’s success, I was already anticipating the evolution of the industry that we are witnessing today. However, my efforts to innovate the way MPEG developed standards were thwarted. I tried to bring the situation to public attention (see for instance …). All in vain. The result has been that the two main components of MPEG-H, the latest integrated MPEG project – Part 2 Video (HEVC) and Part 3 Audio (3D Audio) – have failed miserably. The hope of seeing a decent licence for Part 3 Video (VVC) of the next integrated MPEG project – MPEG-I – lies in the mists of an unknown future and may well tread the same path.

It could well happen that, in a burst of pride, the VVC patent holders will want to show that they can get their act together and deliver a VVC licence, but who guarantees that, for the next standard, the same HEVC/3D Audio pantomime will not be staged again? Can the industry – and billions of consumers – continue to be the hostage of a handful of string pullers acting in the dark?

The fourth reason

We need a North Star guiding the industry in the years to come. Thirty-two years ago, the start of MPEG was a watershed. Digital technologies promised to provide more attractive moving pictures and audio, more conveniently and with more features, to the many different and independent industries that were used to handling a host of incompatible audio-visual services. Having been present then and being present now, I can testify that MPEG has delivered much more than it promised. By following the MPEG North Star, industry got a unified technology platform on which different industries can build and extend their business.

MPAI is the new watershed. I do not know whether it is bigger or smaller than the one of 32 years ago, probably bigger. Artificial Intelligence technologies demonstrate that it is possible to do better and more than traditional digital technologies. But there is a difference. In the last 32 years digital audio and video have offered wonders, yet they kept the two information streams isolated from the rest of the information reaching the user. With artificial intelligence, audio and video have the potential to seamlessly integrate with the many other information types handled by a device on a unified technology platform. How? Leave it to the digital media and artificial intelligence experts, who have started to become an integrated community, to open their respective domains to other technologies.

Forget the past

It would be nice – and many, myself included, would be thankful for it – if someone undertook to solve the open problems in the use of past digital media standards. I am afraid this is an intricate problem without a unified point from which one can attempt to find a solution.

But is that a worthwhile effort? One way or another, industry has interoperable audio-visual technologies for its current needs, some even say more than it needs.

What remains of the group that worked the miracle of the past 32 years is paralysed, and the organisation in which it used to operate is problem-ridden and discredited. I pity the hundreds of valuable experts who are forced to face unneeded troubles.

Look to the future

Let’s look to the future, because we can still give it the shape we want. This is what the MPAI statutes suggest when they define the MPAI purpose as developing technical specifications of coded representation of moving pictures, audio and data, especially using artificial intelligence.

The task for MPAI is to call on the large community of researchers from industry and academia to develop standards that provide a quantum leap in user experience, by doing better and offering more than has been done so far, and by achieving a deeper integration of the information sources reaching the user.

I know that the technologies in our hands have the potential to reach that goal, but only a new organisation with the spirit, the enthusiasm and the effectiveness of the old one can actually deliver on the new promises.

That is the ideal reason to create MPAI. A more prosaic but vital reason to do it is that standards should also be usable.


New standards making for a new age

http://www.mpai.community/

Problem statement: Making standards, especially communication standards, is one of the noblest activities that humans can perform for other humans. The MPEG group used to do that for media and other data. However, ISO, the body that hosted MPEG, suffers from several deficiencies, two of which are fuzzy governance and ineffective handling of Intellectual Property Rights (IPR), the engine that ensures technical innovation-based progress. The prospects of reforming ISO are low: installing good governance requires capable leadership, and solving the IPR problem is an unrewarding endeavour. Actually, by and large, the only beneficiary of such endeavours would be MPEG, whose standards collect ~57.5% of all patent declarations received by ISO.

Moving Picture, Audio and Data Coding by Artificial Intelligence – MPAI is a not-for-profit organisation that addresses the two deficiencies – governance and IPR handling – by building on and innovating MPEG’s experience and achievements and by targeting the involvement of a large community of industry, research and academic experts. MPAI’s governance is clear and robust, and its specifications are developed using a process that is technically sound and designed to facilitate practical use of IPR in MPAI specifications.

Mission: to promote the efficient use of Data by

  1. Developing Technical Specifications (TS) of
    1. Data coding, especially using new technologies such as Artificial Intelligence, and
    2. Technologies that facilitate integration of Data Compression components in Information and Communication Technology systems, and
  2. Bridging the gap between TSs and their practical use through the development of IPR Guidelines, such as Framework Licences and other instruments.

Data include, but are not restricted to, media, health, manufacturing, automotive and generic data.

Governance: The General Assembly (GA) elects the Board of Directors, establishes Development Committees (DC) tasked to develop specifications and approves their TSs. Each Member appoints an adequate number of representatives in DCs. Principal Members appoint one representative in the IPR Support Advisory Committee (IPR SAC), tasked to develop Framework Licences (FWL).

Process: before a new project starts (i.e. before a Call for Technologies is issued)

  1. The IPR SAC develops a FWL that lists the elements of the future licence of the TS without any indication of cost. Examples of such possible elements could be: a “royalty free profile” with a given performance level, a possible “initial grace period” depending on market development, possible “content fees”, one or more possible annual “caps”, a possible given ratio of user devices generating human-perceivable signals vs other user devices, etc. (a toy sketch of such a FWL follows this list).
  2. The FWL does not contain actual values such as royalty levels, dates, percentage values etc.
  3. The FWL is approved by a qualified majority of Principal Members participating in the project.
  4. Each Member participating in the project declares that it is willing to make available a licence for its IP according to the FWL by a certain date, and to take a licence by a certain date if it uses the part of the TS that is covered by IP.
  5. Each Member shall inform the Secretariat of the result of its best effort to identify IP that it believes is infringed by a TS that is being or has already been developed by a DC.
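As a purely illustrative sketch (hypothetical names and checks, not MPAI-defined data structures), steps 1 and 2 above can be thought of as listing the elements of the future licence and then verifying that no actual values have crept in:

import re

# Hypothetical sketch of steps 1 and 2: the FWL lists the elements of the
# future licence, and a simple check flags anything that looks like an actual
# value (amounts, percentages, dates), which the FWL must not contain.

fwl_elements = [
    "royalty free profile with a given performance level",
    "initial grace period depending on market development",
    "content fees",
    "one or more annual caps",
    "ratio of user devices generating human-perceivable signals vs other user devices",
]

# Patterns for concrete values that must not appear in a framework licence.
FORBIDDEN = [
    re.compile(r"\$\s*\d"),            # dollar amounts, e.g. "$0.20"
    re.compile(r"\d+(\.\d+)?\s*%"),    # percentages, e.g. "25%"
    re.compile(r"\d{4}/\d{2}/\d{2}"),  # dates, e.g. "2023/01/01"
]

def contains_actual_values(element: str) -> bool:
    """Return True if the element text contains a concrete amount, percentage or date."""
    return any(p.search(element) for p in FORBIDDEN)

violations = [e for e in fwl_elements if contains_actual_values(e)]
assert not violations, f"FWL must not contain actual values: {violations}"
print("FWL is value-free:", len(fwl_elements), "elements listed")

The check itself is trivial; the point is that the FWL fixes the structure (step 1) while the rule in step 2 keeps it value-free until the actual licences are negotiated.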

Method of work: the GA develops, maintains and constantly updates a work plan on the basis of Members’ inputs and responses to Calls for Interest. The GA assigns the development of a TS to a DC. The DC typically issues Calls for Evidence and/or Calls for Technology. Anybody may answer Calls for Interest, Evidence or Technology. A non-Member whose contribution submitted in response to a Call for Technology is accepted is requested to join MPAI. DCs develop TSs by consensus. If consensus is not reached on an issue, the chair may decide to bring the matter to the attention of the GA, which decides by qualified majority vote. The DC shall document which contributions (or parts of contributions) are adopted in the TS. See here for a detailed workflow.

Membership: companies and organisations, including universities, may become Principal or Associated Members at their choice. Applicants can become and then remain Members by paying yearly membership fees. Only Principal Members are allowed to vote. Associated Members may join DCs and contribute to the development of TSs.

Key documents: The text above is a summary description of MPAI. The Statutes, which include the detailed procedures of work, should be consulted for precise information. See a summary here. The Statutes are being reviewed and will be made public shortly.

A novel approach: MPAI offers a novel approach to standardisation with the following features:

  1. MPAI intends to be a broad multidisciplinary and multi-stakeholder community.
  2. Low access threshold to participate in the development of Technical Specifications: most meetings are held by teleconference supported by advanced ICT-based collaboration facilities.
  3. Facilitated participation of experts who have stayed away from formal standardisation because of cost and other concerns.
  4. Framework Licences, developed through a rigorous process, expedite the use of Technical Specifications covered by IP.
  5. Timely delivery of application-driven and technology-intensive specifications.
  6. Bottom-up governance in specification development.
  7. No external constraints on members when they decide about activities.

The MPAI web site is at http://www.mpai.community/. As the TLD suggests, MPAI is a community. Therefore, comments from the community, in particular on Statutes and Operation, are welcome. Please send your comments to Leonardo, while the MPAI Secretariat is being established.


The MPEG to Industry Hall of fame

At the suggestion of Steve Morgan, THE [RE]DESIGN GROUP, I am initiating a new “MPEG to Industry” Hall of fame, complementing the MPEG Hall of fame where I highlighted those who, besides standards development, helped make MPEG what it eventually became.

Suggestions are open. If you want to make a nomination please send an email to Leonardo adding the name of the nominee and a brief text explaining the contribution of the nominee to convert one or more MPEG standards into products.

[2020/07/15, Steve Morgan]

The honorable Mr. Jerry Pierce was responsible for helping Matsushita (aka Panasonic) establish the Digital Video Compression Center (DVCC), which was Hollywood’s first and foremost DVD mastering facility. During its 7 years of operation, DVCC released over 87% of Hollywood’s “A” titles on DVD.


This is ISO – An incompetent organisation

ISO is too important to be left in the hands of people who are catapulted to Geneva from who knows where, through who knows what alchemy, to serve who knows what purposes.

Of course, the way an organisation elects to hire people is its own business. However, the mission of that organisation must be fulfilled. The mission “to develop high quality voluntary International Standards which facilitate international exchange of goods and services, support sustainable and equitable economic growth, promote innovation and protect health, safety and the environment” cannot stop at putting in place a process prescribing how to move a document due to become a standard from one stage to another. I mean, that could have been the end point 73 years ago, when ISO was established.

I do not know what is required for economic growth or the protection of health, safety and the environment, but is innovation promoted just by managing the process of standards approval? In my opinion standard is synonymous with innovation, otherwise we are talking of rubber stamping.

Of course, innovation has probably as many faces as there are industries, probably more. Therefore the point here is not about ISO looking for and hiring superhumans competent in everything and able to discover the enabling factors of innovation; it is about hiring people who listen to the weak or strong signals coming from the field of standardisation.

In this article I will talk about how MPEG handled the reference software for its standards, an issue that goes to the core of what a media compression standard is.

In 1990 Arian Koster proposed to develop common reference software for MPEG-1. The Internet may have developed on the principle of “Rough Consensus and Running Code”, but the world of video compression was born on what I would call “Rough Consensus and Running Hardware”, where each active participant developed their own implementation of a commonly (roughly) agreed specification. Comparing results was not easy. In the COST 211 project, 2 Mbit/s satellite links were used to interconnect the different hardware implementations. In MPEG-1 (and that was true of MPEG-2 as well) every active participant developed their own code and brought the results of their simulations of Core Experiments. By proposing to create common code bases, Arian opened the doors of a new world to MPEG.

Arian’s proposal notwithstanding, there was not a lot of common code for MPEG-1 and MPEG-2, but in the mid-1990s his ideas were fully implemented for the MPEG-4 reference software – audio, video and systems. That was more than 10 years after Richard Stallman had launched the GNU Project. In a completely different setting, but with comparable motivation, MPEG decided to develop the reference software collaboratively for three reasons: better software would be obtained, the scope of MPEG-4 was so large that probably no single company could develop it all, and a software implementation made available to the industry would accelerate adoption of the standard.

In those years, Mike Smith, the head of ISO’s Information Technology Task Force (ITTF), was of great help. He stated that ISO was only interested in “selling text”, not software, and allowed MPEG to develop what was called the MPEG-4 “copyright disclaimer”, which contained the following elements:

  1. The role of developer and contributors
  2. The status of the software as an implementation of the MPEG standard
  3. Free licence to use/modify the module in conforming products
  4. Warning to users that use of the software may infringe patents
  5. No liability for developers, contributors, companies and ISO/IEC arising from use/modification of the software
  6. Original developer’s right to use, assign or donate the code to a third party.

For sure, the MPEG-4 copyright disclaimer was rather disconnected from the GNU General Public License (GPL), but it did serve MPEG’s purposes well. All MPEG reference software was made available on the ISO web site for free download. It is a known fact that many use the MPEG reference software as the uncontroversial and unambiguous way to express the intention of the (textual) MPEG standard.

The copyright disclaimer was used for about 15 years, until the large software companies in MPEG expressed their discomfort with it. At that time many companies were already using modified reference software in their products. This was allowed by the copyright disclaimer, but handling software with different licences in their products was not desirable. MPEG then opted for a modified BSD licence. “Modification” meant adding a preamble to the BSD licence: “The copyright in this software is being made available under the BSD License, included below. This software may be subject to other third party and contributor rights, including patent rights, and no such rights are granted under this license”. MPEG called this the MXM licence because it was developed for and first applied to the MPEG Extensible Middleware standard.

One day, probably some 25 years after Arian’s proposal, the ISO attorney Holger Gehring realised that there was one issue – actually, more than one – with the reference software. As MPEG used to meet in Geneva every 9 months because of the video collaboration with ITU-T, the MPEG chairs had several sessions with him and his collaborators, until late at night, over a couple of MPEG meetings. We discussed that, for MPEG people, the textual version of a standard was important, but the software version, at least for some types of standard, was more important, and that the two had equal status, in the sense that the textual version expressed the normative clauses in a human language, while the software version expressed the same normative clauses in a machine language.

After reaching an agreement on this principle, the discussion moved to the licence of the software. I recalled the agreement with Mike Smith and the fact that all MPEG reference software was posted on the ISO web site for free download, and advocated the use of the MXM licence. This was agreed, and I undertook to convince all MPEG subgroups to adopt it (some were still using the copyright disclaimer). With some difficulty I got the support of all subgroups for the agreement.

I communicated the agreement to the ISO attorney but got no acknowledgement. Later I learned that he had “left” ISO.

We kept our part of the agreement and released all reference software with the MXM licence. A little before I left ISO I learned that several standards containing reference software had been withheld because the reference software issue had surfaced again, that they were discussing it and that they would inform us of the result…

ISO has to decide whether it wants to “promote innovation” or whether it wants to be feudal. People in the field have acquired a lot of competence, validated by some of the biggest software companies, that would allow ISO to address the issue of software copyright in standards competently.


This is ISO – An obtuse organisation

Introduction

This is the 4th episode of my series about the deficiencies of the organisation that calls itself International Organisation for Standardisation and claims “to develop high quality voluntary International Standards which facilitate international exchange of goods and services, support sustainable and equitable economic growth, promote innovation and protect health, safety and the environment”.

I do not intend to comment on how the mission is achieved – that is for everybody to judge – but on what sits further up and produces the claimed international standards.

In the first episode This is ISO – A feudal organisation I analysed how the ISO structure is permeated by an attitude befitting more the early Middle Ages than the end of the second decade of the third millennium. In the second episode This is ISO – A chaotic organisation I highlighted how standards making is negatively impacted by a work allocation that responds to a logic of power, not of efficiency. In the third episode This is ISO – A hypocritical organisation I described how the ISO/IEC directives make minute prescriptions on debatable or irrelevant issues, while big issues like governance are simply swept under the carpet.

In this episode I will analyse how ISO investments in Information Technology (IT), an indispensable tool in today’s standards making process, go awry, causing harm to the very processes the IT investment is supposed to improve. This is a serious issue with two faces: thousands of people are forced to do useless work, and National Body money is incompetently squandered.

Making standards editing digital

The idea of digitising the workflow that produces standards is sound. However, whether it also makes sense depends on how it is designed.

Most of the work MPEG was doing at the time of the MPEG-1 Committee Draft (1991) was already electronic, but few carried what were at that time heavy laptops. The MPEG-1 editors sent files to the secretariat, but I do not know if the secretariat forwarded files or paper to the ISO Central Secretariat (CS). Also, I do not know in which form ISO CS sent the drafts back to the editors for review. I would not be surprised if all correspondence were paper-based.

Two years later, at the meeting that approved the MPEG-2 CD (November 1993), all MPEG processes were electronic, but paper still ruled. Notwithstanding the anti-environment record of 1,000,000 photocopies, that event marked the beginning of a golden age that lasted about 20 years. Editors sent their MS Word files to the secretariat, which reviewed them and passed them to the ISO CS, which would annotate the files received from the editors and send the annotated Word files back to the editors.

A golden age is called such because people can later refer to it as a time when everything ran smoothly. In fact the standard development workflow was already digital. The dark age came when ISO thought it should “improve” it. ISO commissioned a system that converted Word files into an SGML format. This would be the master file that would be annotated, converted to PDF, and sent to the editors. I imagine ISO CS thought this was a great idea, but the result was that editors could not access the SGML editor and, even if they could have, they would have continued editing their Word files. The PDF caused all sorts of trouble, the most remarkable being the loss of all links from one section of the file to another, sometimes numbering in the hundreds in a 500+ page document. The editors went crazy trying to recover the vital information lost.

Digital support to standards making

In November 1995 MPEG created a system that allowed members to upload their contributions to a remote server, and to browse through and download selected documents. In the 3 weeks around an MPEG meeting there used to be very high peaks of traffic, because hundreds of people would simultaneously download hundreds of MByte-size documents.

It took many years for ISO to develop a system for uploading and downloading documents for its many committees. It is not clear for what users the ISO system was designed – maybe the secretariats, who upload one file a day at best and download two at most.

One day an ISO manager decided that MPEG, too, should use the ISO system. It took some time to convince him that, during an MPEG week, the ISO system might no longer be accessible to all ISO users simply because the MPEG traffic would cause a denial of service. Obviously, a system can only be used for what it has been designed for.

At one time the ISO CS wanted to know more about the MPEG system. A teleconference was held, and the functionalities of the system were explained. The ISO people said that they were designing a new system and that we would hear back from them. That never happened.

Conclusions

Precept #1 of a project to design a software system is to get the requirements from the people who will eventually use it. This rule is not in force in ISO because ISO is a feudal organisation where the duke decides, and the peasants comply.

We do not live in the year 800 BC. Obtuse decisions impact people who love what they do (for free) and expect that the system, if it does not help them, at least does not get in their way.

A disclaimer

I did write this in an earlier post, but it is important to repeat it here. ISO is an organisation with an important mission, and most of the people from the ISO CS that I have met were courteous, competent and ready to help. They were just trying to cope with the deficiencies of the system. ISO’s problems lie at the top echelons, at least these days.

As the adage goes, a fish rots from the head.

 


What to do with a jammed machine?

Introduction

In this article I recall how the success story of the MPEG business model took shape, why that wonderful machine has jammed and how the competition is prospering. There could be a future that continues the successes of the past but more likely there will be another future that is costly for industry, research and consumers. This situation should prompt a call for action, but…

A success story

Starting with MPEG-1 (1992), MPEG standards have provided the industry with the means to carry out the transformation of media from analogue to digital. Subsequently, MPEG standards have enabled the industry to consolidate that transformation by providing the means to offer consumers products and services of constantly improving performance. Globally, the annual value of products and services that rely on MPEG standards is ~1.5 trillion USD, or ~2% of the gross world product (GWP).

This success story is based on the “MPEG business model”, designed to reward innovation by developing standards that offer the best performance at a given time, without considering the impact of the IPRs introduced. As the model assumed, the market found the way to remunerate IPR holders and the royalties that the IPR holders obtained were typically reinvested in innovation that generated new IP for future standards.

The machine has jammed

Recently, this success story has been slowed down, if not interrupted. Some of the reasons are:

    1. Increased number of Non-Practicing Entities (NPE)
    2. Constant reduction in the percentage of Practicing Entities (PE)
    3. Valuable IP from NPEs added to MPEG standards
    4. Different PE and NPE motivations with respect to standards
    5. Inadequacy of ISO IPR rules for a fast-evolving market.

The issue with item 5 is that ISO only requires patent holders to declare how they will allow use of their IP: free of charge (option 1), on FRAND terms (option 2) or not at all (option 3). This is inadequate for MPEG standards. I saw item 5 coming a long time ago, I wrote about it and I even attempted to signal the IPR problem to higher ISO layers. I was stopped by a handful of National Bodies and then I switched to “silent mode”.

That the machine has jammed is shown by the situation of the latest approved video compression standard (HEVC): ~45 IPR holders, of which ~2/3 are part of 3 patent pools with divergent licences and ~1/3 are not part of any patent pool. Over the years the 3 patent pools have achieved only atomic-level convergence and, 7.5 years after the approval of the standard, HEVC is used in broadcasting but has only limited use in streaming.

This is not the only source of problems. The immersive audio compression standard MPEG-H 3D Audio has a significantly smaller number of IPR holders than HEVC. Still, there is no MPEG-H 3D Audio licence. When Sony wanted to use MPEG-H 3D Audio in a service, it was forced to ask MPEG to define a profile whose IP is expected to be owned by just one patent holder.

There is competition

Starting from MPEG-4 Visual, there has always been some level of competition. However, that competition had a limited impact compared to that of the Alliance for Open Media (AOM), established by Amazon, Cisco, Google, Intel, Microsoft, Mozilla and Netflix in 2016. AOM released its AV1 video compression specification in 2018. The AV1 licence can be defined as “royalty free”. AOM has also established a fund that AOM members can draw on if they are attacked by a third party for IPR infringement.

AV1 is currently widely used for streaming. However, AOM is actively working to get its specification adopted in other application areas, such as broadcasting and mobile.

What the future could be

Economists like to say that having more solutions is good for competition. Ask consumers and they will remind you of Betamax and VHS. In the past 30 years there has been a lot of competition in the field of digital media, but it took place between proponents of different technologies at MPEG meetings. Thanks to that competition, consumers were offered a constantly improving “user experience”. I like to think that MPEG made both economists and consumers happy, a record.

If the MPEG business model were not beset by the difficulties I mentioned, the market could continue to be powered by MPEG standards for a few more decades. Just two cases prove the point. Traditional audio-video compression (for TV and mobile) has reached levels compatible with the development of networks, but more can be done. Coding of immersive audio-visual scenes is a wide field of research and innovation.

The costs of the other future

In this section I will analyse the cost that the stalled situation will have on the industry, the research community and the consumers.

Costs for industry

If the stalled situation described above drags on, it can be expected that the markets will converge on royalty-free solutions produced by a source dominated by a few big players. This source will not have an internal mechanism pushing for continuous innovation like the one MPEG used to have. Therefore, the source will hardly be able to offer the same rate of new technologies that we have witnessed in past decades.

Drawing a comparison with the situation of the 1970s and 1980s, when the television industry was stagnating because it had all the technology it needed to make products and services, the media industry may face a future of stagnation.

Costs for research

I do not have direct experience, but I have no difficulty believing what knowledgeable people have told me, namely that AV1 was developed by a handful of (very competent) engineers and a handful of (even more competent) patent attorneys.

The new future is one where thousands of well-paid jobs in media compression, and the academic research that feeds it, will go away because there is no longer a pressing demand for new compression technologies.

Costs for consumers

This situation will obviously be reflected on consumers. They will probably enjoy a bonanza, as television users of the 1970s and 1980s did, because manufacturers will compete on price. But they will not enjoy shiny new devices or services.

And then?

The above looks like a call for action…


Stop here if you want to know about MPEG (†)

Over the last two years I have written ~100 articles in defence of MPEG, its legacy and its future. The articles were written without a particular organisation in mind.

During these two years the number of readers has grown, and I thought it useful to group the articles according to the following categories.

Here are the articles


This is ISO – A hypocritical organisation

Introduction

ISO and IEC have developed the ISO/IEC Directives Part 1: Procedures for the technical work, about the procedures to be followed in developing and maintaining an International Standard. This is complemented by the Consolidated ISO Supplement, about procedures specific to ISO, and by the JTC 1 Supplement, about procedures specific to JTC 1. Finally, for the purpose of this article, I will mention the ISO/IEC Directives Part 2: Principles and rules for the structure and drafting of ISO and IEC documents.

We are talking of hundreds of pages in total. They have cost hundreds of person-years for their development and extension over the decades.

In one of my previous articles, This is ISO – A feudal organisation, I complained about the fact that ISO does not have a document even remotely similar to those above to regulate the activities at Technical Committee and Subcommittee level, where political decisions are made. At meetings, the secretariat has the power to interpret the few rules that can be found in the directives and to invent new ones if needed. For instance, I was not allowed to put a statement on record, with the justification that “only the secretariat has the right to put anything in the meeting report”. I believe that the right to have a position recorded in the minutes is the most basic right of a representative system.

In this article I would like to highlight an aspect of the directives that has been made to apply to both ISO and IEC. In my opinion, it is debatable whether this aspect should even be regulated, while it is clear that the rule has devastating impacts on the development of standards and forces ISO participants to formally comply with it while finding all sorts of subterfuges to avoid it.

My question is: should the minimum – in my opinion the trivial – be regulated, when essential aspects, such as regulating the power of committee secretariats or of the ISO Central Secretariat, take second place, actually no place at all?

The issue

I do not have direct experience of the case that triggered the issue I am going to address. I learned from knowledgeable persons that, at a time when a New Work Item Proposal (or another document) was out for ballot in JTC 1, a National Body used an official mailing list to lobby for the defeat of the document.

Personally, I do not think this was the crime it was considered to be. There are often intense communications among National Bodies behind the scenes when a document is out for ballot, so why not have them in public? One could even argue that it is preferable that National Bodies know all the arguments.

Instead, this action drew the ire of the JTC 1 secretariat, which managed to have the following rule introduced: “Documents out for ballot at NP, Committee Stage or any later stage shall not be subject to formal discussion at any working level of JTC 1 during the balloting period. Therefore, National Body positions on a document under ballot are not to be formally discussed at any working level” (the wording may have changed over the years).

The impact

For MPEG, this decision was a disaster waiting to happen, another proof that MPEG is the “odd committee out” in ISO (see The MPEG exception). One effect of the decision would be that MPEG standards, typically very complex documents, could no longer be collectively reviewed at meetings, one of the main activities that allowed MPEG standards to be released in a timely manner and with few or no errors. While errors can be corrected, the corrections require the publication of corrigenda to take effect, which is not desirable for many reasons.

That rule notwithstanding, MPEG went on unconcerned because – this was the reasoning – the input documents identifying errors in draft standards were submitted by experts, not by National Bodies. So MPEG kept on discussing and fixing bugs in draft standards out for ballot at its meetings, as any company does for its specifications. National Bodies were informed that bugs had been discovered and could be fixed as documented in MPEG-produced documents titled “Study on Draft CD, DIS etc…”.

But the JTC 1 crusade to straighten things out in the world of international standards did not stop at JTC 1. After a few years, this rule, so far valid only for JTC 1, was made to apply to all of ISO. This was not enough for the JTC 1 secretariat. After a few more years of effort, the manoeuvres to extend the rule to IEC as well succeeded. Finally, ISO and IEC could tread in the ways of righteousness!

Unfortunately the attention, already turned to MPEG for other matters, was further focused on MPEG. The group realised that the interpretation of “documents identifying errors submitted by experts and not by National Bodies” would not be accepted. I began asking several National Bodies for advice. Some were kind enough to offer specific advice like “You should meet in a session outside of the official meeting to discuss bug fixing of documents out for ballot” or “Create a copy under a different document number – even if the content is more or less exactly the same – possibly titled “continuous improvement of ………….”, and so on.

Conclusions

The story above is a fair description of the atmosphere in which people operate in ISO. Rules are established haphazardly, with language subject to interpretation by a secretariat, to achieve undefined goals of righteousness, disregarding the impact on the real mission that should concern ISO, namely to serve the industry with timely and bug-free standards. The “workers” (or should I call them “peasants”?) who ensure that the ISO mission is achieved must not only do the work, but are also forced to behave hypocritically, upholding the sacrosanct value of the Directives in words while finding subterfuges to circumvent them in practice.

All this while nothing is done to set order where order is much needed: regulating the power of secretariats.

In Matthew 7:5 we find “Thou hypocrite, first cast out the beam out of thine own eye; and then shalt thou see clearly to cast out the mote out of thy brother’s eye”.


The MPEG Hall of fame

As MPEG is no more, it is time to open and populate the MPEG Hall of fame. It is not going to be an easy task because so many people have contributed to make MPEG what it was.

The MPEG subgroup chairs will not be mentioned here. They are in the Hall of fame by default, because they have been among those most committed to the success of MPEG. If you want to know their names, responsibilities and years of service, please look here.

In this article I will mention some of those who made a particular contribution outside of standards development. If you think that someone whose name is not here should be added to MPEG Hall of fame, please send me an email.

Temporally, the first entry is Tsuneyoshi Hidaka. He took care of defining, planning, executing and processing the data of the MPEG-1 and MPEG-2 subjective tests carried out in November 1989 and November 1991, respectively. The execution part was particularly important because JVC had a recently built research lab at Kurihama with an excellent video quality testing facility. Without the benefit of Tsuneyoshi’s expertise and the access to testing facilities, MPEG could very well have stumbled at its first project, because at that time it was an unknown group whose only asset was its plan.

The second entry of the MPEG Hall of fame is Ken Davies (CBC). Ken was a highly respected participant in CCIR (now ITU-R) activities. He was the first to make video test sequences available to MPEG by giving access to the CCIR library. Two video sequences – “Table Tennis” and “Flower Garden” – were used and watched by thousands of people engaged in video coding research in the 1990s, both inside and outside of MPEG. I am unable to count the individuals and companies who, since Ken’s first donation, have permitted the use of their content – video, audio and, more recently, point cloud data – to develop many MPEG standards.

As the Hall of Fame attempts to list people in temporal order, it is appropriate to mention two figures whose support and assistance were fundamental in the first 20 years of MPEG. They are Mike Smith, then the ITTF chief, and Keith Brannon, his right hand. They both understood the importance of bringing the MPEG activity under the folds of ISO and were both always available to help and solve problems. Today I would say: Quantum mutatus ab illo.

On a different plane is the role of another entry of the MPEG Hall of fame. Arian Koster was a young engineer at KPN Research who, one morning in July 1990, called me to make a suggestion: “What if MPEG developed a software implementation of the MPEG-1 standard?”. This was a radically new idea, because at that time every participant in MPEG-1 Audio or Video had their own software implementation of the Video Simulation Model. His suggestion was the beginning of what became the consolidated practice of the MPEG Reference Software. Neither MPEG-1 nor MPEG-2 had significant examples of common reference software, but the practice took root with MPEG-4 and has continued ever since.

On a similar plane I would like to propose another entry to the MPEG Hall of fame: myself, Leonardo Chiariglione (CEDEO.net). In general, ISO standards do not make a clear distinction between specification (“the law”) and conformance testing (“the tribunal”). My job at Telecom Italia, however, made me sensitive to the latter, because Testing Laboratories have always been part of the culture of that industry. MPEG-1 was a “telco – consumer electronics” project, hence it included an industry not very sensitive to formal conformance testing. Therefore, the adoption of my idea that there should be an ad hoc part of the standard serving the purpose of testing an implementation for conformance could not be taken for granted. Conformance testing has been a necessary component of all MPEG standards ever since.

On yet another plane was the role of two other entries in the MPEG Hall of fame: Dick Green and Baryn Futa. Dick had worked at PBS and had long experience of attending ITU meetings. In 1988 he had established CableLabs, of which he was President. In 1992 he visited my lab at CSELT (later Telecom Italia Lab) to learn more about the MPEG-2 work and gave Baryn Futa, CableLabs CEO, the task of taking care of the MPEG-2 IPR discussions. Eventually Baryn created MPEG LA (LA = Licensing Authority, no relation with MPEG), overcoming important legislative hurdles. Without a patent pool, MPEG-2 would most likely have never flown.

On yet another plane was the role of another entry of the MPEG Hall of fame, Pete Schirling. Pete worked at IBM and acted as US Head of Delegation (when we could still use that term). In July 1995 he offered to develop for MPEG a system that would allow MPEG members to upload their documents and download other members’ documents. I believe that was the first system of its kind deployed for a standards committee, certainly for a committee of the size of MPEG, which was already hovering around 300 participants. It took many years to get rid of paper, but after Pete’s system we no longer saw anything like the 1 million photocopies of the November 1993 meeting in Seoul. When Pete could no longer run the system, Wo Chang (NIST) took over. He redesigned the system and ran it for 5 years. When Wo left MPEG, Christian Tulvan (Institut Mines Télécom) took over, redesigned the system and kept on adding incredibly useful features that have made MPEG meetings constantly more efficient and enjoyable. Pete, Wo and Christian can deservedly enter the MPEG Hall of fame.

On the communication front, Pete Schirling (again), Arianne Hinds (IBM and then CableLabs) and Christian Timmerer (University of Klagenfurt) deserve to enter the MPEG Hall of fame because, for many years in different periods, they collected and edited the main news and turned them into the press release of each meeting. Pete, Arianne and Christian have carried out the commendable and quite engaging task of chasing the chairs and extracting the news from them.

On the education front Alexis Tourapis (Apple) enters the MPEG Hall of fame because he shot and edited the video presentations made by MPEG members on different MPEG standards.

On the promotion front, Rob Koenen (TNO and then Tiled Media) and José Alvarez (Huawei) enter the MPEG Hall of fame because they accepted the challenge of being the heralds of MPEG industry liaison. They organised events at MPEG meetings and at several places around the world where they presented the MPEG work plan and asked industry representatives to provide feedback. Besides that, Rob took care of maintaining a visual representation of the MPEG work plan at every meeting, like the one you find in MPEG status report (Jan 2020).

I am pretty sure I am forgetting worthwhile entries in the MPEG Hall of fame. If you notice one, please send me an email.
