The why of the MPAI mission

In research, a technology that attracted the interest of researchers decades ago and then stayed at that level for a long time may suddenly come into focus. This is the case for the collection of different technologies called Artificial Intelligence (AI). Although this moniker might suggest that machines are able to replicate the defining human trait, in practice such techniques boil down to algorithmically sophisticated pattern matching, enabled by training on large collections of input data. Embedded today in a range of applications, AI has started affecting the lives of millions of people and is expected to do so even more in the future.

AI provides tools to “get inside” the meaning of data to an extent not reached by previous technologies. The word “data” is used here to indicate anything that represents information in digital form, ranging from the US Library of Congress to a sequenced genome, to the output of a video camera or an array of microphones, to the data generated by a company. Through AI, the number of bits required to represent information can be reduced, “anomalies” in the data can be discovered, and a machine can spot patterns that might not be immediately evident to humans.
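
As a minimal sketch of these three capabilities, consider the following toy Python/NumPy fragment (purely illustrative, not an MPAI technology; the data set and the one-dimensional model are invented for the example). It “learns” the dominant pattern in a small data set, compresses each three-dimensional sample to a single number, and flags the one sample whose reconstruction error shows it breaks the pattern:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Normal" samples mostly vary along a single hidden direction.
normal = rng.normal(size=(200, 1)) @ np.array([[1.0, 0.5, 0.2]])
normal += rng.normal(scale=0.05, size=normal.shape)

# One sample that breaks the pattern the others follow.
anomaly = np.array([[0.0, 3.0, -3.0]])
data = np.vstack([normal, anomaly])

# "Training": learn the dominant pattern (first principal direction).
mean = data.mean(axis=0)
centered = data - mean
_, _, vt = np.linalg.svd(centered, full_matrices=False)
basis = vt[:1]                        # a 1-D model of the data

# Coding: compress each 3-D sample to one number, then decode it back.
coded = centered @ basis.T            # fewer bits: 1 number instead of 3
decoded = coded @ basis               # reconstruction from the code
error = np.linalg.norm(centered - decoded, axis=1)

print("median reconstruction error:", np.median(error))  # small: fits the pattern
print("error of the last sample:  ", error[-1])          # large: an anomaly
```

The point is not the particular (in fact classical) algorithm used here but the workflow: the machine extracts a compact representation from the data themselves, and whatever does not fit that representation stands out.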

AI is already among us doing useful things, and there is keen commercial interest in implementing more AI-centric processes to unleash its full potential. Unfortunately, the way a technology leaves its initial narrow scientific scope to become mainstream and pervasive in products, services and applications is usually neither linear nor fast. However, exceptions exist. Looking back at the history of MPEG, we can see that digital media standards not only accelerated the mass availability of products enabled by new technologies, but also generated new products never thought of before.

In fact, the MPEG phenomenon was revolutionary because its standards were conceived to be industry neutral, and the process unfolded successfully because it had been designed around this feature. The revolution, however, was kind of “limited” because MPEG was confined to “media” (even though it tried to escape from that walled garden).

Here we talk about AI-centric data coding standards, which do not have such limitations. AI tools are flexible and can reasonably be adapted to any type of data. Therefore, just as digital media standards have positively influenced industry and billions of people, AI-based data coding standards are expected to have a similar, if not stronger, impact. Research shows that AI-based data coding is generally more efficient than existing technologies for, e.g., data compression and description.

These considerations have led a group of companies and institutions to establish the Moving Picture, Audio and Data Coding by AI – MPAI – as an international, unaffiliated not-for-profit Standards Developing Organisation (SDO).

However, standards are useful to people and industry only if they enable open markets. Industry might invest hundreds of millions in the development of a standard, only to find that it is not practically usable, or that it is accessible only to a lucky few. In that case, rather than enabling markets, the standard itself causes market distortion. This is a rather new situation for official standards, caused by the industry’s recent inability to cope with the tectonic changes induced by technology and the market. As a result, developing a standard today may appear a laudable goal, but the current process can actually turn into a disappointment for industry. A standards development paradigm more attuned to the current situation is needed.

Therefore, to compensate for some standards organisations’ shortcomings in their handling of patents, the MPAI scope extends beyond the development of standards for a technology area to include Intellectual Property Rights guidelines.

Let’s briefly compare how the incumbent Data Processing (DP) technology and AI work. When applying DP, humans study the nature of the data and design a priori methods to process it. When applying AI, prior understanding of the data is not paramount: a suitably “prepared” machine is exposed to many possible inputs so that it can “learn” from the actual data what the data “means”.
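
To make the contrast concrete, here is a deliberately simple sketch in Python/NumPy (the task, the hand-made rule and the training loop are all invented for illustration). A rule designed a priori by a human is compared with a perceptron-style learner that infers the same decision from labelled examples:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy task: decide whether a 2-D measurement is "abnormal".
# Hidden ground truth (unknown to the machine): abnormal iff x + 2*y > 1.
X = rng.uniform(-1, 1, size=(500, 2))
y = (X[:, 0] + 2 * X[:, 1] > 1).astype(float)

# DP approach: a human studies the data and designs the method a priori.
# If the analyst's model of the data is incomplete, so is the rule.
def dp_rule(x):
    return float(x[0] > 0.5)          # plausible but incomplete hand-made rule

# AI approach: a generic model "learns" the decision from the data itself.
w, b = np.zeros(2), 0.0
for _ in range(200):                  # perceptron-style training passes
    for xi, yi in zip(X, y):
        pred = float(w @ xi + b > 0)
        w += 0.1 * (yi - pred) * xi   # nudge the weights toward the data
        b += 0.1 * (yi - pred)

handmade = np.array([dp_rule(xi) for xi in X])
learned = (X @ w + b > 0).astype(float)
print("hand-made rule accuracy:", (handmade == y).mean())  # mediocre
print("learned rule accuracy:  ", (learned == y).mean())   # close to 1.0
```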

In a sense, the results of bad training are similar in humans and machines. Just as an education built on “bad” examples can make “bad” humans, a “bad” (i.e., insufficient, sectorial or biased) education makes machines do a “bad” job. The conclusion is that, when designing a standard for an AI-based application, the technical specification alone is not sufficient. MPAI’s stated goal of making AI applications interoperable, and hence pervasive, through standards is laudable, but the result may be perverse if ungoverned “bad” AI applications pollute a society that relies on them.
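
Continuing the toy example above, the sketch below (again, invented purely for illustration) gives the same learner a “sectorial” education: it is trained only on a biased slice of the data in which abnormal cases never occur, and its accuracy on the full population degrades accordingly:

```python
import numpy as np

rng = np.random.default_rng(2)

# Same toy task as above: a sample is "abnormal" iff x + 2*y > 1.
X = rng.uniform(-1, 1, size=(500, 2))
y = (X[:, 0] + 2 * X[:, 1] > 1).astype(float)

# A "sectorial" education: the machine only ever sees samples with y < 0,
# a region of the data space that contains no abnormal cases at all.
mask = X[:, 1] < 0

w, b = np.zeros(2), 0.0
for _ in range(200):                  # same perceptron-style training
    for xi, yi in zip(X[mask], y[mask]):
        pred = float(w @ xi + b > 0)
        w += 0.1 * (yi - pred) * xi
        b += 0.1 * (yi - pred)

# The badly educated machine is then deployed on the whole population.
acc = ((X @ w + b > 0).astype(float) == y).mean()
print("accuracy on the full population:", acc)  # well below the ~1.0 above
```

The training loop is identical in both sketches; only the education changed. This is why a technical specification alone cannot guarantee a “good” AI application.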

For these reasons, MPAI has been designed to operate beyond the typical remit of a standards-developing organisation, although it fulfills that mission quite effectively, with five full-fledged standards developed in its first 15 months of operation. An essential part of the MPAI mission consists of providing users with quantitative means to make informed decisions about which implementations should be preferred for a given task.

Thanks to MPAI, implementers have standards available that they can use to provide trustworthy products, applications and services, and users can make informed decisions as to which of these is best suited to their needs. This will result in a more widespread acceptance of AI-based technology, paving the way for its benefits to be fully reaped by society.

To know more, read the book “Towards Pervasive and Trustworthy Artificial Intelligence”, available from Amazon: https://www.amazon.com/dp/B09NS4T6WN/