MPAI issues a Call for Patent Pool Administrator on behalf of the MPAI-CAE and MPAI-MMC patent holders

Geneva, Switzerland – 23 March 2022. Today the international, non-profit, unaffiliated Moving Picture, Audio and Data Coding by Artificial Intelligence (MPAI) standards developing organisation concluded its 18th General Assembly. Among the outcomes is the publication of a Call for Patent Pool Administrator covering two of its approved Technical Specifications.

The MPAI process of standard development prescribes that Active Principal Members, i.e., those intending to participate in the development of a Technical Specification, adopt a Framework Licence before initiating the development. All those contributing to the work are requested to accept the Framework Licence. If they are not Members, they are requested to join MPAI. Once a Technical Specification is approved, MPAI identifies patent holders and facilitates the creation of a patent pool.

Patent holders of Context-based Audio Enhancement (MPAI-CAE) and Multimodal Conversation (MPAI-MMC) have agreed to issue a Call for Patent Pool Administrator and have asked MPAI to publish the call on its website. The Patent Holders expect to work with the selected Entity to facilitate a licensing program that responds to the requirements of the licensees while ensuring the commercial viability of the program. In the future, the coverage of the patent pool may be extended to new versions of MPAI-CAE and MPAI-MMC, and/or other MPAI standards.

Parties interested in being selected as the Entity are requested to communicate their interest, no later than 1 May 2022, and provide appropriate qualification material to the MPAI Secretariat. The Secretariat will forward the received material to the Patent Holders.

While Version 1 of MPAI-CAE and MPAI-MMC progresses toward practical deployment, work is ongoing to develop the Use Cases and Functional Requirements of MPAI-CAE and MPAI-MMC V2. These will extend the V1 technologies to support new use cases:

  1. Conversation about a Scene (CAS), enabling a human to hold a conversation with a machine about the objects in a scene.
  2. Human to Connected Autonomous Vehicle Interaction (HCI), enabling humans to have rich interaction, including question answering and conversation, with a Connected Autonomous Vehicle (CAV).
  3. Mixed-reality Collaborative Spaces (MCS), enabling humans to develop collaborative activities in a Mixed-Reality space via their avatars.

MPAI develops data coding standards for applications that have AI as the core enabling technology. Any legal entity supporting the MPAI mission may join MPAI, if able to contribute to the development of standards for the efficient use of data.

MPAI is currently engaged in extending some of its already approved standards and in developing 9 other standards (those in italics in the list below).

  - AI Framework (MPAI-AIF): Specifies an infrastructure enabling the execution of implementations and access to the MPAI Store.
  - Context-based Audio Enhancement (MPAI-CAE): Improves the user experience of audio-related applications in a variety of contexts.
  - Compression and Understanding of Industrial Data (MPAI-CUI): Predicts company performance from governance, financial, and risk data.
  - Governance of the MPAI Ecosystem (MPAI-GME): Establishes the rules governing the submission of and access to interoperable implementations.
  - Multimodal Conversation (MPAI-MMC): Enables human-machine conversation emulating human-human conversation.
  - Server-based Predictive Multiplayer Gaming (MPAI-SPG): Trains a network to compensate for data losses and detect false data in online multiplayer gaming.
  - AI-Enhanced Video Coding (MPAI-EVC): Improves existing video coding with AI tools for short-to-medium term applications.
  - End-to-End Video Coding (MPAI-EEV): Explores the promising area of AI-based "end-to-end" video coding for longer-term applications.
  - Connected Autonomous Vehicles (MPAI-CAV): Specifies components for Environment Sensing, Autonomous Motion, and Motion Actuation.
  - Avatar Representation and Animation (MPAI-ARA): Specifies descriptors of avatars impersonating real humans.
  - Neural Network Watermarking (MPAI-NNW): Measures the impact of adding ownership and licensing information in models and inferences.
  - Integrative Genomic/Sensor Analysis (MPAI-GSA): Compresses data from high-throughput experiments combining genomic/proteomic and other data.
  - Mixed-reality Collaborative Spaces (MPAI-MCS): Supports collaboration of humans represented by avatars in virtual-reality spaces called Ambients.
  - Visual Object and Scene Description (MPAI-OSD): Describes objects and their attributes in a scene and the semantic description of the objects.

Visit the MPAI website, contact the MPAI secretariat for specific information, subscribe to the MPAI Newsletter and follow MPAI on social media: LinkedIn, Twitter, Facebook, Instagram, and YouTube.

Most importantly: join MPAI, share the fun, build the future.