
Avatars and the MPAI-MMC V2 Call for Technologies

  • Post category: MPAI

The goal of the MPAI Multimodal Conversation (MPAI-MMC) standard is to enable forms of human-machine conversation that emulate human-human conversation in completeness and intensity. While this is clearly a long-term goal, MPAI is focusing on standards that provide frameworks which break down, where possible, complex AI functions into components, so as to facilitate the formation of a component market where solution aggregators can find AI Modules (AIMs) to build AI Workflows (AIWs) corresponding to standard use cases. The AI Framework standard (MPAI-AIF) is a key enabler of this plan.
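
The AIM/AIW pattern can be pictured as modules with declared input and output channels that a workflow chains together over a shared data store. The sketch below is a minimal illustration of that idea in Python; the class and method names are assumptions made for this article, not part of the MPAI-AIF specification.

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List

@dataclass
class AIM:
    """An AI Module: a named processing step with declared input/output channels.
    (Illustrative only; not the MPAI-AIF interface.)"""
    name: str
    inputs: List[str]
    outputs: List[str]
    run: Callable[[Dict[str, Any]], Dict[str, Any]]

@dataclass
class AIW:
    """An AI Workflow: an ordered chain of AIMs sharing a common data store."""
    modules: List[AIM] = field(default_factory=list)

    def execute(self, initial_data: Dict[str, Any]) -> Dict[str, Any]:
        store = dict(initial_data)
        for aim in self.modules:
            # Each AIM reads only the channels it declares as inputs and
            # publishes its declared outputs back into the shared store.
            produced = aim.run({key: store[key] for key in aim.inputs})
            store.update({key: produced[key] for key in aim.outputs})
        return store
```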

In September 2021, MPAI approved Multimodal Conversation V1 with five use cases. The first, Conversation with Emotion, assumes that a human converses with a machine that understands what the human says, extracts the human’s emotion from their speech and face, articulates a textual response with an attached emotion, and renders that response as synthetic speech carrying the emotion, together with a video of a face that expresses the machine’s emotion and whose lips are properly animated.
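
The Conversation with Emotion dataflow can be summarised as a chain of steps from recognition to emotional speech and face synthesis. The following is a minimal sketch with stub functions standing in for the actual AIMs; all names and signatures are illustrative, not taken from the standard.

```python
from typing import Tuple

def recognize_speech(speech: bytes) -> str:
    return "placeholder transcript"               # stub: speech recognition AIM

def extract_emotion(speech: bytes, face_video: bytes) -> str:
    return "neutral"                              # stub: emotion from speech + face

def generate_reply(text: str, emotion: str) -> Tuple[str, str]:
    return "placeholder answer", "cheerful"       # stub: dialogue processing AIM

def synthesize_reply(text: str, emotion: str) -> Tuple[bytes, bytes]:
    # Stub: emotional text-to-speech plus a lip-synced face video.
    return b"synthetic speech", b"animated face video"

def conversation_with_emotion(speech: bytes, face_video: bytes) -> Tuple[bytes, bytes]:
    text = recognize_speech(speech)
    emotion = extract_emotion(speech, face_video)
    reply_text, reply_emotion = generate_reply(text, emotion)
    return synthesize_reply(reply_text, reply_emotion)
```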

The second MPAI-MMC V1 use case was Multimodal Question Answering. Here a human asks a machine a question about an object. The machine understands the question and the nature of the object and generates a text answer, which is converted to synthetic speech.

The other use cases concern automatic speech translation and are not relevant to this article.

In July 2022, MPAI issued a Call for Technologies with the goal of acquiring the technologies needed to implement three more Multimodal Conversation use cases. One concerns the extension of the notion of “emotion” to “Personal Status”, an element of the internal state of a person which, besides emotion, also contains cognitive status (what a human or a machine has understood about the context) and attitude (the stance the human or the machine intends to adopt in the context). Personal Status is conveyed by text, speech, face, and gesture. See here for more details. Gesture is the second ambition of MPAI-MMC V2.
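
Read as a data structure, a Personal Status instance couples the three factors (emotion, cognitive status, attitude) with the modality that conveys them. The following is one possible, purely illustrative representation; the type and field names are assumptions made here, not the formats requested by the Call.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Modality(Enum):
    TEXT = "text"
    SPEECH = "speech"
    FACE = "face"
    GESTURE = "gesture"

@dataclass
class Factor:
    label: str                       # e.g. "cheerful", "understood", "confrontational"
    confidence: Optional[float] = None

@dataclass
class PersonalStatus:
    modality: Modality               # the channel conveying this status
    emotion: Optional[Factor] = None
    cognitive_status: Optional[Factor] = None   # what was understood about the context
    attitude: Optional[Factor] = None           # the stance adopted in the context
```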

One MPAI-MMC V2 use case, “Conversation about a Scene”, can be described as follows:

A human converses with a machine, indicating the object of their interest. The machine sees the scene and hears the human; extracts and understands the text from the human’s speech and the Personal Status in their speech, face, and gesture; understands which object the human intends; produces a response (text) with its own Personal Status; and manifests itself as a speaking avatar.

Figure 1 depicts a subset of the technologies that MPAI needs in order to implement this use case.

Figure 1 – The audio-visual front end

These are the functions of the modules and the data they provide (a schematic code sketch follows the list):

  1. The Visual Scene Description module analyses the video signal and describes and makes available the Gesture and the Physical Objects in the scene.
  2. The Object Description module provides the Physical Object Descriptors.
  3. The Gesture Description module provides the Gesture Descriptors.
  4. The Object Identification module uses both Physical Object Descriptors and Visual Scene-related Descriptors to understand which object in the scene the human is pointing their finger at, selects the appropriate set of Physical Object Descriptors, and gives the Object ID.
  5. The Gesture Descriptor Interpretation module uses the Gesture Descriptors to extract the Personal Status of Gesture.
  6. The Face Description – Face Descriptor Interpretation chain produces the Personal Status of Face.
  7. The Audio Scene Description module analyses the audio signal and describes and makes available the Speech Object.
  8. The Speech Description – Speech Descriptor Interpretation chain produces the Personal Status of Speech.
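
The list above can be read as a dataflow: the scene-description modules publish objects and descriptors that the identification and interpretation modules consume. Below is a schematic wiring of that front end in Python with stub bodies; all function names are placeholders for the AIMs above, and the return values are dummies so the sketch runs end to end.

```python
def visual_scene_description(video: bytes) -> dict:
    return {"gesture": b"...", "physical_object": b"...", "scene_descriptors": {}}

def object_description(physical_object: bytes) -> dict:
    return {"object_descriptors": "..."}            # Physical Object Descriptors

def gesture_description(gesture: bytes) -> dict:
    return {"gesture_descriptors": "..."}           # Gesture Descriptors

def object_identification(object_descriptors, scene_descriptors) -> str:
    return "object-id"                              # the object the human points at

def gesture_descriptor_interpretation(gesture_descriptors) -> str:
    return "PS-Gesture"                             # Personal Status of Gesture

def audio_scene_description(audio: bytes) -> bytes:
    return b"speech object"

def front_end(video: bytes, audio: bytes) -> dict:
    scene = visual_scene_description(video)
    obj = object_description(scene["physical_object"])
    gesture = gesture_description(scene["gesture"])
    return {
        "object_id": object_identification(obj["object_descriptors"],
                                            scene["scene_descriptors"]),
        "ps_gesture": gesture_descriptor_interpretation(gesture["gesture_descriptors"]),
        "speech_object": audio_scene_description(audio),
        # The Face and Speech description/interpretation chains (items 6 and 8)
        # would add "ps_face" and "ps_speech" entries in the same way.
    }
```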

After the “front end” part comes a “conversation and manifestation” part, involving another set of technologies, as described in Figure 2; a schematic code sketch follows the list below.

Figure 2 – Conversation and Manifestation

  1. The Text and Meaning Extraction module produces Text and Meaning.
  2. The Personal Status Fusion module integrates the three sources of Personal Status into the Personal Status.
  3. The Question and Dialogue Processing module processes Input Text, Meaning, Personal Status and Object ID and provides the Machine Output Text and Personal Status.
  4. The Personal Status Display module processes Machine Output Text and Personal Status and produces a speaking avatar uttering Machine Speech and showing an animated Machine Face and Machine Gesture.
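
Continuing the sketch, the back end consumes the front-end outputs and ends with the speaking avatar. Again, the function names and signatures below are illustrative stubs, not normative module interfaces; the dictionary argument is the one produced by the front-end sketch above.

```python
def text_and_meaning_extraction(speech_object: bytes):
    return "input text", "meaning"                 # stub: recognition + parsing

def personal_status_fusion(ps_speech: str, ps_face: str, ps_gesture: str) -> str:
    return "fused personal status"                 # stub: combine the three sources

def question_and_dialogue_processing(text, meaning, personal_status, object_id):
    return "machine output text", "machine personal status"

def personal_status_display(output_text: str, machine_ps: str) -> dict:
    # Stub: render the machine as a speaking avatar with speech, face, and gesture.
    return {"speech": b"...", "face": b"...", "gesture": b"..."}

def back_end(front: dict, ps_face: str, ps_speech: str) -> dict:
    text, meaning = text_and_meaning_extraction(front["speech_object"])
    ps = personal_status_fusion(ps_speech, ps_face, front["ps_gesture"])
    out_text, machine_ps = question_and_dialogue_processing(
        text, meaning, ps, front["object_id"])
    return personal_status_display(out_text, machine_ps)
```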

The MPAI-MMC V2 Call considers another use case – Avatar-Based Videoconference – that uses avatars in a different way.

Avatars representing geographically separated humans participate in a virtual conference. Each participant receives the other participants’ avatars, places them around a table, and takes part in the videoconference embodied in their own avatar.

The system is composed of the following (a sketch of the data exchanged follows the list):

  1. Transmitter client: Extracts Speech and Face Descriptors for authentication, creates Avatar Descriptors using Face & Gesture Descriptors and Meaning, and sends the participant’s Avatar Model & Descriptors and Speech to the Server.
  2. Server: Authenticates participants; distributes Avatar Models & Descriptors and Speech of each participant.
  3. Virtual Secretary: Makes and displays a summary of the avatars’ utterances using their speech and Personal Status.
  4. Receiver client: Creates the virtual videoconference scene, attaches speech to each avatar, and lets the participant view and/or navigate the virtual videoconference room.
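
One way to picture the data that the clients and the Server exchange is as two payload types: what a transmitter client uploads and what the Server redistributes to every receiver. These dataclasses are assumptions made for illustration, not the formats requested by the Call.

```python
from dataclasses import dataclass

@dataclass
class TransmitterPayload:
    participant_id: str
    auth_speech_descriptors: bytes   # for speaker authentication
    auth_face_descriptors: bytes     # for face authentication
    avatar_model: bytes              # the participant's avatar model
    avatar_descriptors: bytes        # built from Face & Gesture Descriptors and Meaning
    speech: bytes                    # the participant's speech

@dataclass
class ServerBroadcast:
    # What the Server forwards to receivers for each authenticated participant.
    participant_id: str
    avatar_model: bytes
    avatar_descriptors: bytes
    speech: bytes
```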

Figure 3 gives a simplified one-figure description of the use case.

Figure 3 – The avatar-based videoconference use case

This is the sequence of operations (a sketch of the receiver-side steps follows the list):

  1. The Speaker Identification and Face Identification modules produce Speech and Face Descriptors that the Authentication module in the server uses to identify the participant.
  2. The Personal Status Extraction module produces the Personal Status.
  3. The Speech Recognition and Meaning Extraction modules produce the Meaning.
  4. The Face Description and Gesture Description modules produce the Face and Gesture Descriptors (for feature and motion).
  5. The Participant Description module uses Personal Status, Meaning, and Face and Gesture Descriptors to produce the Avatar Descriptors.
  6. The Avatar Animation module animates the individual participant’s Avatar Model using the Avatar Descriptors.
  7. The AV Scene Composition module places the participants’ avatars in their assigned places, attaches to each avatar its own speech and produces the Audio-Visual Scene that the participant can view and navigate.
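
Steps 6 and 7 on the receiver side amount to animating each avatar model with its descriptors and composing the audio-visual scene. A minimal sketch, assuming each broadcast is a dictionary with avatar_model, avatar_descriptors, and speech entries; names and signatures are illustrative only.

```python
def animate_avatar(model: bytes, descriptors: bytes) -> bytes:
    return b"animated avatar"                    # stub: Avatar Animation AIM

def compose_av_scene(broadcasts: list) -> dict:
    scene = {"avatars": [], "audio": []}
    for seat, b in enumerate(broadcasts):        # seats assigned in arrival order here
        avatar = animate_avatar(b["avatar_model"], b["avatar_descriptors"])
        scene["avatars"].append({"seat": seat, "avatar": avatar})
        scene["audio"].append({"seat": seat, "speech": b["speech"]})
    return scene                                 # the viewable/navigable AV scene
```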

The MPAI-MMC V2 use cases require the following technologies:

  1. Audio Scene Description.
  2. Visual Scene Description.
  3. Speech Descriptors for:
    1. Speaker identification.
    2. Personal status extraction.
  4. Human Object Descriptors.
  5. Face Descriptors for:
    1. Face identification.
    2. Personal status extraction.
    3. Feature extraction (e.g., for avatar model).
    4. Motion extraction (e.g., to animate an avatar).
  6. Gesture Descriptors for:
    1. Personal Status extraction.
    2. Features (e.g., for avatar model).
    3. Motion (e.g., to animate an avatar).
  7. Personal Status.
  8. Avatar Model.
  9. Environment Model.
  10. Human’s virtual twin animation.
  11. Animated avatar manifesting a machine producing text and personal status.

The MPAI-MMC V2 standard is an opportunity for the industry to agree on a set of data formats, so that a market of modules able to handle those formats can develop. The standard should be extensible, in the sense that, as new and better-performing technologies mature, they can be incorporated into the standard.

Please see:

  1. The 2 min video (YouTube and non-YouTube) illustrating MPAI-MMC V2.
  2. The slides presented at the online meeting on 2022/07/12.
  3. The video recording (YouTube, non-YouTube) of the online presentation made on 12 July.
  4. The Call for Technologies, Use Cases and Functional Requirements, Framework Licence, and Template for responses.