Virtual Secretary for Videoconference

As reported in a previous post, MPAI is busy finalising the “Use Cases and Functional Requirements” document of MPAI-MMC V2. One use case is Avatar-Based Videoconference (ABV), part of the Mixed-reality Collaborative Space (MCS) project supporting scenarios where geographically separated humans represented by avatars collaborate in virtual-reality spaces.

ABV refers to a virtual videoconference room equipped with a table and an appropriate number of chairs to be occupied by:

  1. Speaking virtual twins representing human participants, displayed as the upper part of avatars that resemble their real twins.
  2. Speaking human-like avatars not representing humans, e.g., a secretary taking notes of the meeting, answering questions, etc.

In line with the MPAI approach to standardisation, this article reports the currently defined functions, input/output data, and AIM topology of the Virtual Secretary's AI Workflow (AIW), as well as its AI Modules (AIMs) and their input/output data. This information is expected to change before it is published as an annex to the upcoming Call for Technologies.

The functions of the Virtual Secretary are:

  1. To collect and summarise the statements made by participating avatars.
  2. To display the summary for participants to see, read and comment on.
  3. To receive sentences/questions about its summary via Speech and Text.
  4. To monitor the avatars’ emotions in their speech and faces, and the expressions in their gestures.
  5. To change the summary based on avatars’ text from speech, emotion from speech and face, and expression from gesture.
  6. To respond via speech and text, and display emotion in text, speech, and face.
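
As a rough illustration, these functions could be grouped into a single interface. The sketch below is a non-normative Python outline; the class and method names (VirtualSecretary, collect_statement, handle_comment, etc.) are assumptions for illustration, not MPAI-defined terms.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class AvatarUtterance:
    """Hypothetical container for one avatar's contribution: speech, recognised
    text, the emotions detected in speech and face, and the gesture expression."""
    speech: bytes
    text: str
    speech_emotion: str
    face_emotion: str
    gesture_expression: str


class VirtualSecretary(ABC):
    """Illustrative grouping of the functions listed above."""

    @abstractmethod
    def collect_statement(self, utterance: AvatarUtterance) -> None:
        """Collect and summarise a statement made by a participating avatar."""

    @abstractmethod
    def current_summary(self) -> str:
        """Return the summary displayed for participants to read and comment on."""

    @abstractmethod
    def handle_comment(self, utterance: AvatarUtterance, chat_text: str = "") -> str:
        """Receive a sentence/question about the summary (via speech or text),
        possibly amend the summary, and return the Secretary's reply text."""
```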

The Virtual Secretary workflow in the AI Framework is depicted in Figure 1.

Figure 1 – Reference Model of Virtual Secretary

The operation of the workflow can be described as follows:

  1. The Virtual Secretary recognises the speech of the avatars.
  2. The Speech Recognition and Face Analysis AIMs extract the emotions from the avatars’ speech and faces.
  3. Emotion Fusion provides a single emotion based on the two emotions.
  4. Gesture Analysis extracts the gesture expression.
  5. Language Understanding uses the recognised text and the emotion in speech to provide the final version of the input text (LangUnd-Text) and the meaning of the sentence uttered by an avatar.
  6. Question Analysis uses the meaning to extract the intention of the sentence uttered by an avatar.
  7. Question and Dialogue Processing (QDP) receives LangUnd-Text and the text provided by a participant via chat and generates:
    1. The text to be used in the summary or to interact with other avatars.
    2. The emotion contained in the speech to be synthesised.
    3. The emotion to be displayed by the Virtual Secretary avatar’s face.
    4. The expression to be displayed by the Virtual Secretary’s avatar.
  8. Speech Synthesis (Emotion) uses QDP’s text and emotion and generates the Virtual Secretary’s synthetic speech with the appropriate embedded emotion.
  9. Face Synthesis (Emotion) uses the Virtual Secretary’s synthetic speech and QDP’s face emotion to animate the face of the Virtual Secretary’s avatar.
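
To make the data flow of Figure 1 concrete, the sketch below chains hypothetical stand-ins for the AIMs in the order of steps 1-9. The function signatures and the names used as dictionary keys are assumptions; MPAI-MMC V2 will define the actual AIMs and their interfaces.

```python
from typing import Any, Callable, Dict, Tuple


def virtual_secretary_workflow(
    aims: Dict[str, Callable[..., Any]],
    avatar_speech: Any,
    avatar_face: Any,
    avatar_gesture: Any,
    chat_text: str,
) -> Tuple[str, Any, Any, Any]:
    """Chain the AIMs in the order of steps 1-9; `aims` maps the AIM names used
    in the text to caller-supplied implementations (this sketch defines none)."""
    # 1-2. Speech Recognition and Face Analysis: recognised text plus emotions.
    recognised_text, speech_emotion = aims["speech_recognition"](avatar_speech)
    face_emotion = aims["face_analysis"](avatar_face)

    # 3. Emotion Fusion: a single emotion from the speech and face emotions.
    fused_emotion = aims["emotion_fusion"](speech_emotion, face_emotion)

    # 4. Gesture Analysis: expression carried by the avatar's gesture.
    gesture_expression = aims["gesture_analysis"](avatar_gesture)

    # 5. Language Understanding: final input text (LangUnd-Text) and meaning.
    langund_text, meaning = aims["language_understanding"](recognised_text, speech_emotion)

    # 6. Question Analysis: intention of the uttered sentence.
    intention = aims["question_analysis"](meaning)

    # 7. Question and Dialogue Processing: reply/summary text plus the emotions
    #    and expression the Secretary's avatar should convey.
    text, speech_emo, face_emo, expression = aims["question_dialogue_processing"](
        langund_text, chat_text, fused_emotion, gesture_expression, intention
    )

    # 8. Speech Synthesis (Emotion): synthetic speech with embedded emotion.
    secretary_speech = aims["speech_synthesis"](text, speech_emo)

    # 9. Face Synthesis (Emotion): face animation driven by the synthetic speech.
    secretary_face = aims["face_synthesis"](secretary_speech, face_emo)

    return text, secretary_speech, secretary_face, expression
```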

The data types processed by the Virtual Secretary are:

Avatar Descriptors allow the animation of an Avatar Model based on the description of the movement of:

  1. Muscles of the face (e.g., eyes, lips).
  2. Head, arms, hands, and fingers.

Avatar Model is the model of an avatar without the lower part of the body (from the waist down); applying Avatar Descriptors to the model makes it possible to:

  1. Express one of the MPAI standardised emotions on the face of the avatar.
  2. Animate the lips of an avatar in a way that is congruent with the speech it utters, its associated emotion and the emotion it expresses on the face.
  3. Animate head, arms, hands, and fingers to express one of the Gestures to be standardised by MPAI, e.g., to indicate a particular person or object or the movements required by a sign language.
  4. Rotate the upper part of the avatar’s body, e.g., as needed when the avatar turns to look at the avatar next to it.
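
A minimal, non-normative sketch of how Avatar Descriptors and an Avatar Model could be represented is given below; the field names and layout are illustrative assumptions, not the format MPAI will standardise.

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class AvatarDescriptors:
    """Hypothetical grouping of the movement descriptions listed above."""
    face_muscles: Dict[str, float] = field(default_factory=dict)        # e.g. eyes, lips
    head: List[float] = field(default_factory=lambda: [0.0, 0.0, 0.0])  # yaw/pitch/roll
    arms: Dict[str, List[float]] = field(default_factory=dict)          # joint angles
    hands_and_fingers: Dict[str, List[float]] = field(default_factory=dict)
    upper_body_rotation: float = 0.0  # e.g. to turn towards a neighbouring avatar


@dataclass
class AvatarModel:
    """Upper-body avatar model (no lower part, from the waist down)."""
    model_id: str
    last_applied: AvatarDescriptors = field(default_factory=AvatarDescriptors)

    def apply(self, descriptors: AvatarDescriptors) -> None:
        """Apply one frame of descriptors: express an emotion on the face,
        animate the lips congruently with the speech, move head/arms/hands/
        fingers for a gesture, or rotate the upper body. A real implementation
        would drive the model's rig; this sketch only records the descriptors."""
        self.last_applied = descriptors
```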

Emotion of a Face is represented by the MPAI standardised basic set of 59 static emotions and their semantics. To support the Virtual Secretary use case, MPAI needs new technology to represent a sequence of emotions each having a duration and a transition time. The dynamic emotion representation should allow for two different emotions to happen at the same time, possibly with different durations.
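
One possible, non-normative shape for such a dynamic emotion representation is sketched below; the class and field names are assumptions made for illustration.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class TimedEmotion:
    """One emotion from the standardised basic set, held for `duration` seconds
    and reached through a blend of `transition` seconds from the previous state."""
    label: str
    duration: float
    transition: float


@dataclass
class DynamicEmotion:
    """Sequence of timed emotions; a second track allows two different emotions
    to be active at the same time, possibly with different durations."""
    primary: List[TimedEmotion]
    secondary: List[TimedEmotion] = field(default_factory=list)
```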

Face Descriptors allow the animation of a face expressing emotion, including at least eyes (to gaze at a particular avatar) and lips (animated in sync with the speech).

Intention is the result of the analysis of the goal of an input question, as standardised in MPAI-MMC V1.

Meaning is information extracted from an input text and a physical gesture expression, e.g., whether it is a question, statement, exclamation, expression of doubt, request, or invitation.

Physical Gesture Descriptors represent the movement of head, arms, hands, and fingers suitable for:

  1. Recognition of sign language.
  2. Recognition of coded hand signs, e.g., to indicate a particular object in a scene.
  3. Representation of arbitrary head, arm, hand, and finger motion.
  4. Culture-dependent signs (e.g., mudra signs).
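
The sketch below is one possible, non-normative way to carry such descriptors; the enumeration of uses mirrors the list above, while the field layout is an assumption.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Dict, List


class GestureUse(Enum):
    """The four uses of Physical Gesture Descriptors listed above."""
    SIGN_LANGUAGE = "sign_language"
    CODED_HAND_SIGN = "coded_hand_sign"                 # e.g. pointing at an object in a scene
    ARBITRARY_MOTION = "arbitrary_motion"
    CULTURE_DEPENDENT_SIGN = "culture_dependent_sign"   # e.g. mudra signs


@dataclass
class PhysicalGestureDescriptors:
    """Hypothetical per-frame record of head, arm, hand, and finger movement."""
    use: GestureUse
    head: List[float]                          # e.g. yaw/pitch/roll
    arms: Dict[str, List[float]]               # joint angles per arm
    hands_and_fingers: Dict[str, List[float]]
```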

Spatial coordinates allow the representation of the position of an avatar, so that another avatar can gaze at its face when conversing with it.
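
For illustration, a simple gaze computation from spatial coordinates could look like the following sketch; the function and its signature are assumptions, not part of the specification.

```python
import math
from typing import Tuple

Point = Tuple[float, float, float]


def gaze_direction(observer: Point, target: Point) -> Point:
    """Unit vector from an avatar's position towards another avatar's position,
    which a renderer could use to orient the eyes (and head) towards the
    interlocutor's face."""
    dx, dy, dz = (t - o for o, t in zip(observer, target))
    length = math.sqrt(dx * dx + dy * dy + dz * dz) or 1.0
    return (dx / length, dy / length, dz / length)
```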

Speech Features allow a user to select a Virtual Secretary with a particular speech model.

Visual Scene Descriptors allow the representation of a visual scene in a virtual environment.

In July, MPAI plans to publish a Call for Technologies for MPAI-MMC V2. The Call will have two attachments: the first is the already referenced Use Cases and Functional Requirements document; the second is the Framework Licence that respondents to the Call shall accept in order to have their responses considered.