Research, standards and thoughts for the digital world


A-User Formation: Building the A-User

  • Post category: MPAI

If Personality Alignment gives the A-User its style, the A-User Formation AIM gives it its body and its voice: the avatar and the speech that A-User Control embeds in the metaverse. The A-User stops being an abstract brain controlling various types of processing and becomes a visible, interactive entity. It’s not just about projecting a face on a bot; it’s about creating a coherent representation that matches the personality, the context, and the expressive cues. We have already…

Continue Reading: A-User Formation: Building the A-User
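
To make the excerpt above concrete, here is a minimal Python sketch of what an A-User Formation step could look like: a personality profile is mapped to a coherent avatar and voice choice for A-User Control to embed in the scene. All names (Personality, AvatarDescriptor, form_a_user, the avatar and voice identifiers) are illustrative assumptions, not the MPAI-specified interfaces.

```python
from dataclasses import dataclass

@dataclass
class Personality:
    tone: str  # e.g. "warm", "formal"

@dataclass
class AvatarDescriptor:
    model_id: str
    idle_animation: str

@dataclass
class SpeechDescriptor:
    voice_id: str
    speaking_rate: float  # 1.0 = neutral pace

@dataclass
class AUserEmbodiment:
    avatar: AvatarDescriptor
    speech: SpeechDescriptor

def form_a_user(personality: Personality) -> AUserEmbodiment:
    """Pick an avatar and a voice coherent with the personality; the result
    is what A-User Control would embed in the metaverse scene."""
    warm = personality.tone == "warm"
    return AUserEmbodiment(
        avatar=AvatarDescriptor(
            model_id="avatar_warm_adult" if warm else "avatar_neutral_adult",
            idle_animation="relaxed_breathing",
        ),
        speech=SpeechDescriptor(
            voice_id="voice_soft_mid" if warm else "voice_neutral_mid",
            speaking_rate=0.95 if warm else 1.0,
        ),
    )

if __name__ == "__main__":
    print(form_a_user(Personality(tone="warm")))
```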

Personality Alignment: The Style Engine of A-User

  • Post category: MPAI

Personality Alignment is where an A-User interacting with a User embedded in a metaverse environment stops being a generic bot and starts acting like a character with intent, tone, and flair. It’s not just a matter of what it utters – it’s about how those words land, how the avatar moves, and how the whole interaction feels. We have already presented the system diagram of the Autonomous User (A-User), an autonomous agent able to move and interact (walk, converse, do…

Continue Reading: Personality Alignment: The Style Engine of A-User
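
As an illustration of the idea in the excerpt, here is a hedged Python sketch of a style engine: a draft reply is wrapped with directives for text, voice, and gesture so that the downstream rendering stays consistent with one personality. The Personality fields and the align function are hypothetical, not taken from the MPAI specification.

```python
from dataclasses import dataclass

@dataclass
class Personality:
    tone: str       # e.g. "warm", "formal", "playful"
    verbosity: str  # e.g. "terse", "chatty"
    gestures: str   # e.g. "restrained", "expressive"

def align(draft_reply: str, personality: Personality) -> dict[str, str]:
    """Wrap a draft reply with style directives for text, voice, and avatar
    motion, so words, prosody, and gestures are rendered consistently."""
    return {
        "text_directive": (
            f"Rephrase in a {personality.tone}, {personality.verbosity} register: {draft_reply}"
        ),
        "speech_directive": f"Speak with a {personality.tone} tone of voice.",
        "gesture_directive": f"Use {personality.gestures} gestures while speaking.",
    }

if __name__ == "__main__":
    p = Personality(tone="warm", verbosity="chatty", gestures="expressive")
    for key, value in align("The exhibition hall is to your left.", p).items():
        print(f"{key}: {value}")
```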

User State Refinement: Turning a Snapshot into a Full Profile

  • Post category: MPAI

User State Refinement starts from a “blurry photo” of the User in the context (the initial User State) taken by Context Capture, a snapshot that includes a location, an activity, an initial intent, and maybe a few emotional hints, and adds to that picture all the information about the User that the workflow has been able to collect. We have already presented the system diagram of the Autonomous User (A-User), an autonomous agent able to move and interact (walk, converse, do things, etc.)…

Continue Reading: User State Refinement: Turning a Snapshot into a Full Profile
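
A minimal sketch, assuming a hypothetical UserState record and a refine helper, of how the initial “blurry photo” could be overlaid with whatever the workflow has collected; the field names and the merge policy are illustrative only.

```python
from dataclasses import dataclass, field

@dataclass
class UserState:
    location: str | None = None
    activity: str | None = None
    intent: str | None = None
    emotion: str | None = None
    extras: dict[str, str] = field(default_factory=dict)

def refine(initial: UserState, evidence: dict[str, str]) -> UserState:
    """Overlay collected evidence onto the initial snapshot: known fields are
    filled only if still missing, everything else goes into extras."""
    refined = UserState(**{**initial.__dict__, "extras": dict(initial.extras)})
    for key, value in evidence.items():
        if hasattr(refined, key) and key != "extras" and getattr(refined, key) is None:
            setattr(refined, key, value)
        else:
            refined.extras[key] = value
    return refined

if __name__ == "__main__":
    snapshot = UserState(location="virtual museum lobby", activity="walking")
    full = refine(snapshot, {"intent": "join the guided tour", "emotion": "curious", "language": "en"})
    print(full)
```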

Basic Knowledge: The Generalist Engine Getting Sharper with Every Prompt

  • Post category: MPAI

Basic Knowledge is the core language model of the Autonomous User – the “knows-a-bit-of-everything” brain. It provides the first response to a prompt, but the Autonomous User doesn’t fire off just one answer: it produces four of them in a progressive refinement loop, each response smarter and more context-aware as the prompt is refined. We have already presented the system diagram of the Autonomous User (A-User), an autonomous agent able to move and interact (walk, converse, do things, etc.) with…

Continue Reading: Basic Knowledge: The Generalist Engine Getting Sharper with Every Prompt
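
The progressive refinement loop described above can be sketched in a few lines of Python. The query_language_model stub and the way the previous answer is folded back into the prompt are assumptions made for illustration; in the real workflow, other AIMs contribute the added context.

```python
from dataclasses import dataclass

@dataclass
class Exchange:
    prompt: str
    response: str

def query_language_model(prompt: str) -> str:
    """Stand-in for the Basic Knowledge language model (hypothetical stub)."""
    return f"[model answer to: {prompt[:60]}...]"

def progressive_refinement(initial_prompt: str, rounds: int = 4) -> list[Exchange]:
    """Run the four-round loop the excerpt describes: each round re-prompts
    the model with the previous answer folded into a refined prompt."""
    history: list[Exchange] = []
    prompt = initial_prompt
    for _ in range(rounds):
        response = query_language_model(prompt)
        history.append(Exchange(prompt, response))
        # Refine the prompt; here we simply fold the last answer back in.
        prompt = (
            f"{initial_prompt}\nPrevious answer: {response}\n"
            "Improve it using the added context."
        )
    return history

if __name__ == "__main__":
    rounds = progressive_refinement("Greet the User who just entered the virtual lobby.")
    for i, exchange in enumerate(rounds, 1):
        print(f"Round {i}: {exchange.response}")
```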

Domain Access: The Specialist Brain Plug-in for the Autonomous User

  • Post category: MPAI

While the Basic Knowledge module is a generalist language model that “knows a bit of everything”, Domain Access is the expert that covers a broad range of specialist domains, enabling the Autonomous User to tap into domain-specific intelligence for deeper understanding. We have already presented the system diagram of the Autonomous User (A-User), an autonomous agent able to move and interact (walk, converse, do things, etc.) with another User in a metaverse. The latter User may be an A-User or be…

Continue Reading: Domain Access: The Specialist Brain Plug-in for the Autonomous User
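
A small Python sketch of the routing idea: a query is sent to a domain expert when one is recognised and falls back to the generalist otherwise. The DOMAIN_EXPERTS registry, the keyword-based detect_domain classifier, and the stubbed answers are all hypothetical placeholders, not MPAI interfaces.

```python
from typing import Callable

# Hypothetical registry mapping a domain tag to a specialist answerer.
DOMAIN_EXPERTS: dict[str, Callable[[str], str]] = {
    "medicine": lambda q: f"[medical expert answer to: {q}]",
    "law": lambda q: f"[legal expert answer to: {q}]",
}

def basic_knowledge(query: str) -> str:
    """Fallback generalist model, standing in for the Basic Knowledge AIM."""
    return f"[generalist answer to: {query}]"

def detect_domain(query: str) -> str | None:
    """Toy keyword classifier; a real system would use a trained model."""
    keywords = {"medicine": ("symptom", "dose", "diagnosis"), "law": ("contract", "liability")}
    lowered = query.lower()
    for domain, words in keywords.items():
        if any(word in lowered for word in words):
            return domain
    return None

def answer(query: str) -> str:
    domain = detect_domain(query)
    expert = DOMAIN_EXPERTS.get(domain) if domain else None
    return expert(query) if expert else basic_knowledge(query)

if __name__ == "__main__":
    print(answer("What is the usual dose of paracetamol?"))
    print(answer("Tell me about the weather in the metaverse."))
```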

Prompt Creation: Where Words Meet Context

  • Post category: MPAI

The Prompt Creation module is the storyteller and translator in the Autonomous User’s “brain”. It takes raw sensory input – audio and visual spatial data of the Context (such as objects in a scene with their position, orientation, and velocity) and the Entity State (a rich description of the A‑User’s understanding of the “internal state” of the User) – and turns it into a well‑formed prompt that Basic Knowledge can actually understand and respond to. We have already presented the system diagram…

Continue Reading: Prompt Creation: Where Words Meet Context
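
To show what “turning Context and Entity State into a prompt” can amount to, a minimal Python sketch follows: scene objects with position, orientation, and velocity plus a simple Entity State are serialised into a text prompt for Basic Knowledge. The dataclass fields and the create_prompt wording are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    position: tuple[float, float, float]
    orientation: tuple[float, float, float]  # Euler angles, degrees
    velocity: tuple[float, float, float]

@dataclass
class EntityState:
    emotion: str
    activity: str
    intent: str

def create_prompt(objects: list[SceneObject], state: EntityState, utterance: str) -> str:
    """Serialise the spatial Context and the Entity State into a text prompt."""
    scene = "; ".join(
        f"{obj.name} at {obj.position}, facing {obj.orientation}, moving {obj.velocity}"
        for obj in objects
    )
    return (
        f"Scene: {scene}.\n"
        f"User appears {state.emotion}, is {state.activity}, and seems to want to {state.intent}.\n"
        f'User said: "{utterance}"\n'
        "Respond appropriately."
    )

if __name__ == "__main__":
    objs = [SceneObject("red door", (2.0, 0.0, 1.5), (0.0, 90.0, 0.0), (0.0, 0.0, 0.0))]
    st = EntityState(emotion="curious", activity="looking around", intent="find the exhibition hall")
    print(create_prompt(objs, st, "Where should I go next?"))
```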