
Linking Dialogue Characters to MetaHuman Actors

This guide explains how a Dialogue Character asset becomes bound to a live MetaHuman (or any speaking actor) at runtime, ensuring that audio, facial animation, camera logic, and head/eye focus all resolve to the correct entity.

Core Identity Contract

BPI_ID_Speaker::GetIdCharacter() is the single authoritative source of which Dialogue Character asset an actor represents.

  • The ID_Character component is a convenience holder for the asset reference.
  • Do not duplicate ("shadow") the asset in a separate Blueprint variable unless you fully mirror updates.
  • All participant matching during dialogue start queries actors via this interface method.

Best Practice: Always return the Dialogue Character asset stored on the ID_Character component from your Blueprint implementation of GetIdCharacter.

High-Level Flow

  1. Actor Blueprint implements BPI_ID_Speaker.
  2. An ID_Character component (or equivalent) stores a Dialogue Character asset reference.
  3. The dialogue controller gathers candidate actors (level-placed or spawned).
  4. For each dialogue participant in the Dialogue Graph, the system searches for an actor whose GetIdCharacter() returns the referenced asset.
  5. Bound actors receive line playback events (audio, facial animation, cameras, eye focus, etc.).
Binding chain:

    Dialogue Graph Participant -> Dialogue Character Asset <-> Actor (BPI_ID_Speaker + ID_Character)

The right-hand link is established from both sides: the actor's Blueprint implementation of GetIdCharacter returns the asset, and the same asset is assigned on the actor's ID_Character component.
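The matching in step 4 can be sketched in plain C++. This is a simplified, non-Unreal model: `DialogueCharacterAsset`, `Actor`, and `FindActorForParticipant` are illustrative stand-ins for the plugin's real types, shown only to make the identity-comparison logic concrete.

```cpp
#include <string>
#include <vector>

// Illustrative stand-ins for the Unreal types (not the plugin's real classes).
struct DialogueCharacterAsset { std::string Name; };

struct Actor {
    std::string Label;
    const DialogueCharacterAsset* IdCharacter; // what GetIdCharacter() would return
};

// Step 4 of the flow: find the actor whose identity asset matches the
// participant's asset. Returns nullptr when no candidate matches, in which
// case that participant is skipped.
Actor* FindActorForParticipant(const DialogueCharacterAsset* Participant,
                               std::vector<Actor>& Candidates)
{
    for (Actor& Candidate : Candidates) {
        // Pointer identity: same asset reference means same speaker.
        if (Candidate.IdCharacter == Participant) {
            return &Candidate;
        }
    }
    return nullptr;
}
```

Note that matching is by asset identity, not by name, which is why each speaking actor needs a unique Dialogue Character asset.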

Required Components & Interfaces

| Element | Purpose |
| --- | --- |
| ID_Character component | Stores Dialogue Character asset reference |
| AC_ID_EyeFocus component | Handles dynamic eye / head focus logic |
| Audio Component | Voice playback output |
| BPI_ID_Speaker interface | Unified access to meshes, audio, identity |

Interface Method Expectations

| Method | Should Return | Notes |
| --- | --- | --- |
| GetVoiceAudioComponent | The actor's primary Audio Component | Must be valid for voice lines |
| GetLeaderPoseMesh | Skeletal mesh driving body animation | Used for head sync layering |
| GetFaceMesh | Face mesh (MetaHuman face SKM) | Drives facial animation curves |
| GetIdCharacter | Dialogue Character asset (from component) | Identity contract |
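As a simplified model of the contract these methods form, the interface can be written as an abstract class. The stub types below (`AudioComponent`, `SkeletalMesh`, `DialogueCharacterAsset`, `IIdSpeaker`) are illustrative placeholders mirroring the table above, not the plugin's actual Unreal declarations.

```cpp
#include <string>

// Illustrative stub types standing in for Unreal components and assets.
struct AudioComponent { };
struct SkeletalMesh { std::string Name; };
struct DialogueCharacterAsset { std::string Name; };

// Simplified model of BPI_ID_Speaker: one interface giving the dialogue
// controller unified access to audio, meshes, and identity.
struct IIdSpeaker {
    virtual AudioComponent* GetVoiceAudioComponent() = 0;       // must be valid for voice lines
    virtual SkeletalMesh* GetLeaderPoseMesh() = 0;              // body mesh used for head sync layering
    virtual SkeletalMesh* GetFaceMesh() = 0;                    // face mesh driving facial animation curves
    virtual const DialogueCharacterAsset* GetIdCharacter() = 0; // identity contract
    virtual ~IIdSpeaker() = default;
};
```

A concrete speaker implements all four methods by returning references held on its own components; any null or wrong return shows up as one of the symptoms in the Troubleshooting table.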

Example (Blueprint Implementation Notes)

  • Add: Audio Component, ID_Character, AC_ID_EyeFocus.
  • Implement interface functions, retrieving values directly from those components.
  • Assign the Dialogue Character asset inside ID_Character component details (or via construction script).

Matching Requirements Checklist

| Requirement | Why It Matters |
| --- | --- |
| ID_Character component present | Exposes identity to runtime matching |
| Dialogue Character asset assigned | Enables participant mapping |
| Unique asset per speaking actor | Prevents ambiguous binding |
| Interface implemented correctly | Ensures controller retrieves required data |
| Correct meshes returned | Enables head + face sync systems |

Troubleshooting

| Symptom | Likely Cause | Resolution |
| --- | --- | --- |
| Actor skipped | Asset mismatch or not assigned | Verify component holds correct asset |
| No audio | GetVoiceAudioComponent returns null | Return the added Audio Component |
| Missing head sync | Wrong GetLeaderPoseMesh | Return the mesh running core body AnimBP |
| No face anim | GetFaceMesh wrong / ABP missing | Assign face mesh + correct ABP |
| Two actors speak one line | Duplicate Dialogue Character asset | Ensure uniqueness |

Example C++ Snippet

```cpp
// Inside your AMyMetaHumanCharacter class implementing the interface
UDialogueCharacter* AMyMetaHumanCharacter::GetIdCharacter_Implementation() const
{
    return ID_CharacterComponent ? ID_CharacterComponent->GetDialogueCharacter() : nullptr;
}
```

Maintaining Consistency

  • Centralize assignment of Dialogue Character asset (Construction Script or a single initialization function).
  • Avoid temporary overrides; if you must swap identity (rare), broadcast a change event so caching systems can refresh.
  • Periodically validate (in PIE debug tools) that each expected speaker maps to exactly one world actor.
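The last bullet can be sketched as a validation pass. This is an illustrative, non-Unreal sketch (`DialogueCharacterAsset` and `ValidateSpeakerBindings` are hypothetical names, not plugin API): it counts how many actors report each expected asset and flags anything other than exactly one binding.

```cpp
#include <map>
#include <string>
#include <vector>

// Illustrative stand-in for the Dialogue Character asset type.
struct DialogueCharacterAsset { std::string Name; };

// For each expected speaker, report problems: zero bindings (actor will be
// skipped) or duplicate bindings (two actors could speak the same line).
std::vector<std::string> ValidateSpeakerBindings(
    const std::vector<const DialogueCharacterAsset*>& ExpectedSpeakers,
    const std::vector<const DialogueCharacterAsset*>& ActorIdentities)
{
    // Count how many world actors report each identity asset.
    std::map<const DialogueCharacterAsset*, int> Counts;
    for (const DialogueCharacterAsset* Id : ActorIdentities) {
        if (Id) ++Counts[Id];
    }

    std::vector<std::string> Problems;
    for (const DialogueCharacterAsset* Speaker : ExpectedSpeakers) {
        auto It = Counts.find(Speaker);
        const int N = (It != Counts.end()) ? It->second : 0;
        if (N == 0) {
            Problems.push_back(Speaker->Name + ": no bound actor (line will be skipped)");
        } else if (N > 1) {
            Problems.push_back(Speaker->Name + ": duplicate binding (" +
                               std::to_string(N) + " actors)");
        }
    }
    return Problems;
}
```

Running a check like this in a PIE debug tool surfaces both failure modes from the Troubleshooting table (skipped actors and two actors speaking one line) before they appear in playback.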