# Linking Dialogue Characters to MetaHuman Actors

This guide explains how a Dialogue Character asset is bound to a live MetaHuman (or any speaking actor) at runtime, ensuring that audio, facial animation, camera logic, and head/eye focus all resolve to the correct entity.
## Core Identity Contract

`BPI_ID_Speaker::GetIdCharacter()` is the single authoritative source of which Dialogue Character asset an actor represents.

- The `ID_Character` component is a convenience holder for the asset reference.
- Do not duplicate ("shadow") the asset in a separate Blueprint variable unless you fully mirror updates.
- All participant matching during dialogue start queries actors via this interface method.

**Best Practice:** Always return the Dialogue Character asset stored on the `ID_Character` component from your Blueprint implementation of `GetIdCharacter`.
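This contract can be sketched in plain C++ (all type names below are illustrative stand-ins, not the actual Unreal classes): identity is always read through the component, so there is no cached copy to go stale.

```cpp
#include <string>

// Minimal sketch with hypothetical stand-in types (DialogueCharacter,
// IdCharacterComponent, SpeakerActor are illustrative, not Unreal classes).
struct DialogueCharacter { std::string Name; };
struct IdCharacterComponent { const DialogueCharacter* Asset = nullptr; };

struct SpeakerActor {
    IdCharacterComponent IdCharacter; // the single holder of the asset reference

    // Always read through the component: a later reassignment (e.g. from the
    // Construction Script) is then visible to every caller automatically.
    // Do NOT cache the asset in a second variable that can silently go stale.
    const DialogueCharacter* GetIdCharacter() const { return IdCharacter.Asset; }
};
```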
## High-Level Flow

1. The actor Blueprint implements `BPI_ID_Speaker`.
2. An `ID_Character` component (or equivalent) stores a Dialogue Character asset reference.
3. The dialogue controller gathers candidate actors (level-placed or spawned).
4. For each participant in the Dialogue Graph, the system searches for an actor whose `GetIdCharacter()` returns the referenced asset.
5. Bound actors receive line playback events (audio, facial animation, cameras, eye focus, etc.).

Dialogue Graph Participant -> Dialogue Character Asset <-> Actor (`BPI_ID_Speaker` + `ID_Character`)
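The participant-matching search can be sketched in plain C++ with stand-in types (hypothetical names, not the actual Unreal classes). Note that the comparison is by asset identity, not by name.

```cpp
#include <string>
#include <vector>

// Illustrative stand-in types (not Unreal classes).
struct DialogueCharacter { std::string Name; };

struct SpeakerActor {
    std::string Label;
    const DialogueCharacter* IdCharacter = nullptr; // held by the ID_Character component
    const DialogueCharacter* GetIdCharacter() const { return IdCharacter; }
};

// For each participant asset referenced by the Dialogue Graph, the controller
// looks for a candidate actor whose GetIdCharacter() returns that exact asset.
SpeakerActor* FindSpeaker(std::vector<SpeakerActor>& Candidates,
                          const DialogueCharacter* Participant)
{
    for (SpeakerActor& Candidate : Candidates)
        if (Candidate.GetIdCharacter() == Participant)
            return &Candidate;
    return nullptr; // unmatched participants receive no playback events
}
```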
## Required Components & Interfaces

| Element | Purpose |
|---|---|
| `ID_Character` component | Stores the Dialogue Character asset reference |
| `AC_ID_EyeFocus` component | Handles dynamic eye / head focus logic |
| Audio Component | Voice playback output |
| `BPI_ID_Speaker` interface | Unified access to meshes, audio, and identity |
## Interface Method Expectations

| Method | Should Return | Notes |
|---|---|---|
| `GetVoiceAudioComponent` | The actor's primary Audio Component | Must be valid for voice lines |
| `GetLeaderPoseMesh` | Skeletal mesh driving body animation | Used for head sync layering |
| `GetFaceMesh` | Face mesh (MetaHuman face SKM) | Drives facial animation curves |
| `GetIdCharacter` | Dialogue Character asset (from the component) | Identity contract |
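These expectations can be expressed as a plain C++ interface sketch. The type names are stand-ins for illustration; the real project implements the `BPI_ID_Speaker` Blueprint interface instead.

```cpp
#include <string>

// Hypothetical stand-in types (not Unreal classes).
struct AudioComponent    { std::string Name; };
struct SkeletalMesh      { std::string Name; };
struct DialogueCharacter { std::string Name; };

// Sketch of the BPI_ID_Speaker method expectations as a C++ interface.
struct IIdSpeaker {
    virtual ~IIdSpeaker() = default;
    virtual AudioComponent* GetVoiceAudioComponent() = 0;        // must be valid for voice lines
    virtual SkeletalMesh*   GetLeaderPoseMesh() = 0;             // body mesh, used for head sync layering
    virtual SkeletalMesh*   GetFaceMesh() = 0;                   // face mesh, drives facial animation curves
    virtual const DialogueCharacter* GetIdCharacter() const = 0; // identity contract
};

// A minimal implementer that returns component-held values, never cached copies.
struct MetaHumanSpeaker : IIdSpeaker {
    AudioComponent Voice{"Voice"};
    SkeletalMesh   Body{"Body"};
    SkeletalMesh   Face{"Face"};
    const DialogueCharacter* Identity = nullptr;

    AudioComponent* GetVoiceAudioComponent() override { return &Voice; }
    SkeletalMesh*   GetLeaderPoseMesh() override      { return &Body; }
    SkeletalMesh*   GetFaceMesh() override            { return &Face; }
    const DialogueCharacter* GetIdCharacter() const override { return Identity; }
};
```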
## Example (Blueprint Implementation Notes)

- Add the components: Audio Component, `ID_Character`, `AC_ID_EyeFocus`.
- Implement the interface functions, retrieving values directly from those components.
- Assign the Dialogue Character asset in the `ID_Character` component details (or via the Construction Script).
## Matching Requirements Checklist

| Requirement | Why It Matters |
|---|---|
| `ID_Character` component present | Exposes identity to runtime matching |
| Dialogue Character asset assigned | Enables participant mapping |
| Unique asset per speaking actor | Prevents ambiguous binding |
| Interface implemented correctly | Ensures the controller retrieves required data |
| Correct meshes returned | Enables head + face sync systems |
## Troubleshooting

| Symptom | Likely Cause | Resolution |
|---|---|---|
| Actor skipped | Asset mismatch or not assigned | Verify the component holds the correct asset |
| No audio | `GetVoiceAudioComponent` returns null | Return the added Audio Component |
| Missing head sync | Wrong `GetLeaderPoseMesh` | Return the mesh running the core body AnimBP |
| No face anim | `GetFaceMesh` wrong / ABP missing | Assign the face mesh and the correct ABP |
| Two actors speak one line | Duplicate Dialogue Character asset | Ensure uniqueness |
## Example C++ Snippet

```cpp
// Inside your AMyMetaHumanCharacter class implementing the interface.
// Returns the asset held by the ID_Character component, or nullptr if the
// component is missing, honoring the single-source-of-truth identity contract.
UDialogueCharacter* AMyMetaHumanCharacter::GetIdCharacter_Implementation() const
{
    return ID_CharacterComponent ? ID_CharacterComponent->GetDialogueCharacter() : nullptr;
}
```
## Maintaining Consistency
- Centralize assignment of Dialogue Character asset (Construction Script or a single initialization function).
- Avoid temporary overrides; if you must swap identity (rare), broadcast a change event so caching systems can refresh.
- Periodically validate (in PIE debug tools) that each expected speaker maps to exactly one world actor.
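The last point can be sketched as a small validation helper in plain C++ (stand-in types and a hypothetical function name, not project code): each expected speaker should map to exactly one world actor, with zero matches flagging an unbound speaker and two or more flagging an ambiguous binding.

```cpp
#include <map>
#include <string>
#include <vector>

// Illustrative stand-in types (not Unreal classes).
struct DialogueCharacter { std::string Name; };
struct SpeakerActor {
    std::string Label;
    const DialogueCharacter* IdCharacter = nullptr;
};

// Hypothetical helper: returns the names of expected speakers that do NOT map
// to exactly one world actor.
std::vector<std::string> ValidateSpeakerBindings(
    const std::vector<const DialogueCharacter*>& ExpectedSpeakers,
    const std::vector<SpeakerActor>& WorldActors)
{
    // Count how many world actors claim each identity asset.
    std::map<const DialogueCharacter*, int> Counts;
    for (const SpeakerActor& Actor : WorldActors)
        if (Actor.IdCharacter)
            ++Counts[Actor.IdCharacter];

    // Flag any speaker bound zero times (unbound) or more than once (ambiguous).
    std::vector<std::string> Problems;
    for (const DialogueCharacter* Speaker : ExpectedSpeakers)
        if (Counts[Speaker] != 1)
            Problems.push_back(Speaker->Name);
    return Problems;
}
```

A debug tool running in PIE could log these names each session to catch duplicate or missing assignments early.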