The avatar system provides complete control over 3D character models, animations, and behaviors. Each scenario can have its own avatar configuration with custom models, animation sets, and interaction behaviors.
Avatar settings are defined in the `SceneConfig.js` file for each scenario. The system supports GLB/GLTF models with skeletal animations and automatic retargeting.
```js
avatarSettings: {
  hipBone: "Root",                                   // Root bone name for animation retargeting
  avatarFile: "/models/Eva/model.glb",               // Path to avatar model
  animationFile: "/models/standing_animations.glb",  // Path to animation library
  translations: {
    ["default"]: {
      position: [-0.02, 0, -0.975],  // [x, y, z] position
      rotation: [0, 0, 0],           // [x, y, z] rotation in radians
      scale: [1, 1, 1]               // [x, y, z] scale factors
    }
  }
}
```
Your avatar model should be a GLB/GLTF file with a skeletal rig, so that animations from the configured animation library can be played or retargeted onto it.
The animation system supports multiple animation states with weighted blending and seamless transitions between different character behaviors.
The system includes three primary animation states:
```js
animationSet: {
  [AvatarAnimation.IDLE]: [  // Default resting state
    { loop: LoopRepeat, name: "Idle_Neutral", weight: 1 },
    { loop: LoopRepeat, name: "Idle_Variation", weight: 0.3 }  // Lower weight = less frequent
  ],
  [AvatarAnimation.THINKING]: [  // Processing/listening state
    { loop: LoopRepeat, name: "Thinking" }
  ],
  [AvatarAnimation.TALKING]: [  // Speaking state
    { loop: LoopOnce, name: "Talking_Gesture_1", weight: 1 },
    { loop: LoopOnce, name: "Talking_Gesture_2", weight: 0.5 }
  ]
}
```
Each animation can be configured with a loop mode (`LoopRepeat` or `LoopOnce`), the animation clip `name`, and an optional blend `weight` that controls how strongly (or how often) the clip contributes to its state.
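The `LoopRepeat` and `LoopOnce` values are the standard three.js loop-mode constants (assuming the project renders with three.js, which these names suggest). A minimal, self-contained pair of entries might look like this; the clip names are taken from the examples above:

```js
import { LoopOnce, LoopRepeat } from "three"

// A looping entry: contributes to its state with a lower blend weight
const idleVariation = { loop: LoopRepeat, name: "Idle_Variation", weight: 0.3 }

// A one-shot entry: plays once per trigger instead of looping
const wave = { loop: LoopOnce, name: "Wave_Hello", weight: 1 }
```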
The system automatically blends between animations for smooth transitions:
```
// Transition times are handled automatically
IDLE → THINKING → TALKING → IDLE
```
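Under the hood, transitions like these are typically implemented by cross-fading three.js `AnimationAction`s. The sketch below illustrates the idea only and is not the project's `Animations.jsx` code; `avatarScene`, `clips`, and the fade duration are illustrative:

```js
import { AnimationMixer } from "three"

// One mixer drives all clips on the loaded avatar scene
const mixer = new AnimationMixer(avatarScene)
const idleAction = mixer.clipAction(clips.find((clip) => clip.name === "Idle_Neutral"))
const talkAction = mixer.clipAction(clips.find((clip) => clip.name === "Talking_Gesture_1"))

idleAction.play()

// Fade out the current state and fade in the new one over `duration` seconds
function transitionToTalking(duration = 0.4) {
  talkAction.reset().play()
  idleAction.crossFadeTo(talkAction, duration, false)
}

// Advance the mixer every frame with the elapsed time in seconds
function onFrame(delta) {
  mixer.update(delta)
}
```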
The avatar's head automatically tracks the camera for natural eye contact when speaking or listening.
For some avatar/animation combinations, you may need to adjust the head-tracking offset:
```js
lookAtCameraOffset: {
  x: 0,      // Horizontal offset
  y: -0.13,  // Vertical offset (negative = look down slightly)
  z: 0       // Depth offset
}
```
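Conceptually, the offset shifts the point the head bone aims at relative to the camera. A minimal per-frame sketch under that assumption (not the actual component code; `headBone` and `camera` are illustrative):

```js
import { Vector3 } from "three"

const lookTarget = new Vector3()

// Aim the head bone at the camera position shifted by the configured offset
function updateHeadTracking(headBone, camera, offset) {
  lookTarget.set(
    camera.position.x + offset.x,
    camera.position.y + offset.y,
    camera.position.z + offset.z
  )
  headBone.lookAt(lookTarget)
}
```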
The system includes automatic animation retargeting, so animations authored for one character can be applied to another even when their proportions don't match exactly. Specify the root bone name in the avatar settings to enable this feature.
```js
avatarSettings: {
  hipBone: "Root",   // Specify the root bone for retargeting
  headBone: "head",  // Specify the head bone for look-at tracking
  // Animation from a different character will be retargeted
  avatarFile: "/models/Charlie/model.glb",
  animationFile: "/models/standing_animations.glb"
}
```
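Retargeting of this kind is commonly done with three.js `SkeletonUtils.retargetClip` (a named export in recent three.js versions, a method on the `SkeletonUtils` object in older ones). The snippet is a hedged sketch rather than the code in `Animations.jsx`; the variable names are illustrative, and the `hip` option corresponds to the configured `hipBone`:

```js
import { retargetClip } from "three/examples/jsm/utils/SkeletonUtils.js"

// Remap a clip authored for the animation library's rig onto the avatar's rig
const retargetedClip = retargetClip(
  targetSkinnedMesh,  // skinned mesh of the avatar being animated
  sourceSkinnedMesh,  // skinned mesh the animation was authored for
  animationClip,      // clip loaded from the animation library GLB
  { hip: "Root" }     // root/hip bone name, as set in avatarSettings.hipBone
)
```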
For detailed facial expression configuration including visemes, emotions, and lip-sync, see the Facial Expressions documentation.
To optimize your models, consider using glTF Transform or its web-based implementations like gltf.report or glb.babylonpress.org.
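glTF Transform also exposes a Node.js API if you want to script this step; a minimal sketch using its core I/O plus the `dedup` and `prune` functions (file names are illustrative):

```js
import { NodeIO } from "@gltf-transform/core"
import { dedup, prune } from "@gltf-transform/functions"

// Read the avatar, strip duplicate and unused data, and write the optimized result
const io = new NodeIO()
const document = await io.read("model.glb")
await document.transform(dedup(), prune())
await io.write("model-optimized.glb", document)
```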
You can extend the animation system with custom states:
```js
// In your constants file
export const CustomAnimation = {
  ...AvatarAnimation,
  GREETING: "greeting",
  FAREWELL: "farewell",
  EXCITED: "excited"
}

// In your SceneConfig
animationSet: {
  ...standardAnimations,
  [CustomAnimation.GREETING]: [
    { loop: LoopOnce, name: "Wave_Hello" }
  ],
  [CustomAnimation.FAREWELL]: [
    { loop: LoopOnce, name: "Wave_Goodbye" }
  ]
}
```
If you want the AI to trigger these custom animations, you need to adjust your scenario prompts accordingly. Refer to Get Scenario Data for more details.
The main logic is handled by the AI Engine, while rendering is handled by the following components:
- `/components/core/3d/avatar/Avatar.jsx`: Main avatar component and loading
- `/components/core/3d/avatar/Animations.jsx`: Animation blending, transitions, and retargeting
- `/components/[scenario]/3d/SceneConfig.js`: Scenario-specific avatar settings
- `/store/constants.js`: Animation state constants