Hi, I'm
Multimodal AI researcher focused on emotion understanding, LLM systems, and practical intelligent products.
I build intelligent systems that connect language, vision, and audio. My interests include multimodal affective computing, robust mixture-of-experts routing, and turning research prototypes into products that people can actually use.
Core Research
Modeling emotion from speech, vision, and text with depth-aware representations.
Systems
Task-adaptive routing and efficient expert collaboration for better generalization.
Impact
Bridging research and deployment through reliable workflows and automation.
Representative directions across research and productization.
Research
Hierarchical emotion modeling with adaptive multi-level mixture-of-experts.
Platform
End-to-end pipeline for multimodal emotion analysis and conversational AI.
Workflow
Personal automation workflows for research, coding, and assistant orchestration.
Tooling
Templates and scripts to accelerate turning academic ideas into usable demos.
Latest papers and research updates.
For a full CV, publication list, and timeline, please reach out via email or GitHub.
Blog posts are coming soon: research notes, engineering logs, and project retrospectives.
Open to collaboration, research exchange, and product building.