Hi, I'm Zhishuo

Multimodal AI researcher focused on emotion understanding, LLM systems, and practical intelligent products.

Multimodal Affective Computing · LLM + MoE Systems · Applied AI Productization

About

I build intelligent systems that connect language, vision, and audio. My interests include multimodal affective computing, robust expert routing for mixture-of-experts models, and turning research prototypes into products that people can actually use.

Research Focus

Core Research

Multimodal Emotion Understanding

Modeling emotion from speech, vision, and text with depth-aware representations.

Systems

LLM + MoE Systems

Task-adaptive routing and efficient expert collaboration for better generalization.

Impact

Applied AI Products

Bridging research and deployment through reliable workflows and automation.

Selected Projects

Representative directions across research and productization.

Research

HEME

Hierarchical emotion modeling with adaptive multi-level mixture-of-experts.

Platform

Emotion Agent Stack

End-to-end pipeline for multimodal emotion analysis and conversational AI.

Workflow

OpenClaw Workflow Lab

Personal automation workflows for research, coding, and assistant orchestration.

Tooling

Paper-to-Product Toolkit

Templates and scripts to accelerate turning academic ideas into usable demos.

News

Latest papers and research updates.

Resume

Repository

For a full CV, publication list, and timeline, please contact me via email or GitHub.

Blog

Blog posts are coming soon: research notes, engineering logs, and project retrospectives.

Contact

Available for collaboration

Open to collaboration, research exchange, and product building.

Typical response time

Usually within 24–48 hours.