About

I am interested in building systems that enable human-like collaboration between humans and robots. My research focuses on integrating multimodal perception, high-level reasoning, and structured semantic representations to enable interpretable and reliable robot decision-making in real-world manufacturing scenarios.

Recently, I have worked on Omniverse-based digital twins and automated synthetic-data pipelines for metal additive manufacturing, multimodal knowledge graphs for technical documentation, and vision–language–action models for human–robot collaboration.

Research Highlights

Selected Projects

See the Projects page for more details.

News