Research

Research at NUS spans two closely related sub-labs: LV AGI Lab and LV Robotics Lab.

LV AGI Lab

LV AGI Lab carries the original language-and-vision focus of the group and studies how foundation models can become more efficient, more capable in multimodal reasoning, and more adaptive over time. For collaborations, visiting opportunities, and student applications, please contact lv.lab.agi at gmail dot com.

Efficient AI

Model Re-architecture & Layerwise-aware Optimization

This direction focuses on enhancing the efficiency and adaptability of large language models (LLMs) and multimodal LLMs (MLLMs) in the post-training stage. We redesign network topologies and introduce layerwise-aware optimizers, improving speed and memory efficiency without sacrificing accuracy.
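
As a concrete illustration of the general idea, the sketch below assigns each layer of a toy model its own learning rate, with earlier layers decayed relative to later ones. This is a generic layerwise scheme under assumed hyperparameters, not our actual optimizer; the model, base rate, and decay factor are placeholders.

```python
import torch
import torch.nn as nn

# Toy stack of layers standing in for transformer blocks (placeholder model).
model = nn.Sequential(*[nn.Linear(64, 64) for _ in range(6)])

def layerwise_param_groups(model, base_lr=1e-4, decay=0.9):
    """Give each layer its own learning rate: later layers keep the base rate,
    earlier layers are geometrically decayed (one common layerwise-aware scheme)."""
    num_layers = len(model)
    groups = []
    for idx, layer in enumerate(model):
        lr = base_lr * (decay ** (num_layers - 1 - idx))
        groups.append({"params": layer.parameters(), "lr": lr})
    return groups

optimizer = torch.optim.AdamW(layerwise_param_groups(model), weight_decay=0.01)

# One dummy post-training step to show the per-layer groups are used as usual.
x, y = torch.randn(8, 64), torch.randn(8, 64)
loss = nn.functional.mse_loss(model(x), y)
loss.backward()
optimizer.step()
```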

Executive Function

Multimodal CoT, Proactive Reasoning, Global Workspace

This direction focuses on enhancing the reasoning capabilities of intelligent agents, evolving from traditional linguistic chain-of-thought (CoT) to multimodal CoT, and from passive responses to proactive reasoning. We study how agents can plan ahead, incorporate world knowledge, and reason across modalities via a lightweight global workspace that coordinates perception, memory, and planning for robust executive control.
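
The sketch below is a minimal, purely illustrative global-workspace loop: perception, memory, and planning modules read from and write to a shared blackboard, and the planner proposes the next step without waiting to be asked. The module granularity, names, and string-based state are assumptions for this example only.

```python
from dataclasses import dataclass, field

@dataclass
class Workspace:
    """A tiny global workspace: a shared blackboard that all modules read and write."""
    percepts: dict = field(default_factory=dict)
    memory: list = field(default_factory=list)
    plan: list = field(default_factory=list)

def perceive(ws, observation):
    # Perception module posts its summary of the current observation.
    ws.percepts["latest"] = observation

def recall(ws):
    # Memory module surfaces whatever relates to the current percept.
    ws.memory.append(ws.percepts.get("latest"))

def plan_next(ws, goal):
    # Planner reads the shared state and proposes the next step proactively.
    ws.plan.append(f"step toward '{goal}' given {ws.percepts.get('latest')}")

ws = Workspace()
for observation in ["image: mug on table", "instruction: tidy the desk"]:
    perceive(ws, observation)
    recall(ws)
    plan_next(ws, goal="tidy the desk")

print(ws.plan)
```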

Evolving AI

Hybrid Memory & Continual Learning

This direction focuses on developing continual learning systems based on hybrid memory architectures that unify long-term and working memory, enabling adaptive knowledge acquisition while mitigating catastrophic forgetting. These systems support in-situ model evolution, temporal reasoning, and robust performance under non-stationary conditions.
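
As a minimal sketch of the mechanism (not our architecture), the example below keeps a small working-memory buffer for the current task plus a long-term store, and mixes old items back into each update via replay, which is one standard way to mitigate catastrophic forgetting. The class, its methods, and the capacities are hypothetical.

```python
import random
from collections import deque

class HybridMemory:
    """Illustrative hybrid memory: a bounded working-memory buffer for recent,
    task-local items and an unbounded long-term store used for replay."""
    def __init__(self, working_capacity=32):
        self.working = deque(maxlen=working_capacity)
        self.long_term = []

    def observe(self, item):
        self.working.append(item)

    def consolidate(self):
        # Move working-memory contents into long-term storage, e.g. at a task boundary.
        self.long_term.extend(self.working)
        self.working.clear()

    def replay_batch(self, k=8):
        # Mix current-task items with replayed old ones so an update on the new
        # task also rehearses earlier tasks.
        old = random.sample(self.long_term, min(k, len(self.long_term)))
        return list(self.working) + old

memory = HybridMemory()
for task in ["task_A", "task_B"]:
    for i in range(10):
        memory.observe((task, i))
    batch = memory.replay_batch()  # train on this mixed batch here
    memory.consolidate()
```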

[Figure: Executive Function Airplane, a single Executive Function aircraft propelled by Efficiency & Evolution]

LV Robotics Lab

LV Robotics Lab focuses on general-purpose robotic manipulation. Our goal is to build robotic systems that combine multimodal perception, physical-world understanding, and dexterous manipulation capabilities.

VLA & World Models

Vision-Language-Action Learning, Simulation & Physical World Modeling

This direction studies how robots can connect perception, language, planning, and control through scalable vision-language-action models and world models. We are interested in representations that support forecasting, policy learning, simulator alignment, and robust transfer from virtual environments to real-world manipulation.
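
To make the interfaces concrete, the toy example below pairs a small vision-language-action policy (image plus instruction embedding in, action out) with a latent world model that predicts the next state from the current state and that action. All dimensions, encoders, and module choices are placeholders for illustration and do not correspond to our models.

```python
import torch
import torch.nn as nn

class TinyVLA(nn.Module):
    """Toy vision-language-action policy: fuse image and instruction features,
    then predict a low-dimensional action (e.g. end-effector delta pose)."""
    def __init__(self, action_dim=7):
        super().__init__()
        self.vision = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
                                    nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.language = nn.Linear(128, 16)  # stands in for a text encoder
        self.policy = nn.Sequential(nn.Linear(32, 64), nn.ReLU(),
                                    nn.Linear(64, action_dim))

    def forward(self, image, instruction_embedding):
        fused = torch.cat([self.vision(image), self.language(instruction_embedding)], dim=-1)
        return self.policy(fused)

class TinyWorldModel(nn.Module):
    """Toy latent dynamics model: predict the next state embedding from the
    current state and action, usable for forecasting or rollout-based planning."""
    def __init__(self, state_dim=16, action_dim=7):
        super().__init__()
        self.dynamics = nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.ReLU(),
                                      nn.Linear(64, state_dim))

    def forward(self, state, action):
        return self.dynamics(torch.cat([state, action], dim=-1))

vla, world_model = TinyVLA(), TinyWorldModel()
image, instruction = torch.randn(1, 3, 64, 64), torch.randn(1, 128)
state = torch.randn(1, 16)
action = vla(image, instruction)
next_state = world_model(state, action)  # imagined next state for planning or evaluation
```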

Tactile Perception

Multimodal Touch Sensing for Contact-Rich Interaction

This direction explores tactile sensing as a core modality for robotic intelligence. We study how touch can be fused with vision and action to infer contact states, object properties, and manipulation progress, especially in scenarios where visual signals alone are ambiguous or insufficient.
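
A minimal late-fusion sketch, assuming precomputed vision and tactile feature vectors: each stream is embedded separately, the embeddings are concatenated, and a small head classifies the current contact state. The feature sizes and the set of contact states are illustrative assumptions.

```python
import torch
import torch.nn as nn

class VisuoTactileContactNet(nn.Module):
    """Illustrative late-fusion classifier over contact states
    (e.g. no-contact / sliding / stable grasp)."""
    def __init__(self, vision_dim=32, tactile_dim=12, num_contact_states=3):
        super().__init__()
        self.vision_enc = nn.Linear(vision_dim, 16)
        self.tactile_enc = nn.Linear(tactile_dim, 16)  # e.g. per-taxel pressures
        self.classifier = nn.Sequential(nn.Linear(32, 32), nn.ReLU(),
                                        nn.Linear(32, num_contact_states))

    def forward(self, vision_feat, tactile_feat):
        fused = torch.cat([self.vision_enc(vision_feat),
                           self.tactile_enc(tactile_feat)], dim=-1)
        return self.classifier(fused)  # logits over contact states

model = VisuoTactileContactNet()
logits = model(torch.randn(4, 32), torch.randn(4, 12))
print(logits.shape)  # (4, 3): one contact-state prediction per example
```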

Dexterous Hands

Dexterous Manipulation with High-DoF End Effectors

This direction focuses on dexterous robotic hands and fine-grained manipulation skills. We study how multimodal feedback, learned control policies, and structured task representations can enable reliable grasping, tool use, in-hand adjustment, and other high-precision manipulation behaviors.
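
The skeleton below sketches the closed-loop structure only: read multimodal feedback, query a policy for joint targets, clip to joint limits, and send the command. The joint count, limits, stand-in policy, and sensor readers are assumptions; a learned policy and real hardware or simulator interfaces would replace the placeholder functions.

```python
import numpy as np

NUM_JOINTS = 16  # placeholder joint count for a high-DoF hand
JOINT_LIMITS = (np.full(NUM_JOINTS, -0.3), np.full(NUM_JOINTS, 1.6))  # radians

def read_feedback():
    # Stand-in for proprioceptive and tactile readings from the hand.
    return {"joint_pos": np.zeros(NUM_JOINTS), "tactile": np.zeros(12)}

def policy(feedback, phase):
    # Trivial stand-in policy: command a small closing offset from the sensed
    # joint positions while in the grasp phase.
    target = feedback["joint_pos"].copy()
    if phase == "grasp":
        target += 0.05
    return target

phase = "grasp"
for tick in range(20):  # 20 control ticks
    feedback = read_feedback()
    target = np.clip(policy(feedback, phase), *JOINT_LIMITS)  # respect joint limits
    # send_joint_targets(target)  # hardware/simulator call omitted in this sketch
```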