Welcome to LV-Lab @ NUS

The Language and Vision Laboratory (LV-Lab), formerly the Learning and Vision Lab, is a research group focused on advancing the foundations of artificial intelligence through the integrated study of vision, language, and other modalities. In response to the paradigm shift brought by foundation models, our research agenda has expanded beyond single-modality learning toward multimodal representation, reasoning, and generation. Today, LV-Lab comprises two parallel and complementary sub-labs, each carrying the LV spirit forward in its own direction.

LV-AGI — the LV Artificial General Intelligence Lab — pursues general-purpose intelligent systems capable of structured perception, proactive inference, and continual adaptation. Our research spans memory and continual learning, video generation, proactive agents, computational creativity, model architecture, inference acceleration, optimization, and reinforcement learning.

LV-Robotics — the Livenex Robotics Lab — focuses on building general-purpose robotic systems grounded in the physical world. Our research covers vision-language-action and visuo-tactile-action models (VLA/VTA), world models, simulators, teleoperation, and reinforcement learning, working toward robots that unify perception, reasoning, and control, and that transfer from simulation to real-world dexterous manipulation.

News in LV-Lab