
Our New Model Helps AI Think Before It Acts

Takeaways

  • Today we’re introducing V-JEPA 2, a new world model that achieves state-of-the-art visual understanding and prediction in the physical world, improving the physical reasoning of AI agents.
  • Physical reasoning is essential to building AI agents that can operate in the physical world, and to achieving advanced machine intelligence (AMI). 
  • We’re also releasing three new benchmarks to evaluate how well existing models can reason about the physical world from video.

Today, we’re excited to share V-JEPA 2, our state-of-the-art world model trained on video, which enables robots and other AI agents to understand the physical world and predict how it will respond to their actions. These capabilities are essential to building AI agents that can think before they act, and V-JEPA 2 represents meaningful progress toward our ultimate goal of developing advanced machine intelligence (AMI).

As humans, we have the ability to predict how the physical world will evolve in response to our actions or the actions of others. For example, you know that if you toss a tennis ball into the air, gravity will pull it back down. When you walk through an unfamiliar crowded area, you make your way toward your destination while trying not to bump into people or obstacles along the path. When playing hockey, you skate to where the puck is going, not where it currently is. We achieve this physical intuition by observing the world around us and developing an internal model of it, which we can use to predict the outcomes of hypothetical actions.

V-JEPA 2 helps AI agents mimic this intelligence, making them smarter about the physical world. The models we use to develop this kind of intelligence in machines are called world models, and they enable three essential capabilities: understanding, predicting and planning.

Building on V-JEPA, our first video-trained model released last year, V-JEPA 2 improves understanding and prediction, enabling robots to interact with unfamiliar objects and environments to complete a task.

We trained V-JEPA 2 on video, which helped the model learn important patterns in the physical world, including how people interact with objects, how objects move, and how objects interact with one another. When deployed on robots in our labs, we found that robots can use V-JEPA 2 to perform tasks like reaching for an object, picking it up, and placing it in a new location.
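To make the "think before acting" idea concrete, here is a minimal sketch of planning with a learned world model: imagine several candidate action sequences, predict where each would lead, and execute the first action of the best one. The encoder, predictor, dimensions, and function names below are hypothetical placeholders for illustration, not the actual V-JEPA 2 interfaces.

```python
# Illustrative sketch of planning with a world model (hypothetical interfaces,
# random stand-ins for the learned encoder and predictor).
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM, ACTION_DIM, HORIZON, NUM_CANDIDATES = 32, 4, 5, 64

# Placeholder "encoder": maps an observation to a latent state.
W_enc = rng.normal(size=(LATENT_DIM, LATENT_DIM))
def encode(observation):
    return np.tanh(W_enc @ observation)

# Placeholder "predictor": given a latent state and an action,
# predicts the next latent state.
W_s = rng.normal(size=(LATENT_DIM, LATENT_DIM)) * 0.1
W_a = rng.normal(size=(LATENT_DIM, ACTION_DIM)) * 0.1
def predict(latent, action):
    return np.tanh(W_s @ latent + W_a @ action)

def plan(current_obs, goal_obs):
    """Imagine random action sequences, score each by how close its predicted
    final latent state lands to the goal's latent state, and return the first
    action of the best imagined rollout."""
    z0, z_goal = encode(current_obs), encode(goal_obs)
    best_cost, best_first_action = np.inf, None
    for _ in range(NUM_CANDIDATES):
        actions = rng.uniform(-1, 1, size=(HORIZON, ACTION_DIM))
        z = z0
        for a in actions:
            z = predict(z, a)  # roll the outcome forward before acting
        cost = np.linalg.norm(z - z_goal)
        if cost < best_cost:
            best_cost, best_first_action = cost, actions[0]
    return best_first_action

# Example usage with stand-ins for an encoded camera frame and a goal image.
current = rng.normal(size=LATENT_DIM)
goal = rng.normal(size=LATENT_DIM)
print(plan(current, goal))
```

In this sketch, all prediction happens in latent space rather than pixel space, and the robot commits only to the first action before replanning, which is one common way a world model can be used for goal-directed control.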

Today, in addition to releasing V-JEPA 2, we’re sharing three new benchmarks to help the research community evaluate how well their existing models learn and reason about the world using video. By sharing this work, we aim to give researchers and developers access to the best models and benchmarks to help accelerate research and progress – ultimately leading to better and more capable AI systems that will help enhance people’s lives.
