Meta’s V-JEPA 2 model teaches AI to understand its surroundings


Meta on Wednesday unveiled V-JEPA 2, a new AI model that acts as a “world model,” designed to help AI agents understand the world around them.

V-JEPA 2 is an extension of V-JEPA, the model Meta released last year, and was trained on more than a million hours of video. That training data is meant to help robots and other AI agents operating in the physical world understand and predict how concepts such as gravity will affect what happens next in a sequence.
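For intuition, here is a minimal, hypothetical sketch of the JEPA idea in PyTorch: rather than reconstructing raw video frames, a predictor is trained to match the latent embedding of the hidden (future) portion of a clip. Every module, size, and name below is an illustrative assumption, not Meta’s actual V-JEPA 2 architecture.

```python
import torch
import torch.nn as nn

EMB = 64  # embedding size (assumed for this sketch)

# Two tiny encoders and a predictor; real JEPA models use large
# vision transformers, these stand-ins just show the data flow.
context_encoder = nn.Sequential(nn.Flatten(), nn.Linear(16 * 3, EMB))
target_encoder = nn.Sequential(nn.Flatten(), nn.Linear(16 * 3, EMB))
predictor = nn.Linear(EMB, EMB)

# Toy "video": 2 clips, 16 visible frames and 16 future frames,
# each frame reduced to 3 features for brevity.
visible = torch.randn(2, 16, 3)
future = torch.randn(2, 16, 3)

pred = predictor(context_encoder(visible))  # predicted future embedding
with torch.no_grad():                       # targets receive no gradient
    target = target_encoder(future)

# The loss is computed in latent space, not pixel space.
loss = nn.functional.mse_loss(pred, target)
loss.backward()
print(f"latent prediction loss: {loss.item():.4f}")
```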

These are the kinds of common-sense connections that children and young animals form as their brains develop. When you play fetch with a dog, for example, the dog will (hopefully) understand how a ball bouncing on the ground will send it back into the air, or that it should run toward where it expects the ball to land, not toward where the ball is at that precise moment.

Meta points to examples where a robot might be faced with, say, a point-of-view shot of holding a plate and a spoon while walking toward a stove with cooked eggs. The AI can predict that a very likely next action is to use the spoon to move the eggs onto the plate.
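To make that planning step concrete, here is a hypothetical Python sketch of how an agent might use a world model’s predictions to choose among candidate actions: predict the next latent state for each action and pick the one whose prediction lands closest to a goal embedding. The embeddings, the predict_next dynamics, and the action names are all invented for illustration and do not reflect V-JEPA 2’s actual interface.

```python
import numpy as np

rng = np.random.default_rng(0)
EMB = 8

# Stand-in world model: fixed random linear dynamics in latent space.
W = rng.standard_normal((EMB, EMB)) * 0.1

def predict_next(state: np.ndarray, action_vec: np.ndarray) -> np.ndarray:
    """Predict the next latent state given the current state and an action."""
    return state + W @ action_vec

current = rng.standard_normal(EMB)  # e.g. embedding of "holding spoon near stove"
goal = rng.standard_normal(EMB)     # e.g. embedding of "eggs on plate"

# Candidate actions, each represented by an (invented) action embedding.
candidates = {
    "move_eggs_to_plate": rng.standard_normal(EMB),
    "put_down_spoon": rng.standard_normal(EMB),
    "walk_away": rng.standard_normal(EMB),
}

# Lower distance between predicted state and goal = more plausible action.
scores = {name: np.linalg.norm(predict_next(current, a) - goal)
          for name, a in candidates.items()}
best = min(scores, key=scores.get)
print(f"chosen action: {best}")
```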

According to Meta, V-JEPA 2 is 30x faster than Nvidia’s Cosmos model, which also aims to advance intelligence about the physical world. However, Meta may be evaluating its models against different benchmarks than Nvidia.

“We believe world models will usher in a new era for robotics, enabling real-world AI agents to help with chores and physical tasks without needing astronomical amounts of robotic training data,” Meta’s chief AI scientist Yann LeCun explained in a video.
