Meta is developing new models for generating images, videos, and text
Meta is working on two new artificial intelligence models: one for processing images and video, and one for generating text. According to The Wall Street Journal, citing an internal Q&A session, the company plans to release the models in the first half of 2026.
What's known
The image-and-video model is code-named Mango; the text model, Avocado. Development is led by the Meta Superintelligence Lab (MSL) team headed by Alexandr Wang, co-founder of Scale AI. During the session, Wang said that Meta is exploring new "world models" capable of understanding visual information, reasoning, planning, and acting without needing to be trained on every possible scenario. The text model is also expected to be optimized for coding.
This year, Meta reorganized its AI divisions, changing leadership and hiring researchers from other companies; some of them have since left MSL. In November, the company's chief AI scientist, Yann LeCun, announced that he is launching his own startup.
The Meta AI assistant is integrated into the company's apps, including search in Facebook and Instagram, and is available to billions of users. The first products from MSL will be part of the company's updated AI strategy.
Source: The Wall Street Journal