Mark Zuckerberg came out in force at Meta Connect, the company’s biggest event of the year. He spent barely any time on the new Meta Quest 3S before jumping straight into Meta’s other big bet: artificial intelligence.
This time, Meta wants to do more than just catch up with the best language models, adding new features not found on other platforms. For starters, Meta has avoided the trouble OpenAI ran into with Scarlett Johansson over ChatGPT’s voice: it has hired actors and other celebrities to lend their voices to Meta AI, which will now be able to speak to the user. Available voices include John Cena, Judi Dench, Kristen Bell and many others.
Staying with voice, perhaps the most surprising announcement was automatic video translation in Reels. Meta is experimenting with AI translation of Reels that not only generates a synthetic voice in the viewer’s language, but also modifies the speaker’s lips to match the translated speech. For now it can translate from Latin American Spanish to English, and in the demonstration Zuckerberg showed, the effect was somewhat creepy.
Meta AI is integrated into all of Meta’s applications, including Instagram, WhatsApp and Facebook Messenger, letting you start text or voice conversations with access to all of its features. However, these functions are not available in the European Union, and Mark Zuckerberg sent a message to the European Commission, declaring himself “eternally optimistic” that the problem can be solved. Among these functions is the creation of AI images.
Users only have to write or say a description of what they want, and the AI will reply with the generated image, which can be shared in other applications or directly on Instagram and Facebook; we can therefore expect to encounter even more fake images on these platforms from now on. The AI can also edit our photos: adding or removing elements, changing the background, or changing clothes. But it also means that anyone can take a picture of us and do the same thing.
The new features are accompanied by a new version of Meta’s language model, Llama 3.2. It is the first with vision capabilities, which the company says let it understand images like never before, including the ability to caption our photos or creations.
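If Meta distributes the Llama 3.2 vision weights the way it has previous Llama releases (gated downloads via Hugging Face), captioning a photo could look roughly like the sketch below, using the transformers library’s support for multimodal Llama models. The model identifier, the placeholder image URL, and the exact processor calls are assumptions, not Meta’s official recipe.

```python
import requests
import torch
from PIL import Image
from transformers import MllamaForConditionalGeneration, AutoProcessor

# Assumed checkpoint name, following Meta's naming for earlier Llama
# releases; access is gated behind accepting Meta's license on Hugging Face.
model_id = "meta-llama/Llama-3.2-11B-Vision-Instruct"

model = MllamaForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
processor = AutoProcessor.from_pretrained(model_id)

# Any photo works here; this URL is just a placeholder.
image = Image.open(
    requests.get("https://example.com/photo.jpg", stream=True).raw
)

# Ask for a short caption using the chat template the processor provides.
messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "Write a short caption for this photo."},
    ]}
]
input_text = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(
    image,
    input_text,
    add_special_tokens=False,  # the chat template already adds them
    return_tensors="pt",
).to(model.device)

output = model.generate(**inputs, max_new_tokens=40)
print(processor.decode(output[0], skip_special_tokens=True))
```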
Llama 3.2 is “open source,” and Mark Zuckerberg has said he considers open source the most advanced and cheapest way to develop AI, even if that means third parties can use and modify it. Llama 3.2 includes two vision models, with 11 billion and 90 billion parameters, and two lighter text-only models, with 1 billion and 3 billion parameters, that can run on mobile devices.
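Those two smaller text models are the interesting ones for on-device use: at 1 billion and 3 billion parameters they are small enough for a phone or a laptop. Here is a minimal sketch using the transformers text-generation pipeline; the checkpoint name is again an assumption based on Meta’s usual Hugging Face naming, and it requires license access to download.

```python
# Minimal sketch: running the lightweight 1B text model locally.
# The model ID is an assumption; a 3B variant follows the same pattern.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.2-1B-Instruct",
)

result = generator(
    "Write a one-sentence caption for a photo of a dog on a beach.",
    max_new_tokens=60,
)
print(result[0]["generated_text"])
```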