Meta releases multimodal Llama 3.2 artificial intelligence model capable of understanding images and text simultaneously
Meta Platforms has released Llama 3.2, a multimodal artificial intelligence model capable of understanding images and text simultaneously. More than 1 million advertisers are using the company's generative artificial intelligence advertising tools, and its artificial intelligence chatbots reach more than 400 million people per month and 185 million per week. The company is also testing a Meta AI translation tool for automatic dubbing and lip-syncing of short videos in English and Spanish.