OpenAI Unveils GPT-4o: The Next Leap in AI Evolution
OpenAI has introduced GPT-4o, an enhanced version of the GPT-4 model that powers ChatGPT. The updated model can imitate human speech patterns and even attempt to gauge users' emotions, the company announced. OpenAI CTO Mira Murati said GPT-4o is "much faster" and improves "capabilities across text, vision, and audio." The model will be free for all users, while paid subscribers will get "up to five times the capacity limits" of free users.
The rollout of GPT-4o’s features will be gradual, with text and image capabilities being introduced in ChatGPT immediately, as detailed in a company blog post. This launch precedes Google’s I/O developer conference, where updates to Google’s AI model, Gemini, are anticipated.
OpenAI CEO Sam Altman described GPT-4o as “natively multimodal,” meaning it can generate and understand content in voice, text, and images. This new version brings to mind the 2013 film “Her,” where a human character develops a complex relationship with an AI system.
The updated version of ChatGPT, powered by GPT-4o, promises faster performance and real-time reasoning across text, audio, and video. OpenAI said GPT-4o will roll out to all users, including those on ChatGPT's free tier, over the coming weeks.
However, some analysts believe OpenAI may be trailing its competitors. Gartner analyst Chirag Dekate said OpenAI appears to be catching up to larger rivals, noting that many of the features announced for GPT-4o were already showcased by Google at its Gemini 1.5 Pro launch. Despite OpenAI's early lead with ChatGPT and GPT-3, Dekate pointed to emerging capability gaps relative to peers such as Google.