OpenAI's GPT-4o was released amid much hype, timed to coincide with Google's biggest event of the year, Google I/O. The timing, and the naming of the model (the letter "o" echoing the one in Google I/O), led to speculation that OpenAI was trolling Google.
GPT-4o was touted as a breakthrough model that can handle audio input and output natively, within a single model and in real time. However, the audio capabilities are not yet available, and it will be weeks before even an alpha version is released for testing. This has led to disappointment among some users, who feel the product was oversold and shipped incomplete.
In terms of performance, GPT-4o is roughly on par with its predecessor, GPT-4 Turbo, offering only modest improvements. According to OpenAI, it matches GPT-4 Turbo on English text and code, improves significantly on non-English text, and is much faster and 50% cheaper in the API.
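For developers, the practical upshot is largely a drop-in swap: the same chat completion request works, just pointed at the new model name. Below is a minimal sketch, assuming the official openai Python package (v1.x) and an API key in the environment; exact model names and prices may change over time.

```python
# Minimal sketch: switching an existing chat completion call to GPT-4o.
# Assumes the official `openai` Python package (v1.x) and OPENAI_API_KEY set.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # previously e.g. "gpt-4-turbo"; same request shape, lower per-token price
    messages=[
        {"role": "user", "content": "Summarize the GPT-4o announcement in one sentence."}
    ],
)

print(response.choices[0].message.content)
```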
Yet across six benchmark tests, GPT-4o only barely outperformed GPT-4 Turbo, and it even fell behind on an important reading-comprehension benchmark. Despite the lackluster numbers and the incomplete state of the product, the release succeeded in diverting media attention away from Google's event.
The question remains whether a language model that barely outperforms its predecessor was worth all the hype and media attention it received. For OpenAI, the answer seems to be yes: the announcement dominated news coverage at Google's expense.