OpenAI Unveils Cheaper Small AI Model GPT-4o Mini
OpenAI launched GPT-4o mini, its new small AI model, on July 18, billing it as the company's most cost-efficient offering. Targeting app developers, OpenAI said in its announcement that it expects GPT-4o mini to expand the range of applications built with AI by making AI capabilities significantly more affordable.
The Microsoft-backed AI startup said GPT-4o mini outperforms its own GPT-3.5 Turbo and comparable small models from rival companies on benchmarks covering textual and multimodal reasoning. OpenAI added that the model's low latency suits it to tasks such as passing large volumes of context to the model, responding to customers with real-time text replies, and more.
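For developers, the real-time text responses OpenAI describes are typically delivered by streaming the model's output as it is generated. The sketch below is a minimal illustration, assuming the standard OpenAI Python SDK and the "gpt-4o-mini" model identifier (not quoted in the announcement itself); the prompt text is invented for the example.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Request a chat completion with streaming enabled so partial text
# arrives as it is generated, rather than after the full reply is ready.
stream = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarize this support ticket in one sentence: ..."}],
    stream=True,
)

# Print each chunk of text as soon as it arrives.
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
```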
OpenAI said GPT-4o mini has a context window of 128,000 tokens and supports up to 16,000 output tokens per request. The context window essentially determines how much information an AI model can process in a single exchange.
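As a rough illustration of how those two limits come into play, the hedged sketch below passes a long document in the prompt, which together with the rest of the request must fit inside the 128,000-token context window, and caps the reply with the max_tokens parameter at the 16,000-token output figure cited in the announcement. The file name and question are hypothetical, and the model identifier is assumed to be "gpt-4o-mini".

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical long document to include in the prompt; everything sent
# (system + user messages + this text) counts against the 128,000-token
# context window.
with open("quarterly_report.txt", encoding="utf-8") as f:
    long_document = f.read()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You answer questions about the provided document."},
        {"role": "user", "content": long_document + "\n\nQuestion: What are the key findings?"},
    ],
    max_tokens=16000,  # cap the reply at the per-request output figure cited above
)

print(response.choices[0].message.content)
```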