GPT-4.1, mini, and nano: OpenAI introduces its fastest and most economical models for coding
OpenAI has announced three new models, GPT-4.1, GPT-4.1 mini, and GPT-4.1 nano, available exclusively through the OpenAI API; they are not currently offered in ChatGPT.
Here's What We Know
GPT-4.1 brings significant improvements in coding, instruction following, and long-context comprehension. According to OpenAI, the new models outperform the previous GPT-4o and GPT-4o mini across all tasks, especially programming. All three models support context windows of up to one million tokens, which improves their handling of long documents.
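Since the new models are exposed only through the API, here is a minimal sketch of what a request to one of them might look like, assuming the official openai Python SDK and the model identifier "gpt-4.1" as named in the announcement (the prompt and API key setup are illustrative):

```python
# Minimal sketch: calling one of the new models through the OpenAI API.
# Assumes the official `openai` Python SDK and that OPENAI_API_KEY is set
# in the environment; the model names follow OpenAI's announcement.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4.1",  # or "gpt-4.1-mini" / "gpt-4.1-nano" for lower cost and latency
    messages=[
        {"role": "system", "content": "You are a concise coding assistant."},
        {"role": "user", "content": "Write a Python function that reverses a string."},
    ],
)

print(response.choices[0].message.content)
```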
On the SWE-bench Verified benchmark, GPT-4.1 outperforms GPT-4o by 21.4% and GPT-4.5 by 26.6% on coding tasks. GPT-4.1 mini matches or exceeds GPT-4o's performance while cutting latency and reducing costs by 83%. GPT-4.1 nano is the fastest and cheapest model of the three, well suited to tasks such as classification and autocompletion.
Many of GPT-4.1's improvements have already been rolled into the version of GPT-4o used in ChatGPT, with more to come. The GPT-4.1 models have a knowledge cutoff of June 2024, meaning they are only aware of events that occurred before that date.
With the launch of GPT-4.1, OpenAI is discontinuing support for GPT-4.5 in the API, as the new models offer similar functionality at a lower cost.
Prices for the new models:
- GPT-4.1: $2 per million input tokens and $8 per million output tokens
- GPT-4.1 mini: $0.40 per million input tokens and $1.60 per million output tokens
- GPT-4.1 nano: $0.10 per million input tokens and $0.40 per million output tokens
Fine-tuned (customised) versions of the models are priced higher.
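As a rough illustration of how these per-million-token rates translate into the cost of a single request, the sketch below computes an estimate for each model; the token counts are made up for the example and the fine-tuning surcharge is not included:

```python
# Rough cost estimate based on the per-million-token prices listed above.
# Token counts are illustrative, not real usage figures.
PRICES = {  # (input, output) in USD per 1M tokens
    "gpt-4.1":      (2.00, 8.00),
    "gpt-4.1-mini": (0.40, 1.60),
    "gpt-4.1-nano": (0.10, 0.40),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in USD for a single request."""
    in_price, out_price = PRICES[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# Example: a request with 50,000 input tokens and 2,000 output tokens.
for model in PRICES:
    print(f"{model}: ${estimate_cost(model, 50_000, 2_000):.4f}")
```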
Source: OpenAI