ATTN AI SIG: GPT-5.2 is now live in the OpenAI API

Thomas Messerschmidt

Dec 11, 2025, 10:13:52 PM
to hbrob...@googlegroups.com
There is a “new and improved” release of ChatGPT!

OpenAI says this about its new GPT-5.2:

  • State of the art on long-context understanding: strong reasoning performance on complex, ambiguous, data-heavy tasks.
  • Advanced tool-calling capabilities: more reliable agent execution through improved tool calling.
  • Our strongest vision model yet: more reliable for complex dashboards, app UIs, and diagram analysis.
  • State of the art on coding: It excels at front-end UI generation. 
I plan on giving it a test drive tonight!

ALSO


OT (Off Topic): Over the last few days, I have collaborated with several different AI models to produce a music video.


I wrote the lyrics. (I write songs and music as a hobby.) Sora AI wrote a 10-second melody. Suno AI extended it to a minute and 40 seconds. ChatGPT wrote prompts for Sora to create a music video. Sora gave me 12 different 12-second videos. I put the extended audio file and the video files into a video editor, and after 2 hours of editing, it was done. It is called “The Blue, blue, blues.” The link is below.




Thomas Messerschmidt

-  

Need something prototyped, built or coded? I’ve been building prototypes for companies for 15 years. I am now incorporating generative AI into products.

Contact me directly or through LinkedIn:   




Begin forwarded message:

From: OpenAI <nor...@email.openai.com>
Date: December 11, 2025 at 6:44:51 PM PST
To: thomas...@gmail.com
Subject: GPT-5.2 is now live in the OpenAI API



GPT-5.2 is now available in the API

 

Today we released GPT-5.2 in the API and ChatGPT—our most advanced frontier model yet and our best model for real-world agentic work. GPT-5.2 excels at coding, document & data analysis, and customer support use cases.

Here’s why you may want to consider switching your workloads to GPT-5.2:

  • SOTA on long-context understanding: GPT-5.2 beats other models on the OpenAI MRCRv2 long-context eval, and customers like Notion, Box, Databricks, and Hex report strong reasoning performance on complex, ambiguous, data-heavy tasks.
  • Advanced tool-calling capabilities: GPT-5.2 is SOTA on Tool Decathlon and beats other models on τ²-Bench Telecom, both benchmarks for long-horizon tool use. Triple Whale and Zoom say GPT-5.2 enables more reliable agent execution through improved tool calling.
  • Our strongest vision model yet: GPT-5.2 cuts chart-reasoning and UI-understanding errors by over 50%. Enhanced spatial reasoning makes it more reliable for complex dashboards, app UIs, and diagram analysis.
  • SOTA on coding: GPT-5.2 leads on SWE-Bench Pro, a benchmark for complex coding tasks. It excels at front-end UI generation and delivers meaningful improvements across debugging, refactoring, and shipping fixes. 

GPT-5.2 is now available in the Responses and Chat Completions APIs. The model adjusts its reasoning based on the complexity of the task, and you can control the reasoning effort by setting it to none, low, medium, high, or, for the first time, “xhigh” for the most complex tasks.
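
Here is a minimal sketch of what a call might look like with the OpenAI Python SDK, assuming the model ID is "gpt-5.2" and that reasoning effort is passed the same way as for earlier GPT-5 models (check the API reference for the exact parameter names):

    # Hedged sketch: query GPT-5.2 via the Responses API.
    # Assumptions: the model ID is "gpt-5.2" and reasoning effort is passed
    # as reasoning={"effort": ...}, as with earlier GPT-5 models.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.responses.create(
        model="gpt-5.2",                 # assumed model ID
        reasoning={"effort": "xhigh"},   # none | low | medium | high | xhigh
        input="Summarize the key risks in this quarterly report ...",
    )

    print(response.output_text)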


GPT-5.2 is 40% more expensive than GPT-5 and GPT-5.1. It costs $1.75/1M input tokens and $14/1M output tokens, with a 90% discount on cached inputs. The model is available on Priority Processing and Flex Processing plans, and can be used with the Batch API. More details on the API Pricing Page.
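
For rough budgeting, here is a quick back-of-the-envelope calculation using the prices quoted above; the token counts are made-up illustrative numbers, not benchmarks:

    # Cost estimate from the quoted prices: $1.75/1M input tokens,
    # $14/1M output tokens, 90% discount on cached input tokens.
    INPUT_PRICE = 1.75 / 1_000_000    # USD per input token
    OUTPUT_PRICE = 14.00 / 1_000_000  # USD per output token
    CACHED_DISCOUNT = 0.90            # cached inputs cost 10% of full price

    def request_cost(input_tokens, output_tokens, cached_input_tokens=0):
        fresh = input_tokens - cached_input_tokens
        return (fresh * INPUT_PRICE
                + cached_input_tokens * INPUT_PRICE * (1 - CACHED_DISCOUNT)
                + output_tokens * OUTPUT_PRICE)

    # Example: a 50k-token prompt (40k of it cached) with a 2k-token answer
    print(f"${request_cost(50_000, 2_000, cached_input_tokens=40_000):.4f}")  # ~$0.0525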

 

We’ve published prompting guidance and updated our Prompt Optimizer tool to help you get the most out of GPT-5.2. 

—The OpenAI Team


© 2025 OpenAI. All Rights Reserved.
3180 18th St, San Francisco, CA 94110
