OpenAI has updated ChatGPT: it can now see objects through the camera and describe them in Santa's voice

By: Vlad Cherevko | yesterday, 15:12

OpenAI, the creator of ChatGPT, has unveiled two new capabilities for its chatbot as part of its "12 Days, 12 Live Broadcasts" event, during which the company reveals a new feature or product each day.

Here's What We Know

ChatGPT can now see objects in real time through the device's camera and describe them using the new Advanced Voice Mode, which mimics natural human speech. Users subscribed to the Plus, Team, or Pro plans can simply point their phone's camera at an object, and ChatGPT will respond instantly. The feature can also interpret what is happening on the device's screen via screen sharing.

In addition, OpenAI has added a festive Santa mode that lets ChatGPT speak in Santa's voice. Users can activate it by tapping the snowflake icon next to the prompt bar in the app.

The rollout of these features began on 12 December and will be completed within a week. However, ChatGPT Enterprise and Edu users will not get access until January, and no timeline has been set for users in the EU, Switzerland, Iceland, Norway, and Liechtenstein.

These additions help ChatGPT compete with Google's recently launched Gemini 2.0 model and underscore the rapid pace of development in artificial intelligence.

Source: OpenAI