Samsung's Galaxy S25 and S25+ smartphones may get a Dimensity 9400 chip with support for the multimodal Gemini Nano AI model

By: Vlad Cherevko | today, 13:08

Alongside the Pixel 9 smartphones, Google announced an updated Gemini Nano AI model with multimodal support, which is currently available only on Pixel 9 series devices. According to recent reports, however, the new Gemini Nano model will soon be extended to devices from other manufacturers, such as Samsung.

Here's What We Know

MediaTek has announced that its new flagship Dimensity 9400 chipset will be optimised for Gemini Nano's multimodal AI. It's not yet known which smartphones powered by this chip will be the first to get the feature, but hints from Google DeepMind suggest it could be the Samsung Galaxy S25 series.

The multimodal Gemini Nano, developed with Google DeepMind, allows devices to better understand the context of text, images, audio and video. On the Pixel 9 smartphones, the feature powers apps such as Pixel Screenshots, Pixel Recorder and TalkBack.

Earlier, Google DeepMind mentioned on its blog that MediaTek is using DeepMind technology to accelerate the development of its most advanced chips, including the new flagship Dimensity chipset that will be used in Samsung smartphones.

Since Samsung has yet to release a smartphone with a flagship Dimensity chip, the post was most likely referring to the upcoming Galaxy S25 and S25+ flagships, as the Galaxy S25 Ultra is expected to be based on another flagship chip, the Snapdragon 8 Gen 4. Using the Dimensity 9400 could ease some of the Exynos 2500's production issues and bring Gemini Nano's multimodal capabilities to the upcoming S25 series flagships.

Source: @negativeonehero, Google DeepMind

Go Deeper:

Multimodality in the context of artificial intelligence refers to a system's ability to process and integrate information from different types of data or modalities. For example, a multimodal system can simultaneously analyse text, images, audio and video to better understand and respond to user queries.

This allows for more complex and intuitive interactions, as the system can draw on different sources of information to provide a more accurate and contextualised response. Voice assistants that recognise speech while simultaneously analysing visual data are one example of multimodal systems.
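To make the idea concrete, here is a minimal sketch of a multimodal request using Google's `google-generativeai` Python package for the cloud Gemini API. Gemini Nano itself runs on-device through Android system services rather than this cloud API, so the snippet only illustrates the general multimodal pattern: a single call that mixes text and image inputs. The model name, file name and prompt are illustrative assumptions.

```python
# A minimal sketch, assuming the google-generativeai package is installed
# and a valid API key is available. This uses the cloud Gemini API to
# illustrate multimodality; the on-device Gemini Nano path on Android differs.
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")  # placeholder, supply your own key

# A multimodal Gemini model that accepts text and images in one request.
model = genai.GenerativeModel("gemini-1.5-flash")

# Two modalities in a single prompt: a text instruction and an image.
image = Image.open("screenshot.png")  # hypothetical local file
response = model.generate_content(
    ["Summarise what is shown in this screenshot.", image]
)
print(response.text)
```

The key point is that both inputs go into one `generate_content` call, so the model can relate the instruction to the image instead of handling each modality separately.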