Google has created AI to communicate with dolphins

Dolphins are known as some of the smartest animals on Earth. They can cooperate, teach one another and even recognise themselves in a mirror. Scientists have also identified signature whistles that dolphins use to identify each other, effectively letting them call out to a specific individual in a pod. Yet it is still impossible to decipher their communication in full, or to know how close this complex system of sounds comes to being a full-fledged language.
Google has introduced a new artificial intelligence model called DolphinGemma that can recognise and synthesise dolphin sounds. Yes, the idea from cartoons and science fiction has officially entered the scientific arena. The project was created in cooperation with the Georgia Institute of Technology and the Wild Dolphin Project, and it is not just a scientific experiment but a serious attempt to establish interspecies communication.
Here's What We Know
The DolphinGemma model is built on Google's Gemma family of open models. But this version is compact: "only" about 400 million parameters, small enough to run directly on a Pixel smartphone. The CHAT (Cetacean Hearing Augmentation Telemetry) device is essentially a sealed case containing a Pixel 6 phone and several specialised sensors. Google notes that an updated CHAT built around the Pixel 9 will be used for the summer 2025 research season. That means researchers can take the device into the water, decode dolphin vocalisations in real time and potentially respond.
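To get a feel for why roughly 400 million parameters is phone-friendly, here is a back-of-the-envelope estimate (an illustration, not Google's published figures): the memory needed for the weights is simply the parameter count multiplied by the bytes each parameter takes at a given precision.

```python
# Rough estimate of weight memory for a ~400M-parameter model at common
# precisions. Illustrative only; not taken from Google's documentation.
PARAMS = 400_000_000  # approximate DolphinGemma size reported by Google

bytes_per_param = {
    "float32": 4,    # full precision
    "float16": 2,    # common for on-device inference
    "int8": 1,       # 8-bit quantisation
    "int4": 0.5,     # 4-bit quantisation
}

for fmt, size in bytes_per_param.items():
    gb = PARAMS * size / 1e9
    print(f"{fmt:>8}: ~{gb:.2f} GB of weights")
```

Even at half precision that is well under a gigabyte of weights, which is why a flagship phone can handle it without any cloud connection.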
How the technology works
Surprisingly, the technology behind large language models (LLMs), the systems we casually call AI, is well suited to deciphering unfamiliar languages and ciphers. At its core, an LLM learns statistical patterns in sequences: given everything that came before, it predicts what is likely to come next. Train such a model on enough high-quality data and it learns to generate plausible continuations on its own. Applied to a large archive of dolphin recordings, the same approach lets scientists hunt for structure in how sounds follow one another, patterns that would be very hard to spot by ear.
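As a toy illustration of that next-token idea (not DolphinGemma's actual pipeline, which would tokenise raw audio with a learned audio tokenizer and use a neural network rather than simple counts), imagine each dolphin sound has already been labelled as a discrete unit. Even naive counts of which unit follows which start to expose structure:

```python
from collections import Counter, defaultdict

# Toy sketch: treat each dolphin sound unit as a discrete "token" and learn
# which token tends to follow which. The sound labels below are invented.
recordings = [
    ["whistle_A", "click_burst", "whistle_B"],
    ["whistle_A", "click_burst", "whistle_B", "squawk"],
    ["whistle_A", "whistle_B"],
]

# Count how often each token follows each other token.
transitions = defaultdict(Counter)
for seq in recordings:
    for prev, nxt in zip(seq, seq[1:]):
        transitions[prev][nxt] += 1

def predict_next(token):
    """Return the most likely next sound given the current one."""
    following = transitions[token]
    return following.most_common(1)[0][0] if following else None

print(predict_next("whistle_A"))    # -> 'click_burst'
print(predict_next("click_burst"))  # -> 'whistle_B'
```

A model like DolphinGemma does the same thing at vastly greater scale and with real audio, which is what makes real-time analysis on the device possible.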
In practice, the scientists use two devices. CHAT Junior is worn on the chest and carries a wide-range waterproof microphone. CHAT Light is worn on the arm and has buttons plus a high-frequency emitter that researchers can use to "answer" the dolphins with the "words" they have already identified (see the sketch below). This is, without exaggeration, a level of communication with dolphins that was not available before.
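Taken together, the hardware implies a simple listen-match-reply loop: the chest unit captures sound, the on-device model checks it against the catalogued "words", and the arm unit plays back a reply. The sketch below is purely hypothetical, every function is a stand-in rather than Google's code, but it shows the shape of that loop:

```python
import time

# Hypothetical detect-and-respond loop. The sound labels, file names and
# function bodies are all stand-ins for illustration only.
KNOWN_WORDS = {"sargassum_whistle": "sargassum_reply.wav"}

def capture_sound():
    """Stand-in for reading a chunk of audio from the chest-mounted mic."""
    return "sargassum_whistle"  # pretend we just heard a known whistle

def classify(sound):
    """Stand-in for the on-device model matching a sound to a known 'word'."""
    return sound if sound in KNOWN_WORDS else None

def play_reply(clip):
    """Stand-in for sending a clip to the arm unit's high-frequency emitter."""
    print(f"Playing reply: {clip}")

for _ in range(3):  # a few iterations in place of an endless field loop
    word = classify(capture_sound())
    if word:
        play_reply(KNOWN_WORDS[word])
    time.sleep(0.1)
```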
It's also interesting that Google is, in effect, thumbing its nose at Apple, which has blamed the delays to its own AI on its insistence on processing everything on-device rather than in the cloud. Even though DolphinGemma is for now a highly specialised tool for studying dolphins, it runs completely autonomously on an ordinary Pixel phone, in places with no Internet access at all. Google will undoubtedly feed these lessons back into its own AI models. Meanwhile, Apple's well-known habit of skimping on RAM effectively rules out running large language models on its older devices.
Source: arstechnica.com