Scientists present a new method for compressing multilingual AI models

By: Bohdan Kaminskyi | 08.12.2023, 21:02

Researchers from Johns Hopkins University have proposed a new approach to optimising multilingual language models (MLMs) that significantly reduces their size without sacrificing performance.

Here's What We Know

MLMs can generate and analyse text in many languages. However, the more languages they cover, the worse they perform because of "language interference".

Unlike traditional approaches, in which a separate neural network is built for each language, the researchers used low-rank matrices. These compress the data and reduce the number of parameters needed to add new languages to the model.

According to Haoran Xu, one of the authors, it works like a limited colour palette for an artist: there is no need to give each child in the class their own set of paints, because a shared palette of three colours is enough. This greatly reduces the number of parameters needed when scaling the model.
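The article does not detail the exact architecture, but the general low-rank trick it describes can be sketched in a few lines of Python. Everything below is an illustrative assumption rather than the authors' implementation: the layer sizes, the rank, and the helper names add_language and forward are made up for the example. The idea is that instead of a full weight matrix per language, each language gets only a thin pair of low-rank factors on top of a shared base weight (the "common palette").

```python
import numpy as np

d_in, d_out, rank = 1024, 1024, 8  # illustrative sizes, not from the paper

# Shared weight, trained once for all languages (the "common palette").
W_shared = np.random.randn(d_out, d_in) * 0.02

def add_language(rank=8):
    """Per-language parameters: two thin matrices instead of a full d_out x d_in one."""
    A = np.random.randn(rank, d_in) * 0.02
    B = np.zeros((d_out, rank))  # zero init: a new language starts at the shared behaviour
    return A, B

def forward(x, lang_AB):
    """Shared path plus a low-rank, language-specific correction."""
    A, B = lang_AB
    return W_shared @ x + B @ (A @ x)

# Usage: add one language and run an input through it.
swahili = add_language()
x = np.random.randn(d_in)
y = forward(x, swahili)

# Parameter cost per added language:
full_matrix = d_out * d_in               # 1,048,576 params for a dedicated matrix
low_rank_pair = rank * (d_in + d_out)    # 16,384 params, roughly 64x fewer
print(full_matrix, low_rank_pair)
```

At rank 8 the per-language cost drops by roughly a factor of 64 in this toy setup, which is the kind of saving that makes covering many languages cheap.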

The authors tested their method on 95 languages. The model delivered strong results while using far fewer parameters, which the researchers believe paves the way for compact and efficient MLMs.

According to the scientists, mobile AI applications that work equally well in hundreds of languages will follow in time. Their ultimate goal is to apply the new method to compress large MLMs without harming their performance.

Source: TechXplore