Stability AI announces a compact language model with 1.6 billion parameters
Stability AI, a company known for generative AI for text and images, has unveiled a new compact language model, Stable LM 2 1.6B. According to the developers, the release is intended to lower barriers to AI experimentation and expand the community to developers with less computing power.
Here's What We Know
According to Carlos Riquelme of Stability AI, the main achievement of the new model is not its size but the refinement of its algorithms and the volume of high-quality training data, which spans seven languages. This puts Stable LM 2 1.6B ahead of some larger models, including Stability AI's own Stable LM 3B.
Stability AI compares the trend to computers and microchips: over time, they become smaller and more powerful at the same time.
Stable LM technology is typically used to generate and understand textual content. Despite its smaller size, the model still carries a risk of "hallucinations" and toxic behaviour, though the company promises improved risk management.
Stability AI places particular emphasis on transparency. In addition to finished models, the company publishes raw versions of them before the final training cycle, which should give developers the opportunity to adapt the algorithms to their needs with better results.
As Stability AI notes, openness is necessary to expand the community of experts in generative AI.
Source: VentureBeat