Multiverse Computing Raises $217 Million in Funding Round

Spanish AI company Multiverse Computing has raised €189 million ($217 million) in funding to further develop its AI language-model compression technology.

Backers reportedly include Bullhound Capital, HP Inc., Forgepoint Capital, and Toshiba.

The technology created by Multiverse Computing can shrink large language models (LLMs) by as much as 95 percent while preserving performance and reducing costs by nearly 80 percent.

This approach applies principles from quantum physics and machine learning to simulate quantum systems without the need for a quantum computer. With this funding round, Multiverse Computing has established itself as the largest AI startup in Spain, joining top European AI firms such as Mistral, Aleph Alpha, Synthesia, Poolside, and Owkin.
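The article doesn't detail how the compression works; Multiverse's quantum-inspired approach is publicly described as tensor-network based, and the simplest related primitive is low-rank factorization of weight matrices. The sketch below is a minimal illustration only, using a plain truncated SVD as a stand-in for the full tensor-network method, not the company's actual implementation:

```python
import numpy as np

def low_rank_compress(W, rank):
    """Compress weight matrix W via truncated SVD.

    Keeping only the top `rank` singular values replaces the
    m x n matrix with two factors of shapes (m, rank) and (rank, n),
    so storage drops from m*n to rank*(m + n) parameters.
    """
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * S[:rank]   # (m, rank) factor, singular values folded in
    B = Vt[:rank, :]             # (rank, n) factor
    return A, B

# Toy example: a 512 x 512 layer truncated to rank 32
rng = np.random.default_rng(0)
W = rng.standard_normal((512, 512))
A, B = low_rank_compress(W, rank=32)

original = W.size            # 262,144 parameters
compressed = A.size + B.size # 32,768 parameters
print(f"parameters removed: {1 - compressed / original:.0%}")
```

In a real model this trade-off is applied layer by layer, with the rank chosen per layer so that accuracy loss stays acceptable; tensor-network methods generalize this idea beyond simple two-factor decompositions.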


The company has already released compressed versions of well-known LLMs, including Meta's Llama, China’s DeepSeek, and France's Mistral, with intentions to broaden its range of offerings.

According to CEO Enrique Lizaso Olmos, the aim is to concentrate on compressing widely used open-source LLMs, especially the Llama family, which is popular among enterprises. Furthermore, the compression tool is now available on the Amazon Web Services AI marketplace, enhancing its accessibility for businesses globally.


"Multiverse has launched compressed versions of LLMs such as Meta's Llama, China's DeepSeek and France's Mistral, with additional models coming soon. We are focused just on compressing the most used open-source LLMs, the ones that the companies are already using," Chief Executive Officer Enrique Lizaso Olmos said."When you go to a corporation, most of them are using the Llama family of models," says Enrique Lizaso Olmos.
