Liquid AI Is Redesigning the Neural Network

Artificial intelligence might now be solving advanced math, performing complex reasoning, and even using personal computers, but today’s algorithms could still learn a thing or two from microscopic worms.

Liquid AI, a startup spun out of MIT, will today reveal several new AI models based on a novel type of “liquid” neural network that has the potential to be more efficient, less power-hungry, and more transparent than the ones that underpin everything from chatbots to image generators to facial recognition systems.

Liquid AI’s new models include one for detecting fraud in financial transactions, another for controlling self-driving cars, and a third for analyzing genetic data. The company touted the new models, which it is licensing to outside companies, at an event held at MIT today. The company has received funding from investors that include Samsung and Shopify, both of which are also testing its technology.

“We are scaling up,” says Ramin Hasani, cofounder and CEO of Liquid AI, who co-invented liquid networks as a graduate student at MIT. Hasani’s research drew inspiration from C. elegans, a millimeter-long worm typically found in soil or rotting vegetation. The worm is one of the few creatures to have had its nervous system mapped in its entirety, and it is capable of remarkably complex behavior despite having just a few hundred neurons. “It was once just a science project, but this technology is fully commercialized and fully ready to bring value for enterprises,” Hasani says.

Inside a regular neural network, the properties of each simulated neuron are defined by a static value, or “weight,” that affects its firing. Within a liquid neural network, each neuron is governed by an equation that predicts its behavior over time, and the network solves a cascade of linked equations as it runs. The design makes the network more efficient and more flexible, allowing it to keep learning even after training, unlike a conventional neural network. Liquid neural networks are also open to inspection in a way that existing models are not, because their behavior can essentially be rewound to see how they produced an output.
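To make the idea concrete, here is a minimal, illustrative sketch in Python of a “liquid” layer whose neurons evolve according to a differential equation integrated over time. This is not Liquid AI’s implementation; the specific update rule, parameter names, and simple Euler solver are assumptions made purely for illustration.

```python
# Illustrative sketch only (not Liquid AI's code): each neuron's state follows an
# ordinary differential equation, integrated step by step as input arrives.
import numpy as np

class LiquidLayer:
    def __init__(self, n_inputs, n_neurons, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(0, 0.5, (n_neurons, n_inputs))    # input weights
        self.W_rec = rng.normal(0, 0.5, (n_neurons, n_neurons))  # recurrent weights
        self.bias = np.zeros(n_neurons)
        self.tau = np.ones(n_neurons)      # base time constants
        self.A = np.ones(n_neurons)        # steady-state targets
        self.state = np.zeros(n_neurons)   # hidden state x(t)

    def step(self, u, dt=0.05, unfold=6):
        # Integrate dx/dt = -x/tau + f(x, u) * (A - x) with a few explicit Euler sub-steps.
        for _ in range(unfold):
            f = np.tanh(self.W_in @ u + self.W_rec @ self.state + self.bias)
            dxdt = -self.state / self.tau + f * (self.A - self.state)
            self.state = self.state + dt * dxdt
        return self.state

# Usage: feed a time series sample by sample; the state evolves continuously in time.
layer = LiquidLayer(n_inputs=3, n_neurons=8)
for t in range(100):
    u_t = np.sin(np.array([0.1, 0.2, 0.3]) * t)  # toy input signal
    h_t = layer.step(u_t)
```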

Liquid AI says it has built upon this work and invented new models that it is keeping secret for now. This September, the company revealed some large language models based on its network design. A version of its language model with 40 billion parameters outperformed the 70-billion-parameter version of Meta’s Llama 3.1 on a common set of problems known as MMLU-Pro, the startup says.

“The benchmark results for their SLMs look very promising,” says Sébastien Bubeck, a researcher at OpenAI who explores how AI models’ architecture and training affect their capabilities.

“Finding a new type of foundation model doesn’t happen every day,” says Tom Preston-Werner, a cofounder of GitHub and an early investor in Liquid AI, who notes that the transformer models that underpin large language models and other AI systems are starting to show their limitations. Preston-Werner adds that making AI more efficient should be a big priority for everyone. “We should do everything we can to make sure we aren’t running coal plants for longer,” he says.

One drawback of Liquid AI’s approach is that its networks are especially suited to certain tasks, particularly ones that involve temporal data such as time series. Making the technology work with other types of data requires custom code. And of course, another challenge will be persuading big companies to base important projects on a completely new AI design.

Hasani says the goal now is to demonstrate that the benefits—including efficiency, transparency, and lower energy costs—outweigh the challenges. “We are getting into stages where these models can alleviate a lot of the socio-technical challenges of AI systems,” he says.

Source: Wired