The robotic line cooks were deep in their recipe, toiling away in a room tightly packed with equipment. In one corner, an articulated arm selected and mixed ingredients, while another slid back and forth on a fixed track, working the ovens. A third was on plating duty, carefully shaking the contents of a crucible onto a dish. Gerbrand Ceder, a materials scientist at Lawrence Berkeley National Lab and UC Berkeley, nodded approvingly as a robotic arm delicately pinched and capped an empty plastic vial—an especially tricky task, and one of his favorites to observe. “These guys can work all night,” Ceder said, giving two of his grad students a wry look.
Stocked with ingredients like nickel oxide and lithium carbonate, the facility, called the A-Lab, is designed to make new and interesting materials, especially ones that might be useful for future battery designs. The results can be unpredictable. Even a human scientist usually gets a new recipe wrong the first time. So sometimes the robots produce a beautiful powder. Other times it’s a melted gluey mess, or it all evaporates and there’s nothing left. “At that point, the humans would have to make a decision: What do I do now?” Ceder says.
The robots are meant to do the same. They analyze what they’ve made, adjust the recipe, and try again. And again. And again. “You give them some recipes in the morning and when you come back home you might have a nice new soufflé,” says materials scientist Kristin Persson, Ceder’s close collaborator at LBNL (and also spouse). Or you might just return to a burned-up mess. “But at least tomorrow they’ll make a much better soufflé.”
Recently, the range of dishes available to Ceder’s robots has grown dramatically, thanks to an AI program developed by Google DeepMind. Called GNoME, the software was trained using data from the Materials Project, a free-to-use database of 150,000 known materials overseen by Persson. Using that information, the AI system came up with designs for 2.2 million new crystals, of which 380,000 were predicted to be stable—not likely to decompose or explode, and thus the most plausible candidates for synthesis in a lab—expanding the range of known stable materials nearly 10-fold. In a paper published today in Nature, the authors write that the next solid-state electrolyte, or solar cell materials, or high-temperature superconductor, could hide within this expanded database.
Finding those needles in the haystack starts off with actually making them, which is all the more reason to work quickly and through the night. In a recent set of experiments at LBNL, also published today in Nature, Ceder’s autonomous lab was able to create 41 of the theorized materials over 17 days, helping to validate both the AI model and the lab’s robotic techniques.
When deciding if a material can actually be made, whether by human hands or robot arms, among the first questions to ask is whether it is stable. Generally, that means that its collection of atoms is arranged into the lowest possible energy state. Otherwise, the crystal will want to become something else. For thousands of years, people have steadily added to the roster of stable materials, initially by observing those found in nature or discovering them through basic chemical intuition or accidents. More recently, candidates have been designed with computers.
The problem, according to Persson, is bias: Over time, that collective knowledge has come to favor certain familiar structures and elements. Materials scientists call this the “Edison effect,” referring to Thomas Edison’s rapid trial-and-error quest to find a workable lightbulb filament, testing thousands of types of carbon before arriving at a variety derived from bamboo. It took another decade for a Hungarian group to come up with tungsten. “He was limited by his knowledge,” Persson says. “He was biased, he was convinced.”
DeepMind’s approach is meant to look beyond those biases. The team started with 69,000 materials from Persson’s library, which is free to use and funded by the US Department of Energy. That was a good start, because the database contains the detailed energetic information needed to understand why some materials are stable and others aren’t. But it wasn’t enough data to overcome what Google DeepMind researcher Ekin Dogus Cubuk calls a “philosophical contradiction” between machine learning and empirical science. Like Edison, AI struggles to generate truly novel ideas beyond what it has seen before. “In physics, you never want to learn a thing that you already know,” he says. “You almost always want to generalize out of domain”—whether that’s to discover a different class of battery material or a new superconductivity theory.
GNoME relies on an approach called active learning. First, an AI called a graph neural network, or GNN, uses the database to learn patterns in the stable structures and figure out how to minimize the energy in the atomic bonds within new structures. Using the whole range of the periodic table, it then produces thousands of potentially stable candidates. The next step is to verify and adjust them, using a quantum mechanics technique called density-functional theory, or DFT. These refined results are then plugged back into the training data and the process is repeated.
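The loop described here—propose candidates with a cheap learned model, verify the most promising few with expensive DFT, then fold the verified results back into the training data—can be sketched in simplified form. Everything below is illustrative: the function names, the toy one-number “structures,” and the stand-in energy models are assumptions for the sketch, not DeepMind’s actual pipeline.

```python
import random

def surrogate_energy(candidate, training_data):
    """Cheap energy estimate, standing in for the graph neural network.
    Scores a candidate relative to the average of verified energies."""
    if not training_data:
        return 0.1 * candidate
    baseline = sum(energy for _, energy in training_data) / len(training_data)
    return baseline + 0.1 * candidate

def dft_oracle(candidate):
    """Expensive, trusted calculation, standing in for a DFT run."""
    return 0.05 * candidate + random.uniform(-0.01, 0.01)

def active_learning_round(training_data, n_candidates=1000, n_verify=10):
    # 1. Generate many candidate structures (GNoME samples substitutions
    #    across the whole periodic table; here, random numbers suffice).
    candidates = [random.uniform(-1, 1) for _ in range(n_candidates)]
    # 2. Rank them by the cheap surrogate, lowest predicted energy first.
    ranked = sorted(candidates, key=lambda c: surrogate_energy(c, training_data))
    # 3. Verify only the most promising few with the expensive oracle.
    verified = [(c, dft_oracle(c)) for c in ranked[:n_verify]]
    # 4. Fold the verified results back into the training set, so the
    #    next round's surrogate is a little better informed.
    training_data.extend(verified)
    return training_data

data = []
for _ in range(5):
    data = active_learning_round(data)
print(len(data))  # 5 rounds x 10 verifications = 50 labeled structures
```

The key economy is in step 3: the expensive oracle runs only on the handful of candidates the surrogate already likes, which is what lets the real system search millions of crystals with a comparatively small DFT budget.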
The researchers found that, with multiple repetitions, this approach could generate more complex structures than were initially in the Materials Project data set, including some that were composed of five or six unique elements. (The data set used to train the AI largely capped out at four.) Those types of materials involve so many complex atomic interactions that they generally escape human intuition. “They were hard to find,” Cubuk says. “But now they’re not so hard to find anymore.”
But DFT is only a theoretical validation. The next step is actually making something. So Ceder’s team picked 58 crystals to create in the A-Lab. Once the lab’s capabilities and the available precursors were taken into account, the selection was random. And at first, as expected, the robots failed, then repeatedly adjusted their recipes. After 17 days of experiments, the A-Lab managed to produce 41 of the materials, or 71 percent, sometimes after trying a dozen different recipes.
Taylor Sparks, a materials scientist at the University of Utah who wasn’t involved in the research, says that it’s promising to see automation at work for new types of materials synthesis. But using AI to propose thousands of new hypothetical materials, and then chasing after them with automation, just isn’t practical, he adds. GNNs are becoming widely used to develop new ideas for materials, but usually researchers want to tailor their efforts to produce materials with useful properties—not blindly produce hundreds of thousands of them. “We’ve already had way too many things that we’ve wanted to investigate than we physically could,” he says. “I think the challenge is, is this scaled synthesis approaching the scale of the predictions? Not even close.”
Only a fraction of the 380,000 materials in the Nature paper will likely wind up being practical to create. Some involve radioactive elements, or ones that are too expensive or rare. Some will require types of synthesis that involve extreme conditions that can’t be produced in a lab, or precursors that lab suppliers don’t have on hand.
That’s likely even true for materials that could very well hold potential for the next photovoltaic cell or battery design. “We’ve come up with a lot of cool materials,” Persson says. “Making them and testing them has consistently been the bottleneck, especially if it’s a material that nobody’s ever made before. The number of people I can call up in my circle of friends who go, ‘Absolutely, let me get on that for you,’ is pretty much one or two people.”
“Really, is it that high?” Ceder interjects with a laugh.
Even if a material can be made, there’s a long road to turning a basic crystal into a product. Persson brings up the example of an electrolyte inside a lithium-ion battery. Predictions about the energy and structure of a crystal can be applied to problems like figuring out how easily lithium ions can move across it—a key aspect of performance. What it can’t predict as easily is whether that electrolyte will react with neighboring materials and destroy the whole device. Plus, in general, the utility of new materials only becomes apparent in combination with other materials or by manipulating them with additives.
Still, the expanded range of materials expands the possibilities for synthesis, and also provides more data for future AI programs, says Anatole von Lilienfeld, a materials scientist at the University of Toronto who wasn’t involved in the research. It also helps nudge materials scientists away from their biases and towards the unknown. “Every new step that you take is fantastic,” he says. “It could usher in a new compound class.”
Google is also interested in exploring the possibilities of the new materials generated by GNoME, says Pushmeet Kohli, vice president of research at Google DeepMind. He compares GNoME to AlphaFold, the company’s software that startled structural biologists with its success at predicting how proteins fold. Both are addressing fundamental problems by creating an archive of new data that scientists can explore and expand. From here, the company plans to work on more specific problems, he says, such as homing in on interesting material properties and using AI to speed up synthesis. Both are challenging problems, because there is typically far less data to start with than there is for predicting stability.
Kohli says the company is exploring its options for working more directly with physical materials, whether by contracting outside labs or continuing with academic partnerships. It could also set up its own lab, he adds, referring to Isomorphic Labs, a drug discovery spinoff from DeepMind established in 2021 following the success of AlphaFold.
Things could get complicated for researchers trying to put the materials to practical use. The Materials Project is popular with both academic labs and corporations because it allows any type of use, including commercial ventures. Google DeepMind’s materials are being released under a separate license that forbids commercial use. “It’s released for academic purposes,” Kohli says. “If people want to investigate and explore commercial partnerships, and so on, we will review them on a case-by-case basis.”
Multiple scientists who work with new materials noted that it’s unclear what sort of say the company would have if testing in an academic lab led to a possible commercial use for a GNoME-generated material. An idea for a new crystal—without a particular use in mind—is generally not patentable, and tracing its provenance back to the database could be difficult.
Kohli also says that while the data is being released, there are no current plans to release the GNoME model. He cites safety considerations—the software could theoretically be used to dream up dangerous materials, he says—and uncertainty about Google DeepMind’s materials strategy. “It is difficult to make predictions about what the commercial impact would be,” Kohli says.
Sparks expects his fellow academics to bristle at the lack of code for GNoME, just as biologists did when AlphaFold was initially published without a complete model. (The company later released it.) “That’s lame,” he says. Other materials scientists will likely want to reproduce the results and investigate ways to improve the model or tailor it to specific uses. But without the model, they can’t do either, Sparks says.
In the meantime, the Google DeepMind researchers hope hundreds of thousands of new materials will be enough to keep theorists and synthesizers—both human and robotic—plenty busy. “Every technology could be improved with better materials. It’s a bottleneck,” Cubuk says. “This is why we have to enable the field by discovering more materials, and helping people discover even more.”
Source: Wired