Artificial intelligence (AI) isn’t here to replace scientists — it’s simply a powerful tool that can be used to enhance their work, according to David Kelley, Lauren Takahashi and Aditya Nandy.
These rising stars are among the most prolific early-career researchers in the Nature Index in terms of AI-related output. Here, they discuss how they’re harnessing new algorithms and models to advance research in gene regulation, materials development and protein dynamics.
David Kelley: gene-regulation decoder
When bioinformatics scientist David Kelley transitioned from academia to a role in industry in 2016, he felt as though he was stepping into a less competitive, more collaborative culture. “Everybody is really on the same team here,” he says of his current workplace, Calico Life Sciences, a biotechnology company in San Francisco, California. He also appreciates the focus that his industry role provides — the company’s mission is to understand the biology that controls human ageing and use that knowledge to inform the development of treatments for age-related diseases. This, Kelley says, acts as “a guiding light” for him and his colleagues, helping them to prioritize what they think will be the most impactful work.
“If I were at a university, I think I would be … getting distracted by this question or that question,” says Kelley. “Whereas by being in industry, you do have this strong pull towards: does this really matter? Is this really the most significant thing I could be working on right now?”
Kelley is investigating gene regulation as an underlying mechanism in age-related diseases. Gene regulation is the process by which cells control the expression of their genes, determining when, where and how much of certain components, such as proteins or RNA, are produced. One of the big challenges in understanding how genes are controlled is working out how certain DNA sequences, called enhancer regions, interact with genes. Enhancer regions are like switches that can turn a gene on, making it active. However, these switches can be located far from the genes they control — up to one million DNA base pairs away — making it tricky to find and study many of these connections.
In a paper published in Nature Methods1, Kelley and his co-authors reported the development of a deep-learning model, called Enformer, that can predict gene expression from DNA sequence, including the effects of variants that interact with distant enhancers. The model is “really just a few lines of code”, but has become a powerful tool, says Kelley. “It’s quite amazing and profound, the level of artificial intelligence that you can get from just having repetition after repetition of this single kind of mathematical operation that enables very general learning.”
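The repeated operation Kelley describes is self-attention, the building block of transformer models such as Enformer. A minimal illustrative sketch in Python (a toy implementation with invented variable names and sizes, not Calico’s code):

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention over a sequence.

    x: (seq_len, d_model) embedded input, e.g. encoded DNA windows
    w_q, w_k, w_v: learned projection matrices of shape (d_model, d_head)
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v             # queries, keys, values
    scores = q @ k.T / np.sqrt(k.shape[-1])          # pairwise similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over positions
    return weights @ v                               # each position mixes information from all others

# Toy example: 8 sequence positions, 4-dimensional embeddings
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 4))
w = [rng.normal(size=(4, 4)) for _ in range(3)]
out = self_attention(x, *w)
print(out.shape)  # (8, 4)
```

Because every position attends to every other, stacking this operation lets distant parts of a sequence influence one another — the property that makes long-range enhancer–gene interactions learnable.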
One big concern that Kelley has about his field is the lack of ethnic diversity in genomics data sets. Many gene-regulation studies depend on whole-genome sequencing data from resources such as the UK Biobank, which has 500,000 participants, some 95% of whom are white2. By using such limited data sets, researchers are missing important clues in gene regulation, he says. “You really want to profile all the variation of the world and see all of the ways that these genetic variants can influence phenotypes.”
Another challenge is the fact that Kelley’s primary AI training set is the entire human genome sequence, which is a finite resource of some three billion base pairs. “If we want large amounts of additional training data, it’s not entirely obvious where to find it,” he says. His team is getting creative by working with data from mice, as well as humans, and training models on both species simultaneously. “This works because the regulatory drivers of our shared cell types are highly conserved,” says Kelley. “Profiling more primates or other mammals could be fruitful.”
Lauren Takahashi: catalyst engineer
Lauren Takahashi, a chemical engineer and information scientist at Hokkaido University in Sapporo, Japan, took a winding path in the early years of her career. As an undergraduate student at the University of Arizona in Tucson, she studied linguistics; she then moved to the University of Gothenburg in Sweden to complete a master of science degree that involved building a prototype search engine. The program was designed to answer questions such as ‘Where is this?’ or ‘Where can I go for that?’ using maps and locations, rather than just relying on the top results from searches, she says.
During her time in Sweden, Takahashi attended a lecture by physicist Andre Geim, who shared the 2010 Nobel Prize in Physics for experiments with graphene — a material comprising a single layer of carbon atoms arranged in a hexagonal lattice. The talk inspired Takahashi to apply her understanding of search engines to build a model that researchers could use to search the scientific literature for insight when creating graphene and other 2D materials.
The project was a hobby at first, but “it just snowballed”, she says. On the basis of this work, a friend invited Takahashi to join a research project at Japan’s National Institute for Materials Science (NIMS) in Tsukuba, one of the world’s largest institutions in the field. While at NIMS, she embarked on a PhD in chemical engineering at the University of Tokyo.
At Hokkaido, Takahashi studies the use of robots and AI to produce high-performance catalysts — substances that speed up chemical reactions. One way in which she and her colleagues are doing this is by scouring the scientific literature for experimental conditions that boost the performance of catalysts. They feed this information into their AI system so that it can design experiments to produce compounds such as ethene — a building block for plastics and other materials — at lower temperatures and with higher yields than current methods3.
“We’re currently working on developing a robot that does the whole thing on its own,” says Takahashi. In the future, robots could work in concert with AI to run experiments; collect and analyse data; predict new conditions, settings and catalyst combinations; and then perform the next experiment, Takahashi and a colleague wrote in a 2023 paper4.
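The closed loop Takahashi describes — run an experiment, collect the data, predict the next conditions, repeat — can be sketched in Python. Everything here (the simulated yield function, the condition variables, the perturbation strategy) is an invented placeholder, not the group’s actual system:

```python
import random

def simulated_yield(temperature, pressure):
    """Stand-in for a robot-run catalysis experiment (invented toy response)."""
    return max(0.0, 1.0 - abs(temperature - 350) / 400 - abs(pressure - 2.0) / 10)

def closed_loop(n_rounds=10):
    """Autonomous experiment loop: test a condition, record the result,
    then propose the next condition near the best one seen so far."""
    random.seed(0)
    history = []
    condition = (500.0, 5.0)  # initial guess: (temperature in K, pressure in bar)
    for _ in range(n_rounds):
        y = simulated_yield(*condition)      # the "robot" runs the experiment
        history.append((condition, y))       # collect and analyse the data
        best, _ = max(history, key=lambda h: h[1])
        # predict the next experiment: perturb the best-known condition
        condition = (best[0] + random.uniform(-50, 50),
                     best[1] + random.uniform(-0.5, 0.5))
    return max(history, key=lambda h: h[1])

best_condition, best_yield = closed_loop()
print(best_condition, best_yield)
```

Real systems would replace the random perturbation with a trained model — for instance one fitted to literature-mined reaction conditions — but the control flow of the loop is the same.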
Accommodating self-directed ‘robot scientists’ would be a major adjustment for laboratories, says Takahashi. “It’s harder for the robot to function in an environment that’s developed by people. If we’re going to use AI successfully, we have to consider the needs of the AI and incorporate that whenever we develop the environments we work in.”