The Physicist Working to Build Science-Literate AI
Physics dazzled Miles Cranmer from an early age. His grandfather, a physics professor at the University of Toronto, gave him books on the subject, and his parents took him to open houses at universities near their home in southern Ontario, Canada. The Perimeter Institute for Theoretical Physics was a favorite. “I remember someone talking about infinity when I was super young, and it was so cool to me,” Cranmer said. In high school, he interned at the University of Waterloo’s Institute for Quantum Computing — “the best summer of my life at that point.” Soon he began studying physics as an undergraduate at McGill University.
Then one night during his second year, the 19-year-old Cranmer read an interview with Lee Smolin in Scientific American in which the eminent theoretical physicist claimed it would “take generations” to reconcile quantum theory and relativity. “That just tripped something in my brain,” Cranmer said. “I can’t have that — it needs to go faster.” And for him, the only way to speed up the timeline of scientific progress was with artificial intelligence. “That night was a moment where I decided, ‘We have to do AI for science.’” He began studying machine learning, eventually fusing it with his doctoral research in astrophysics at Princeton University.
Nearly a decade later, Cranmer (now at the University of Cambridge) has seen AI begin to transform science, but not nearly as much as he envisions. Single-purpose systems like AlphaFold can generate scientific predictions with revolutionary accuracy, but researchers still lack “foundation models” designed for general scientific discovery. These models would work more like a scientifically accurate version of ChatGPT, flexibly generating simulations and predictions across multiple research areas. In 2023, Cranmer and more than two dozen other scientists launched the Polymathic AI initiative to begin developing these foundation models.
The first step, Cranmer said, is equipping the model with the scientific skills that still elude most state-of-the-art AI systems. “Some people wanted to do a language model for astrophysics, but I was really skeptical about this,” he recalled. “If you’re simulating massive fluid systems, being bad at general numerical processing” — as large language models arguably are — “is not going to cut it.” Neural networks also struggle to distill their predictions into tidy equations (like E = mc²), and the scientific data necessary for training them isn’t as plentiful on the internet as the raw text and video that ChatGPT and other generative AI models train on.
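Distilling a model's behavior into a compact equation is the goal of symbolic regression, a technique Cranmer has worked on himself (his PySR library is a well-known implementation). The sketch below is a deliberately tiny illustration of the idea — a brute-force search over a hand-picked pool of candidate expressions, not the genetic-algorithm search a real system uses — with made-up data following E = mc²:

```python
import numpy as np

# Toy symbolic regression: given data, pick the candidate expression
# that best explains it. The candidate pool is hand-picked for
# illustration; real tools (e.g. PySR) search a vast expression space.
rng = np.random.default_rng(0)
m = rng.uniform(1.0, 10.0, size=100)  # "mass" samples (arbitrary units)
c = 3.0                               # a fixed constant standing in for c
E = m * c**2                          # ground truth: E = m * c^2

candidates = {
    "m * c^2": lambda m: m * c**2,
    "m + c":   lambda m: m + c,
    "m^2":     lambda m: m**2,
    "c * m":   lambda m: c * m,
}

# Score each candidate by mean squared error against the observed data,
# then keep the expression with the lowest error.
errors = {name: f(m) for name, f in candidates.items()}
errors = {name: float(np.mean((pred - E) ** 2)) for name, pred in errors.items()}
best = min(errors, key=errors.get)
print(best)  # the exact form "m * c^2" achieves zero error
```

A real symbolic regression system also penalizes expression complexity, so that it prefers the simplest formula that fits — which is what makes the recovered equations readable to a scientist.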