Analysing images of gravitational lensing has historically been an extremely painstaking and time-consuming procedure for astrophysicists.
Gravitational lensing is the phenomenon by which images of astronomical objects are distorted by gravity, and it can reveal fascinating new information about our universe. However, the analysis of a single image can take several weeks — and requires expert knowledge and techniques.
But now, researchers at the Kavli Institute for Particle Astrophysics and Cosmology (KIPAC) have developed an alternative: using artificial intelligence (AI) to analyse the images.
And it is quicker by several orders of magnitude, they wrote in a paper submitted to the journal Nature — a staggering 10 million times faster. (We saw the news via MIT Tech Review.)
The researchers trained a neural network using half a million images of gravitational lenses. It can now analyse new images with accuracy comparable to that of traditional methods, in a fraction of the time: up to 100 systems in a single second.
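As a purely illustrative sketch of this kind of supervised setup — train on simulated, labelled images, then predict lens parameters for new images almost instantly — the toy example below fits a single linear layer with gradient descent. The real work used a deep convolutional network; the sizes, variable names, and linear model here are all hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes, not the real ones: 500 simulated "images" of 64 pixels each,
# each labelled with 3 known lens parameters.
n_train, n_pixels, n_params = 500, 64, 3
true_W = rng.normal(size=(n_pixels, n_params))

X = rng.normal(size=(n_train, n_pixels))   # simulated training images
y = X @ true_W                             # known parameters (the "labels")

# Train a single linear layer by plain gradient descent on mean-squared error.
W = np.zeros((n_pixels, n_params))
lr = 0.1
for _ in range(200):
    grad = X.T @ (X @ W - y) / n_train
    W -= lr * grad

# Once trained, inference on a new image is a single matrix multiply —
# which is why prediction is so fast compared with per-image fitting.
new_image = rng.normal(size=(1, n_pixels))
predicted_params = new_image @ W
```

The point of the sketch is the division of labour: all the expensive work happens once, during training on simulated data, after which each new system costs only a forward pass.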
“The amazing thing is that neural networks learn by themselves what features to look for,” paper co-author and KIPAC staff scientist Phil Marshall said in a statement. “This is comparable to the way small children learn to recognise objects. You don’t tell them exactly what a dog is; you just show them pictures of dogs.”
Lead author Yashar Hezaveh added: “It’s as if they not only picked photos of dogs from a pile of photos, but also returned information about the dogs’ weight, height and age.”
“Analyses that typically take weeks to months to complete, that require the input of experts and that are computationally demanding, can be done by neural nets within a fraction of a second, in a fully automated way and, in principle, on a cell phone’s computer chip,” said postdoctoral fellow Laurence Perreault Levasseur, another of the paper’s authors.