Artificial intelligence has advanced by leaps and bounds in recent years, but it still pales in comparison to some human abilities.
Today’s most sophisticated AI systems rely on learning from hundreds or even thousands of examples, whereas humans can learn from a few, or even just one.
Not only that, but humans develop a richer understanding of concepts, which we use for imagination, explanation, and action.
But now, a team of researchers has developed an AI that they say can learn handwritten characters from various alphabets after “seeing” just a single example, according to a study published Thursday in the journal Science.
The research had two goals: to better understand how people learn, and to build machines that learn in more humanlike ways.
“For the first time, we think we have a machine system that can learn a large class of visual concepts in a humanlike way,” study leader Joshua Tenenbaum, a cognitive scientist at MIT, said in a news briefing on Wednesday.
Learning like a human
People, especially children, are remarkably good at induction: taking a single example and generalising from it to learn a broader concept.
Think of the first time you saw a Segway or a smartphone, suggested Tenenbaum during the briefing. You just needed one example, and you could recognise others that you came across later. Not only that, but you can use examples like these to explain, predict, and imagine other things.
By contrast, today’s AI algorithms — such as those behind Facebook’s face recognition or Google’s translation service — often require huge datasets to learn even basic concepts. The best-known of these approaches, “deep learning,” is impressive, but it still lacks the rich understanding that humans have.
An AI that can draw letters after seeing just one
Tenenbaum and his colleagues set out to build an AI that could do something most humans can do easily: See a handwritten alphabet character, recognise it, and draw it themselves.
To do this, they created a model that represents concepts as simple probabilistic programs that can generate and explain examples, an approach they call “Bayesian program learning.” It combines three key principles: first, compositionality, the idea that rich concepts are built from simpler parts; second, causality, the idea that characters are produced by a real-world process, such as a sequence of pen strokes; and third, “learning to learn,” using experience with past concepts to speed up learning of new ones.
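The gist of that approach can be illustrated with a toy sketch (this is not the researchers’ code; the stroke primitives, prior, and likelihood below are all made-up stand-ins): each concept is a small generative “program,” here just a sequence of stroke names, and classifying a new example means picking the program with the highest posterior probability — prior times likelihood — given that one observation.

```python
import math

# Hypothetical prior: programs built from fewer strokes are more probable,
# mirroring the compositionality principle (4 assumed stroke primitives).
def log_prior(program):
    return -len(program) * math.log(4)

# Hypothetical likelihood: how well the program's strokes explain the
# observed strokes, with a crude noise model (match prob 0.9, miss 0.1).
def log_likelihood(program, observed):
    matches = sum(1 for p, o in zip(program, observed) if p == o)
    misses = max(len(program), len(observed)) - matches
    return matches * math.log(0.9) + misses * math.log(0.1)

def classify(observed, candidate_programs):
    # Bayes' rule (up to a constant): pick argmax of log prior + log likelihood.
    return max(candidate_programs,
               key=lambda prog: log_prior(prog) + log_likelihood(prog, observed))

# One "example" per concept: made-up characters as stroke sequences.
concepts = {
    "A": ["down-left", "down-right", "across"],
    "T": ["across", "down"],
}
observed = ["across", "down"]  # a single one-shot observation
best = classify(observed, list(concepts.values()))
print([name for prog_name, prog in concepts.items()
       if prog == best for name in [prog_name]][0])  # prints "T"
```

The real model is far richer — it parses raw pen trajectories into strokes and sub-strokes and searches over many candidate parses — but the scoring logic, a prior over programs combined with a likelihood of the observed example, is the same basic shape.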
Human volunteers recruited through Amazon’s Mechanical Turk service hand-drew thousands of characters from 50 different alphabets, including real ones like Latin, Greek, and Korean, as well as fictional ones like the alien alphabet from “Futurama.”
Then, the researchers fed these characters one at a time into their AI program, and asked it to identify the correct character, break it down into its component parts, and redraw it. It then had to draw a made-up character based on several related characters. They gave the same tasks to human volunteers.
Impressively, the new AI model performed as well as humans at these tasks, and even better than deep learning algorithms. In the task where they had to identify the correct character, people had an average error rate of 4.5%; the new AI averaged 3.3%, while competing programs ranged from 8% to 34.7%.
Can it pass a ‘Turing test’?
Next, the researchers pitted humans against their AI in a “visual Turing test,” based on the classic test of machine intelligence devised by mathematician Alan Turing. In this version, a group of human judges was shown made-up characters produced by combining similar ones, some drawn by people and some by the AI, and asked to determine which was which.
The judges were barely better than chance at this task, suggesting the AI had successfully fooled them. Of course, this was a subjective test, and most AI researchers don’t consider a single Turing test to be an accurate measure of a machine’s intelligence.
Can you tell which grids of figures were drawn by a human and which by the AI? (The grids drawn by a machine, from left to right, are: B, A; A, B; A, B.)
This work has a number of interesting applications. For example, it could be used to analyse national security imagery, and in fact, several defence agencies helped fund the research. But it’s a pretty big leap to go from interpreting alphabet characters to human behaviour.
AI is still a long way from matching human abilities. Humans can not only build up a rich understanding of concepts from just a few examples, but we can also use these concepts to plan, explain, and communicate with one another.
But our ability to quickly extrapolate from a small set of data comes with some notable drawbacks: we make snap judgments and form stereotypes that can do more harm than good.
“We’re well-aware that humans are remarkable at getting the world right as well as getting the world wrong,” Tenenbaum said. “It’s the inevitable flip side of learning so quickly.”