Scientists have invented a machine that imitates the way the human brain learns new information, a step forward for artificial intelligence, researchers reported.
The system described in the journal Science is a computer model "that captures humans' unique ability to learn new concepts from a single example," the study said.
"Though the model is only capable of learning handwritten characters from alphabets, the approach underlying it could be broadened to have applications for other symbol-based systems, like gestures, dance moves, and the words of spoken and signed languages."
Joshua Tenenbaum, a professor at the Massachusetts Institute of Technology (MIT), said he wanted to build a machine that could mimic the mental abilities of young children.
"Before they get to kindergarten, children learn to recognize new concepts from just a single example, and can even imagine new examples they haven't seen," said Tenenbaum.
"We are still far from building machines as smart as a human child, but this is the first time we have had a machine able to learn and use a large class of real-world concepts -- even simple visual concepts such as handwritten characters -- in ways that are hard to tell apart from humans."
The system is called a "Bayesian Program Learning" (BPL) framework, in which concepts are represented as simple computer programs.
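The study's code is not reproduced here, but to make "concepts represented as simple computer programs" concrete, the following is a minimal, hypothetical sketch in Python (the stroke lists and the jitter parameter are illustrative assumptions, not the authors' implementation): a character is stored as a tiny program of stroke primitives that can be re-run to produce new examples of itself.

```python
import random

# Hypothetical illustration only: a "concept" stored as a tiny program,
# here just a list of stroke primitives given as (x, y) control points.
letter_a = {
    "strokes": [
        [(0.0, 0.0), (0.5, 1.0)],    # left diagonal
        [(1.0, 0.0), (0.5, 1.0)],    # right diagonal
        [(0.25, 0.5), (0.75, 0.5)],  # crossbar
    ]
}

def run_program(concept, jitter=0.05):
    """Re-run the stored program with small random perturbations,
    yielding a new, slightly different example of the same character."""
    return [
        [(x + random.gauss(0, jitter), y + random.gauss(0, jitter))
         for x, y in stroke]
        for stroke in concept["strokes"]
    ]

# Each call produces a fresh "handwritten" variant of the concept.
print(run_program(letter_a))
```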
Researchers showed that the model could use "knowledge from previous concepts to speed learning on new concepts," such as building on knowledge of the Latin alphabet to learn letters in the Greek alphabet.
"The authors applied their model to over 1,600 types of handwritten characters in 50 of the world's writing systems, including Sanskrit, Tibetan, Gujarati, Glagolitic -- and even invented characters such as those from the television series Futurama," said the study.
Since humans require very little data to learn a new concept, the research could lead to new advances in artificial intelligence, the study authors said.
"It has been very difficult to build machines that require as little data as humans when learning a new concept," said Ruslan Salakhutdinov, an assistant professor of computer science at the University of Toronto.
"Replicating these abilities is an exciting area of research connecting machine learning, statistics, computer vision, and cognitive science."
The machine that learns like a CHILD: Algorithm recognises and scribbles symbols that look identical to those produced by humans
- The software is called the Bayesian Program Learning (BPL) framework
- It recognises a symbol by looking at it once and copying its general shape
- The framework can even draw symbols that humans struggle to tell apart from hand-drawn ones
- Machines usually take hundreds of attempts to memorise visual concepts
When children are shown a new object, such as a letter of the alphabet, a picture or a real-world item, they generally only need a couple of instances to be able to identify it accurately.
Machines, by comparison, have to be trained hundreds or thousands of times not only to identify an object, but also to recognise it from different angles.
But researchers have designed an algorithm to solve this problem by allowing computers to learn visually in the same way as humans do.
A computer algorithm that memorises the general shapes of objects and can draw them out again has shown computers can learn visually in the same way as young children. The images above were drawn by the computer and by a human. The machine generated symbols 1 and 2 in the top row and 2 and 1 in the second.
This has allowed the machines to not only identify an object from its shape, but also draw it for themselves - much in the same way young children do when they are learning.
For example, the machine can be shown a letter of the alphabet or a symbol and then draw it.
The resulting sketches were almost indistinguishable from those drawn by humans.
Professor Joshua Tenenbaum, a cognitive scientist at the Massachusetts Institute of Technology who was one of the researchers involved in the study, said: 'Before they get to kindergarten, children learn to recognise new concepts from just a single example, and can even imagine new examples they haven't seen.'
Humans need a couple of instances to be able to identify a symbol. It takes thousands of examples for a machine to learn one (artist's impression).
'We are still far from building machines as smart as a human child, but this is the first time we have had a machine able to learn and use a large class of real-world concepts - even simple visual concepts such as handwritten characters - in ways that are hard to tell apart from humans.'
The algorithm was developed with Dr Brenden Lake, a cognitive scientist from New York University.
When a child is shown the letter A, for example, they can identify the same shape even when it is written by different people in slightly different ways.
Equally, showing someone a single picture of a kettle will likely be enough for that person to recognise other non-identical kettles of different shapes and colours.
The team's program, called the 'Bayesian Program Learning' (BPL) framework, was developed to work in a similar way.
When the computer is presented with a symbol - for instance, the letter A - it starts randomly generating different examples of that symbol, in various ways it could have been drawn.
Rather than looking at the symbol as a cluster of pixels, BPL memorises it as the result of a 'generative process'.
This involves establishing which specific strokes were made to draw it.
This approach allows the machine to recognise the letter in various guises, such as the differences in how two people draw it.
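The article gives no implementation details, but a hedged sketch of that idea might look like the code below (the stroke-matching cost and the greedy pairing are simplifying assumptions, not the paper's method): a drawing is recognised by asking which stored stroke program explains its strokes most cheaply, so two quite different handwritings of the same letter can still land on the same concept.

```python
import math

def stroke_cost(concept_stroke, drawn_stroke):
    """Average distance between corresponding control points: a crude
    stand-in for a real stroke-matching cost (assumes equal lengths)."""
    return sum(math.dist(p, q)
               for p, q in zip(concept_stroke, drawn_stroke)) / len(concept_stroke)

def explain_cost(concept_strokes, drawing_strokes):
    """How cheaply the concept's program explains the drawing:
    greedily match each concept stroke to its closest drawn stroke."""
    return sum(min(stroke_cost(cs, ds) for ds in drawing_strokes)
               for cs in concept_strokes)

def classify(drawing_strokes, concepts):
    """Pick the concept (by name) that explains the drawing most cheaply."""
    return min(concepts, key=lambda name: explain_cost(concepts[name], drawing_strokes))
```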
In what they called a 'visual Turing test', the researchers showed both the handwritten and the machine-generated doodles to a group of people. Fewer than 25 per cent could spot the computer-generated doodles. In the picture above, the machine generated symbols B and A in the top row and A and B in the bottom.
The model was tested on more than 1,600 types of handwritten symbols in 50 different alphabets or codes.
These included Sanskrit, Tibetan, Gujarati, Glagolitic, and even imaginary letters shown in the television series Futurama.
The program was consistently able to reproduce the characters after being shown only one example for each of them.
Taking it a step further, the researchers asked the machine as well as human volunteers to invent new characters in the style of those they had been shown.
Then, in what the researchers called a 'visual Turing test', both the handwritten and the machine-generated symbols were shown to another group of people, who were asked to identify which symbols had been created by the program.
The researchers reported that fewer than 25 per cent of the 'judges' managed to guess which symbols were computer-generated at a rate significantly better than chance.
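As the article describes it, the test hinges on whether a judge can pick out the machine-drawn symbols at a rate significantly better than chance. The numbers below are invented for illustration, not taken from the study, but a simple one-sided binomial test of that question might look like this:

```python
from math import comb

def binomial_p_value(correct, total, chance=0.5):
    """One-sided exact binomial test: the probability of getting at least
    `correct` answers right out of `total` by pure guessing."""
    return sum(comb(total, k) * chance**k * (1 - chance)**(total - k)
               for k in range(correct, total + 1))

# Hypothetical judge: 23 of 40 symbols identified correctly.
# A p-value of about 0.21 means this is not significantly better than
# the 50 per cent expected from guessing.
print(binomial_p_value(23, 40))
```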
'Our results show that by reverse engineering how people think about a problem, we can develop better algorithms,' explained Dr Lake, who was the lead author on the study, published in the journal Science.
'Moreover, this work points to promising methods to narrow the gap for other machine learning tasks.'