Breakthrough In AI Learning May Lead To Human-Level Performance

New AI can mimic human movement after seeing it just one time, which is how humans learn from infancy. Of course, there is no measure of intent as to why the human movement was performed in the first place. This is typical Technocrat thinking: that ‘why’ is not as important as ‘what’. ⁃ TN Editor

A new breed of AI-powered robots could soon mimic almost any action after watching a human perform it just once.

Scientists have developed a clawed machine that can learn new tasks, such as dropping a ball into a bowl or picking up a cup, simply by viewing a person perform them first.

Researchers said the trick allows the android to master new skills much faster than other robots, and could one day lead to machines capable of learning complex tasks purely through observation – much like humans and animals do.

Project lead scientist Tianhe Yu wrote in a blog post: ‘Learning a new skill by observing another individual, the ability to imitate, is a key part of intelligence in humans and animals.

‘Such a capability would make it dramatically easier for us to communicate new goals to robots – we could simply show robots what we want them to do.’

Developed by engineers at the University of California at Berkeley, the robot quickly learns new actions by watching a person perform them on video.

Clips of the android show it picking up fruit and putting it into a bowl, as well as carefully moving around an obstacle following the same path demonstrated by a scientist.

Most machines, such as the robots in car factories, are programmed to complete tasks via computer code – a rigid and often time-consuming process.

More recently, androids have been developed that can learn by watching another robot complete the action, though they typically need to mimic the task thousands of times before perfecting it.

In the new paper, the UC Berkeley team outlines the technique that allowed them to teach a robot actions with just one demonstration – vastly speeding up the learning process.

They combined two different learning algorithms into a single super-AI.

One of these – a meta-learning algorithm – helps the robot learn by drawing on the movements used in prior tasks rather than mastering each skill from scratch.

The other, an imitation algorithm, allows the machine to pick up a new skill by watching something else perform it.

Combining the two allowed the scientists to build an AI that draws on both prior experience and mimicry to learn new skills, building on a technique the researchers call model-agnostic meta-learning (MAML).

This means it can learn to manipulate an object it has never seen before by watching a single video – a breakthrough that could accelerate machine learning.
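To give a rough sense of how the meta-learning half of this works (this is not the Berkeley team’s code, which learns vision-based robot control), the sketch below implements a first-order version of the MAML idea in plain Python. Toy one-dimensional regression tasks stand in for the robot’s manipulation tasks, and the single small batch at the end stands in for the ‘one demonstration’; all names and numbers here are illustrative assumptions.

```python
# Minimal first-order sketch of the MAML idea (NOT the Berkeley system).
# Each "task" is a random linear function; meta-training learns an
# initialization that adapts to a new task in ONE gradient step.
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """A toy 'task': a random linear function y = a*x + b."""
    a, b = rng.uniform(-2.0, 2.0, size=2)
    return a, b

def loss_grad(w, x, y):
    """Mean-squared error and its gradient for the model y_hat = w[0]*x + w[1]."""
    err = w[0] * x + w[1] - y
    loss = np.mean(err ** 2)
    grad = np.array([np.mean(2 * err * x), np.mean(2 * err)])
    return loss, grad

w = np.zeros(2)                  # meta-parameters shared across all tasks
inner_lr, outer_lr = 0.3, 0.02   # adaptation rate vs. meta-update rate

# Meta-training loop: practice adapting on many prior tasks.
for step in range(2000):
    a, b = sample_task()
    x = rng.uniform(-1, 1, size=10)
    y = a * x + b
    # Inner loop: one gradient step from the shared initialization.
    _, g = loss_grad(w, x, y)
    w_adapted = w - inner_lr * g
    # Outer loop (first-order approximation): nudge the initialization
    # so the POST-adaptation loss on fresh data from the task is low.
    x2 = rng.uniform(-1, 1, size=10)
    y2 = a * x2 + b
    _, g2 = loss_grad(w_adapted, x2, y2)
    w -= outer_lr * g2

# "One demonstration" at test time: a single small batch from a new task,
# followed by a single adaptation step -- the analogue of one-shot imitation.
a, b = sample_task()
x_demo = rng.uniform(-1, 1, size=5)
y_demo = a * x_demo + b
_, g = loss_grad(w, x_demo, y_demo)
w_new = w - inner_lr * g
print("adapted params:", w_new, "true task params:", (a, b))
```

In the actual research, the same inner/outer-loop structure is applied to a neural network policy that maps camera images to robot motions, with the inner adaptation step driven by the single human demonstration video.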

Read full story here…
