What sets humans apart from machines is the speed at which we can learn from our surroundings.
But scientists have successfully trained computers to use artificial intelligence to learn from experience – and one day they could be smarter than their creators.
Now scientists have admitted they are already baffled by the mechanical brains they have built, raising the prospect that we could lose control of them altogether.
Computers are already performing incredible feats – such as driving cars and predicting diseases – but their makers say they aren’t entirely in control of their creations.
This could have catastrophic consequences for civilisation, tech experts have warned.
Take the strange driverless car which appeared on the streets of New Jersey, US, last year.
It differed from Google, Tesla or Uber’s autonomous vehicles, which follow the rules set by tech developers to react to scenarios while on the road.
This car could make its own decisions after watching how humans learnt how to drive.
And its creators, researchers at chipmaker Nvidia (which supplies supercomputer chips to some of the world's biggest car makers), admitted they weren't 100 per cent sure how it did so, MIT Technology Review reported.
Its mysterious mind could be a sign of dark times to come, sceptics fear.
The car’s underlying technology, dubbed “deep learning”, is a powerful tool for solving problems.
It helps us tag our friends on Facebook and powers smartphone assistants such as Siri, Cortana and Google Assistant.
Deep learning has helped computers become better than people at recognising objects.
The military is pouring millions into the technology so it can be used to steer ships, control drones and destroy targets.
And there’s hope it will be able to diagnose deadly diseases, make traders billionaires by reading the stock market and totally transform the world we live in.
But if we don’t make sure creators have a full understanding of how it works, we’re in deep trouble, scientists claim.
If they can't figure out how the algorithms (the step-by-step procedures a computer follows to perform the tasks we set it) actually work, they won't be able to predict when they will fail.
Tommi Jaakkola, a professor at MIT who works on applications of machine learning, warns: "If you had a very small neural network [deep learning algorithm], you might be able to understand it."
“But once it becomes very large, and it has thousands of units per layer and maybe hundreds of layers, then it becomes quite un-understandable.”
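To get a sense of the scale Prof Jaakkola is describing, here is a rough sketch in Python. It simply counts the connection weights in a fully connected network; the layer sizes are made-up examples, not figures from any real system.

```python
# Hypothetical illustration: counting the connection weights in a fully
# connected neural network, to show why big networks are hard to inspect.

def count_weights(layer_sizes):
    """Number of weights between consecutive fully connected layers."""
    return sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))

# A "very small" network: a handful of units, easy to trace by hand.
small = count_weights([4, 3, 2])      # 4*3 + 3*2 = 18 weights

# A network with 100 layers of 1,000 units each, in the ballpark of
# Prof Jaakkola's "thousands of units per layer, hundreds of layers".
large = count_weights([1000] * 100)   # 99 layer gaps * 1,000,000 = 99,000,000

print(small, large)
```

Eighteen weights can be checked by hand; tens of millions cannot, which is why researchers struggle to explain exactly what such systems have learned.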
That means a driverless car, like Nvidia's, could plough headfirst into a tree and we would have no idea why it decided to do so.