Using artificial intelligence, experts have created a ‘false reality’ so similar to real life that you may not be able to tell it is a simulation.
New advances in graphics manipulation by neural networks mean artificial simulations look deceptively like the real thing.
Developers say in the future AI-generated scenes could be used to create training data for self-driving cars.
However, the technology also has a darker side and could lead us into a strange hyper-reality in which simulation becomes indistinguishable from real life.
Researchers from Santa Clara-based technology company Nvidia have created images that show AI-generated scenes built from real ones.
‘We present high-quality image translation results on various challenging unsupervised image translation tasks, including street scene image translation, animal image translation, and face image translation,’ the company website said.
Researchers led by Ming-Yu Liu used ‘image-to-image’ translation to transform an outdoor winter image into an AI-generated summer scene.
They could also transform sunny weather into wet weather.
The system relies on generative adversarial networks (GANs).
GANs, first introduced in 2014 by Ian Goodfellow and his colleagues, consist of two neural networks that learn from raw data.
The system ‘teaches’ itself about a particular subject by being fed massive amounts of information.
One network, the discriminator, examines the raw data – in this case, real-life scenes – while the other, the generator, produces fake images based on the data set and tries to pass them off as real.
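The adversarial tug-of-war between the two networks can be sketched in miniature. The toy example below (a hypothetical illustration, not Nvidia's code) swaps images for single numbers: ‘real’ samples come from one distribution, a tiny linear generator tries to mimic them, and a logistic-regression discriminator tries to tell the two apart, with each network updated against the other by hand-derived gradients.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# 'Real' data: a stand-in for real-life scenes, drawn from N(4, 0.5).
def real_sample():
    return random.gauss(4.0, 0.5)

# Generator: a linear map from noise z ~ N(0, 1) to a fake sample.
w_g, b_g = 1.0, 0.0
# Discriminator: logistic regression estimating P(sample is real).
w_d, b_d = 0.0, 0.0

lr = 0.05
for step in range(5000):
    z = random.gauss(0.0, 1.0)
    x_real = real_sample()
    x_fake = w_g * z + b_g

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    s_real = sigmoid(w_d * x_real + b_d)
    s_fake = sigmoid(w_d * x_fake + b_d)
    # Gradients of -log D(x_real) - log(1 - D(x_fake)) w.r.t. w_d, b_d.
    grad_w = (s_real - 1.0) * x_real + s_fake * x_fake
    grad_b = (s_real - 1.0) + s_fake
    w_d -= lr * grad_w
    b_d -= lr * grad_b

    # Generator update: push D(fake) toward 1 (non-saturating loss).
    s_fake = sigmoid(w_d * x_fake + b_d)
    # Gradient of -log D(G(z)) w.r.t. x_fake, chained back to w_g, b_g.
    grad_x = -(1.0 - s_fake) * w_d
    w_g -= lr * grad_x * z
    b_g -= lr * grad_x

# After training, generated samples should cluster near the real mean (~4).
gen_mean = sum(w_g * random.gauss(0.0, 1.0) + b_g
               for _ in range(1000)) / 1000
print(gen_mean)
```

Real GANs replace the linear maps with deep convolutional networks and the scalars with images, but the training loop has the same shape: the generator only ever improves because the discriminator keeps catching it out.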
‘The use of GANs isn’t novel in unsupervised learning, but the NVIDIA research produced results — with shadows peeking through thick foliage under partly cloudy skies — far ahead of anything seen before’, researchers led by Mr Liu wrote in a blog post.
‘For self-driving cars alone, training data could be captured once and then simulated across a variety of virtual conditions: sunny, cloudy, snowy, rainy, nighttime, etc’, researchers wrote.