Augmented Or Artificial: When Reality Isn’t Real

Like it or not, Augmented Reality is rapidly advancing to the point where people will not be able to tell the difference between real and unreal. In the absence of boundaries and privacy, this will create a dystopian world. ⁃ TN Editor

The martial arts actor Jet Li turned down a role in The Matrix, and has been largely absent from our screens, because he does not want his fighting moves 3D-captured and owned by someone else. Soon everyone will be wearing 3D-capable cameras to support augmented reality (often referred to as mixed reality) applications, and everyone will have to deal, across every part of life, with the sorts of digital-capture issues that Jet Li avoided by passing on key roles and that musicians have struggled with since Napster. AR means anyone can rip, mix and burn reality itself.

Tim Cook has warned the industry about “the data industrial complex” and advocated for privacy as a human right. It doesn’t take too much thinking about where some parts of the tech industry are headed to see AR ushering in a dystopian future where we are bombarded with unwelcome visual distractions, and our every eye movement and emotional reaction is tracked for ad targeting. But as Tim Cook also said, “it doesn’t have to be creepy.” The industry has made data-capture mistakes while building today’s tech platforms, and it shouldn’t repeat them.

Dystopia is easy for us to imagine, as humans are hard-wired for loss aversion. This hard-wiring refers to people’s tendency to prefer avoiding a loss over acquiring an equivalent gain: it feels better to avoid losing $5 than to find $5. It’s an evolutionary survival mechanism that made us hyper-alert for threats. The loss of being eaten by a tiger was more impactful than the gain of finding some food to eat. When it comes to thinking about the future, we instinctively overreact to downside risks and underappreciate upside benefits.

How can we get a sense of what AR will mean in our everyday lives that is (ironically) based in reality?

When we look at the tech stack enabling AR, it’s important to note there is now a new type of data being captured, unique to AR. It’s the computer vision-generated, machine-readable 3D map of the world. AR systems use it to synchronize or localize themselves in 3D space (and with each other). The operating system services based on this data are referred to as the “AR Cloud.” This data has never been captured at scale before, and the AR Cloud is 100 percent necessary for AR experiences to work at all, at scale.

Fundamental capabilities such as persistence, multi-user experiences and outdoor occlusion all need it. Imagine a super version of Google Earth, but one used by machines instead of people. This data set is entirely separate from the content and user data used by AR apps (e.g. login account details, user analytics, 3D assets, etc.).
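To make the idea concrete, here is a minimal, purely hypothetical sketch of the kind of request an AR device might make against such a shared map. None of the names below come from any real SDK; the only point is that a device sends locally observed 3D features and receives its pose in a shared coordinate frame, which is what makes persistence and multi-user experiences possible.

```python
# Hypothetical sketch of an "AR Cloud" localization query.
# All class and function names are illustrative assumptions, not a real API.
from dataclasses import dataclass
from typing import List

@dataclass
class FeaturePoint:
    xyz: tuple          # 3D position observed in the device's local frame
    descriptor: bytes   # system-specific visual descriptor (proprietary format)

@dataclass
class Pose:
    position: tuple     # x, y, z in the shared map's coordinate frame
    rotation: tuple     # orientation as a quaternion (x, y, z, w)

def localize_against_ar_cloud(points: List[FeaturePoint], map_id: str) -> Pose:
    """Match the device's observed points against the shared 3D map and
    return the device's pose in that map's coordinate frame.
    (Placeholder body: a real service would perform feature matching
    followed by robust pose estimation.)"""
    raise NotImplementedError
```

Two users who localize against the same map end up sharing a coordinate frame, so a virtual object anchored by one appears in the same physical spot for the other, and persists between sessions.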

The AR Cloud services are often thought of as just being a “point cloud,” which leads people to imagine simplistic solutions for managing this data. The data actually has potentially many layers, all of them providing varying degrees of usefulness to different use cases. The term “point” is just shorthand for a concept: a 3D point in space. The data format for how that point is selected and described is unique to every state-of-the-art AR system.
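As a rough illustration of why “point cloud” undersells it, the hypothetical structure below (the layer names are illustrative, not any vendor’s actual schema) shows how one mapped space might carry several layers: sparse feature points for relocalization, dense geometry for occlusion, and semantic labels for higher-level applications.

```python
# Hypothetical layering of a shared 3D map; names are assumptions for illustration.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class SparsePoint:
    xyz: tuple         # position of the point in the map's coordinate frame
    descriptor: bytes  # how the point is "described"; format differs per system

@dataclass
class MapLayers:
    # Sparse feature points: the minimum needed for relocalization.
    sparse_points: List[SparsePoint] = field(default_factory=list)
    # Dense geometry (e.g. a triangle mesh): useful for occlusion and physics.
    dense_mesh: bytes = b""
    # Semantic labels (e.g. "floor", "wall", "table"): useful to apps that
    # need to understand the space, not just track within it.
    semantics: Dict[str, List[SparsePoint]] = field(default_factory=dict)
```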

The critical thing to note is that for an AR system to work best, the computer vision algorithms are tied so tightly to the data that they effectively become the same thing. Apple’s ARKit algorithms wouldn’t work with Google’s ARCore data even if Google gave them access. The same goes for HoloLens, Magic Leap and all the startups in the space. The performance of open-source mapping solutions is generations behind leading commercial systems.
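A toy sketch, under assumed numeric descriptors, of why that coupling matters: a matcher expects descriptors produced by its own feature extractor, and even in this naive illustration descriptors with different dimensionality (let alone different meaning) cannot be compared, so one vendor’s map data is of little use to another’s algorithms.

```python
# Toy illustration only: real systems use proprietary descriptors and matchers.
import numpy as np

def match_descriptors(query: np.ndarray, map_desc: np.ndarray) -> np.ndarray:
    """Brute-force nearest-neighbour matching. This only makes sense when both
    sets of descriptors come from the same feature extractor, with the same
    dimensionality, normalization and distance metric."""
    if query.shape[1] != map_desc.shape[1]:
        raise ValueError("Descriptors come from different systems and cannot be compared.")
    # Pairwise L2 distances, then the index of the closest map descriptor per query.
    dists = np.linalg.norm(query[:, None, :] - map_desc[None, :, :], axis=2)
    return dists.argmin(axis=1)
```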

So we’ve established that these “AR Clouds” will remain proprietary for some time. But exactly what data is in there, and should I be worried that it is being collected?

Read full story here…
