The Last Frontier: Big Tech Wants To Read Your Thoughts
Not content with monitoring almost everything you do online, Facebook now wants to read your mind as well. The social media giant recently announced a breakthrough in its plan to create a device that reads people’s brainwaves to allow them to type just by thinking. And Elon Musk wants to go even further. One of the Tesla boss’s other companies, Neuralink, is developing a brain implant to connect people’s minds directly to a computer.
Musk admits that he takes inspiration from science fiction and that he wants to make sure humans can “keep up” with artificial intelligence. He seems to have missed the part of sci-fi that serves as a warning about the implications of technology.
These mind-reading systems could affect our privacy, security, identity, equality and personal safety. Do we really want all that left to companies with philosophies such as that of Facebook’s former mantra, “move fast and break things”?
Though they sound futuristic, the technologies needed to make brainwave-reading devices are not that dissimilar to the standard MRI (magnetic resonance imaging) and EEG (electroencephalography) neuroscience tools used in hospitals all over the world. You can already buy a kit to control a drone with your mind, so using one to type out words is, in some ways, not that much of a leap. The advance will likely be due to the use of machine learning to sift through huge quantities of data collected from our brains and find the patterns in neuron activity that link thoughts to specific words.
A brain implant is likely to take a lot longer to develop, and it’s important to separate out the actual achievements of Neuralink from media hype and promotion. But Neuralink has made simultaneous improvements in materials for electrodes and robot-assisted surgery to implant them, packaging the technology neatly so it can be read via USB.
Facebook and Neuralink’s plans may build on established medical practice. But when companies are collecting thoughts directly from our brains, the ethical issues are very different.
Any system that could collect data directly from our brains has clear privacy risks. Privacy is about consent. But it is very difficult to give proper consent if someone is tapping directly into our thoughts. Silicon Valley companies (and governments) already surreptitiously gather as much data on us as they can and use it in ways we’d rather they didn’t. How sure can we be that our random and personal thoughts won’t be captured and studied alongside the instructions we want to give the technology?