Social media companies can already use online data to make reliable guesses about pregnancy or suicidal ideation – and new BCI technology will push this even further.
It’s raining on your walk to the station after work, but you don’t have an umbrella. Out of the corner of your eye, you see a rain jacket in a shop window. You think to yourself: “A rain jacket like that would be perfect for weather like this.”
Later, as you’re scrolling on Instagram on the train, you see a similar-looking jacket. You take a closer look. Actually, it’s exactly the same one – and it’s a sponsored post. You feel a sudden wave of paranoia: did you say something out loud about the jacket? Had Instagram somehow read your mind?
While social media’s algorithms sometimes appear to “know” us in ways that can feel almost telepathic, ultimately their insights come from triangulating millions of recorded online actions: clicks, searches, likes, conversations, purchases and so on. This is life under surveillance capitalism.
As powerful as the recommendation algorithms have become, we still assume that our innermost dialogue is internal unless otherwise disclosed. But recent advances in brain-computer interface (BCI) technology, which integrates cognitive activity with a computer, might challenge this.
In the past year, researchers have demonstrated that it is possible to translate directly from brain activity into synthetic speech or text by recording and decoding a person’s neural signals, using sophisticated AI algorithms.
While such technology offers a promising horizon for those suffering from neurological conditions that affect speech, this research is also being followed closely, and occasionally funded, by technology companies like Facebook. A shift to brain-computer interfaces, they propose, will offer a revolutionary way to communicate with our machines and each other, a direct line between mind and device.
But will the price we pay for these cognitive devices be an incursion into our last bastion of real privacy? Are we ready to surrender our cognitive liberty for more streamlined online services and better targeted ads?
A BCI is a device that allows for direct communication between the brain and a machine. Foundational to this technology is the ability to decode neural signals that arise in the brain into commands that can be recognized by the machine.
Because neural signals in the brain are often noisy, decoding is extremely difficult. While the past two decades have seen some success decoding sensory-motor signals into computational commands – allowing for impressive feats like moving a cursor across a screen with the mind or manipulating a robotic arm – brain activity associated with other forms of cognition, like speech, has remained too complex to decode.
But advances in deep learning, an AI technique that mimics the brain’s ability to learn from experience, are changing what’s possible. In April this year, a research team at the University of California, San Francisco, published results of a successful attempt at translating neural activity into speech via a deep-learning-powered BCI.
The team placed small electronic arrays directly on the brains of five people and recorded their brain activity, as well as the movement of their jaws, mouths and tongues as they read out loud from children’s books. This data was then used to train two algorithms: one learned how brain signals instructed the facial muscles to move; the other learned how these facial movements became audible speech.
Once the algorithms were trained, the participants were again asked to read out from the children’s books, this time merely miming the words. Using only data collected from neural activity, the algorithmic systems could decipher what was being said, and produce intelligible synthetic versions of the mimed sentences.
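The shape of that two-stage pipeline can be sketched in miniature. The toy below is purely illustrative – it replaces the study’s recurrent neural networks and real electrode recordings with synthetic data and simple linear maps – but it shows the architecture: one model learns how neural signals drive the articulators, a second learns how articulator movements become sound, and at “miming” time the two are chained so that audio is reconstructed from brain activity alone.

```python
# Illustrative sketch only: synthetic data and linear least-squares models
# stand in for the study's intracranial recordings and neural networks.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "ground truth" chain: neural -> articulatory -> acoustic
n_samples, n_neural, n_artic, n_audio = 200, 16, 6, 8
A_true = rng.normal(size=(n_neural, n_artic))   # neural -> articulator movements
B_true = rng.normal(size=(n_artic, n_audio))    # movements -> audio features

neural = rng.normal(size=(n_samples, n_neural))  # recorded brain activity
artic = neural @ A_true                          # tracked jaw/mouth/tongue motion
audio = artic @ B_true                           # audible speech features

# Stage 1: learn how brain signals instruct the vocal tract to move
A_hat, *_ = np.linalg.lstsq(neural, artic, rcond=None)
# Stage 2: learn how those movements become audible speech
B_hat, *_ = np.linalg.lstsq(artic, audio, rcond=None)

# Decoding time: only neural data is available, so chain the two stages
decoded_audio = (neural @ A_hat) @ B_hat
error = np.max(np.abs(decoded_audio - audio))
print(f"max reconstruction error: {error:.2e}")
```

In this noiseless linear toy the chained decoder recovers the audio features almost exactly; the real difficulty the researchers faced lies in the noisy, nonlinear mapping that the intermediate articulatory stage helps tame.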