In the weeks since the U.S. presidential election, Facebook CEO Mark Zuckerberg has been firefighting. Not literally, but figuratively. Widespread accusations assert that his social media company contributed to the election’s unexpected outcome by propagating fake news and “filter bubbles.” Zuckerberg has strongly rejected these allegations, but the case poses a thorny question: How do we ensure that technology works for society?
A Fourth Industrial Revolution is arising that will pose tough ethical questions with few simple, black-and-white answers. Smaller, more powerful and cheaper sensors; advances in cognitive computing, including artificial intelligence, robotics, predictive analytics and machine learning; nano-, neuro- and biotechnology; the Internet of Things; 3D printing; and much more are already demanding real answers, fast. And the questions will only get harder and more complex when we embed these new technologies into our bodies and brains to enhance our physical and cognitive functioning.
Take the choice society will soon have to make about autonomous cars as an example. If a crash cannot be avoided, should a car be programmed to minimize bystander casualties even if it harms the car’s occupants, or should the car protect its occupants under any circumstances?
Research shows the public is conflicted. Consumers would prefer to minimize the total number of casualties in a car accident, yet are unwilling to purchase a self-driving car that is not self-protective. Of course, the ideal outcome is for companies to develop algorithms that avoid such no-win scenarios entirely, but this may not always be possible. What is clear, however, is that such ethical quandaries must be resolved before any consumer hands over their keys to black-box algorithms.
With so many different stakeholders involved, how do we ensure a governance model that will make technology work for society?