Not long ago, information technology was heralded as a tool of democratic progress. Some referred to the Arab Spring uprisings that swept the Middle East as the “Facebook Revolution” because activists used social media to organize and rally fellow citizens. Online platform technologies, it was believed, helped promote equality, freedom, and democracy by empowering citizens to publish their ideas and broadcast their everyday realities unconstrained by gatekeepers, communicate freely with one another, and advocate for political reform.
In recent years, however, doubts have surfaced about the effects of information technology on democracy. A growing tech-skeptic chorus is drawing attention to the ways in which information technology disrupts democracy. No country is immune. From New Zealand to Myanmar to the United States, terrorists, authoritarian governments, and foreign adversaries have weaponized the internet. Russia’s online influence campaign during the 2016 United States presidential election demonstrated how easily and effectively bad actors could leverage platform technologies to pursue their own interests. Revelations that Cambridge Analytica, a political consulting firm hired by Donald Trump’s presidential campaign, had acquired personal data from 87 million Facebook users exposed Facebook’s failure to monitor the information third parties collect through its platform and to prevent its misuse.
The concern extends beyond isolated incidents to the heart of the business model undergirding many of today’s large technology companies. The advertising revenue that fuels the attention economy leads companies to create new ways to keep users scrolling, viewing, clicking, posting, and commenting for as long as possible. Algorithms designed to accomplish this often end up displaying content curated to entertain, shock, and anger each individual user. The ways in which online platforms are currently engineered have thus come under fire for exacerbating polarization, radicalizing users, and rewarding engagement with disinformation and extremist content. Many large technology companies have not only underinvested in protecting their own platforms from abuse; they have designed services that amplify existing political tensions and spawn new political vulnerabilities.
Countries around the world have responded to this growing threat by launching investigations, passing new laws, and commissioning reports. The U.S., meanwhile, has lagged behind other governments even in the face of well-documented abuses during the 2016 election. It has been slower to rein in “big tech” in part because of a fear of state overreach, a constitutional and cultural commitment to free speech, and a reluctance to constrain the capacity of dynamic companies to innovate.
The steps taken by governments around the world, on the other hand, can be explained by some broad principles shared across borders. A growing international consensus holds that the ways in which today’s dominant online platforms are currently designed pose an inherent threat to democracy. Across a number of countries, lawmakers share the view that the structural design of the attention economy has given rise to disinformation and its rapid spread online. Today’s powerful technologies, they argue, have coarsened public discourse by satiating the appetite for political tribalism, serving up information—true or false—that accords with each user’s ideological preferences. They believe the ways in which dominant platforms filter and spread information online present a serious political threat not only to newer, more fragile democracies but also to long-standing Western liberal democracies.
While lawmakers in the U.S. are beginning to critique the ways in which online platforms have failed to police their own technologies, there remains a reluctance to respond to the digital economy’s negative side effects by establishing terms to regulate the flow of information and classifying certain content as unacceptable. This, many believe, would violate First Amendment free speech rights. Meanwhile, other countries have identified a clearer regulatory role to mitigate the threat online platforms pose to democratic societies.
A similar divide between the actions taken in Europe and the U.S. on online privacy issues has taken shape. Europe has responded forcefully to protect users’ online privacy, bolstering its already robust set of privacy laws when it passed the General Data Protection Regulation in the spring of 2016. The law is widely recognized as the toughest and most comprehensive digital privacy law on the books and is grounded in a cultural attachment to protecting the right of individuals to control access to their personal information.
Online platforms that rely on targeted advertising to generate revenue are in the business of amassing as much personal information on their users as possible. For years, tech companies have been able to collect, use, and share users’ data largely unconstrained. A New York Times investigation found that Facebook gave a number of large technology companies access to users’ personal data, including users’ private messages. In another investigation, the Wall Street Journal found that smartphone apps holding highly sensitive personal data, including information on users’ menstrual cycles, regularly share data with Facebook. While Facebook users can bar the social media site from using their data to serve them targeted advertisements, they cannot prevent Facebook from collecting their personal data in the first place.
Meanwhile, high-profile data breaches have highlighted the inability of some of the largest tech companies to protect users’ information from misuse. Cambridge Analytica, a political-data firm linked to Donald Trump’s presidential campaign, targeted voters in the run-up to the 2016 presidential election using private information it had collected from as many as 87 million Facebook users, most of whom had not agreed to let Facebook release their information to third parties. The campaign used this data to target personalized messages to voters and “individually whisper something in each of their ears,” as whistleblower Christopher Wylie described. Just months after the Cambridge Analytica story broke, hackers infiltrated Facebook’s computer network and exposed nearly 50 million users’ personal information.
While users enjoy free access to many tech platforms, they are handing over their personal information with little understanding of the amount, nature, or application of the data tech companies hold on them and little ability to stop its collection. The Cambridge Analytica scandal revealed that entire political systems and processes, not just individual users, are vulnerable when large tech companies fail to properly handle users’ data and leave the door open to those interested in exploiting social and political rifts.
The European Union has made online user privacy a top priority, establishing itself as a global leader on the issue after it passed its General Data Protection Regulation. The law sets out new requirements for obtaining user consent to process data, mandates data portability, requires organizations to notify users of data breaches in a timely fashion, and allows steep fines to be imposed on organizations that violate the regulation. Less than a year after GDPR’s passage, French officials levied a hefty $57 million fine against Google for failing to inform users about its data-collection practices and to obtain consent for targeted advertising. After confronting pressure from the European Commission, Facebook agreed to make clear to users that its services are free because it uses their personal data to run targeted advertisements. In Ireland, Facebook is facing several investigations into its compliance with European data protection laws. These moves signal Europe’s commitment to tough enforcement under its new privacy regime.