Algorithms Can Lie and Deceive, but Can They Be Stopped?


This must-read article deserves a big shout-out to its author, Cathy O’Neil, who finally raises the right questions about AI, its intended and unintended risks, and the lives that might be ruined because of it. Who says young people cannot or do not ‘get it’?  TN Editor

Algorithms can dictate whether you get a mortgage or how much you pay for insurance. But sometimes they’re wrong – and sometimes they are designed to deceive.

Lots of algorithms go bad unintentionally. Some of them, however, are made to be criminal. Algorithms are formal rules, usually written in computer code, that make predictions on future events based on historical patterns. To train an algorithm you need to provide historical data as well as a definition of success.
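The recipe described above can be made concrete with a minimal sketch. The data, names, and the loan-repayment "definition of success" below are all invented for illustration; the point is only that training pairs historical records with a success label and learns a rule from their frequencies.

```python
from collections import defaultdict

# Invented historical data: (applicant's credit band, loan repaid?).
# The second field is the "definition of success" the algorithm is trained on.
history = [
    ("high", True), ("high", True), ("high", False),
    ("low", False), ("low", False), ("low", True),
]

def train(records):
    """Learn, per credit band, the historical rate of success."""
    counts = defaultdict(lambda: [0, 0])  # band -> [successes, total]
    for band, repaid in records:
        counts[band][0] += repaid
        counts[band][1] += 1
    # The learned "algorithm": a predicted probability of success per band,
    # which a lender might then apply to new applicants.
    return {band: successes / total for band, (successes, total) in counts.items()}

model = train(history)
# The model simply projects historical patterns onto future applicants --
# which is exactly why biased or unrepresentative history produces a bad algorithm.
```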

We’ve seen finance get taken over by algorithms in the past few decades. Trading algorithms use historical data to predict movements in the market. Success for that algorithm is a predictable market move, and the algorithm is vigilant for patterns that have historically happened just before that move. Financial risk models also use historical market changes to predict cataclysmic events in a more global sense, so not for an individual stock but rather for an entire market. The risk model for mortgage-backed securities was famously bad – intentionally so – and the trust in those models can be blamed for much of the scale and subsequent damage wrought by the 2008 financial crisis.
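A toy version of the pattern-hunting described above might look like the following sketch. The price series and the chosen pattern (two consecutive rises) are invented for illustration, not a real trading strategy.

```python
# Invented toy price series.
prices = [100, 101, 102, 101, 102, 103, 104, 103, 104, 105]

def pattern_precedes_rise(series, lookback=2):
    """Count how often `lookback` consecutive rises were followed by another rise.

    Returns (hits, total): total times the pattern occurred in the history,
    and how many of those occurrences preceded a further upward move.
    """
    hits = total = 0
    for i in range(lookback, len(series) - 1):
        window = series[i - lookback:i + 1]
        if all(a < b for a, b in zip(window, window[1:])):  # consecutive rises
            total += 1
            hits += series[i + 1] > series[i]
    return hits, total

hits, total = pattern_precedes_rise(prices)
# If the pattern frequently preceded a rise in the historical data,
# the algorithm treats its reappearance as a buy signal.
```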


Since 2008, we’ve heard less about algorithms in finance, and much more about big data algorithms. The target of this new generation of algorithms has shifted from abstract markets to individuals. But the underlying functionality is the same: collect historical data about people, profiling their online behaviour, their location, or their answers to questionnaires, and use that massive dataset to predict their future purchases, voting behaviour, or work ethic.

The recent proliferation in big data models has gone largely unnoticed by the average person, but it’s safe to say that most important moments where people interact with large bureaucratic systems now involve an algorithm in the form of a scoring system. Getting into college, getting a job, being assessed as a worker, getting a credit card or insurance, voting, and even policing are in many cases done algorithmically. Moreover, the technology introduced into these systematic decisions is largely opaque, even to their creators, and has so far largely escaped meaningful regulation, even when it fails. That makes the question of which of these algorithms are working on our behalf even more important and urgent.

I have a four-layer hierarchy when it comes to bad algorithms. At the top there are the unintentional problems that reflect cultural biases. For example, when Harvard professor Latanya Sweeney found that Google searches for names perceived to be black generated ads associated with criminal activity, we can assume that there was no Google engineer writing racist code. In fact, the ads were trained to be bad by previous users of Google search, who were more likely to click on a criminal-records ad when they searched for a black-sounding name. Another example: the Google image search result for “unprofessional hair”, which returned almost exclusively black women, was similarly trained by the people posting or clicking on search results over time.



Juan Juan

The numbered economic “institutional units” are programmed from an early age to respond to certain stimuli with the expectation of receiving certain rewards, even if the reward is only of nominal value (in name only). That behaviour-modification programming is installed in a competitive environment where Intelligence Quotient (IQ) is the ultimate objective. IQ is the institutional unit’s capacity to memorize data and information and recall it on cue or on demand. It makes no difference whether the data and information are incorrect, tainted, or even imbued with absurdities. The faculties of reason, which are substantively different from IQ, are not needed…