How Can We Stop Algorithms Telling Lies?

Linked by Paul Ciano on July 20, 2017

Cathy O’Neil:

Since 2008, we’ve heard less from algorithms in finance, and much more from big data algorithms. The target of this new generation of algorithms has shifted from abstract markets to individuals. But the underlying functionality is the same: collect historical data about people — profiling their online behaviour, location, or answers to questionnaires — and use that massive dataset to predict their future purchases, voting behaviour, or work ethic.

The recent proliferation in big data models has gone largely unnoticed by the average person, but it’s safe to say that most important moments where people interact with large bureaucratic systems now involve an algorithm in the form of a scoring system. Getting into college, getting a job, being assessed as a worker, getting a credit card or insurance, voting, and even policing are in many cases done algorithmically. Moreover, the technology introduced into these systematic decisions is largely opaque, even to their creators, and has so far largely escaped meaningful regulation, even when it fails.

At the top there are the unintentional problems that reflect cultural biases. For example, when Harvard professor Latanya Sweeney found that Google searches for names perceived to be black generated ads associated with criminal activity, we can assume that there was no Google engineer writing racist code. In fact, the ads were trained to be bad by previous users of Google search, who were more likely to click on a criminal-records ad when they searched for a black-sounding name.

One layer down we come to algorithms that go bad through neglect. These would include scheduling programs that prevent people who work minimum wage jobs from leading decent lives. The algorithms treat them like cogs in a machine, sending them to work at different times of the day and on different days each week, preventing them from having regular childcare, a second job, or going to night school. They are brutally efficient, hugely scaled, and largely legal, collecting pennies on the backs of workers.

The third layer consists of nasty but legal algorithms. For example, Facebook executives in Australia showed advertisers ways to find and target vulnerable teenagers. Awful, but probably not explicitly illegal.

Finally, there’s the bottom layer, which consists of intentionally nefarious and sometimes outright illegal algorithms. There are hundreds of private companies, including dozens in the UK, that offer mass surveillance tools. They are marketed as a way of locating terrorists or criminals, but they can be used to target and root out citizen activists.

What organisation will put a stop to the oncoming crop of illegal algorithms? What is the analogue of the International Council on Clean Transportation? Does there yet exist an organisation that has the capacity, interest, and ability to put an end to illegal algorithms, and to prove that these algorithms are harmful?

Algorithms are currently secret, proprietary code, protected as the “secret sauce” of corporations. They’re so secret that most online scoring systems aren’t even apparent to the people targeted by them. That means those people also don’t know the score they’ve been given, nor can they complain about or contest those scores. Most important, they typically won’t know if something unfair has happened to them.

We can soon expect a fully fledged army of algorithms that skirt laws, that are sophisticated and silent, and that seek to get around rules and regulations. They will learn from how others were caught and do it better the next time. In other words, it will get progressively more difficult to catch them cheating. Our tactics have to get better over time too.

We can also expect to be told that the big companies are “dealing with it privately”. This is already happening with respect to fighting terrorism. We should not trust them when they say this. We need to create a standard testing framework – a standard definition of harm – and require that algorithms be submitted for testing.

It’s time to gird ourselves for a fight. It will eventually be a technological arms race, but it starts, now, as a political fight. We need to demand evidence that algorithms with the potential to harm us be shown to be acting fairly, legally, and consistently. When we find problems, we need to enforce our laws with sufficiently hefty fines that companies don’t find it profitable to cheat in the first place. This is the time to start demanding that the machines work for us, and not the other way around.
