Algorithms are opinions embedded in code. That’s really different from what most people think about algorithms. They think algorithms are objective, true, and scientific. That’s a marketing trick. It’s also a marketing trick to intimidate you with algorithms, to make you trust and fear algorithms because you trust and fear mathematics. A lot can go wrong when we put blind faith in big data.
Whereas an airplane that’s designed badly crashes to the Earth and everyone sees it, an algorithm designed badly can go on for a long time, silently wreaking havoc.
Algorithms don’t make things fair if you just blithely, blindly apply them. They repeat our past practices, our patterns. They automate the status quo.
What’s going on? Data laundering. It’s a process by which technologists hide ugly truths inside black-box algorithms and call them objective, call them meritocratic. When they’re secret, important, and destructive, I’ve coined a term for these algorithms, weapons of math destruction.
They’re everywhere, and it’s not a mistake. These are private companies building private algorithms for private ends…They call it their secret sauce. That’s why they can’t tell us about it. It’s also private power. They’re profiting from wielding the authority of the inscrutable.
We are the ones who are biased, and we are injecting those biases into the algorithms by choosing what data to collect, by trusting data that picks up on past practices, and by choosing the definition of success. How can we expect the algorithms to emerge unscathed? We can’t. We have to check them. We have to check them for fairness.
No algorithm is perfect, of course, so we have to consider the errors of every algorithm. How often are there errors and for whom does this model fail? What is the cost of that failure?
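The kind of check described above can be made concrete. Here is a minimal sketch, with entirely made-up audit records, of what "for whom does this model fail?" looks like in practice: comparing a model's false-positive and false-negative rates across groups. The record format and group labels are illustrative assumptions, not any specific system's data.

```python
from collections import defaultdict

# Hypothetical audit records: (group, actual outcome, model prediction).
# 1 = positive outcome, 0 = negative. All data here is invented for illustration.
records = [
    ("A", 1, 1), ("A", 0, 1), ("A", 0, 0), ("A", 1, 0), ("A", 0, 1),
    ("B", 1, 1), ("B", 0, 0), ("B", 0, 0), ("B", 1, 1), ("B", 0, 1),
]

def error_rates_by_group(records):
    """Return {group: (false_positive_rate, false_negative_rate)}."""
    counts = defaultdict(lambda: {"fp": 0, "fn": 0, "pos": 0, "neg": 0})
    for group, actual, predicted in records:
        c = counts[group]
        if actual == 1:
            c["pos"] += 1
            if predicted == 0:
                c["fn"] += 1  # model missed a true positive
        else:
            c["neg"] += 1
            if predicted == 1:
                c["fp"] += 1  # model wrongly flagged a negative
    return {
        g: (c["fp"] / c["neg"] if c["neg"] else 0.0,
            c["fn"] / c["pos"] if c["pos"] else 0.0)
        for g, c in counts.items()
    }

rates = error_rates_by_group(records)
for group, (fpr, fnr) in sorted(rates.items()):
    print(f"group {group}: false-positive rate {fpr:.2f}, "
          f"false-negative rate {fnr:.2f}")
```

With these invented records, group A is wrongly flagged twice as often as group B: the same overall accuracy can hide very different costs of failure for different people, which is exactly why the error rates must be broken out per group.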
There’s a lot of money to be made in unfairness.