
Jeffrey Dastin, Reuters:

“Everyone wanted this holy grail,” one of the people said. “They literally wanted it to be an engine where I’m going to give you 100 resumes, it will spit out the top five, and we’ll hire those.”

But by 2015, the company realized its new system was not rating candidates for software developer jobs and other technical posts in a gender-neutral way.

That is because Amazon’s computer models were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry.

In effect, Amazon’s system taught itself that male candidates were preferable. It penalized resumes that included the word “women’s,” as in “women’s chess club captain.” And it downgraded graduates of two all-women’s colleges, according to people familiar with the matter.
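To make that failure mode concrete, here's a toy sketch of my own (nothing here is Amazon's actual system, and all the data is made up): a text classifier trained on historically skewed hiring outcomes will assign a negative weight to a token like "women's" without anyone programming the bias in.

```python
# Toy sketch, not Amazon's system: how a screener trained on skewed
# historical outcomes absorbs that skew. All data here is invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Past decisions: most hires were men, so terms correlated with women
# show up mostly in the rejected pile.
resumes = [
    "chess club captain, java developer",          # hired
    "robotics team lead, python developer",        # hired
    "java developer, hackathon winner",            # hired
    "women's chess club captain, java developer",  # rejected
    "women's coding society, python developer",    # rejected
    "python developer, volunteer tutor",           # rejected
]
hired = [1, 1, 1, 0, 0, 0]

vec = CountVectorizer()
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The model assigns the token "women" a negative weight purely because
# of the labels it was trained on -- no one coded the bias explicitly.
weights = dict(zip(vec.get_feature_names_out(), model.coef_[0]))
print(f"weight for 'women': {weights['women']:.2f}")  # negative
```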

Amazon edited the programs to make them neutral to these particular terms. But that was no guarantee that the machines would not devise other ways of sorting candidates that could prove discriminatory, the people said.
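And here's why that edit offers no guarantee, continuing the toy sketch above: scrub the explicit term, and any token that merely correlates with it inherits the negative weight as a proxy. I'm using a hypothetical college name, "northfield", to stand in for an all-women's school.

```python
# Continuation of the toy sketch: scrubbing the explicit term is not
# enough, because correlated tokens become proxies. "northfield" is a
# made-up college name standing in for an all-women's school.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "state university, java developer",                        # hired
    "state university, python developer",                      # hired
    "state university, hackathon winner",                      # hired
    "women's chess club, northfield college, java developer",  # rejected
    "women's coding society, northfield college, python developer",  # rejected
    "northfield college, volunteer tutor",                     # rejected
]
hired = [1, 1, 1, 0, 0, 0]

# Remove the explicit gendered term, as Amazon reportedly did.
scrubbed = [r.replace("women's ", "") for r in resumes]

vec = CountVectorizer()
X = vec.fit_transform(scrubbed)
model = LogisticRegression().fit(X, hired)

# "women" is gone from the vocabulary, but the proxy still scores
# negative -- the bias just moved to a less obvious feature.
weights = dict(zip(vec.get_feature_names_out(), model.coef_[0]))
print(f"weight for 'northfield': {weights['northfield']:.2f}")  # negative
```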

The Seattle company ultimately disbanded the team by the start of last year because executives lost hope for the project, according to the people, who spoke on condition of anonymity.

The company’s experiment, which Reuters is first to report, offers a case study in the limitations of machine learning. It also serves as a lesson to the growing list of large companies including Hilton Worldwide Holdings Inc (HLT.N) and Goldman Sachs Group Inc (GS.N) that are looking to automate portions of the hiring process.

Some 55 percent of U.S. human resources managers said artificial intelligence, or AI, would be a regular part of their work within the next five years, according to a 2017 survey by talent software firm CareerBuilder.

Employers have long dreamed of harnessing technology to widen the hiring net and reduce reliance on subjective opinions of human recruiters. But computer scientists such as Nihar Shah, who teaches machine learning at Carnegie Mellon University, say there is still much work to do.

“How to ensure that the algorithm is fair, how to make sure the algorithm is really interpretable and explainable - that’s still quite far off,” he said.

Yup (1, 2, 3, 4, 5, 6).
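For what a first-pass fairness check can even look like today, one common heuristic is comparing selection rates across groups against the EEOC's "four-fifths rule." A minimal sketch, with made-up predictions and group labels (and to Shah's point, passing a check like this is nowhere near a guarantee of fairness):

```python
# Minimal sketch of a first-pass fairness audit: compare a screener's
# selection rates across groups (the "four-fifths rule" heuristic).
# The predictions and group labels below are invented.
def selection_rate(predictions):
    return sum(predictions) / len(predictions)

# Hypothetical screener outputs: 1 = advance to interview.
preds_men   = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]   # 70% selected
preds_women = [1, 0, 0, 0, 1, 0, 0, 1, 0, 0]   # 30% selected

ratio = selection_rate(preds_women) / selection_rate(preds_men)
print(f"disparate impact ratio: {ratio:.2f}")   # 0.43

# By convention, a ratio below 0.8 is a red flag that the screener
# may have adverse impact and needs closer scrutiny.
if ratio < 0.8:
    print("flag: potential adverse impact")
```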

Cathy O’Neil, Bloomberg:

Here’s what happens. An analytics person, usually quite senior, asks if I can help audit a company’s algorithm for things like sexism or other kinds of bias that would be illegal in its regulated field. This leads to a great phone call and promises of more and better phone calls. On the second call, they bring on their corporate counsel, who asks me some version of the following question: What if you find a problem with our algorithm that we cannot fix? And what if we someday get sued for that problem and in discovery they figure out that we already knew about it? I never get to the third call.

In short, the companies want plausible deniability. But they won’t be able to hide their heads in the sand for long. More cases like Amazon’s will surface, and journalists and regulators will start to connect the dots.

Ideally, companies will face up to the issue sooner, which will mean spending much more on recruiting, or at least on making sure their algorithms aren’t illegal, an upfront cost they don’t relish. More likely, they’ll keep ignoring it until they attract a series of major lawsuits — possibly from regulators but, given the current climate, probably through class actions. And when the plaintiffs start winning, shareholders will recognize that big data isn’t quite the blessing that they had hoped it would be.

Paul Ciano
