Many organizations balk at advertising salaries in job postings, believing that withholding the information gives them an upper hand in salary negotiations. One of the main problems with this practice, according to human resources experts, is that it opens the door to salary discrimination, which puts companies at legal and financial risk. Earlier this year, the Department of Labor launched a lawsuit against Google for extreme gender pay discrimination. A class action suit by female employees is in the works. More recently, three Latina engineers have taken Uber to court for racial and gender bias, citing an employee ranking system that is not based on reliable measures and that favors white and Asian men.
Decades’ worth of studies have demonstrated gender and racial bias in perceptions of qualifications and compensation. A 2017 report by McKinsey & Co. and Sheryl Sandberg’s LeanIn.org found that women of color trail white men, men of color, and white women in the corporate world. Of course, salary transparency in job ads can’t fix these systemic problems, but it can put an organization on the right track and shield it from accusations of bias. Listing compensation in job descriptions also saves recruiters from wading through candidates whose expectations exceed the budget. Writing in Forbes, corporate veteran Liz Ryan makes much the same point, citing additional ethical arguments.
While companies like Google and Uber may have plenty of lawyers and money to fight these suits, most organizations do not.
This year marks the 20th anniversary of the defeat of chess grandmaster Garry Kasparov by IBM’s Deep Blue. It was the first time a computer had defeated a reigning world champion in tournament play. The event was also a milestone for artificial intelligence (AI). The efficiency that AI delivers has helped it make inroads into a variety of fields, including HR. Writing in the online publication TLNT, Ji-A Min, a data scientist focused on talent acquisition, notes that over the next few years AI will transform three key areas of hiring: candidate sourcing, candidate screening, and candidate matching. In the not-too-distant future, getting your foot in the door may be a completely different experience from the one we know today.
On the other side of the hiring desk, AI has some serious potential pitfalls. It turns out that robots can be as prejudiced as the people who program them. In a paper on language bias and its effect on machine learning, Princeton University computer scientists note that implicit bias, whether acknowledged or not, can be learned by machines (listen to one of the paper’s authors interviewed in the first part of this radio report on bias and AI, or watch this presentation on YouTube). This means that the algorithms used by police to predict criminal behavior, or by employers to help with employment decisions, may be racist. As more and more HR functions become automated, the pre-employment screening industry must be vigilant against machine bias as well.
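The kind of learned association the Princeton researchers measured can be sketched with a toy example. The word vectors below are fabricated purely for illustration (real embeddings are learned from large text corpora); the point is only that a simple similarity measure can expose skewed word associations that a hiring or screening algorithm built on such embeddings would silently inherit.

```python
import math

# Toy 3-dimensional "embeddings," fabricated for illustration only.
# Real systems learn vectors like these from text corpora, where biased
# usage patterns leave exactly this kind of geometric fingerprint.
vectors = {
    "he":       [0.9, 0.1, 0.2],
    "she":      [0.1, 0.9, 0.2],
    "engineer": [0.8, 0.2, 0.5],  # skewed toward "he" in this toy data
    "nurse":    [0.2, 0.8, 0.5],  # skewed toward "she" in this toy data
}

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, 0.0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def gender_association(word):
    """Positive means closer to 'he'; negative means closer to 'she'."""
    return cosine(vectors[word], vectors["he"]) - cosine(vectors[word], vectors["she"])

print(f"engineer: {gender_association('engineer'):+.3f}")  # positive: leans "he"
print(f"nurse:    {gender_association('nurse'):+.3f}")     # negative: leans "she"
```

No step in this sketch is explicitly told about gender; the skew comes entirely from the geometry of the input vectors, which is why bias absorbed during training can surface in downstream decisions without anyone programming it in.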