Is AI Biased? Of Course it is! And so are humans.
Reading resumes and conducting interviews is ineffective and biased – and in the worst possible way… we don’t like to admit it. How many recruiters would lay claim to being “below average” in staff selection? Half of us are, by definition… but of course ego says, “not I”… it’s always someone else.
The jury has been in on the uselessness of resumes for decades. Even if they were not full of misleading information, half-truths and downright lies, they would still be of little value. And recent research from the Yale School of Management by Associate Professor Jason Dana has shown that conducting interviews is not only useless… it’s worse than useless.
The pandemic has impacted HR in many ways, and clearly the biggest change is in the use of ‘no-contact’ video interviews, especially “asynchronous” interviews that spare all parties wasted time.
The recent AHRI survey on the impact of COVID-19 reveals how difficult this time has been, and things aren’t getting any easier. Nearly 30% of HR respondents are experiencing increased workloads, and as lockdown eases and more companies begin to rehire, there will be even greater emphasis on video technology.
Even before the world shut down, HR was using AI to vet candidates and create shortlists using powerful algorithms that interpret hundreds of physiological markers, voice modulation and language to rapidly determine who could be a good fit for the employer’s business.
In the United States the sheer size of some of the hiring challenges has meant increasing emphasis on AI – but it has also created a good deal of controversy.
Creating a Monster?
Some are disturbed by the burgeoning use of AI, arguing that it will inevitably exhibit bias. Disability, ethnicity and even gender bias have already been discovered in some AI engines. Amazon abandoned its own AI recruitment project because it could not find a way to remove the bias.
But of course, if 50 people apply for the one job, 49 will be disappointed no matter what methodology is used to cull the candidates. And no-one could seriously argue that legacy recruitment methods such as reading resumes and conducting interviews are free of such biases.
Is Pandora’s Box already open?
AI is here, it’s in use in every internet advertisement you see and every command you give Siri, Alexa or Google (hey!). There’s little doubt that the use of AI is inevitable, or that it will continue to improve and develop.
There is a danger, though, that candidates will start to modify their behaviour based on what they think AI is looking for. Unless you’re a great method actor, this works against you.
Can humans ever truly escape bias?
“We worry about any and every form of bias. But bias is not a problem exclusive to intelligent machines. Human beings are inescapably burdened with scores of cognitive biases,” says Dr Glyn Brokensha, Co-Founder & Chairman of Expr3ss!
“The real battle is that the first and most fatal flaw is in ourselves as human beings, and in ourselves as employers and recruiters.”
“Knowing that we are inherently biased does not remove bias, caring that we are biased does not negate bias, trying not to be biased does little to alleviate bias. Although we should always strive to do so, we will always fall short.”
What’s to be done?
“My view?…” says Dr Brokensha, “use multiple methods of assessment, built to be as objective and unbiased as is humanly possible, and constantly review and improve them.
Taking diverse “views” of the person improves the resolving power of any assessment methodology. Just as surround sound offers a richer experience of a movie, so too multiple methods of assessment add depth and rigour to the recruitment process.”
“Bias is something we should all work to remove, at every level, using objective criteria, constant improvement and a recognition that a diversity of views brings power to any method, just as diversity in hiring brings strength and resilience to an organisation.”
And yes, know that every AI is biased too! But that bias is something we can all work to remove, using objective criteria and constant improvement. And AI doesn’t have an ego to protect.