
Building AI to Unlearn Bias in Recruiting – Defining the problem

This is the first post in a two-part series exploring bias and AI. In part 1, we describe the problem of bias in AI and share examples of how it manifests in recruiting. In part 2, we will look at ideas for getting closer to solving bias in recruiting using AI.

“Success in creating effective AI could be the biggest event in the history of our civilization. Or the worst.” – Stephen Hawking

Bias has always existed, in some shape or form, in our cognitive fabric. In today’s world, it has become even more critical for us to be aware of bias and to actively counter it. Numerous studies show that diversity of ideas and people benefits both society and business. In this article, we explore the challenge of bias and how it pervades existing systems.

What’s causing bias today?

In our current recruiting processes, most decisions are made and driven by humans. A person’s judgments are often based on a small number of anecdotal data points (e.g., if three similar applicants perform well in a Sales Executive interview, we might subconsciously start screening for traits common to them). These judgment calls also vary with the decision maker’s cultural, social, and educational background, leading to low consistency across individuals. Furthermore, we aren’t always aware of all the filters we apply when making judgments. This is commonly referred to as unconscious bias.

[Image: Unconscious Bias. Source: Diversity Australia]

As automation has increased over time, humans are involved in fewer points of decision making. While this reduces inconsistency and task-completion errors, the actual risk of bias may not have dropped much, because most of the automated tasks were quite objective to begin with (e.g., automatically changing statuses in the ATS). As long as humans are making subjective decisions without checks and balances, there will be bias in the system.

So can I just apply AI to reduce bias?

The new era of AI-led automation is replacing humans’ subjective decision making and now evaluates large amounts of data that were not previously factored in methodically. The advantage of using machines is that judgments are based on holistic correlations over a statistically significant sample, leading to high consistency and better outcomes.

However, one needs to be aware of the following gotchas:

  • Without explicit external guidance, machines take every factor at face value. This means that while they aren’t biased mathematically, they could socially be relying on factors that we deem inappropriate (see the sketch after this list).
  • The current implementations of AI, in the form of machine learning / deep learning, behave much like black boxes. It is often difficult to dissect these learning systems and understand how their decisions are reached; their internal representations don’t always translate to real-world notions.
  • Most of these systems are GIGO: garbage in, garbage out. If incorrect, or more likely insufficient, data is fed in, the models can be wrongly trained or overtrained, and therefore poor at making sound judgments.
  • Training a model to reproduce the same outcomes humans would have chosen doesn’t remove bias; it presumes the original decisions were unbiased, and bakes any historical bias into the model.
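To make the first gotcha concrete, here is a minimal sketch on synthetic data. The protected attribute, the zip_code_score proxy, and all of the numbers are hypothetical; the point is that a model that never sees a protected attribute can still encode it through any correlated feature.

```python
# Minimal sketch (synthetic data): a model trained without a protected
# attribute can still encode it through a correlated proxy feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Hypothetical protected attribute (never shown to the model).
group = rng.integers(0, 2, n)

# "zip_code_score" stands in for any feature that happens to correlate
# with the protected group (neighborhood, school, hobby keywords, ...).
zip_code_score = group + rng.normal(0, 0.5, n)
skill = rng.normal(0, 1, n)

# Historical hiring labels that were partly driven by group membership,
# i.e., the biased decisions we would be training the model to reproduce.
hired = (skill + 0.8 * group + rng.normal(0, 0.5, n)) > 0.5

X = np.column_stack([skill, zip_code_score])  # no 'group' column at all
model = LogisticRegression().fit(X, hired)

# The model's scores still differ sharply by group, via the proxy.
scores = model.predict_proba(X)[:, 1]
print("mean score, group 0:", scores[group == 0].mean().round(3))
print("mean score, group 1:", scores[group == 1].mean().round(3))
```

Nothing in the feature matrix names the group, yet the score gap persists, which is exactly why “we don’t use that factor” is not, by itself, evidence of fairness.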


Clearly, saying that ‘machines or AI will solve all bias problems’ is not sufficient.

What’s out there that I should be cautious about?

Let’s look at examples of recruiting AI technology sold today for subjective decision making:

Personality/skill-based assessment systems: Many assessment systems profile your current high- and low-performing employees to determine the right qualities for a new hire. While the smarter of these systems don’t take a subject’s personal information into account, how does one make sure that the qualities that are taken into account aren’t directly correlated with protected or frowned-upon factors? And how do we know that hiring more of the same type of “high-performing” people, rather than the current mix, would actually improve company performance?
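One basic audit that applies here is the EEOC “four-fifths” rule: if a group’s selection rate falls below 80% of the highest group’s rate, adverse impact is suspected. A minimal sketch, with made-up pass counts:

```python
# Minimal sketch: the "four-fifths" adverse-impact check applied to an
# assessment's pass rates. The counts below are made up for illustration.
def selection_rate(passed: int, total: int) -> float:
    return passed / total

# Hypothetical outcomes of an automated assessment, broken out by group.
rates = {
    "group_a": selection_rate(passed=80, total=100),
    "group_b": selection_rate(passed=50, total=100),
}

highest = max(rates.values())
for group, rate in rates.items():
    impact_ratio = rate / highest
    flag = "OK" if impact_ratio >= 0.8 else "possible adverse impact"
    print(f"{group}: selection rate {rate:.2f}, ratio {impact_ratio:.2f} -> {flag}")
```

Passing this check doesn’t prove an assessment is fair, but failing it is a strong signal that the “qualities” being screened for are proxying something they shouldn’t.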

Automated resume screening for job matching: Because resumes don’t follow a standard template, they are often criticized as insufficient and inconsistent representations of candidate skills and qualities. There are also documented differences in how different genders describe their skills in resumes. Yet resume-parsing technology often doesn’t account for these factors when scoring candidates.
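A screening pipeline could at least measure this effect before scoring. Below is a minimal sketch that counts agentic versus communal wording in resume text; the word lists are illustrative stand-ins, not a validated lexicon.

```python
# Minimal sketch: profile gendered wording in resume text before scoring,
# so a parser can normalize for it or at least report it. The word lists
# here are illustrative only, not a validated lexicon.
import re

AGENTIC = {"led", "competitive", "decisive", "dominant", "driven"}
COMMUNAL = {"supported", "collaborated", "helped", "nurtured", "shared"}

def wording_profile(resume_text: str) -> dict:
    words = re.findall(r"[a-z]+", resume_text.lower())
    return {
        "agentic": sum(w in AGENTIC for w in words),
        "communal": sum(w in COMMUNAL for w in words),
    }

sample = "Led a competitive sales team; collaborated with marketing and supported onboarding."
print(wording_profile(sample))  # {'agentic': 2, 'communal': 2}
```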

Video assessment: There are video interviewing tools that automatically screen for certain qualities, such as how many times a candidate says ‘please’ or how often they smile. They also let hiring managers view the videos before inviting someone to interview. How do we make sure this doesn’t lead to unconscious screening for gender or race?

A vendor’s assurance that it isn’t directly using socially unacceptable factors doesn’t mean it isn’t perpetuating bias unknowingly. It all comes down to proving that bias isn’t being introduced, or better yet, is being actively discouraged.

A print-ready version of our tip-sheet on “Building AI to Unlearn Bias in Recruiting” is ready to download.