Subjective insights: The No. 1 red flag to look for when building your recruiting tech stack

Human decision-making for recruitment and talent management has one fatal flaw that must never be perpetuated in AI systems: subjectivity.

HR teams report that 85% to 97% of hiring decisions are based on gut instinct rather than data-driven analysis, and long-standing research suggests that algorithms outperform human decisions by at least 25%. Many employers and staffing agencies are taking steps to address unconscious bias in recruitment, but how do you determine whether an AI solution you’re considering for your tech stack helps ensure a fair hiring process?

First, recognize that any AI that provides subjective insights on candidates is a huge red flag, because it can steer users toward subjective conclusions and biased decisions.

How to Spot Subjective Insights 

Subjective insights stem from personal opinions, emotions or biases rather than objective facts. Within recruiting applications, subjective data can manifest in several forms, including: 

  • Summaries of candidate profiles
  • Summaries of interviews or conversations
  • Debriefs between hiring managers/interviewers 
  • Employee performance assessments 

Subjective opinions on candidate attributes such as personality traits, attitudes, appearance or demeanor vary widely among individuals. What one interviewer perceives as a “calm demeanor,” another might interpret as “nervousness.” This is why, for decades, recruiters and hiring managers have justified rejecting otherwise qualified candidates who simply lacked the elusive “something special” they were looking for.

The critical issue with subjective information is its lack of factual basis, which makes it unsuitable for data-driven personnel decisions. Treating this kind of information as “data” only introduces more potential for bias. Those biases originate in human judgment, but it is imperative not to perpetuate them in AI systems.

Below are several terms, commonly used by both AI and human decision-makers, that carry inherently subjective connotations:

  • Calm 
  • Well-spoken 
  • Confident 
  • Nervous 
  • Negative attitude 
  • Energetic 
  • Disengaged

If your AI or conversational analytics solutions use terms like these to summarize candidates or employee performance, consider it a red flag.
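
One practical way to check is to scan exported summaries for these descriptors before they reach decision-makers. Below is a minimal sketch in Python, assuming your tool can export summaries as plain text; the term list and function name here are illustrative, not part of any specific product.

```python
import re

# Illustrative (not exhaustive) list of subjective descriptors that
# should raise a red flag in AI-generated candidate summaries.
SUBJECTIVE_TERMS = [
    "calm", "well-spoken", "confident", "nervous",
    "negative attitude", "energetic", "disengaged",
]

def flag_subjective_terms(summary: str) -> list[str]:
    """Return any subjective descriptors found in a summary."""
    found = []
    for term in SUBJECTIVE_TERMS:
        # Word-boundary match so "calm" doesn't flag unrelated words.
        if re.search(rf"\b{re.escape(term)}\b", summary, re.IGNORECASE):
            found.append(term)
    return found

# Hypothetical summary exported from an interview intelligence tool.
summary = "Candidate was well-spoken and confident, with 5 years in sales."
print(flag_subjective_terms(summary))  # ['well-spoken', 'confident']
```

Even a simple audit like this, run across a sample of summaries, can reveal whether a vendor’s output leans on subjective language.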

Verify: Does Your AI Solution Eliminate Subjective Information?

Any AI solution considered for hiring or talent management should explicitly filter out subjective information (including hallucinations that may produce it), so decision-makers receive only objective, fact-based data for more equitable and accurate assessments. This approach promotes a fair evaluation process while mitigating the risk of perpetuating human biases in AI-driven decision-making.

For example, here’s a breakdown of how an interview intelligence solution should filter data into objective insights, with a simplified code sketch of that filtering after the lists below.

Objective data provided in a candidate summary

  • Experience and accomplishments
  • Skills and certifications
  • Previous job titles and roles
  • Career goals and aspirations

Subjective data that should not be used

  • Personal information, such as the candidate’s age, race, gender, ability, religion or orientation
  • Casual topics, such as conversations about the weather, sports or family life
  • Attitudinal descriptors, such as personality, enthusiasm or demeanor
  • Physical descriptors related to tone of voice, accent or appearance
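
To make that breakdown concrete, here is a minimal sketch of how such a filter might categorize extracted insights, assuming each insight carries a simple category label. The category names and data structure are illustrative assumptions, not any vendor’s actual schema.

```python
from dataclasses import dataclass

# Illustrative category labels; a real product's schema will differ.
OBJECTIVE_CATEGORIES = {"experience", "skills", "job_titles", "career_goals"}

@dataclass
class Insight:
    category: str  # e.g. "skills" or "attitudinal"
    text: str      # the extracted statement

def filter_objective(insights: list[Insight]) -> list[Insight]:
    """Keep only fact-based insights; drop anything subjective."""
    return [i for i in insights if i.category in OBJECTIVE_CATEGORIES]

insights = [
    Insight("skills", "Holds a PMP certification"),
    Insight("attitudinal", "Seemed nervous discussing employment gaps"),
    Insight("career_goals", "Wants to move into team leadership"),
]
for insight in filter_objective(insights):
    print(f"{insight.category}: {insight.text}")
# skills: Holds a PMP certification
# career_goals: Wants to move into team leadership
```

The key design point is an allowlist: only explicitly objective categories pass through, so anything unclassified or subjective is dropped by default.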

AI holds the promise of enhancing processes and mitigating biases. With careful use and human supervision, AI recruitment solutions can pinpoint and reduce systemic biases rooted in subjective insights across your organization. But while subjectivity is a major concern, it’s not the only aspect to scrutinize when assessing AI solutions. Keep an eye out for the next installment in this three-part series, where I’ll unpack provider claims regarding privacy and bias.