Recruiters are no strangers to cautionary tales about the hazards of weaving AI into the recruitment toolbox, particularly around privacy and bias. Yet in a sea of AI providers echoing similar assurances about their offerings, the real challenge lies in sifting through the noise to find solutions that uphold industry standards and legal requirements for safeguarding candidate privacy and combating bias.
My advice: Forget about what’s on the provider’s website and ads.
Each organization’s requirements are unique, and the intricate web of AI regulations varies depending on your geographical location. Therefore, it’s solely up to you and your IT team to determine whether an AI solution aligns with your privacy and bias criteria, employing two crucial methods: thoroughly vetting the provider and conducting a pilot of the solution.
Unpacking Privacy and Bias Considerations and Claims
Concerns about the privacy, security and potential bias of AI solutions are top of mind among recruiters and company leaders, and justifiably so. As regulations like the EU AI Act become more widespread, consequences for non-compliance related to privacy and bias will become more common, as will user trepidation.
To meet rising caution surrounding AI solutions for hiring and talent management, providers are boasting more about the privacy and bias protections of their solutions. But the language around “privacy” and “bias” among AI providers may be as empty as words like “all natural” and “free range” among food brands. To avoid violating candidate trust or incurring legal risk, don’t take provider claims about privacy and bias at face value.
Essential Questions for Vetting Providers
To confirm the AI provider is actually taking the necessary steps to protect privacy and mitigate bias, explore the following topics directly with them. If they’re unable to respond with a clear explanation of how they address these areas, it’s a red flag.
- Was the solution purposefully designed to understand the nuances of hiring and talent management, taking into account the challenges of providing objective insights, avoiding bias and safeguarding data?
- How does the solution handle sensitive personal information such as Social Security numbers, and does it capture or store any biometrics?
- Has the vendor obtained or are they in the process of obtaining any relevant certifications indicating adherence to security protocols such as Service Organization Control (SOC) compliance or responsible AI standards such as ISO/IEC 42001?
- What data handling and encryption processes are in place for storing, transferring, accessing and deleting data?
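To make the data-handling question concrete, here is a minimal sketch (in Python, with hypothetical function names and a deliberately simplified pattern — not any provider’s actual implementation) of one practice worth asking about: redacting Social Security numbers before candidate text ever reaches an AI model.

```python
import re

# Matches common SSN formats: 123-45-6789, 123 45 6789, 123456789.
# A production system would cover more PII types and edge cases.
SSN_PATTERN = re.compile(r"\b\d{3}[- ]?\d{2}[- ]?\d{4}\b")

def redact_ssn(text: str) -> str:
    """Replace anything that looks like an SSN with a placeholder
    so the sensitive identifier is never stored or transmitted."""
    return SSN_PATTERN.sub("[REDACTED-SSN]", text)

note = "Candidate SSN 123-45-6789 confirmed for onboarding."
print(redact_ssn(note))  # Candidate SSN [REDACTED-SSN] confirmed for onboarding.
```

A provider with mature data handling should be able to describe where in their pipeline this kind of redaction (or stronger measures such as encryption and access controls) happens.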
There are no one-size-fits-all answers to these questions, but engaging in dialogue with the provider and requesting transparent insights into their processes can provide further clarity and confidence in your decision.
Piloting the Solution
Another way to vet potential AI providers is to conduct a pilot assessing the AI solution’s adherence to privacy regulations and its effectiveness at mitigating bias. Clearly outline the goals of the pilot, then execute it with a subset of users or in a controlled environment. Monitor the solution’s performance in real-world scenarios and track any instances of privacy breaches or biased outcomes.
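One concrete way to track biased outcomes during such a pilot — sketched here in Python with made-up pilot numbers — is to compare selection rates across candidate groups using the widely cited four-fifths rule of thumb from US employment-selection guidelines:

```python
def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (candidates advanced, total candidates)."""
    return {group: advanced / total
            for group, (advanced, total) in outcomes.items()}

def adverse_impact_ratio(outcomes: dict[str, tuple[int, int]]) -> float:
    """Ratio of the lowest group selection rate to the highest.
    Under the four-fifths rule of thumb, a ratio below 0.8 flags
    potential adverse impact worth investigating further."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Hypothetical pilot data: group -> (advanced by the AI screen, total screened)
pilot = {"group_a": (30, 100), "group_b": (18, 100)}
print(f"Adverse impact ratio: {adverse_impact_ratio(pilot):.2f}")  # 0.60 -> below 0.8
```

A single ratio is not a legal determination, but tracking it across pilot cohorts gives your IT team and legal counsel a concrete number to discuss with the provider.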
Work with your IT department and technology officers to evaluate each AI solution you’re considering for your recruiting tech stack and determine if their technology lives up to their privacy and security claims. The art of determining the ideal AI solutions for your organization remains firmly in human hands, sans any artificial intelligence assistance — at least for now.
In the next part of this series, I will describe friction and unrealistic expectations. Click here to read part one of this series on subjective insights.