Artificial intelligence is an exciting new frontier that is becoming more readily accessible to the public. As governments grapple with the right approach to regulating AI, legal risks are already present for employers, including concerns around bias and discrimination, inaccurate output, privacy and intellectual property.
Now is the time for employers to consider enacting explicit policies regulating the use of artificial intelligence; HR professionals will be at the forefront of this effort.
Government oversight
There is an emerging patchwork of laws that impacts companies’ use of AI. Legislators in New York, Connecticut, Virginia, Colorado and Minnesota have announced a multi-state task force to create model AI legislation this fall. New York City, Illinois and Maryland already have enacted laws regulating employers’ use of artificial intelligence in the hiring process (and many other jurisdictions have legislation pending).
The Illinois Artificial Intelligence Video Interview Act governs the use of AI to assess job candidates in video interviews. Employers hiring for positions located within Illinois must:
- obtain consent from applicants before using AI in video interviews, after explaining how the AI works and its evaluation standards;
- delete recordings upon request; and
- if the employer relies solely on artificial intelligence to determine whether a candidate will advance for an in-person interview, report data to a state agency indicating how candidates in different demographic categories fare under the AI evaluation.
New York City recently began enforcing a new law regulating the use of AI in “employment decisions.” That law provides that, before employers or HR departments use an “automated employment decision tool” to assess or evaluate candidates within the city, they must:
- conduct a bias audit;
- publish a summary of the results of the bias audit either on or linked to their website, disclosing selection or scoring rates across gender and race/ethnicity categories (a sketch of that calculation appears below); and
- give advance notice to candidates about the use of that tool and provide the opportunity to request an “alternative selection process or accommodation.”
New York state has similar legislation pending.
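To make the bias-audit arithmetic concrete, the sketch below computes selection rates by demographic category and an "impact ratio" comparing each category to the highest-rate group. The candidate data and column names are hypothetical, and an actual audit under the New York City law must follow its implementing regulations; this is only a minimal illustration.

```python
# Minimal sketch of bias-audit arithmetic: selection rates and impact ratios.
# The candidate data and field names are hypothetical.
from collections import defaultdict

# Hypothetical screening outcomes: (demographic category, was the candidate selected?)
candidates = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

# Count selections and totals per category.
totals = defaultdict(int)
selections = defaultdict(int)
for category, selected in candidates:
    totals[category] += 1
    if selected:
        selections[category] += 1

# Selection rate = selected / total for each category.
rates = {c: selections[c] / totals[c] for c in totals}

# Impact ratio = each category's rate divided by the highest category's rate.
top_rate = max(rates.values())
for category, rate in sorted(rates.items()):
    print(f"{category}: selection rate {rate:.0%}, impact ratio {rate / top_rate:.2f}")
```

Under the EEOC's informal "four-fifths" guideline, an impact ratio below 0.8 is often treated as a signal of potential adverse impact warranting closer review.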
AI-related enforcement activity has also begun. Earlier this year, the EEOC issued a draft strategic enforcement plan that included AI-related employment discrimination on its list of priorities, highlighting the risk that AI can “intentionally exclude or adversely impact protected groups.” Making good on that priority, on Aug. 9, the EEOC settled its first lawsuit against an employer that allegedly used AI in a discriminatory way (i.e., to reject older job applicants).
AI and employees
As governments take steps to design regulatory frameworks, issues regarding the use of artificial intelligence in the workplace are proliferating, including:
Candidate assessment
AI products are evolving to make recruiting more efficient and effective. This includes identifying potential candidates who may not have applied for a particular job but are deemed to have the required skills and qualifications; screening large volumes of résumés by matching job requirements with candidate qualifications and experience; and using predictive analytics to analyze candidate data, including social media profiles, to predict which candidates are most likely to succeed in the role. This essentially “black box” assessment of candidates is fraught with peril, especially in light of new and pending legislation.
Predicting misconduct
Various tools on the market claim they can identify “hot spots” for potential misconduct, allowing management and HR to take action before a problem arises. By analyzing large volumes of information, including the “tone” of workplace communications and work schedules and volumes, these tools purport to pinpoint problematic areas for HR’s proactive engagement.
With limited, if any, visibility into the accuracy of these predictive assessments, employers should proceed with caution, particularly because predictive tools can raise significant concerns among employees related to privacy, fairness and discrimination.
Retaining talent
Companies are also leveraging AI in their efforts to retain top talent by using machine learning to predict whether an employee is likely to depart. Some AI programs claim they can identify why people stay and flag when someone is at risk of leaving by modeling the key factors that drive departures.
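Vendors rarely disclose how these predictions work, but a common underlying approach is a classifier trained on historical HR data. The sketch below, using scikit-learn with entirely hypothetical features (tenure and an engagement score), illustrates the general shape of such a model; commercial products are far more elaborate and raise the privacy and fairness concerns noted above.

```python
# Illustrative sketch only: a toy attrition classifier. The features and data
# are hypothetical; commercial tools use far richer (and more sensitive) inputs.
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: [years_of_tenure, engagement_score], departed?
X = [[0.5, 2.1], [1.0, 3.0], [6.0, 4.5], [8.0, 4.8], [0.8, 2.5], [5.5, 4.0]]
y = [1, 1, 0, 0, 1, 0]  # 1 = employee left, 0 = employee stayed

model = LogisticRegression().fit(X, y)

# Estimate departure risk for a current employee (hypothetical values).
risk = model.predict_proba([[1.2, 2.8]])[0][1]
print(f"Estimated departure risk: {risk:.0%}")
```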
Reliability of generative AI
Generative AI processes extremely large sets of information to produce new content, and can do so in a variety of formats (e.g., images, written text, audio). Early reports of high-quality output from generative AI tools created a boom in their use to support work in a variety of industries. However, recent examples have shown that the reliability of AI output, particularly when the tool is asked to analyze fact sets or perform basic computations, fluctuates significantly and is far from assured.
For example, researchers found that in March 2023, one popular generative AI tool could correctly answer relatively straightforward math questions 98% of the time. When asked the same questions in July 2023, it answered correctly only 2% of the time.
Likewise, law firm attorneys were recently sanctioned for using AI to draft a brief after it was discovered that the AI tool had fabricated legal authority, including judicial opinions that did not exist. As AI proliferates, employees will likely use it more frequently and in a growing number of ways in the course of their work, and that usage may not be apparent to—or sanctioned by—the employer.
Privacy concerns
The rapid increase in generative AI tools has coincided with the tremendous expansion of U.S. state privacy laws. HR professionals must be aware—and make their stakeholders aware—of the inherent risks associated with disclosing data about their workforce to artificial intelligence tools. Specifically, consider the following:
- What are the risks associated with the disclosure of personal data to AI tools? By inputting personal data into an AI tool, an employer may lose control of the data and find that it has been made publicly available as the result of a data breach. Employee data is often highly sensitive, and the repercussions of inadvertent disclosure can be great. To mitigate this risk, data can be de-identified before it is submitted to an AI tool (a simplified sketch follows this list), but companies must be careful to adhere to the standards for what constitutes “de-identified” data under applicable law. Companies must also review the terms and conditions and privacy policy of AI tools before using them to understand how inputted data will be used and what rights the company retains once data is submitted.
- Is the company still able to comply with requests to exercise data rights as required by applicable law if data is inputted into an artificial intelligence tool? Depending on where an employee resides (e.g., California), the employee may have rights to access, correct, delete or prevent the processing of their personal data. If that personal data has been submitted to an AI tool, deleting or limiting the use of the personal data may be problematic.
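As a simplified illustration of the de-identification step mentioned above, the sketch below drops direct identifiers and replaces a stable employee ID with a salted one-way hash before a record would leave the company. All field names and values are hypothetical.

```python
# Simplified de-identification sketch: field names are hypothetical, and
# meeting a legal definition of "de-identified" requires more than this.
import hashlib

DIRECT_IDENTIFIERS = {"name", "email", "phone", "home_address"}
SALT = b"rotate-and-store-this-secret-separately"  # placeholder value

def deidentify(record: dict) -> dict:
    """Drop direct identifiers and pseudonymize the employee ID."""
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    # Replace the stable ID with a salted one-way hash (pseudonymization).
    raw_id = str(cleaned.pop("employee_id", "")).encode()
    cleaned["pseudo_id"] = hashlib.sha256(SALT + raw_id).hexdigest()[:16]
    return cleaned

record = {
    "employee_id": 10432, "name": "Jane Doe", "email": "jane@example.com",
    "department": "Finance", "tenure_years": 4.2,
}
print(deidentify(record))
```

Note that salted hashing is pseudonymization, which many privacy laws still treat as personal data; meeting a statutory definition of “de-identified” typically requires additional technical and administrative safeguards, so legal review is essential.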
Other concerns
Artificial intelligence raises various other issues that HR professionals may be called upon to address, including ownership of, and intellectual property protection for, AI-generated work product. There also is a strong potential for copyright infringement, given that AI companies have not, to date, sought permission from copyright owners to use their works as part of the large data sets ingested by AI tools. These issues should be considered in crafting AI policies in conjunction with relevant company stakeholders.
Recommendations for HR professionals
It is not a question of whether employers will need to address AI in the workplace. Rather, it is an issue of when and how they should address it. Given the rapid proliferation of AI to date, and the ever-increasing governmental regulation, the time is now.
HR professionals must be nimble, closely following regulatory developments to ensure that their policies remain up to date in this fast-changing AI landscape. In the short term, HR professionals should take the following steps:
1. Become familiar with what artificial intelligence is generally, what AI the company is already using and what AI it may be using in the near future.
2. Assemble the right group of stakeholders to discuss appropriate policies governing the use of AI at work. Who needs to be at the table—chief technology officer, business leaders, chief people officer, others?
3. Consider what uses of AI are appropriate for your workplace and, equally as important, what uses are not appropriate.
4. Incorporate legal compliance considerations when designing your policy, including: ensuring that AI is not used in a way that could adversely impact any group based on protected characteristics; considering a bias audit to ensure that AI is being implemented appropriately; providing appropriate notice to candidates and/or employees concerning the company’s use of AI and obtaining consent as may be required under applicable law; and ensuring that the use of AI does not conflict with any statutory or contractual right to privacy held by candidates, employees or consultants.
5. Develop and implement a policy for employees governing the use of AI in the workplace, specifying which AI tools may be used and what information is permitted to be submitted to such AI tools. Consider offering training to employees on appropriate uses of AI to ensure a clear understanding across your workforce.
6. If applicable, develop and implement a similar policy for how vendors and/or independent contractors may use AI in the work they perform for your organization. Additionally, consider whether vendor agreements need to be updated to control whether and how vendors are permitted to use your data in AI applications.
7. Understand how data is being gathered and used. What is AI collecting, and how is it assimilating and using data at an organizational level and at a personal level? Even if data is later deleted, it may already have been incorporated into the tool’s calibration and may influence future analyses. Is that something the company is comfortable with?
8. Assign responsibility for all aspects of the use of AI within your organization so that roles are clearly understood and accountability exists.
Artificial intelligence offers exciting new opportunities, but it also comes with risks and a degree of uncertainty. By gaining an understanding of the uses of AI within the organization, the way it functions and the end results, HR professionals can assist companies in effectively utilizing this tool while minimizing legal risk.