As business leaders contemplate the possibilities of innovating with artificial intelligence while mitigating its risks, governing representatives worldwide came together to discuss the current state of AI regulation and its impact on the HR industry. This group influences guidelines that touch millions of people, and human resource leaders are a key link in the chain.
Top global regulators at HR Technology Europe
As part of an ongoing effort to reach as many stakeholders as possible, Keith Sonderling, Commissioner of the United States Equal Employment Opportunity Commission (EEOC), moderated a keynote panel at HR Technology Europe, held in Amsterdam earlier this month.
He was joined by Emily Campbell-Ratcliffe, head of AI assurance at the U.K.’s Department for Science, Innovation and Technology (DSIT); Tobias Muellensiefen, team leader of the Future of Work unit at the European Commission’s Directorate-General for Employment, Social Affairs and Inclusion; and Irakli Beridze, head of the Centre for Artificial Intelligence and Robotics at the United Nations Interregional Crime and Justice Research Institute (UNICRI).
Sonderling acknowledged that HR leaders are constantly pressured to look beyond trends and stay prepared to do their jobs. “How do we prevent the next big disaster when it comes to employment discrimination and employment issues?” pondered the Commissioner. “And as you all know, everything is going to technology, and this conference is proof positive of that.”
Sonderling said regulatory uncertainty has stymied some business leaders who feel they are not yet prepared to implement AI-based tools. Depending on where the organization is located, some governments have not yet approached the task of regulating the application of new technology. HR leaders are left wondering what measures regulators will take to scrutinize its use and how they should respond.
However, several key governments and institutions have rolled out guidance, including the United States, the European Union, the United Kingdom and the United Nations. Here’s what the audience learned about these efforts.
The comprehensive European Union AI Act
The discussion opened with a review of the EU’s recently passed AI Act, the world’s first comprehensive regulation of artificial intelligence. Muellensiefen explained that the Act aims to position the EU as a hub for ethical, trustworthy and human-centric AI while supporting innovation.
The Act is structured around a risk-based approach, categorizing AI systems into four levels according to the risk they pose to health, safety and fundamental rights. Muellensiefen referred to it as a “product safety regulation” rather than a user regulation. The framework stratifies AI systems into the following risk levels:
EU AI Act risk levels
- Minimal: This is the lowest category with no or very little risk. Many AI systems fall into this category, and these are not regulated.
- Limited: Systems that lack transparency will likely fall into this category. The AI Act aims to ensure that people are aware when they are dealing with artificial intelligence; where that awareness is not guaranteed, the system is classed as limited risk.
- High: This category includes HR-related AI systems, like those used for recruitment and management. AI systems must be properly documented, and the data used must be of high quality. Activity logging is required, and these systems must be designed with human oversight.
- Unacceptable: This category identifies AI systems that will be prohibited because they are considered unethical. Examples include social scoring and emotion recognition in the workplace, unless the latter is used for medical or safety purposes.
The United Kingdom’s approach to AI regulation
Campbell-Ratcliffe discussed the U.K.’s approach to AI risk management. She emphasized the importance of “justified trust,” where organizations can measure and evaluate their AI systems to demonstrate their trustworthiness, both internally and to regulators.
“We’re very conscious about making sure there’s not lots of different rules and lots of different sectors for HR professionals,” said Campbell-Ratcliffe. “So, we see kind of this self-regulation as eventually underpinning a lot of the actual regulation that will come in.”
The U.K. has brought together government and industry to develop new techniques for addressing issues like bias and fairness. “We take a very context-specific approach,” said Campbell-Ratcliffe. “We have five cross-sectoral principles that apply to every sector, all systems, but it is up to your sectoral regulator on how that is applied in your area.” She noted that her department published its white paper on AI regulation in March 2023, followed by consultation with industry and government stakeholders.
Campbell-Ratcliffe said that vendors in the HR tech space have quite a big responsibility and that they need to be able to provide evidence to support their sales claims. She told the audience that there is an onus on vendors to explain how their systems work, what limitations they have and what risks they pose.
She said product developers need to ensure their system works in a test environment and also with the actual data of the customer organization. Campbell-Ratcliffe confirmed that after a system is sold and implemented, vendors should continue testing with the client’s data to check that the platform still meets the performance bar they claimed it would.
Campbell-Ratcliffe said the overall approach to regulating artificial intelligence in the U.K. hasn’t changed dramatically with the rise of generative AI, but the pace of her department’s work has accelerated. She said the rapid advancement of generative AI has simply made her work more urgent: DSIT is moving faster to develop guidance that supports organizations in adopting generative AI ethically and responsibly, with minimal disruption.
Agencies lead AI regulation efforts in the United States
Overall, the United States has opted for a decentralized approach, utilizing existing resources within individual agencies rather than implementing sweeping regulations. However, said Sonderling, there’s a growing focus on AI regulation at the state and city levels.
Sonderling explained that throughout the U.S. federal government, each agency is responsible for its unique, specialized knowledge and set of enforceable laws as they relate to outcomes initiated by artificial intelligence.
For instance, the EEOC is responsible for AI as it impacts HR, and Sonderling said his agency has been proactive in collaborating with all stakeholders. Companies intending to use or purchase AI systems for workforce-related tasks need to understand their legal responsibilities because they will be held accountable for the tools they employ. He reminded the audience that, despite any new technology, employers are obligated to remain in compliance with long-standing civil rights laws, such as the right to work without discrimination.
The EEOC focuses on the results produced by AI tools rather than getting “bogged down” in the intricacies of the technology itself, according to Sonderling. He explained that HR’s emphasis is on making informed decisions, whether related to hiring, firing or promotions, and that the decisions themselves are the target of regulation, not the technology.
The Commissioner reminded the audience that, regardless of whether a human or AI administers decision-making steps, it falls upon the employer to take responsibility. “At the end of the day, the employer is the only one in the United States that can make that employment decision,” said Sonderling.
A collaborative approach to AI regulation by the United Nations
Irakli Beridze provided the U.N.’s perspective, noting that its Centre for Artificial Intelligence and Robotics has been working on AI governance for over a decade, creating instruments for 193 countries as well as implementing AI projects to address concrete problems.
“At the moment, for the U.N., AI probably is one of the most important topics,” said Beridze. He said U.N. Secretary-General António Guterres identifies artificial intelligence as one of the top priorities of the U.N. because it is a “game changer.”
He highlighted the importance of governments collaborating with all stakeholders—including HR professionals—to get AI implementation right. “International policymaking [for 193 countries] is a very complex issue, and this is not going to happen overnight,” said Beridze. “And then you need to catch up with the regulation side as well to make everybody equally happy, or—as we call it in the U.N.—make everybody equally unhappy.”
Beridze said the U.N. is working with private sector companies that are developing products that bring AI-related innovations to market. “That was a deliberate decision taken,” he said. “And I really stand by that the U.N. should not be creating the tools; we should not be competing with the private sector.”
About half of the world’s countries are investing heavily in AI strategies that address employment and HR issues, according to Beridze. This includes looking at both the opportunities and threats posed by AI and automation, and how to deal with potential job displacement.
However, he warned the audience that the other half of the world’s countries do not have any such strategies in place. This means around 4 billion people live in countries lacking a plan to manage the impacts of AI and automation.
This could create a “very unequal world and very dangerous from the security perspective as well,” said Beridze. He invited the audience to imagine civil wars and the mass displacement of millions of refugees disrupting the political situation in many countries, with billions of people then also affected by technology-driven unemployment or other AI-related disruptions. He said the scale of the problem could be overwhelming.
This is why governments need to be proactive and adopt comprehensive strategies to mitigate the escalating risks of AI and automation, warned Beridze. He proposed that leaders apply specialized approaches to AI-related challenges, including investing more in education and workforce retraining to keep their citizens’ skills relevant. Without such forward-thinking strategies in place globally, said Beridze, the world risks becoming a much more unequal and unstable place.
AI in HR is no longer a question of ‘if’ but ‘how’
The panelists agreed that the use of AI in HR is no longer a question of “if” but “how.” Sonderling emphasized the need for HR leaders to proactively engage with regulators and leverage the tools and guidance being developed to implement AI safely and ethically.
The Commissioner noted that the presence of global regulators together on one panel shows a strong willingness among the world’s leaders to work together on AI guidance. He commented that global collaboration is useful to developers, tech buyers and HR practitioners, arming them with background crucial for AI-related decision-making. “I hope this is just the beginning of the conversation,” said Sonderling. “Because none of us are in the room when you’re figuring out which HR technology to invest in, to develop or to buy.”
The post AI regulation: Where the U.N. and other global leaders stand appeared first on HR Executive.