From FOMO to FOMU: Turn caution about AI for HR into confidence


A year ago, generative AI felt like the golden ticket to business transformation. Companies rushed to build or integrate large language models (LLMs), driven by promises of exponential benefits and competitive advantage. But today, the narrative has shifted. What started as a “fear of missing out” (FOMO) on AI has transformed into a “fear of messing up” (FOMU) as organizations and vendors alike tread cautiously to avoid costly mistakes.


Technology in search of a value proposition

When generative AI burst onto the scene, its potential was thrilling. However, many businesses thought they could outperform OpenAI and similar companies by developing their own LLMs. What they quickly discovered was that the cost of building, training and maintaining these models was staggering. Even organizations with deep expertise found themselves overwhelmed.

The initial and ongoing financial burden, coupled with the reality of high hallucination rates, made these models impractical for enterprise applications. It wasn’t just a question of “Can we do this?” but “Can we afford to?”—and the answer, for most, was no.

The technology explainability challenge

Realizing the limits of their resources, companies turned to tech vendors for solutions like guardrails, frameworks, prompt engineering and embedded AI. These tools aimed to ground AI in business data, reducing hallucinations and improving output reliability.

Yet these solutions were far from perfect. Vendors themselves struggled to deliver enterprise-ready AI, and many deployed products prematurely, leading to data leaks, improper training practices and inconsistent results. Even worse, vendors were often unclear about, or unwilling to explain, how their models and training data were sourced and managed. Meanwhile, concerns about bias, particularly when applying AI to HR decisions like candidate scoring and performance reviews, made that lack of explainability even more troubling.

Fear is rooted in a lack of trust

Early, highly publicized errors and failures with AI significantly eroded trust. Today, while some organizations have pushed ahead with AI initiatives in HR, those efforts have been limited and kept on a “human in the loop” basis—and legal and compliance teams are slowing generative AI rollouts, fearing lawsuits, reputational damage, and employee or customer backlash. Until AI solutions are truly “people-ready,” adoption will remain cautious and incremental.

Transparency around data usage, secure practices and realistic expectations is essential. Vendors need to admit that AI is complex and imperfect—and that they’re learning, too. Truthfulness, alongside robust training and compliance measures, will help restore confidence.

HR leaders have an opportunity to lead

Rather than being paralyzed by FOMU, HR leaders should take advantage of their history as stewards of sensitive data. Partner with IT, legal and compliance to define policies on how AI can be used, how it will be validated and what it will mean for employee training and reskilling.


They should not accept “we’re working on it” as a response from IT, or “it’s too complicated for you to understand” from their vendors. They should not be afraid to ask questions and to persevere until they get answers they can understand and validate—or to find another vendor who can answer satisfactorily.

HR vendors need to step up

If HR vendors really want to lead their customers into the AI age, they need to engage with HR leaders to answer the hard questions. They need to leave the complicated technical definitions and PowerPoint graphics behind and come up with clear and understandable explanations of how their AI is differentiated, how customer data is protected and how their models and data sets drive high levels of accuracy and limit risk and bias.

They also need to create low-risk environments for experimentation; templates and blueprints for what good HR AI policies and practices look like; and prescriptive guidance on how to identify high-value use cases that also mitigate risk. They need to meet customers where they are and be collaborative, not condescending, on the journey to AI.

Generative AI is not a panacea for HR’s woes. Deployed properly, however, it can deliver not just incremental but exponential results: elevating all managers, taking data-driven, personalized employee journeys to new levels and relieving HR of much of the busywork and box-checking that keeps it from delivering on strategic objectives. We’re optimistic about the promise of AI in 2025 for leveling up the policies and practices for managing one of an organization’s most valuable assets: its people.

A little bit of FOMU is healthy. HR leaders who proceed bravely but with caution—and who work with vendors invested in their success as partners, not just customers—can reap the benefits of AI while mitigating the risks that fuel FOMU.

The post From FOMO to FOMU: Turn caution about AI for HR into confidence appeared first on HR Executive.