There has been a lot written about the emergence of ChatGPT and the impact it will have on everything from college term papers to journalism to HR. It’s a fascinating topic that fills me with both amazement at the technology and a sense of dread about what it means for the human condition.
The reality is that AI is ubiquitous, and it’s attracting enormous investment. Microsoft recently invested heavily in OpenAI (the company behind ChatGPT), and the global AI market is projected to reach a value of $1.84 trillion by 2030. Meanwhile, Forbes reports that the number of active AI start-ups has increased 14-fold since 2000, and 72% of executives in a recent PwC survey believe that AI will be the most significant business advantage of the future. It’s reminiscent of the early days of the dot-com era, when everyone scrambled both to profit from the explosion in technical capabilities and to avoid being left behind as competitors adopted new ways of working.
As someone who works with organizations to transform their businesses, I see the tremendous opportunities associated with AI. In digital transformations, AI can automate much of the transactional work needed to migrate historical data and test configurations. It can drive efficiency in business processes through robotic process automation (RPA), filling gaps where delivered functionality falls short. And it can power Tier 0 support in shared services through chatbots that answer employees’ basic questions.
Unfortunately, in the rush to exploit the benefits of new technology, there is an inherent danger in moving so quickly that you can’t assess risk. And because the law lags rapidly emerging technologies, regulation is not equipped to address the challenges of AI before it does unintended harm, even though 81% of tech leaders would like to see more regulation, according to a recent DataRobot report. That regulatory delay means it’s up to business leaders to act responsibly.
Channeling my inner Dr. Ian Malcolm, I want to highlight some of the areas where businesses should be prepared to address the impact of AI.
Hacking/phishing
Cybersecurity remains a key concern for businesses, as the rise in ransomware attacks and data theft has made safeguarding infrastructure top of mind. Ironically, AI has the potential to boost cyber-protection by analyzing the patterns of attacks and separating real threats from the noise. Unfortunately, the same sophistication that allows AI to defend against attacks also makes it more effective at infiltrating whatever protection tools an organization has in place.
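To make that pattern analysis concrete, here is a minimal sketch using scikit-learn’s IsolationForest to flag anomalous network events. The features, numbers and thresholds are illustrative assumptions, not a production design.

```python
# Toy anomaly detection: separate unusual network events from normal noise.
# Feature columns and values are hypothetical; real systems use richer signals.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Columns: requests per minute, failed logins, MB transferred.
normal_traffic = rng.normal(loc=[20.0, 1.0, 5.0], scale=[5.0, 1.0, 2.0], size=(500, 3))
new_events = np.array([
    [200.0, 30.0, 80.0],  # burst of activity with many failed logins
    [22.0, 0.0, 4.0],     # looks like ordinary traffic
])

# Train on "normal" history; contamination is the expected share of anomalies.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_traffic)

# predict() returns -1 for anomalies and 1 for inliers.
for event, label in zip(new_events, model.predict(new_events)):
    print("ALERT" if label == -1 else "ok", event)
```

The same mechanics cut both ways: an attacker’s model can learn what a defender’s filters treat as “normal” and shape its traffic accordingly.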
In addition to rapidly adapting to any blockers, AI has found success in bypassing security altogether by reaching out directly to human beings via email. Research has shown that AI-generated phishing emails have higher open rates than the old-fashioned kind. I have seen this evolution up close and can confirm that these emails look 100% authentic. Gone are the days of emails with subjects like “I wAnt sH@re This wiTh yoou”; AI-driven phishing emails can perfectly spoof the subject, content and sender information. Even Amazon has seen an increase in these threats, as AI has gotten better at mining personal information shared online to craft incredibly sophisticated, personalized messages that entice a reader to open malware.
To help combat this, leaders need a zone defense: Ensure your cybersecurity tools are updated, and take the time to educate your employees about how to recognize sophisticated phishing attacks. That means double-checking with the supposed sender that they intended to share a file, scrutinizing the content for any mistakes and generally being more cautious. While this may slow the pace of business in the short term, it will save hours of work and reputational repair in the long term.
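For illustration only, the sketch below shows the kind of header scrutiny that advice implies: it parses an email and flags a classic spoofing tell, a display name whose claimed brand doesn’t match the sending domain. The sample message and keyword list are hypothetical, and real defenses rest on authentication standards such as SPF, DKIM and DMARC plus vendor tooling, not homegrown scripts.

```python
# Toy phishing heuristics: header parsing only, not a substitute for
# real email authentication (SPF/DKIM/DMARC) or commercial tooling.
from email.parser import Parser
from email.utils import parseaddr

RAW = """\
From: "Amazon Support" <security@amaz0n-billing.example>
Subject: Urgent: verify your account within 24 hours

Click the link below to keep your account active.
"""

URGENT_WORDS = {"urgent", "verify", "immediately", "suspended"}  # assumed list

msg = Parser().parsestr(RAW)
display_name, address = parseaddr(msg["From"])
domain = address.rsplit("@", 1)[-1].lower()

flags = []
# The display name claims a brand the sending domain doesn't actually match.
if "amazon" in display_name.lower() and "amazon" not in domain:
    flags.append(f"display name '{display_name}' vs domain '{domain}'")
# Urgency language is a classic social-engineering cue.
if any(word in msg["Subject"].lower() for word in URGENT_WORDS):
    flags.append("urgent language in subject")

print("Suspicious:" if flags else "No obvious flags.", flags)
```

Even a toy check like this makes the training point: the “from” line can claim anything, so the underlying address deserves a second look.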
Bias
In the early days, AI was touted as the solution to help break bias in HR technology, particularly in the hiring process. The hope was that removing humans from the equation in the screening process would lead to a more equitable consideration of candidates based on merit, not emotion. Stripping away everything except relevant factors would magically lead to a more diverse workforce.
Sadly, that prediction was sorely misguided. In fact, investigations into the impact of AI have found a negative effect on hiring equity. Amazon scrapped its screening tool early on after finding that the AI actively discriminated against female candidates, and a new law just went into effect in New York City that penalizes organizations whose hiring processes show AI bias. HR is not the only field facing scrutiny; healthcare is under fire for similar bias. For instance, an algorithm designed to identify high-risk patients who should be offered more care was found to discriminate against poorer patients because one of its key factors was total healthcare spending, which ignores the fact that poorer patients often put off treatment they cannot afford.
The reality is that AI is still programmed by people, and people are biased. That doesn’t mean AI can’t add value to your hiring or business processes; it just means you need to recognize the potential risk and take steps to mitigate it. Use AI to automate administrative tasks, regularly audit your outcomes for potential bias and continue to train employees on the importance of recognizing bias.
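One common starting point for that kind of audit is the four-fifths rule used in U.S. adverse-impact analysis: if any group’s selection rate falls below 80% of the highest group’s rate, the outcome warrants a closer look. The sketch below computes that ratio from screening counts; the group names and numbers are made up for illustration.

```python
# Adverse-impact check using the four-fifths (80%) rule.
# Counts below are hypothetical; substitute your own screening outcomes.
from collections import namedtuple

Group = namedtuple("Group", ["name", "applicants", "advanced"])

groups = [
    Group("group_a", 400, 120),  # 30% advanced to interview
    Group("group_b", 300, 60),   # 20% advanced to interview
]

rates = {g.name: g.advanced / g.applicants for g in groups}
best_rate = max(rates.values())

for name, rate in rates.items():
    ratio = rate / best_rate
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{name}: selection rate {rate:.0%}, impact ratio {ratio:.2f} [{flag}]")
```

A flagged ratio isn’t proof of discrimination, but it tells you where to dig into the screening logic before a regulator does.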
Creativity and analysis
Human beings are complex, emotional creatures. Our awareness of our own mortality has driven us to the arts and philosophy. We are the species of Plato and Socrates. We think, therefore we are. We find beauty in words and music, and patterns in numbers and behaviors. The abilities to create, to grow and to analyze are what separate us from other life on our planet.
Yet, every time we invent something new, we tend to give it power over us. Electricity was a wonderment that allowed society to grow by automating our housework and making it safer to travel at night. But have you ever noticed how quiet it is when the power goes out? Or how much we connect with our loved ones when all we can do is sit and play card games by candlelight? The pattern has continued with telephones, the internet and computers: inventions designed to make our lives easier may actually make them less rich. AI will no doubt continue this cycle unless we strive to break it.
AI can generate artwork, creating images both sublime and ridiculous. ChatGPT is a marvel; the content it generates is remarkably coherent. I’ve seen it write interview questions that would rival those from the best recruiters and compose a fairly eloquent argument about how it will destabilize work in the future. BuzzFeed has fully embraced AI as a content-generation tool, announcing that it will make OpenAI’s technology a core component of its site, news that sent its stock up 150%. As people continue to feed it information, it gets better and better, learning from what it consumes and inching ever closer to mimicking human beings.
I can’t help but think, is that what we really want?
AI is a superior curator of information. It can quickly and effectively scrape data from multiple sources and produce a cohesive, concise summary of what it has gleaned. It can write simple copy and produce on-demand blog posts, news updates and other content that would take people hours to complete. What’s missing in all of this is the answer to the question “So what?” I’ve been reading a lot of AI-generated content recently to evaluate its potential impact on business, and what I see missing is true analysis. AI can show you the what, but it is not very good at finding the why.
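You can see the curation-versus-analysis gap firsthand with a simple summarization call. The sketch below uses OpenAI’s Python SDK; the model name, prompt and sample notes are assumptions for illustration, and you would need your own API key.

```python
# Minimal summarization call against OpenAI's chat completions API.
# Requires the `openai` package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

notes = """Q3 attrition rose 4% in engineering; exit surveys cite
compensation and limited growth paths. Two competitors opened local offices."""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; use whatever you have access to
    messages=[
        {"role": "system", "content": "Summarize the following notes in two sentences."},
        {"role": "user", "content": notes},
    ],
)
print(response.choices[0].message.content)
```

The summary that comes back is fluent and accurate, but deciding what those attrition numbers mean for your organization, the “so what,” still falls to a human reader.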
Because of that, leaders should be wary of leaning too heavily on AI to make deeper, strategic decisions. AI is an excellent tool for aggregating and summarizing information in a consumable format, but ultimately there needs to be a bridge between that information and how to act on it, and that bridge is the human element. I’m reminded of the story of Stanislav Petrov, the Soviet officer who acted contrary to what his early-warning technology told him should be done … and ended up averting a nuclear war. The moral of the story is that AI is a tool, not the actor. We shouldn’t abdicate our accountability to an algorithm.
Despite all the caveats, I’m incredibly excited to see what’s next for AI. The possibilities are endless, but as we know from the immortal words of Peter Parker’s Uncle Ben, with great power comes great responsibility. And that responsibility is ours.