Generative AI is being used to address many workplace issues, including conflict management. But can we rely on this technology to handle something so personal in nature? It depends on how AI is used and how the tool was trained.
While using AI for managing conflict may offer many benefits—especially for Gen Zers, who are often described as digital natives—a half-baked strategy may backfire. Let’s discuss how companies can leverage AI in a way that maximizes its potential advantages while minimizing pitfalls.
Advantages of AI in Conflict Resolution
When it comes to detecting, analyzing, and offering solutions to workplace conflict, AI offers unique capabilities.
Deep analysis and a wider range of options. Effective conflict management requires a thorough understanding of the underlying issues involved. A single missing detail can derail the effort. For a human conflict arbiter, this requires significant time and attention to detail. AI, with its ability to review immense amounts of text and other data, can help ensure that all facts and opinions are considered so that leaders can make informed and objective decisions.
On a similar note, in a typical conflict scenario, only a few of the possible solutions are explored. Because of its ability to work faster and review a larger amount of information than a human, AI can be useful in offering a wider range of conflict resolution options.
Scalability. AI can deal with a higher volume of issues than HR staff can handle. It can, therefore, serve as an efficiency-enhancing tool for human experts tasked with overseeing conflict resolution efforts.
Impartiality. It might seem reasonable to assume that an AI tool will provide a high degree of objectivity, especially if its recommendations are vetted by a human with expertise in conflict management. That said, AI cannot guarantee a completely unbiased output. As described below, AI systems are only as good as the data they are trained on.
Challenges of AI
Despite these benefits, some aspects of AI should not be ignored when using it for something as important as workplace conflict.
Based on new technology that is largely not understood. While experts have offered explanations for some rather strange outputs from AI—think of ChatGPT’s hallucinations—these are only theories. In reality, due to the black-box nature of AI algorithms, we don’t understand everything happening “under the hood.” As a result, we should expect some unpredictability in AI-based conflict resolution recommendations, and these should always be vetted by a human being.
Only as good as the data they’re trained on. AI tools are built on neural networks designed to mimic human learning. Generalist tools such as ChatGPT are jacks of all trades—like someone who spends their time studying a wide range of subjects.
Conflict management, however, requires in-depth knowledge. AI tools that aren’t trained on information related to conflict management techniques, as well as the particulars of your organization, are likely to offer recommendations that lack expertise and context.
Potential for bias. While one benefit of AI is impartiality, it’s important to emphasize that AI also has the potential for bias, and that its impartiality depends on the data it is trained on.
There are many examples of AI systems reflecting the biases of their programmers or their training data. If impartiality is not cultivated in these algorithms by training them on balanced information, they may simply reproduce those biases. Just like a third-party arbiter, an AI tool must have complete input from all parties involved before it can make fair decisions.
Striking the Right Balance with AI in Conflict Management
Despite these challenges, organizations can deploy AI effectively for conflict management if they follow a few guidelines.
Train employees and your AI tools. One reason generative AI platforms like ChatGPT have become wildly popular is that they require almost no technical skills. This makes it tempting to turn employees loose on these tools without extensive training, which would be a huge mistake.
In most cases, the kind of training that is necessary has less to do with understanding how to use the technology than with understanding its limits and pitfalls. Additionally, employees should be given standardized practices for inputting data that maximize the chances of balanced results.
Likewise, if an AI tool is used for high-stakes scenarios like conflict management, it must be fully trained on data regarding the policies, culture, mission and values of your organization.
Finally, to create an effective conflict management process, make sure that both the human and AI arbiters are fully briefed on conflict theory. This allows for proper framing of the conflict, as well as the ability to communicate appropriate resolution steps. The Thomas-Kilmann Conflict Mode Instrument (TKI®), for example, provides a widely used model for understanding the different ways that people tend to approach conflict. AI tools trained on employees’ TKI assessment results and on literature about the proper use of the model can then be instructed by HR professionals to offer recommendations grounded in that framework.
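To make this concrete, here is a minimal sketch, in Python, of how an HR team might assemble a prompt that pairs employees’ TKI conflict modes and their own statements with organizational context before handing it to a generative AI tool. The TKI mode names come from the model itself; the field names, sample data, and the call_model() stand-in are illustrative assumptions rather than any particular vendor’s API.

```python
# Hypothetical sketch: assembling a TKI-informed prompt for a generative AI tool.
# The TKI mode names (competing, collaborating, compromising, avoiding,
# accommodating) are real; the field names, sample data, and call_model()
# stand-in are illustrative assumptions, not a real API.

from dataclasses import dataclass

@dataclass
class Party:
    name: str
    tki_primary_mode: str   # e.g., "competing", "avoiding"
    statement: str          # the party's own account of the conflict

def build_prompt(org_context: str, parties: list[Party], issue_summary: str) -> str:
    """Combine organizational context, both parties' TKI modes and statements,
    and a neutral issue summary into a single prompt for the AI tool."""
    lines = [
        "You are assisting an HR professional with workplace conflict resolution.",
        f"Organizational context: {org_context}",
        f"Issue summary: {issue_summary}",
    ]
    for p in parties:
        lines.append(
            f"- {p.name} (preferred TKI conflict mode: {p.tki_primary_mode}): {p.statement}"
        )
    lines.append(
        "Using the Thomas-Kilmann model, suggest two or three resolution approaches "
        "and note which conflict-handling mode each one draws on."
    )
    return "\n".join(lines)

def call_model(prompt: str) -> str:
    """Stand-in for whatever generative AI service the organization uses.
    An HR professional reviews the output before anything is acted on."""
    return "<model response reviewed by HR>"

if __name__ == "__main__":
    prompt = build_prompt(
        org_context="Values collaboration; disputes escalate to HR after two weeks.",
        parties=[
            Party("A. Rivera", "competing", "My deadlines keep slipping because of handoffs."),
            Party("J. Chen", "avoiding", "I was never told the handoff dates had changed."),
        ],
        issue_summary="Recurring missed handoffs between two project leads.",
    )
    print(call_model(prompt))
```

Requiring a statement from every party before the prompt is built also reinforces the earlier point that an AI arbiter, like a human one, needs complete input from all sides.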
Constantly monitor it. Once upon a time, it was widely accepted that computers don’t make mistakes; people do. This was felt to be true because computers only did what they were explicitly programmed to do, yet errors in programming could and did lead to errors in execution. In the world of AI, in which computers make decisions based on probabilities rather than explicit instructions, algorithms can also make mistakes. Sometimes this is because they’re not trained on the right data; in other instances, it’s simply a shortcoming of the AI tool. At the end of the day, we don’t always know why AI provides certain answers.
For this reason, it would be a mistake for any organization to adopt a “set and forget” approach to AI. Rather, the technology should be constantly monitored and reviewed, both by experts in conflict resolution, who can ensure sound practices are followed, and by people with expertise in the technology, who can confirm the tools are functioning as they should and keep tabs on new improvements or solutions that may be worth adopting.
Finally, these tools should be reviewed by those who can ensure AI is being used ethically and in compliance with all relevant laws, including data privacy regulations. These reviews should be logged and used in an ongoing effort to improve the performance of the technology.
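One lightweight way to capture these reviews, sketched below, is an append-only log in which each human verdict on an AI recommendation is recorded along with the reviewer’s reasons, so the entries can be analyzed later to improve the tool. The record fields and file name are illustrative assumptions, not a prescribed standard.

```python
# Illustrative sketch of an append-only review log for AI-generated recommendations.
# Field names and the file path are assumptions, not a prescribed standard.

import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class RecommendationReview:
    case_id: str
    reviewer_role: str        # e.g., "conflict-resolution specialist", "AI engineer", "compliance"
    ai_recommendation: str
    verdict: str              # "accepted", "modified", or "rejected"
    notes: str                # why the reviewer agreed or disagreed
    reviewed_at: str = ""

def log_review(review: RecommendationReview, path: str = "ai_review_log.jsonl") -> None:
    """Append one review as a JSON line so it can be audited and analyzed later."""
    review.reviewed_at = datetime.now(timezone.utc).isoformat()
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(review)) + "\n")

log_review(RecommendationReview(
    case_id="2024-017",
    reviewer_role="conflict-resolution specialist",
    ai_recommendation="Suggest a facilitated conversation using a collaborating approach.",
    verdict="modified",
    notes="Sound framing, but the proposed timeline ignored an upcoming project deadline.",
))
```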
Never stop learning. With its ability to analyze and remember the minute details of every conflict management situation your team encounters, AI provides an excellent opportunity to learn from experience. If details of conflict resolution incidents are documented and stored, they can be continually fed into AI tools as new incidents arise, so that the algorithm accounts not only for the details of the current situation but also for the organization’s history of conflict. In this way, both the algorithms and the HR professionals in charge of conflict resolution can continue to learn and hone best practices.
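A minimal sketch of that feedback loop, assuming past incidents are stored as simple records, might pull the most relevant prior cases by keyword overlap and attach them to the context given to the AI tool for a new incident. The record structure and the scoring method are illustrative; in practice, a more sophisticated retrieval approach or the vendor’s own tooling would likely take their place.

```python
# Minimal sketch of feeding documented past incidents back into the AI's context.
# The record structure and keyword-overlap scoring are illustrative assumptions;
# a production setup would likely use proper retrieval or the vendor's tooling.

past_incidents = [
    {"summary": "Disagreement over credit for a shared report",
     "resolution": "Co-authorship policy clarified."},
    {"summary": "Missed handoffs between project leads",
     "resolution": "Weekly handoff checklist introduced."},
]

def relevant_history(new_issue: str, incidents: list[dict], top_n: int = 2) -> list[dict]:
    """Rank past incidents by how many words they share with the new issue description."""
    new_words = set(new_issue.lower().split())
    scored = sorted(
        incidents,
        key=lambda inc: len(new_words & set(inc["summary"].lower().split())),
        reverse=True,
    )
    return scored[:top_n]

new_issue = "Two project leads disputing handoff responsibilities"
context_block = "\n".join(
    f"Past incident: {inc['summary']} -> Resolution: {inc['resolution']}"
    for inc in relevant_history(new_issue, past_incidents)
)
print(context_block)  # appended to the prompt so the tool sees organizational history
```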
Tying It All Together
Ideally, AI tools will be integrated into conflict management processes in a way that is natural and rooted in proven practices. For example, AI-generated, TKI-based recommendations that are interpreted and presented by an HR professional with deep expertise can offer powerful solutions to perplexing workplace conflicts.
Conflict management initiatives that are led by experienced leaders and informed by properly trained AI tools can produce more informed conflict management decisions, richer feedback, and opportunities for growth at the individual and companywide levels.
John Hackston is a chartered psychologist and Head of Thought Leadership at The Myers-Briggs Company, where he leads the company’s Oxford-based research team. He is a frequent commentator on the effects of personality type on work and life and has authored numerous studies, published papers in peer-reviewed journals, presented at conferences for organizations such as The British Association for Psychological Type, and written on various type-related subjects in top outlets such as Harvard Business Review.