Professionals Bristle When Clients Fact-Check Their Expertise with AI

A new study from Monash Business School has revealed that professional advisors feel disrespected when clients use AI tools to evaluate their recommendations, even when the AI is used solely for background information.

The research, published in the academic journal “Computers in Human Behavior,” found that professionals become significantly less motivated to work with clients who consult AI systems to verify their advice.

“Advisors view AI as substantially inferior to themselves; thus, being placed in the same category as an AI system feels insulting and signals disrespect, undermining advisors’ willingness to engage,” explains Associate Professor Gerri Spassova, the study’s lead author.

The effect is comparable to spending an hour helping a client plan a complex trip with carefully considered flights, hotels, and itineraries, only to discover the client has taken those recommendations and booked everything through an AI chatbot instead.

Researchers found that professionals who lost business to an AI were markedly less inclined to work with that client again. Clients who consult AI may also be perceived as less competent and less warm by the professionals they approach.

This reaction cuts across professional fields, affecting relationships between clients and doctors, lawyers, financial advisors, and other experts whose specialized knowledge might be second-guessed using AI tools.

“As AI gets better, it may threaten our sense of worth and self-regard,” Professor Spassova noted. “When clients defer to AI, it prompts advisors to question the value of their human contribution, and this problem may intensify as AI capabilities advance.”

The study highlights that even in established professional relationships, advisors may feel betrayed when clients cross-check their expertise with AI. For new client-advisor relationships, the researchers suggest clients avoid disclosing their AI consultations before meetings.

This reaction is understandable from the professional’s perspective. A doctor who has invested years in medical training and clinical experience likely resents being second-guessed by a patient who spent mere minutes consulting ChatGPT.

AI tools typically provide generalized information and can make significant errors. Their outputs are highly dependent on the information supplied by users, and without comprehensive details, their responses can be misleading or incomplete. Additionally, users can easily prompt AI systems to generate answers that align with their preconceptions.

Given these limitations, weighing a professional’s years of specialized training and experience against AI-generated responses is an uneven and potentially unfair comparison. The research suggests this behavior creates a fundamental “lack of trust” that damages professional relationships.

The findings come at a time when AI tools are increasingly accessible to the public, creating new dynamics in professional service relationships. While AI can democratize access to information, the study suggests that its use as a verification tool can undermine the human relationships essential to professional services.

Until professional norms adjust to AI’s growing presence in advisory contexts, clients might be wise to keep their AI fact-checking private or risk damaging valuable professional relationships that depend on mutual trust and respect.

As AI continues to integrate into more aspects of professional life, navigating these sensitivities will likely become an increasingly important aspect of client-advisor interactions across industries.


8 Comments

  1. Lucas Rodriguez

    It’s a tricky balance. Professionals don’t want to feel devalued, but clients also need to be able to fact-check. I wonder if there are ways professionals could embrace AI tools as a complement to their expertise, rather than seeing them as a threat.

  2. Robert V. Lee

    This is a complex issue without easy answers. Professionals may need to find ways to embrace AI tools and show clients how their expertise adds value beyond just factual information. Collaboration could be the key.

  3. Jennifer White

    This highlights the challenges that can arise as AI becomes more integrated into everyday decision-making. Professionals will need to adapt and find ways to demonstrate the unique value they offer, beyond just factual information.

  4. I’m curious to see how this dynamic evolves. AI tools can be incredibly useful, but shouldn’t completely replace human expertise. Perhaps there’s an opportunity for professionals to partner with AI in ways that strengthen their advisory services.

  5. Amelia Johnson

    This is an interesting dilemma. Professionals understandably want their expertise to be valued, but clients should also be empowered to verify information. AI tools can be helpful for background research, but shouldn’t replace professionals’ thoughtful advice.

  6. Robert Jones

    This is a fascinating topic that highlights the evolving relationship between humans and AI. Professionals will need to find ways to demonstrate their unique value and work collaboratively with AI tools, rather than seeing them as a threat.

  7. Robert Jones

    It’s understandable that professionals would feel threatened by AI fact-checking, but clients should also have the ability to verify information. Perhaps there’s a middle ground where AI and human experts work together to provide the best possible advice.

  8. Elizabeth Taylor

    I can see both sides here. Professionals have spent years honing their skills and deserve respect. But clients also need to feel confident in the advice they’re receiving. Perhaps there’s a way for AI and human experts to work together productively.


A professional organisation dedicated to combating disinformation through cutting-edge research, advanced monitoring tools, and coordinated response strategies.

Company

Disinformation Commission LLC
30 N Gould ST STE R
Sheridan, WY 82801
USA

© 2026 Disinformation Commission LLC. All rights reserved.