In a growing battle against AI-generated misinformation, “Real Housewives of Atlanta” star Kandi Burruss has spoken out against fake content circulating about her divorce from Todd Tucker, with particular concern about deepfake audio mimicking her voice.
“This is ridiculous at this point,” Burruss said in a TikTok video posted Wednesday, visibly frustrated by the escalating situation. “They did an AI of my voice with a whole message. I’ve already seen other AI videos saying that I said certain stuff — and that was crazy — but to actually hear a voice sounding like mine that I know is AI… what can I do about this?”
The reality TV star’s concerns highlight the increasing sophistication of AI technology capable of cloning voices with alarming accuracy. In her video, Burruss expressed particular alarm about content that used her synthesized voice to make statements regarding her divorce that she never actually made.
“Like some of y’all pages that just be making up statements from me and all that stuff, that’s already annoying as it is,” she continued. “But to have somebody who did a full thing that sounded like my voice over the video… crazy.”
Burruss filed for divorce from Tucker last November after approximately a decade of marriage. Since then, their separation has become fodder for gossip websites and social media accounts, with details about custody agreements and housing arrangements making headlines as new court filings emerge.
In the caption of her TikTok post, Burruss elaborated on the frequency of these fake posts: “Every other day I’ve been seeing fake AI post with me & made up statements about my divorce. But today I just saw one using an AI voice that sounded like mine talking saying stuff that I never said! It’s crazy! I’m so annoyed.”
The issue Burruss faces reflects a broader challenge for public figures in the age of artificial intelligence. As AI tools become more accessible to the general public, celebrities and public personalities increasingly find themselves combating unauthorized digital replicas of their likeness and voice.
During a recent appearance on “Watch What Happens Live with Andy Cohen,” Burruss offered genuine insight into her divorce proceedings, saying that she and Tucker are trying to remain cordial throughout the process despite some “intense” conversations. She expressed confidence in their ability to co-parent effectively moving forward.
Without revealing specific details, Burruss mentioned that the turning point in their relationship occurred in July of last year, describing it as the catalyst that ultimately led her to file for divorce.
The proliferation of AI-generated content targeting celebrities raises important questions about digital rights, privacy, and the potential for reputational damage. Currently, legislation in many jurisdictions has not kept pace with rapidly advancing AI technology, leaving public figures with limited recourse when their identities are appropriated.
Experts in digital media ethics suggest that social platforms should implement stronger verification systems for content featuring public figures, particularly when voice or video manipulation is involved. However, the decentralized nature of social media makes comprehensive enforcement challenging.
For celebrities like Burruss, who rely heavily on their public image and perception, AI-generated misinformation presents a particularly troubling challenge as they navigate highly personal life events while in the public eye.
As AI technology continues to advance, this case illustrates the growing tension between technological innovation and personal rights, and it suggests that more robust protections may be needed to prevent synthetic media from being used to spread false information about individuals.