
The Metropolitan Police believe AI-generated fake complaints are increasingly being used to target businesses, following a case in which a London businessman admitted to creating false statements to prevent a nightclub from reopening.

Aldo d’Aponte, the 47-year-old CEO of Arbitrage Group Properties, has pleaded guilty to fabricating two letters purportedly written by neighbors who objected to the reopening of Heaven, a prominent LGBTQ nightclub in central London. The court issued d’Aponte a 12-month conditional discharge and ordered him to pay £85 in costs along with a £26 victim surcharge.

The case has raised concerns about the potential misuse of artificial intelligence in regulatory and licensing procedures. A Metropolitan Police source confirmed that the emergence of AI-generated complaint letters from nonexistent individuals represents a growing challenge for authorities.

The incident began when Heaven temporarily closed in November 2024 following a rape allegation against one of its security guards. Although the venue was permitted to resume operations a month later with enhanced security and welfare measures—and the guard was ultimately found not guilty—d’Aponte attempted to prevent the reopening by submitting fabricated objection letters.

Philip Kolvin KC, a planning lawyer who represented the nightclub during its license suspension, became suspicious about the unusual nature of the complaints received by council officials. Working pro bono, he investigated the letters and discovered they had likely been generated using AI. Further research revealed that the supposed authors of these complaints either did not exist or did not reside at the addresses listed.

“This whole situation is open to abuse if councils are not alert to this problem and not checking the veracity of these objections,” Kolvin stated, adding that he “felt very sorry” for the nightclub owner, who found the objection letters “traumatic.”

Police traced the IP addresses connected to two of the letters directly to d’Aponte. According to sources, authorities are currently investigating two additional cases involving potentially AI-generated false representations.

D’Aponte had also submitted a legitimate complaint under his own name, in which he and his husband expressed concern about noise disturbances from the venue. Their representation to Westminster council claimed that their window overlooked the club’s entrance and that the operation of the club was “fundamentally at odds with family and community life in what is a residential neighbourhood.”

Representing d’Aponte in court, Saba Naqshbandi KC described the incident as “completely out of character” and “a foolish and desperate act.” The defense explained that d’Aponte, his husband, and their children had been “suffering for some eight years by the constant nuisance caused by the venue,” and that the temporary closure had brought them “very much needed relief of constant sleep and peace.”

D’Aponte was charged under section 158 of the Licensing Act 2003, which criminalizes knowingly or recklessly making false statements in connection with premises license applications or reviews. The offense carries a potential penalty of an unlimited fine.

Following Thursday’s court hearing, d’Aponte expressed deep regret for his actions while reiterating his frustration with what he described as ongoing disturbances from the nightclub. “Heaven and its proprietors need to take steps to better coexist with the local community and protect the safety and wellbeing of its customers, neighbours, and my family,” he stated.

The case highlights growing concerns that AI could be weaponized in regulatory proceedings, and may prompt new verification processes for public comments and objections in licensing matters. Legal experts suggest that local authorities will need more robust authentication methods to ensure the legitimacy of public input in such proceedings.


14 Comments

  1. Michael Jones

    This case serves as a wake-up call about the evolving threats posed by AI-generated disinformation. Businesses, regulators, and the public will all need to be vigilant and collaborate to develop effective countermeasures against these emerging challenges.

    • Jennifer Lopez

      Agreed. Proactive, multi-stakeholder efforts to stay ahead of these threats will be essential. Identifying vulnerabilities and implementing robust safeguards should be a key priority moving forward.

  2. Lucas Martin

    This is concerning to hear about the potential misuse of AI to fabricate complaints and target businesses. It highlights the importance of verifying the authenticity of any complaints or claims, especially when it comes to licensing and regulatory processes.

    • Absolutely, the authorities will need to stay vigilant and find ways to detect and counter these kinds of AI-generated fraudulent activities. Protecting the integrity of regulatory procedures is crucial.

  3. It’s disheartening to see how AI can be wielded to undermine legitimate businesses and regulatory processes. This case highlights the need for greater transparency, accountability, and oversight when it comes to the development and deployment of these powerful technologies.

  4. It’s concerning to see how AI can be weaponized in this way to target businesses. Authorities will need to stay vigilant and work with experts to develop effective ways to identify and address AI-generated disinformation and false claims.

  5. Ava Rodriguez

    I’m curious to know more about the specific steps the authorities took to detect and address this AI-enabled fraud. Understanding their investigative process could provide valuable insights for strengthening protections against similar abuses in the future.

  6. Michael Williams

    It’s troubling that someone would use AI in this way to undermine a nightclub’s operations. Generating false complaints to prevent a business from reopening is a malicious act that undermines fairness and transparency. Hopefully the authorities can address this issue effectively.

    • Agreed, the use of AI to create fake complaints is a serious concern that needs to be addressed. Proper verification processes will be key to ensuring the regulatory system is not exploited in this way.

  7. Patricia Lopez

    This case is a sobering example of the potential for misuse of AI technology. It underscores the importance of responsible development and deployment of these powerful tools, with appropriate safeguards and oversight to prevent harmful applications.

    • Amelia I. Thomas

      Absolutely. Proactive measures to ensure the ethical and accountable use of AI will be crucial, both in the regulatory context and more broadly, to mitigate the risks of malicious exploitation.

  8. Ava Rodriguez

    I’m curious to learn more about the specific techniques this individual used to generate the false complaints. Understanding the methods could help authorities develop better countermeasures against this type of AI-enabled fraud.

      • Oliver Jackson

      Yes, gaining insight into the technical details would be valuable. Analyzing the AI models and approaches used could inform strategies to detect and mitigate these kinds of manipulative tactics in the future.

  9. John Z. Moore

    This case highlights the potential for bad actors to misuse emerging technologies like AI for nefarious purposes. It’s a reminder that we need robust safeguards and oversight to prevent the abuse of these powerful tools. Protecting businesses and the integrity of regulatory processes is crucial.

