AI Tourism Blunder Leads Travelers on Wild Goose Chase in Tasmania
A cautionary tale has emerged from Tasmania, Australia, where artificial intelligence is creating headaches for tourists and locals alike. The incident highlights growing concerns about AI reliability in travel planning, even as more consumers embrace the technology.
In northeastern Tasmania, the small town of Weldborough became an unwitting victim of AI-generated misinformation when a tour company’s website promoted “Weldborough Hot Springs” – a non-existent attraction described as a “tranquil haven” and “peaceful escape” nestled in Tasmania’s forests.
The fabricated destination was so convincingly portrayed that travelers made special journeys to experience these supposed geothermal springs, only to discover they don’t exist. According to Kristy Probert, owner of the Weldborough Hotel, “droves” of disappointed visitors arrived seeking the “untouched” waters “rich in therapeutic minerals” that the AI had invented.
“The North George River that borders Weldborough is freezing,” Probert told ABC News. “The only people who go in are tin and sapphire prospectors wearing wetsuits.”
The company responsible, Australian Tours and Cruises, has since removed the blog post, admitting to ABC News that “our AI has messed up completely.” The firm described the subsequent online backlash and damage to its reputation as “soul-destroying,” while insisting it is a “legitimate” business.
Tourism experts are particularly concerned about the potential dangers such AI hallucinations present. The fake information not only wasted travelers’ time and money but also encouraged potentially hazardous behavior – steering visitors toward freezing waters and remote areas of Tasmania that lack mobile phone coverage.
Dr. Anne Hardy, an adjunct professor of tourism at Southern Cross University in Australia, told CNN that this incident exemplifies a worrying trend. “Nearly 90% of itineraries that AI generates have mistakes in them,” Hardy noted, adding that roughly four in ten tourists now use AI for travel advice or itineraries, and that many “trust AI more than review sites.”
This trust persists despite growing evidence of AI’s limitations in travel planning. Beyond Tasmania’s phantom hot springs, other documented cases include AI recommending non-existent hiking trails on dangerous routes, complete with fatally inaccurate mapping.
The Tasmanian incident occurs against a backdrop of rapid AI integration across the travel industry. Major companies like Delta Air Lines are developing AI “butlers,” while online travel giants including Trip.com, Kayak, Booking.com, and Expedia continue expanding their ChatGPT-powered features – and Google its own AI tools – to meet consumer demand.
Research from the UK travel industry association ABTA confirms this trend: its 2025 survey found that the proportion of holidaymakers using AI to plan trips had doubled over the previous year.
Even Google, while promoting its Gemini “Gems” personalized travel guide service, includes disclaimers that results are “for illustrative purposes and may vary,” advising users to “check responses for accuracy.”
Industry observers note that as AI travel planning tools proliferate, so do opportunities for misinformation – whether through deliberate scams or innocent but problematic “AI hallucinations” like the Weldborough Hot Springs case.
For travelers, the message is becoming increasingly clear: while AI offers convenience in travel planning, human verification remains essential. And for businesses incorporating AI into their operations, the Tasmanian example serves as a stark reminder of the reputational damage that can occur when artificial intelligence generates artificial destinations.
As one disgruntled visitor to Weldborough reportedly remarked, “A hot spring would be nice – but a real one would be better.”
16 Comments
Wow, this is a cautionary tale about the potential pitfalls of AI-generated content. It’s concerning that travelers were misled by such convincing but fabricated information about these ‘hot springs’. Destinations should be thoroughly verified before being promoted to the public.
It’s a shame to see AI being used to spread misinformation, even if unintentionally. Tourists deserve accurate, reliable information when planning their trips. This incident highlights the need for better AI oversight and quality control, especially in the travel industry. Hopefully lessons can be learned to prevent similar situations in the future.
Wow, this is a cautionary tale about the dangers of AI-generated information. It’s crazy that tourists were misled by fabricated details about these nonexistent hot springs in Tasmania. I hope the tour company takes responsibility and does more to verify information before publishing it.
Agreed, it’s concerning that AI can create such convincing misinformation. Tourists deserve accurate information, especially for remote destinations. The company should implement better fact-checking processes to avoid this kind of blunder in the future.
AI-generated content can be useful, but cases like this highlight the need for human oversight and verification. It’s good that the local business owner was able to clarify the reality of the situation for disappointed visitors. Hopefully this incident will lead to improvements in how AI is used for travel information.
This highlights the importance of verifying information, especially when it comes to travel planning. It’s worrying that AI can generate such detailed and realistic-sounding details about places that don’t even exist. I hope this incident leads to more scrutiny around the use of AI in the tourism industry.
Absolutely. AI should be used to enhance the travel experience, not mislead people. The company needs to review its processes and take steps to ensure this doesn’t happen again. Travelers need to be cautious and cross-check information, even if it seems convincing.
This is a fascinating case study on the challenges of AI-driven content generation. While the technology has many promising applications, this incident shows how it can also be used to create convincing but entirely fabricated information. Robust fact-checking protocols are clearly needed to prevent such egregious errors.
Agreed. The tourism industry should take this as a wake-up call to scrutinize AI-generated content more closely. Travelers need to be able to trust the information they’re given, so companies need to ensure their systems are reliable and accountable.
As someone interested in mining and commodities, this story is concerning. AI-generated misinformation could have serious implications for industries that rely on accurate information. I hope this leads to greater oversight and accountability when it comes to the use of AI in sensitive sectors.
Good point. Misinformation about mining and energy resources could have significant economic and environmental consequences. Rigorous verification of AI-generated data is critical to maintain trust and transparency in these industries.
This is an interesting example of how AI can sometimes get carried away and fabricate entire tourist attractions. While the technology has many benefits, stories like this show the importance of critical thinking and fact-checking, even when using AI-powered resources. Tourists should always verify information before planning a trip.
This is a bizarre and troubling story. I can’t believe an AI system was able to create such a convincing description of a non-existent tourist attraction. It’s a good reminder that we can’t always trust technology blindly, especially when it comes to travel planning. Fact-checking is still essential, even in the digital age.
This is a concerning development, particularly for industries like mining and energy that rely on accurate, up-to-date information. AI-generated misinformation has the potential to cause real harm, both to businesses and to consumers. Rigorous verification processes need to be implemented to prevent these kinds of blunders.
This is a strange and unfortunate situation. I wonder how the AI system managed to create such a detailed and believable description of a non-existent attraction. Tourists must have been very disappointed to find that the ‘tranquil haven’ didn’t actually exist.
Yes, it’s a shame the tourists wasted their time and money traveling to see something that was completely made up. The AI clearly needs better safeguards to prevent the spread of misinformation, especially for travel-related content.