AI Generates Inaccurate Neanderthal Depictions, Revealing Scientific Knowledge Gap

Researchers have discovered a concerning trend in generative AI’s ability to accurately represent scientific knowledge, particularly when illustrating our ancient human relatives. When Matthew Magnani of the University of Maine and Jon Clindaniel of the University of Chicago prompted ChatGPT and DALL-E to create images and text about Neanderthals, the AI systems produced grossly inaccurate depictions that more closely resembled outdated 20th-century stereotypes than current scientific understanding.

“A majority of images depict human-like figures, slightly stooped, with large quantities of body hair. These depictions have more in common with early twentieth-century drawings of Neanderthals than contemporary scientific knowledge,” the researchers noted in their study published in Advances in Archaeological Practice.

The AI-generated images portrayed Neanderthals with exaggerated features, including protruding jaws and brow ridges far more pronounced than those found in actual Neanderthal skulls. The figures appeared hunched over in ape-like postures that more closely resemble chimpanzees or earlier australopiths than Neanderthals, who were much more similar to modern humans than these illustrations suggest.

The issue stems from how AI systems are trained. While these programs scrape the internet for data, much scientific research is locked behind paywalls, limiting access to the most current information. As a result, AI systems may rely more heavily on outdated, freely available materials that perpetuate stereotypes developed in the 19th and early 20th centuries.

When Neanderthal remains were first discovered in the 1860s, archaeologists interpreted the bones as belonging to a “crude prototype of our own species” – a hunched, hairy hominin with a cranium resembling a chimpanzee’s more than a modern human’s. This early characterization persisted in the public imagination and museum displays well into the 20th century.

Modern archaeological evidence paints a dramatically different picture. Recent discoveries have revealed Neanderthals as sophisticated beings capable of complex behaviors, including making fitted clothes, using medicinal plants, and mastering fire. Genetic data has firmly linked them to modern humans, with evidence of interbreeding between the two species.

When testing both expert and non-expert AI prompts, the researchers found consistent issues with accuracy. The AI-generated text, while mentioning known Neanderthal technologies like stone tools and fire use, still presented an oversimplified view of their lifestyle and cultural complexity. Women and children were noticeably underrepresented in the generated images.

Interestingly, when researchers revised their expert prompts with more specific instructions, the AI produced somewhat improved images with less body hair and more accurate facial structures. This suggests that careful prompt engineering by subject matter experts can partially mitigate, though not eliminate, the problem.
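The prompt-revision strategy described above can be sketched in code. This is a minimal illustration, not the researchers’ actual prompts: the `refine_prompt` helper and the constraint list are hypothetical, showing only the general idea of folding expert-specified corrections into a base image-generation prompt before sending it to a model.

```python
# Hypothetical sketch of expert prompt refinement. The base prompt and
# constraints below are illustrative assumptions, not taken from the study.

BASE_PROMPT = "A realistic depiction of a Neanderthal"

# Constraints targeting the inaccuracies the study identified:
# excess body hair, stooped posture, and exaggerated facial features.
EXPERT_CONSTRAINTS = [
    "upright posture similar to a modern human",
    "moderate body hair, comparable to modern humans",
    "brow ridge and jaw consistent with Neanderthal skull morphology",
    "wearing fitted clothing",
]

def refine_prompt(base: str, constraints: list[str]) -> str:
    """Fold expert constraints into a base image-generation prompt."""
    if not constraints:
        return base
    return f"{base}, with " + "; ".join(constraints)

refined = refine_prompt(BASE_PROMPT, EXPERT_CONSTRAINTS)
print(refined)
```

The refined string would then be passed to an image model in place of the generic prompt; as the study found, this can reduce, but not eliminate, stereotyped output.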

“Our current research suggests that the way we structure and make information available will directly influence AI output and, by extension, the way we imagine the past,” Clindaniel and Magnani concluded. “Moving forward, data policies will inform the way archaeological material is written about and visualized.”

This research highlights a broader concern about generative AI: its reliability is fundamentally limited by the accessibility of accurate, up-to-date information in its training data. When current scientific knowledge remains behind paywalls or is otherwise inaccessible, AI systems will continue to perpetuate outdated concepts and stereotypes.

For educators, researchers, and the public, this serves as an important reminder that AI-generated content, particularly on scientific topics, should be approached with healthy skepticism and verified against current academic understanding.
