Computer Engineer Pioneering AI Security and Hardware Design Research
Dr. Hammond Pearce of UNSW School of Computer Science and Engineering is charting new territory at the intersection of artificial intelligence, hardware security, and misinformation detection – fields increasingly vital in today’s technology landscape.
Pearce’s journey into computing began with hands-on experimentation in his youth. “I got started playing with automation, controllers, microcontrollers and things that dad had just brought home. I was always really interested in robotics,” he explains. This practical foundation shaped his professional identity: “I very much describe myself as a computer engineer, not a computer scientist. I mostly focus on the building of things – then on top of that, I’m interested in AI and security.”
His career path, which included a stint at NASA and years of research in hardware security, positioned him perfectly to notice emerging opportunities when large language models (LLMs) began appearing around 2020. Pearce was among the first researchers to explore whether these AI systems could automate hardware design processes.
“We did the very first study asking, can we use these newfangled, large language models to make hardware? And it sort of worked,” Pearce recalls. His research focused on whether AI could translate plain English descriptions into specialized code that computers understand – a task traditionally performed by human engineers.
The results were promising, with the model correctly generating code for nearly 95% of evaluation tasks. This groundbreaking work was detailed in his paper “DAVE: Deriving Automatically Verilog from English.” Although some in the tech community initially dismissed it as a gimmick, the research garnered unprecedented attention. “It was the first time any tech magazine had ever written about my work, even though they savaged it. I thought, this is fantastic, I’m doing stuff people are reading about.”
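To make the task concrete, here is a minimal, hypothetical sketch of the kind of English-to-Verilog translation DAVE was evaluated on; the specification, the expected module, and the checking function are illustrative assumptions, not examples taken from the paper.

```python
# Hypothetical DAVE-style task: the model receives a plain-English
# specification and must emit the corresponding Verilog. Everything
# below is illustrative; it is not drawn from the paper's benchmark.

spec = ("Create a module named and_gate whose output o is high "
        "only when inputs a and b are both high.")

# The Verilog a correct model would be expected to produce:
expected_verilog = """\
module and_gate(input a, input b, output o);
    assign o = a & b;
endmodule
"""

def matches(generated: str, expected: str) -> bool:
    # Naive exact-string comparison; a real evaluation would check
    # behaviour instead, e.g. by simulating both designs against
    # the same test vectors.
    return generated.strip() == expected.strip()
```

Exact-match scoring like this is deliberately crude: two Verilog modules can differ textually yet behave identically, which is one reason hardware benchmarks tend to simulate generated designs rather than compare strings.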
Throughout his career, Pearce has maintained focus on understanding physical computer systems and their security vulnerabilities. “When we teach hardware security, you want to teach people how to carry out attacks, and then to defend against those attacks,” he explains. To facilitate this learning, he designs low-cost circuit boards and physical training platforms that allow students to hack hardware in controlled environments.
Pearce notes that while AI has made significant strides in many domains, it still struggles with hardware design compared to human engineers. He attributes this limitation to two key factors: far less training data is available in the hardware domain, and feedback loops are slower, since errors often become visible only after a design is physically built.
“There are just considerably fewer things for the AI to learn from. They [LLMs] can do things, but quite often they make mistakes and if you don’t know what you’re doing, even me, I’ll get trapped,” he acknowledges. “Hardware doesn’t tend to have instant feedback loops like you can get with software.”
As generative AI models became increasingly capable, Pearce naturally expanded his research to examine AI security. “After it first came out, we started looking at the security aspects of the code that was being generated by the early GitHub Copilot [AI coding assistant]. We found that it was pretty bad,” he says.
This investigation resulted in “Asleep at the Keyboard,” one of the first studies highlighting security vulnerabilities in AI-generated code. This pioneering work was followed by two additional papers, “Examining Zero-Shot Vulnerability Repair” and “Lost at C,” which together established early benchmarks for evaluating the security implications of AI coding tools.
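To illustrate the class of problem these studies probe, here is a small, hypothetical Python example of a classic weakness (SQL injection, CWE-89) alongside the fix; the function names and the users table are assumptions for illustration, not code from any of the papers.

```python
import sqlite3

def get_user_unsafe(conn: sqlite3.Connection, username: str):
    # Vulnerable pattern: user input is spliced directly into the SQL,
    # so input like "x' OR '1'='1" rewrites the query (SQL injection).
    cur = conn.execute(
        "SELECT * FROM users WHERE name = '" + username + "'")
    return cur.fetchall()

def get_user_safe(conn: sqlite3.Connection, username: str):
    # Safe pattern: a parameterized query keeps the input as data,
    # never as executable SQL.
    cur = conn.execute("SELECT * FROM users WHERE name = ?", (username,))
    return cur.fetchall()
```

Code-generation models trained on public repositories see both patterns in the wild, which is why studies like these measure how often a model reproduces the unsafe one.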
With the rise of conversational AI, Pearce’s research focus evolved once more – this time toward misinformation. “It was pretty obvious that these [LLMs] were going to be used to create spam bots,” he observes. “They might be bad at generating hardware, but they’re great at generating short texts.”
In 2024, this interest culminated in “Capture the Narrative,” a world-first social media simulation game allowing students to build AI bots designed to influence a fictional election. The project yielded concerning insights: “The students could use AI to generate a huge amount of fake content and fake news, but they weren’t that good at spotting it.”
Pearce and his team plan to enhance the game this year with features encouraging better detection of AI-generated spam. Their findings highlight a troubling reality about the democratization of powerful influence tools through AI.
“Students working in their dorm room with just laptops and a budget of like $3 can build influence campaigns that are every bit as powerful as what governments would have done 10 years ago,” Pearce warns, underscoring the urgent need for improved AI security measures and digital literacy as these technologies continue to advance.