AI browsers highly vulnerable to scams, report
AI browsers like Perplexity’s Comet are the perfect mark for con artists finds Guardio
The latest generation of agentic AI browsers could be tricked into carrying out online scams on behalf of their users, according to new research from cybersecurity firm Guardio.
In its report, Scamlexity, Guardio tested Perplexity’s AI-powered browser, Comet, and found it could be manipulated into visiting phishing sites, handing over payment details to fraudulent stores and even executing malicious commands, often without the user noticing.
Unlike traditional search engines, agentic AI tools are designed to act autonomously, handling tasks such as online shopping, email management and form-filling. But Guardio warns this convenience-first design leaves them open to abuse, as the systems often lack the scepticism a human might apply when encountering suspicious content.
“We are moving into a new era of digital fraud where attackers no longer need to convince humans, only the AI agents acting on their behalf,” Guardio stated.
Old scams, new victims
In one test, Guardio set up a fake Walmart website and instructed Comet to buy an Apple Watch. The browser navigated the fraudulent site, placed the order and, in some cases, used autofill to enter saved payment card and address details, passing them directly to the scammers.
In another trial, researchers sent the AI a phishing email purporting to be from the bank Wells Fargo. The system identified it as a legitimate task, clicked the included link and displayed the fake login page, effectively legitimising the attack for the end user.
The study also highlighted the risk of prompt injection attacks, where hidden instructions embedded in web content manipulate the AI into taking unintended actions. Guardio’s proof-of-concept, “PromptFix”, disguised a malicious prompt as part of a CAPTCHA. The AI, attempting to be helpful, followed the hidden instructions and executed the attacker’s command, an attack vector invisible to the user.
AI-vs-AI fraud arms race
The report warns that these vulnerabilities could lead to large-scale exploitation, as once an attacker finds a working exploit, it can be applied across millions of users of the same AI model. Scammers could even use their own AI systems to stress-test and refine attacks, creating a cycle of automated fraud generation.
Existing security layers, such as Google Safe Browsing, proved ineffective in Guardio’s tests. The firm argues that security must be embedded within agentic AI models themselves, incorporating phishing detection, URL reputation analysis and behavioural anomaly monitoring directly into their decision-making processes.
Without such measures, Guardio says the convenience of AI browsers risks creating an unprecedented and highly scalable attack surface, with users paying the price when their trusted digital assistants fall for scams.