A hype cycle as overwhelming and logic-defying as the AI boom comes with its own whirlwind succession of trends that are their own mini booms driven by billions of dollars of money.
Once the world got used to large language model-powered AI chatbots, autonomous AI agents became the next big thing. This past year, video generating models have been having their time in the Sun after rapid improvements. What will be the next hot trend? So-called “world models” that can simulate physical environments?
Maybe. But for now, instead, it’s “AI browsers” designed to supercharge your web experience with machine learning features. OpenAI is currently trying to will this trend into existence with the release of its own web browser called “ChatGPT Atlas,” which it announced Tuesday. It reeks of a company bereft of exciting ideas, sure, but if anyone can make it a thing, it would be the makers of the world’s most popular chatbot.
New research from the web browser company Brave, however, should dampen the enthusiasm for the tech. In a report released Tuesday, the company outlined glaring security flaws with Perplexity’s Comet Browser, which allows users to take screenshots on websites so a built-in AI can analyze them and answer questions. According to Brave’s findings, the screenshot feature can be a vector for an attack known as a prompt injection, in which a hacker delivers a hidden message to an AI to carry out harmful instructions. These messages can be embedded in malicious webpages designed by the hacker.
In a video demonstration, the Perplexity AI browser is asked “Who is the author?” of a screenshot of a photograph. Within seconds, the AI opens the user’s personal email and visits a website setup by a hacker. The photograph, it turned out, contained text instructions imperceptible to the human eye — but the AI extracted and followed them without distinguishing it from the user’s prompt, according to the researchers.
“The scariest aspect of these security flaws is that an AI assistant can act with the user’s authenticated privileges,” Brave warned. “An agentic browser hijacked by a malicious site can access a user’s banking, work email or other sensitive accounts.”
Prompt injection attacks aren’t new, and have been a cause for concern ever since ChatGPT exploded the popularity of LLMs. But the stakes of the havoc they can wreak have been raised with the advent of autonomous AI models, or agents, that can control a user’s desktop unlike a typical chatbot, enabling them to browse the web and access and change files.
Now with AI browsers on the horizon, countless more users are just a button-click away from being exposed to these risks that they’re likely oblivious to. A previous report from Brave showed how another prompt injection attack tricked Perplexity’s Comet browser into potentially giving hackers access to your bank account by showing it a single Reddit post.
“AI-powered browsers that can take actions on your behalf are powerful yet extremely risky,” the report warned. The attacks “boil down to a failure to maintain clear boundaries between trusted user input and untrusted Web content when constructing LLM prompts while allowing the browser to take powerful actions on behalf of the user.”
These are problems inherent both to LLMs and their questionable wedding with a web browser. In other words, expect these same vulnerabilities to show up in OpenAI’s AI browser, too — only with millions of more people exposed to them.
More on AI: OpenAI Faces New Allegations in Teen’s Death
The post Researchers Find Severe Vulnerabilities in AI Browser appeared first on Futurism.