- Privacy advocates and tech users are calling out Anthropic’s Claude Desktop app for silently modifying browser settings during installation, without clear user disclosure.
- The app reportedly installs a hidden native messaging bridge across multiple browsers, including ones the user does not actively use.
- Pre-authorized browser extensions allegedly run in the background with no visible indicator to the user. So far, Anthropic has not issued any formal public response to the growing allegations.
A growing wave of tech users and privacy advocates is calling out Anthropic’s Claude Desktop application for what they describe as covert, spyware-like behavior hidden inside its installation process, while the AI safety company remains silent.
The controversy first gained traction on X (formerly Twitter), where users began documenting a series of undisclosed system modifications that Claude Desktop allegedly carries out silently the moment a user installs it on macOS or Windows.
Hidden components surface after installation
The loudest complaint centers on what critics are calling a “native messaging bridge,” a hidden technical component that the app reportedly injects into multiple web browsers on a user’s device during installation. Users claim the bridge does not limit itself to the browsers they actively use or browsers that are even compatible with Claude. It reportedly drops into every browser it finds, used or not.
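For context, "native messaging" is a documented browser mechanism, not something invented by Claude Desktop: it lets an extension exchange messages with a local program over stdin/stdout, and it is enabled by dropping a small JSON manifest into a per-browser directory. The sketch below shows the general shape of such a manifest per Chrome's documented format; every name, path, and extension ID in it is illustrative, not taken from Claude Desktop.

```json
{
  "name": "com.example.bridge",
  "description": "Illustrative native messaging host (not Anthropic's)",
  "path": "/Applications/Example.app/Contents/MacOS/bridge",
  "type": "stdio",
  "allowed_origins": [
    "chrome-extension://abcdefghijklmnopabcdefghijklmnop/"
  ]
}
```

Because registration is just a file write into a known directory, an installer can register a host in any browser it finds on disk without that browser ever being opened, which is consistent with what users describe.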
On top of that, the application allegedly pre-authorizes browser extensions capable of running silently in the background, with no visible window and no clear indicator alerting the user. Tech commentators argue this grants the software persistent access to a user’s browser environment, access that goes far beyond what any standard AI productivity tool should need.
The risks associated with malicious or overreaching browser extensions are well-documented: fake Chrome extensions have stolen data from over 300,000 users by impersonating legitimate AI tools and productivity add-ons. When users grant permissions to extensions without understanding what they do, the consequences can range from privacy violations to full-scale data theft and account compromise.
What makes the allegations particularly damaging is not just what the app reportedly does, but also what users say Anthropic never told them. Multiple voices in the tech community insist that the company did not clearly disclose any of these system-level changes during the installation process, raising pointed questions about informed consent and basic data transparency.
“This is not how trustworthy software behaves.”
The reaction on X moved fast and pulled no punches. Users and privacy-focused commentators drew direct comparisons to classic spyware tactics, describing software that installs hidden components, hooks into sensitive system areas like browsers, and operates without the user’s knowledge.
“This is not how trustworthy software behaves,” one commenter wrote, a sentiment dozens of users echoed across the thread. Others questioned why a conversational AI application would require any browser-level integration at all, let alone one that extends to browsers the user has never opened.
Screenshots circulating online show fast-moving comment threads filled with expressions of betrayal, particularly from users who chose Anthropic specifically because the company markets itself on the principles of safety, transparency, and responsible AI development.
One cybersecurity commentator online put it plainly: “Intent doesn’t matter if users don’t know what you’re doing to their system.” Even legitimate technical functions become a problem the moment a company stops being upfront about them.
Anthropic stays silent as trust takes the hit
As of the time of publication, Anthropic has not released any formal statement addressing the specific allegations circulating on social media. The company has neither confirmed nor denied the existence of the native messaging bridge or the pre-authorized background extensions.
Several users who wrote about the issue online report monitoring their own system files before and after installing Claude Desktop, with a number of them claiming to have found evidence consistent with what the allegations describe.
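The kind of before-and-after check those users describe can be approximated with a few lines of shell. This is a minimal sketch for macOS, assuming each browser's documented default native-messaging directory; the directory list is an assumption about where such manifests would land, not a claim about what Anthropic's installer writes.

```shell
#!/bin/sh
# List native-messaging host manifests in common per-browser
# locations on macOS. Running this before and after installing an
# app, then diffing the output, shows what manifests were added.
list_native_hosts() {
  for dir in \
    "$HOME/Library/Application Support/Google/Chrome/NativeMessagingHosts" \
    "$HOME/Library/Application Support/Mozilla/NativeMessagingHosts" \
    "$HOME/Library/Application Support/Microsoft Edge/NativeMessagingHosts"
  do
    printf 'checking: %s\n' "$dir"
    # Only list the directory if the browser has it; a missing
    # directory simply means nothing is registered there.
    [ -d "$dir" ] && ls -1 "$dir"
  done
  return 0
}

list_native_hosts
```

Diffing two saved runs (`diff before.txt after.txt`) would surface any manifest added by an installer, which is the sort of evidence these users say they collected.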
The silence matters because Anthropic has spent considerable effort positioning itself as the responsible player in the generative AI race: the company that puts safety and ethical development ahead of everything else. Any confirmation that its flagship desktop application quietly rewrites system configurations without adequate user disclosure would cut sharply against that image.
In an industry where public trust is already fragile, even unconfirmed allegations of undisclosed system modifications carry real weight.