If someone says shoppers are basically naked online, that can sound dramatic until you line up the evidence. The point is not that every site is secretly changing every price for every person. The point is that the modern web collects, profiles, ranks, and pressures people at a scale most normal users never see clearly.
Start with trust. Pew Research Center reported that 81% of Americans said the potential risks of companies collecting their data outweigh the benefits, 79% said they were concerned about how companies use the data they collect, and 72% said most of what they do online or on their cellphone is being tracked by companies. That is not fringe paranoia. That is mainstream distrust.
The tracking layer is also enormous. Princeton's large-scale web measurement found Google-owned trackers on about 75% of the top one million websites and Facebook-owned trackers on about 25%. In other words, a person does not need to wander into a shady corner of the internet to get tracked. Ordinary browsing is enough.
Fingerprinting makes that worse because it survives the little rituals people already use to protect themselves. EFF's classic browser fingerprinting study found that 83.6% of browsers carried a unique fingerprint from measurable attributes alone, and with Flash or Java enabled that figure rose to 94.2%. The exact mix of signals changes over time, but the lesson does not: deleting cookies is not the same thing as becoming hard to recognize.
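To make the mechanism concrete, here is a minimal sketch in TypeScript of how a page might derive a stable identifier from attributes it can read without cookies or permission prompts. The attribute list and the hashing step are illustrative assumptions, not any vendor's actual script; real fingerprinting libraries combine far more signals such as canvas, fonts, audio, and WebGL.

```ts
// Illustrative sketch only: a simplified fingerprint built from a handful of
// attributes every page can read without cookies or permission prompts.
// Real fingerprinting scripts combine many more signals than this.

async function sketchFingerprint(): Promise<string> {
  // Signals that persist across cookie clearing and private-browsing sessions.
  const signals = [
    navigator.userAgent,                                      // browser and OS build string
    navigator.language,                                       // preferred language
    String(navigator.hardwareConcurrency),                    // logical CPU cores
    `${screen.width}x${screen.height}x${screen.colorDepth}`,  // display geometry
    Intl.DateTimeFormat().resolvedOptions().timeZone,         // time zone name
    String(new Date().getTimezoneOffset()),                   // UTC offset in minutes
  ].join("|");

  // Hash the combined string so the identifier looks innocuous in network logs.
  const bytes = new TextEncoder().encode(signals);
  const digest = await crypto.subtle.digest("SHA-256", bytes);
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}

// The same device tends to produce the same hash tomorrow, cookies or not.
sketchFingerprint().then((id) => console.log("fingerprint:", id));
```

The point of the sketch is the asymmetry: nothing here asks for consent, nothing is stored on the device, and yet the output is stable enough to re-identify many browsers across visits.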
More recent evidence suggests the visible web can understate the real problem. A 2025 real-user study reported that automated crawls missed 45% of the fingerprinting websites users actually encountered in normal browsing. If lab pipelines miss that much, then the tracking people feel in real life can be substantially heavier than older measurement stories suggest.
The data does not just sit in one place either. The Irish Council for Civil Liberties reported that the average person's data was broadcast in real-time bidding auctions 747 times per day in the United States and 376 times per day in Europe. That is the hidden machinery behind a lot of ordinary browsing: information about the session moves through a market of intermediaries most people have never heard of.
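For readers who want to picture what "broadcast" means here, the sketch below shows the rough shape of the data that can ride along in a single bid request. The field names are assumptions loosely modeled on OpenRTB-style requests and simplified for illustration; they are not taken from any particular exchange's spec.

```ts
// Illustrative, simplified shape of the data that can travel in one
// real-time bidding request. Field names and values are assumptions
// for illustration, loosely modeled on OpenRTB-style requests.

interface SketchBidRequest {
  id: string;                                  // auction id, new for every ad slot loaded
  site: { domain: string; page: string };      // where the person is browsing right now
  device: {
    ua: string;                                // user agent string
    ip: string;                                // sometimes truncated, sometimes not
    geo: { lat?: number; lon?: number; country: string };
    ifa?: string;                              // advertising identifier on mobile
  };
  user: {
    id?: string;                               // bidder-specific identifier for this person
    interests?: string[];                      // inferred audience segments
  };
}

// One page view can trigger dozens of these, each fanned out to many bidders at once.
const example: SketchBidRequest = {
  id: "auction-8f31",
  site: { domain: "example-news.com", page: "/health/article" },
  device: {
    ua: "Mozilla/5.0 ...",
    ip: "203.0.113.0",
    geo: { country: "US" },
  },
  user: { id: "abc-123", interests: ["health", "insurance-shopper"] },
};

console.log(JSON.stringify(example, null, 2));
```

Every recipient of a request like this keeps whatever it receives, whether or not it wins the auction, which is how one page view can feed hundreds of intermediaries.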
Regulators are now treating the downstream consequences as real. In 2024 the FTC ordered eight companies to provide information about surveillance pricing systems that can use location, demographics, browsing history, shopping history, and other personal data to shape how people are treated. The California Attorney General also concluded that Sephora's use of third-party trackers amounted to selling personal data and reached a $1.2 million settlement. Those are not abstract ethics debates. They are official signals that normal tracking infrastructure can become a consumer-protection problem.
Some of the most concrete cases are about sensitive data leaving the page entirely. The FTC said BetterHelp shared email addresses, IP addresses, and health questionnaire information with advertising platforms and announced a $7.8 million settlement. The agency also moved against Outlogic, formerly X-Mode, and sued Kochava over the sale of sensitive location data tied to places like clinics, shelters, and places of worship. Once ordinary device and location signals enter those pipelines, the user is no longer just being measured for convenience.
People have been noticing the shopping side of this for years. The Wall Street Journal reported that Orbitz learned Mac users tended to spend more and showed them pricier hotels first. It also reported that Staples displayed different online prices based on how close someone lived to rival stores. The New York Times Magazine showed how Target used shopping patterns to infer pregnancy before families had disclosed it. Even when these stories are older, they capture something users still feel: browsing behavior can quietly change how a system sees you and what it decides to show you.
That is also why incognito mode myths and cookie-clearing rituals matter. Travel reporters keep revisiting whether repeat searches raise prices precisely because users keep noticing suspicious differences and reaching for self-protection tactics. The distrust is real even when every mechanism is not directly provable from the outside. People act like the web is watching because, at scale, it is.
That uncertainty is also the most honest way to talk about Cloak. We may not be able to inspect every black-box algorithm directly. But we can see the tracking inputs, data leakage, fingerprinting pressure, and checkout manipulation around those systems. A useful privacy product should start there: show what was blocked, show what was reduced, and show when a checkout warning was raised.
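As a rough illustration of that framing, here is a hypothetical report shape a product like Cloak could surface at the end of a session. Every name in it is invented for this sketch; none of it is an actual Cloak interface.

```ts
// Hypothetical sketch of a per-session report a privacy product could show.
// All names and fields are invented for illustration, not an actual Cloak API.

interface SessionPrivacyReport {
  trackersBlocked: { domain: string; count: number }[];  // third-party requests stopped
  fingerprintSurfaceReduced: string[];                    // signals normalized or withheld
  checkoutWarnings: {
    page: string;
    reason: string;                                       // e.g. price changed across repeat visits
  }[];
}

const report: SessionPrivacyReport = {
  trackersBlocked: [
    { domain: "tracker.example-analytics.com", count: 14 },
    { domain: "pixel.example-social.com", count: 6 },
  ],
  fingerprintSurfaceReduced: ["canvas readout", "precise time zone", "font list"],
  checkoutWarnings: [
    { page: "example-travel.com/checkout", reason: "fare rose 12% across repeat visits" },
  ],
};

console.log(JSON.stringify(report, null, 2));
```

The design choice the sketch argues for is visibility over promises: concrete counts and named signals the user can check, rather than a vague assurance that tracking was handled.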
That is why this evidence matters for customers. The point is not to scare people into thinking every checkout page is a crime scene. The point is to show that the internet already gives platforms too much room to observe, infer, and push. Cloak exists to give some of that room back to the person on the other side of the screen.