A lot of shoppers think privacy harm starts only when a site changes the final price. In practice, ranking can matter earlier and more quietly. If one user sees the cheaper plan, calmer option, or lower-pressure listing first while another sees the expensive or aggressive version first, the treatment has already diverged before anyone notices a line-item difference.
That is why behavioral ranking deserves more scrutiny. In announcing its 2024 surveillance-pricing inquiry, the FTC said companies may use data such as browsing behavior, shopping history, demographics, and location to shape how consumers are treated. The agency did not reduce the concern to literal price tags alone. It explicitly included the possibility that the system changes what people are shown. Offer ranking sits squarely inside that concern.
The Orbitz example remains useful because it makes ranking legible. As The Wall Street Journal reported, Orbitz found Mac users tended to spend more and adjusted hotel results so those users were more likely to see pricier options first. That story is old, but it is still one of the cleanest demonstrations that a platform can steer attention through ordering rather than through an obvious surcharge. A shopper may feel like they are browsing neutrally when the page has already made assumptions about what tier to surface.
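To make the mechanism concrete, here is a minimal sketch of segment-based re-ranking. This is not Orbitz's actual code or algorithm; every name, weight, and field below is invented for illustration. The point is how small the lever is: no price changes, just a score nudge that reorders what appears first.

```python
# Hypothetical sketch of segment-based re-ranking. All names and
# numbers are invented for illustration; no real platform's code.

def rank_listings(listings, segment):
    """Order listings by relevance, nudged by an inferred spend segment."""
    def score(listing):
        base = listing["relevance"]
        # A "high spend" segment quietly boosts pricier options upward.
        if segment == "high_spend":
            base += 0.3 * listing["price_tier"]
        return base
    return sorted(listings, key=score, reverse=True)

listings = [
    {"name": "Budget Inn",   "relevance": 0.9, "price_tier": 1},
    {"name": "Luxury Suite", "relevance": 0.8, "price_tier": 3},
]

# Same inventory, same prices; only the ordering diverges by segment.
print([l["name"] for l in rank_listings(listings, "default")])
print([l["name"] for l in rank_listings(listings, "high_spend")])
```

Both users could compare any two listings and find identical prices. The divergence lives entirely in which option each of them sees first.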
The FTC's report A Look Behind the Screens adds the infrastructure story behind that kind of treatment. It describes extensive collection, combination, retention, and monetization practices across major digital services. Once a company has enough event history, audience segmentation, and inference machinery, ranking stops being a generic convenience feature. It becomes a delivery mechanism for the profile.
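The step from "collected history" to "profile-driven ranking" can be sketched too. The following toy is an assumption-laden illustration, not a description of any real system: it shows how accumulated events can collapse into a segment label that a ranker like the one above would consume.

```python
# Illustrative only: event history in, audience segment out.
# All field names and the threshold are invented for this sketch.

def infer_segment(events):
    """Toy inference: count signals of premium interest in browsing history."""
    premium_views = sum(1 for e in events if e["page"].startswith("premium"))
    return "high_spend" if premium_views >= 2 else "default"

history = [
    {"page": "premium-suite"},
    {"page": "premium-spa"},
    {"page": "home"},
]
# The resulting label is what gets fed downstream into ranking.
print(infer_segment(history))
```

Once this pipe exists, the ordering of results is no longer a neutral convenience; it is the profile, delivered.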
This is why ranking deserves its own privacy language. Most people know to ask whether a site tracked them. Fewer ask whether the tracking changed the order of offers, bundles, financing options, or upgrade prompts they encountered next. Yet ordering is one of the easiest ways to tilt a decision without making the manipulation feel dramatic.
Cloak's role is not to claim it can inspect every hidden ranking model. The honest claim is narrower and more defensible: it can help reduce the signals those models feed on and make the treatment context more visible. If the page is collecting behavior in order to decide what you should see first, that is already a privacy and autonomy problem worth naming clearly.