The nightmare version of algorithmic management is not just that a platform tracks workers. It is that the platform gets good enough at predicting desperation to know who will accept less. Once a model can estimate urgency, location dependence, schedule inflexibility, response speed, device stability, or willingness to keep accepting bad terms, it can start treating some workers as easier to squeeze than others.
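To make that mechanical rather than abstract, here is a deliberately toy sketch of what "estimating desperation" could look like. Every field name, every weight, and the score itself are invented for illustration, not drawn from any real platform; the point is only that the signals listed above reduce to ordinary features a simple model can score.

```python
import math
from dataclasses import dataclass

@dataclass
class WorkerSignals:
    # Hypothetical stand-ins for the signal types named above,
    # each normalized to [0, 1].
    urgency: float                 # e.g. time since last completed job
    location_dependence: float     # few alternative platforms nearby
    schedule_inflexibility: float  # narrow windows of availability
    response_speed: float          # how quickly offers get accepted
    bad_terms_acceptance: float    # share of below-average offers taken

def squeeze_score(w: WorkerSignals) -> float:
    """Toy logistic score for 'willingness to accept worse terms.'

    The weights are invented. A real system would fit them from
    behavioral data, which is exactly the concern.
    """
    z = (1.2 * w.urgency
         + 0.8 * w.location_dependence
         + 0.6 * w.schedule_inflexibility
         + 0.5 * w.response_speed
         + 1.5 * w.bad_terms_acceptance
         - 2.0)  # invented bias term
    return 1.0 / (1.0 + math.exp(-z))
```

Nothing in that sketch is exotic. It is a five-feature logistic model, which is why the technical barrier to this kind of inference should be assumed to be low.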
That logic is not far from the surveillance-pricing pattern regulators are already examining in consumer markets. The FTC’s 2024 surveillance-pricing inquiry noted that firms can draw on location, browsing history, shopping history, demographics, and credit information to shape the offers and charges individuals see. Move that logic into labor platforms and the downstream question becomes obvious: if a system can infer who feels stuck, what stops it from serving that person lower-quality opportunities, harsher conditions, or thinner margins?
It is important to be precise here. The point is not whether any single platform demonstrably runs a “desperation score.” The point is that the incentives already exist. When optimization systems are rewarded for margin, fulfillment, or acceptance rate, they have reasons to sort people by how much pressure they appear able to absorb. Behavioral signals make that sorting easier.
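Here is a sketch of how that sorting can fall out of the objective itself, with no explicit desperation variable anywhere. It reuses the hypothetical `WorkerSignals` and `squeeze_score` from above as a stand-in acceptance model; everything here, including the thresholds and step size, is invented for illustration.

```python
def accept_probability(w: WorkerSignals, pay: float, base_pay: float) -> float:
    """Hypothetical acceptance model: probability rises with pay relative
    to baseline, and with the worker's inferred tolerance for bad terms."""
    z = 6.0 * (pay / base_pay - 1.0) + 4.0 * squeeze_score(w)
    return 1.0 / (1.0 + math.exp(-z))

def cheapest_acceptable_offer(w: WorkerSignals, base_pay: float,
                              floor_frac: float = 0.5,
                              min_accept_prob: float = 0.9) -> float:
    """Walk pay downward from the baseline and return the lowest offer
    the model still expects this worker to accept.

    The optimizer is rewarded only for margin and acceptance rate, yet
    it converges on a personalized minimum: the worker the model reads
    as most constrained gets the thinnest offer.
    """
    pay = base_pay
    step = 0.01 * base_pay
    while pay - step >= floor_frac * base_pay:
        if accept_probability(w, pay - step, base_pay) < min_accept_prob:
            break
        pay -= step
    return pay
```

Note what is absent: the word “desperation” appears nowhere in the objective. The squeeze emerges from maximizing margin subject to a predicted-acceptance constraint.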
This matters because labor platforms are already full of asymmetry. The worker sees the offer. The platform sees the history, the context, the confidence score, and the model outputs driving the ranking logic. That means exploitation can hide inside allocation, not just wages. A worker may never be told that the system thinks they are likely to accept less. They only live inside the consequences of that inference.
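The same asymmetry can be sketched. In the toy allocation below, pay is identical across jobs, but the jobs with the worst conditions are routed to the workers the model reads as most able to absorb them, and the worker-facing payload omits the inference entirely. The `quality` field and both functions are hypothetical, invented to make the point concrete.

```python
def allocate(workers: list[WorkerSignals],
             jobs: list[dict]) -> list[tuple[WorkerSignals, dict]]:
    """Toy allocation: exploitation lives in the ranking, not the wage.

    Workers sorted least-squeezable first are paired with jobs sorted
    best-quality first, so the most constrained workers absorb the
    worst conditions at the same nominal pay.
    """
    by_tolerance = sorted(workers, key=squeeze_score)  # least squeezable first
    by_quality = sorted(jobs, key=lambda j: j["quality"], reverse=True)
    return list(zip(by_tolerance, by_quality))

def worker_view(job: dict) -> dict:
    """The payload a worker actually sees: the offer, nothing else.

    The squeeze score, the ranking position, and the inference that
    produced both never leave the platform side.
    """
    return {"pay": job["pay"], "pickup": job["pickup"]}
```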
Cloak is starting at checkout, not at gig-work dispatch. But the broader principle is the same: behavioral data should not quietly become a machine for extracting more from the people least able to resist. A privacy defense layer matters because once systems can read constraint and urgency well enough, silence becomes its own kind of vulnerability.