I’ve always found radar speed signs to be interesting indicators of our relationship with technology, and I think the way we relate to them can tell us something about privacy.
I’m talking about the signs that tell you “this is the speed limit / this is your current speed.” These devices, which seem increasingly common, do not identify you or your vehicle; they keep no record of your speed, and they don’t report it to anyone. They are mere mirrors, reflecting our own behavior back to us. And yet they have consistently been found to be effective in getting drivers to slow down. I find this intriguing; my guess is that they work by sending drivers a message to the effect of, “you are speeding, and I know it.”
Of course the “I” in this case is a computer. Does that matter? Here we come to the increasingly significant question of whether computers, or only humans, can invade our privacy. I argued in a 2012 piece that computers can very much invade our privacy, and that the principal reason is that, in the end, what we truly care about are consequences. Computers, like humans, can bring consequences down on our heads when they are permitted to scrutinize our lives—for example, when an algorithm flags a person as “suspicious.”
At the same time, because it’s consequences that people fear rather than scrutiny by some abstract “sentience,” people learn to fear computers whose eavesdropping can reverberate in their lives later, and not to fear computers or people whose eavesdropping can’t. That is why, as I pointed out, humans often act brazenly in front of anonymous urban crowds, intimates, and servants or others over whom they hold power: in those circumstances they are relieved from worrying that their words or behavior will come back to hurt them later.
That line of thinking has always led me to expect that the effect of radar speed signs will diminish over time, as their lack of consequences gradually becomes apparent to us and sinks in at an intuitive level.
In some ways the signs could be compared to ineffective disciplinarians. Students often live in fear of a strict teacher, and are exceedingly cautious about what they do and say in that teacher’s presence. But a lax disciplinarian, whose words are not backed up by consistent punitive action, will soon be flouted and ignored, however tough he or she seems at first and however much he or she storms and yells. The differences between how we react to humans and computers are not so great.
I don’t actually know as an empirical matter whether radar speed signs become less effective over time. But even if they do not, my overall point could still be correct: it’s possible that they are effective not because people worry about the signs themselves, but simply because they interrupt the solipsism of driving and remind us that our speed is easily apparent to others, and thus of the risk that a police officer or disapproving neighbor will see us speeding. It’s also possible that people worry that these signs are, in fact, identifying and reporting their behavior; anxiety about objects storing and reporting data about us is generally not paranoid these days. Looking online, I see that some sign manufacturers do offer the optional capability of maintaining statistics on vehicles’ speeds. As far as I can tell, those speed measurements are stored in the aggregate and are not personally identifiable, though in theory the speed readings they collect, which include time stamps, could be correlated with license-plate recognition or video data and tied to a particular car.
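To make concrete why even “aggregate” logs with time stamps aren’t automatically anonymous, here is a minimal sketch of how such a correlation might work. Everything in it is hypothetical: the field names, the data, and the five-second matching window are my own assumptions for illustration, not anything a real sign vendor does.

```python
from datetime import datetime, timedelta

# Hypothetical data throughout -- nothing here comes from a real vendor.
# The sign's "aggregate" log: per-vehicle readings with time stamps but no identity.
speed_log = [
    (datetime(2024, 5, 1, 8, 3, 12), 41),
    (datetime(2024, 5, 1, 8, 7, 55), 28),
]

# A separate, hypothetical license-plate reader down the block.
plate_log = [
    (datetime(2024, 5, 1, 8, 3, 14), "ABC1234"),
    (datetime(2024, 5, 1, 8, 9, 2), "XYZ9876"),
]

# Assumed matching window: a car passes both sensors within a few seconds.
TOLERANCE = timedelta(seconds=5)

def correlate(speeds, plates, tol):
    """Pair each speed reading with any plate read whose time stamp is close."""
    matches = []
    for t_speed, mph in speeds:
        for t_plate, plate in plates:
            if abs(t_speed - t_plate) <= tol:
                matches.append((t_speed, plate, mph))
    return matches

for when, plate, mph in correlate(speed_log, plate_log, TOLERANCE):
    print(f"{when:%H:%M:%S}  {plate}  {mph} mph")  # -> 08:03:12  ABC1234  41 mph
```

On an empty road the join is nearly unambiguous; in heavy traffic it degrades quickly, which is exactly why the presence or absence of time stamps matters so much to whether “aggregate” data stays anonymous.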
There’s also a reasonable argument that the effectiveness of the signs will not wear off. In an interesting 2010 article, Ryan Calo looks at how anthropomorphic or “human-like” technologies have been found to affect us in many of the same ways that actual people do. The potential chilling effects of monitoring machines, he writes, are well established in “an extensive literature in communications and psychology evincing our hard-wired reaction” to human-like technologies. In one oft-cited example, merely hanging a poster with human eyes on it was found to significantly change people’s behavior. And of relevance to my radar speed sign example, the studies find that, as Calo puts it:
Technology need not itself be anthropomorphic, in the sense of appearing human, to trigger social inhibitions; technologies commonly understood to stand in for people as a remote proxy tend to have the same effect.
He cites the example of security cameras, which do not resemble humans but were found to spark the same effect.
Overall, this literature reinforces my larger argument that when it comes to privacy, the human/machine distinction does not matter.
But it also suggests that I might be wrong in predicting that the chilling effects of technologies like radar speed signs will wear off over time. Much of the science that Calo describes portrays the chilling effects of anthropomorphic technologies as involuntary, subconscious, and hard-wired, with the implication that they will be constant. Calo worries that because of the hard-wired nature of these reactions, surrounding ourselves with “fake human” technologies could be a bad thing, pitching us into a self-conscious state of “constant psychological arousal” that comes from “the sensation of being observed and evaluated.” In short, making us feel like we’re always being watched. He writes,
One might argue that humans will adjust to social machines and software the way the rich adjust to servants, the poor adjust to living on top of many relatives, or the chronically ill get accustomed to pharmacists, nurses, orderlies, and doctors. We may, after a time, feel solitude among machines as we acclimate to their presence.
This claim is not as reassuring as it might seem; what evidence there is suggests that the effects do not wear off. Calo cites two studies to that effect, but the evidence is thin, and the question does not appear to have been studied extensively.
In any case, there’s a key distinction between machines whose monitoring has the potential to reverberate in our lives, such as a surveillance camera recording footage we don’t control, and “socially inert” monitoring, such as radar speed signs, that is not going to affect us later. The difference is between our animal-like response to a mere “mirror” and our ultimately more rational (if still often unconscious and intuitive) response to machines that pose genuine social threats to us. I think Calo is entirely right to worry about “social machines” where those machines are in fact truly social, but where they are not, we will adjust.