The data you unconsciously produce while going about your day is being stored up over time by one or more entities. And now it could be used against you in court.
Inspired by the recent case of Fitbit history being used in a personal injury claim, the excellent Kate Crawford discusses the rise of the algorithmic expert witness in The Atlantic. The important point is not just that wearable-tech records may become evidence in a court of law, but also that data – or rather the conclusions drawn by analytics companies – becomes a new kind of witness.
The decisions these companies make about what is "normal" and "healthy" depend on which research they're using. Who is defining what constitutes the "average" healthy person? This contextual information isn't generally visible: analytics companies aren't required to reveal which data sets they are using or how they are being analyzed.
The current lawsuit is an example of Fitbit data being used to support a plaintiff in an injury case, but wearables data could just as easily be used by insurers to deny disability claims, or by prosecutors seeking a rich source of self-incriminating evidence.
We should therefore not only be aware that wearable sensors are surveillance devices and that the records they produce can be subpoenaed, but also resist the idea that the data is a source of objective truth. Instead, we should see it as the opinion of the experts or companies interpreting that data.
Ultimately, the Fitbit case may be just one step in a much bigger shift toward a data-driven regime of "truth." Prioritizing data (irregular, unreliable data) over human reporting means putting power in the hands of an algorithm. These systems are imperfect, just as human judgments can be, and it will be increasingly important for people to be able to see behind the curtain rather than accept device data as irrefutable courtroom evidence. In the meantime, users should think of wearables as partial witnesses, ones that carry their own affordances and biases.
Read the article here.