Brookings Institution report on cyborg law and policy.

In June 2014, the Supreme Court handed down its decision in Riley v. California, in which the justices unanimously ruled that police officers may not, without a warrant, search the data on a cell phone seized during an arrest. Writing for eight justices, Chief Justice John Roberts declared that “modern cell phones . . . are now such a pervasive and insistent part of daily life that the proverbial visitor from Mars might conclude they were an important feature of human anatomy.”1

This may be the first time the Supreme Court has explicitly contemplated the cyborg in case law—admittedly as a kind of metaphor. But the idea that the law will have to accommodate the integration of technology into the human being has actually been kicking around for a while.

Speaking at the Brookings Institution in 2011 at an event on the future of the Constitution in the face of technological change, Columbia Law Professor Tim Wu mused that “we’re talking about something different than we realize.” Because our cell phones are not attached to us, not embedded in us, Wu argued, we are missing the magnitude of the questions we contemplate as we make law and policy regulating human interactions with these ubiquitous machines that mediate so much of our lives. We are, in fact, he argued, reaching “the very beginnings of [a] sort of understanding [of] cyborg law, that is to say the law of augmented humans.”


The report is interesting and thoughtful. It asks exactly the kinds of questions we need to consider as a society.

Read the full report here


Since we are cited as an example twice, we would like to briefly clarify our views:

(1) The report states

How exactly we will mediate between the rights of cyborgs and the rights of anti-cyborgs remains to be seen—but we are already seeing some basic principles emerge. For example, the proposition that individuals should have special rights with respect to the use of therapeutic or restorative technologies appears to be so accepted that it has prompted a kind of intuitive carve-out for those who otherwise oppose wearable and similar technologies. Such is the case with Stop the Cyborgs, an organization that emerged directly in response to the public adoption of “wearable” technologies such as Google Glass. On its website, the group promotes “Google Glass ban signs” for owners of restaurants, bars and cafes to download and encourages the creation of “surveillance-free” zones.76 Yet the site also expressly requests that those who choose to ban Google Glass and “similar devices” from their property to also respect the rights of those who rely on assistive devices.77

This is true (it refers to our section on ‘Disability rights & assistive devices‘), but our stance is a little more nuanced than implied in the report. The core issues are agency and coercion rather than some normative conception of what a human should be.

If the cyborg’s extended body includes components that they do not fully control, such as:

  • Remotely controlled devices
  • Closed source devices
  • Cloud services or data storage
  • Hackable or remotely updateable networked devices

Then the cyborg does not have control over their own extended body and is in a vulnerable position. They are potentially subject to external surveillance, coercion and control. Further, because they are carriers of external forces, they may subject those around them to external surveillance, coercion and risk. Because their extended body comprises networked technical systems, they cannot reassure people that they are not going to do X, because they do not control their own extended body. Thus cyborgisation forces us to replace behavioural requests (“please turn your camera off and leave it outside”) with the exclusion of particular extended bodies (“you cannot come in to the Tibetan dissidents’ meeting because your body is a camera which automatically syncs with Baidu“).

Cyborgisation threatens the idea of individual agency and responsibility.

Depending on the situation, it may be the cyborg themselves or those around them who suffer most. Further, depending on the situation, the degree of choice that the cyborg has about using the device may differ. In the case of assistive devices the user may have little choice: all available devices may subject the wearer and those around them to external monitoring, but because the consequences of not being able to see, hear, move or otherwise function are huge, they have little choice but to accept. Similarly, some people may be coerced by their insurers or employers into wearing or being implanted with a device. Then finally we have people like glassholes or lifeloggers who have freely chosen to wear a device.

Where the cyborg is subject to coercion, our sympathies are with them. If a technical part makes up your extended body then you should control it, not some corporation; but unfortunately the majority of medical and assistive devices are closed proprietary systems. Further, no one should be coerced by people, corporations or indeed wider economic or social forces into wearing or being implanted with any device. However, it is clear that many people unfortunately are.

In the case of glassholes and lifeloggers, our views are clear. The loss to these people of removing their device is minimal, and even if they have embedded a camera in their head – no one and no circumstance forced them to do it.


Cyborg Unplug – Plug to Unplug


Cyborg Unplug is a wireless anti-surveillance system for the home and workplace. ‘Plug to Unplug’, it detects and kicks devices known to pose a risk to personal privacy from your local wireless network, breaking uploads and streams. Detected devices currently include: Google Glass, Dropcam, small drones/copters, wireless ‘spy’ microphones and various other network-dependent surveillance devices.
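The detection step described above can be sketched in a few lines. The sketch below is our own illustration, not Cyborg Unplug’s actual code: it matches the OUI prefix (the first three bytes of a MAC address, which identify the hardware vendor) of each device seen on a network against a watchlist. The prefixes and device names are invented placeholders, and the ‘kicking’ (de-authentication) step is deliberately omitted.

```python
# Hypothetical sketch of MAC-prefix device detection, in the spirit of
# Cyborg Unplug. The OUI prefixes and device labels below are illustrative
# assumptions, not a real vendor list.

# Illustrative OUI (first three bytes of a MAC address) -> device type.
WATCHLIST = {
    "f8:8f:ca": "Google Glass (illustrative prefix)",
    "30:8c:fb": "Dropcam (illustrative prefix)",
}

def normalise(mac):
    """Lower-case a MAC address and standardise the separator to ':'."""
    return mac.lower().replace("-", ":")

def detect(mac):
    """Return the watchlisted device type for this MAC, or None."""
    oui = ":".join(normalise(mac).split(":")[:3])
    return WATCHLIST.get(oui)

def scan(macs):
    """Yield (mac, device type) for every watchlisted device seen."""
    for mac in macs:
        hit = detect(mac)
        if hit:
            yield mac, hit

if __name__ == "__main__":
    # MACs as they might appear in a wireless survey.
    seen = ["F8-8F-CA-12-34-56", "aa:bb:cc:00:11:22"]
    for mac, kind in scan(seen):
        print(f"ALERT: {kind} at {mac}")
```

A real implementation would obtain the list of MAC addresses by passively monitoring wireless traffic and would then de-authenticate matching devices from the local network; both of those steps are hardware- and platform-specific and are out of scope here.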

Cyborg Unplug will be available for pre-order September 30, 2014. Subscribe to ensure you are kept up-to-date with the launch and to receive other low-volume information about the project.


Great blog post by Mark Carrigan.

He starts off with his personal experience of using a tracking device:

Earlier this week I finally bought the Jawbone Up24 after weeks of deliberation. I’d got bored with the Nike Fuel Band, losing interest in the opaque ‘fuel points’ measurement and increasingly finding it to be an unwelcome presence on my wrist. I’d also been ever more aware of how weird my sleep patterns have become in the past couple of years, cycling between rising early and staying up late, with little discernible rhyme or reason. The idea of tracking my sleep in a reasonably accurate fashion, using degree of bodily movement as a cypher for the depth of sleep, appealed to me on a reflexive level.

This experience of being nudged by wearable tech makes him consider how intrusive wearable tech could be if it were made mandatory and used to enforce behaviour.

I set the ‘idle alert’. I did so because I found it an appealing idea. It was an expression of my own agency. But it left me with a sense of quite how intrusive and aggressive this technology could be if it were ever mandated.
How hard is it to imagine a situation where Amazon factory workers are expected to wear similar bands, programmed to issue a vibrating warning after 15 minutes of idleness and to alert the supervisor if the worker is still idle a few minutes later? Is it at all challenging to imagine a comparable band with an RFID chip being used to track and sanction a call centre operator who spends too long in a toilet?
The most interesting point is on the conditionality of welfare as a method of diffusion of these control techniques. Governments use tags to track offenders, and sobriety tags are being trialled in London to enforce abstinence on people banned from drinking. With tight budgets, a fondness for technological solutions, and political rhetoric which divides recipients of welfare into the deserving and undeserving, how long before these techniques move from ‘offenders’ to the ‘dependent’, and welfare payments and healthcare are made conditional on ‘good behaviour’ – enforced by a wearable monitoring system?
How hard is it to imagine a situation where a Conservative government, eager to separate ‘strivers’ from ‘skivers’ demands that welfare recipients submit to monitoring of their alcohol and nicotine intake?
How hard is it to imagine a situation where recipients of weight related interventions on the NHS are made to wear activity tracking bands with the threat of withdrawn rights to healthcare in the case of unhealthy eating or sedentary lifestyles?
Consumer stuff like Fitbit & Glass is just the first wave. It normalises wearable tech and introduces us gently to the idea of being monitored and nudged. It’s fun, it’s cool, it makes us ‘better’. The next wave is being coerced or forced by employers, insurers, carers or government into wearing devices that enforce ‘correct’ behaviour.
You can read the whole blog post here

Special edition of International Journal of Communication on critical approaches to #bigdata

Big Data| Critiquing Big Data: Politics, Ethics, Epistemology | Special Section of the International Journal of Communication.

This special section of the International Journal of Communication brings together critical accounts of big data as theory, practice, archive, myth, and rhetorical move. The essays collected here interrogate rather than accept the realities conjured through our political, economic, and cultural imaginings of big data. From neoliberal economic logics shaping the deployment of big data to the cultural precursors that underlie data mining techniques, the issue covers both the macro and micro contexts. We have drawn together researchers from communication, anthropology, geography, information science, sociology, and critical media studies to, among other things, examine the political and epistemological ramifications of big data for a range of audiences. Articles in this collection also interrogate the ethics of big data use and critically consider who gets access to big data and how access (or lack of it) matters to the issues of class, race, gender, sexuality, and geography.


The issue can be found here (Articles are Open access)


Surveillance in the Workplace: an overview of issues of privacy, monitoring, and ethics

Surveillance in the Workplace
an overview of issues of privacy, monitoring, and ethics

MICHAEL BLAKEMORE / Briefing Paper for GMB September 2005

Professor Michael Blakemore



1   Surveillance is nothing new, but the nature of surveillance is changing
2   Surveillance pre-Internet did not require consent, but it was selective, costly, and not pervasive
3   Over-reliance on technological surveillance can be problematical
4   Function-creep has always been a characteristic of surveillant technologies
5   Surveillance in many circumstances is a positive process, but not without problems
6   Surveillance of employees focused in the past mainly on physical removal of property
7   Those using surveillance technologies often rely on simple linear arguments of good and bad
8   Propagate a powerful myth and embed it into the ‘need’ for pervasive surveillance
9   Surveillance in the retail sector
9.1. Routine surveillance in a retail situation is also promoted as a form of employee protection – whether it realistically protects employees, or at least helps in the detection of criminals
9.2. How do I know whether I am being surveilled?
9.3. Am I justified in being worried by surveillance?
9.4. Areas of surveillance
10  Pervasive computing does not necessarily lead to positive benefits
11  Call Centres
12  Legislative reactions
13  Health and Safety, Risk Assessment
14  Consumer choice can be influenced by ‘social sorting’
15  The problem is not just the technologies, but may be more one of consent
16  The demise of the implied social contract?
17  Sources of Imagery
18  Sources


The border is everywhere: The history and future of biometric security

The birth of biometric security

We are currently witnessing a rapid rise in biometric security. Borders are apparently becoming ‘smart’; passports are becoming e-passports, and when you set out on your travels your data double is already at your destination. Access to airports and even continents will increasingly be determined not by your national citizenship but by the security of your identity. Biometric security has received little anthropological attention despite historical associations with the discipline. Here I wish to outline a brief genealogy of biometric security in order to argue that, beyond the apparent newness of the technology, key biometric technologies owe their origins to 19th-century deployments, and then, as now, they may be understood as a form of bio-governmentality in which the security of identity opens possibilities for population control.

Maguire, Mark (2009) The birth of biometric security. Anthropology Today, 25 (2). pp. 9-14. ISSN 0268-540X


Full paper here, BBC Radio 4 interview here


Identity dominance: The U.S. Military’s Biometric War in Afghanistan

For years the U.S. military has been waging a biometric war in Afghanistan, working to unravel the insurgent networks operating throughout the country by collecting the personal identifiers of large portions of the population.  A restricted U.S. Army guide on the use of biometrics in Afghanistan obtained by Public Intelligence provides an inside look at this ongoing battle to identify the Afghan people.


Article here


Face recognition in retail, transport & buildings


When a person in your database steps into one of your stores, you are sent an email, text, or SMS alert that includes their picture and all biographical information of the known individual so you can take immediate and appropriate action.

  • Receive descriptive alerts when pre-identified shoplifters walk through any door at any store.
  • Get alerts when known litigious individuals enter any of your locations.
  • Build a database of good customers, recognize them when they come through the door, and make them feel more welcome.
  • Enhance treatment of frequent travelers. Build a database of frequent travelers to ensure they are properly recognized and greeted.


  • Spot parties from watch lists and alert authorities worldwide. Catch individuals on local, national and international watch lists.
  • Control employee access. Receive alerts instantly when employees enter areas of your facility for which they are not authorized.
  • Enhance treatment of frequent travelers. Build a database of frequent travelers to ensure they are properly recognized and greeted.


  • Receive descriptive alerts when anyone walks into your building who is not wanted there.
  • Flag individuals who have caused problems in the past.
  • Be alerted when known litigious individuals enter any of your properties.
  • Cooperate with law enforcement. Load their criminal data into your  database so you can notify them if one enters your building.
  • Monitor the movement of people in your facility to ensure that no one is in an area in which they are not authorized to be.


Source here


An open letter to Glass explorers attending the Canberra #GLASSMEETUPS event

This is an open letter addressed to Glass explorers attending the Glass Meetups event that will occur at the University of Canberra, INSPIRE Centre on 12 May, 2014.

Date: 12th May 2014
Time: 5:30 PM till 7:00PM AEST Australia
Location: University of Canberra, INSPIRE Centre, Building 25 Pantowora Street, Bruce ACT 2167 Australia

Tickets are sold out, but you can participate (appropriately enough) via a Google Hangout.


A copy of this letter can be found on the Inspire centre’s website here


Dear Glass Explorers,

Greetings from ‘Stop the Cyborgs’, which you may know from our ban signs and possibly a few media articles. We are mainly technology people, so we are definitely not ‘anti-tech’. We are not calling for a complete government ban on wearable tech like Glass. Nor do we believe that you shouldn’t wear it at all. Rather, we want to help define sensible norms around where people do or don’t wear devices; encourage individual people to think about the social impact of new technologies; and discourage the normalisation of surveillance.

Even though Google Glass is still a limited prototype, it has generated excitement and controversy in equal measure. Whether you love it or loathe it, one thing we can agree on is that it is a symbolically important device that represents a change in our relationship with technology. If the trajectory that Glass represents is followed, technology will become part of us, mediating every human decision and interaction, for good or ill.

Glass and other wearables (and, in future, implants) are designed with the intention of making technology both invisible and omnipresent by integrating it closely with the body. The integration of corporate cloud services, technical devices, and the human body has three major implications.

  1. Non-users cannot tell what a user’s device is doing.

Wearing a POV camera like Narrative Clip, Glass or Life Logger is not equivalent to having a smart phone in your pocket – and picking it up and using it. It is equivalent to constantly holding up your phone and pointing it at people. The non-user has no real idea what the user’s device is doing, and this leads to mistrust, unease and, unfortunately, in some circumstances, confrontation.

  2. Users can feel devices are part of their extended body.

With a traditional device like a phone, non-users could just make a behavioural request. If someone asked you to stop pointing your smart phone at them, would you be offended? Probably not. However, with some wearable devices like Glass this leads to confrontation, because there is a feeling that wearable devices form part of the extended self. Wearable devices are not temporary tools but rather deeply personal and individual.

  3. Individual becomes part of the platform.

The individual becomes a node in the network. They are personally monitored (for example, location or activity data). They gather data about the world on behalf of the system (social sharing, rating, location, proximity to others). They are given suggestions and advice by the system, such as recommendations, ratings, nudges or incentives.

  • A Google Now-like service provides you with suggestions.
  • A NameTag-like service gives you a TripAdvisor-like rating of the person you are looking at.
  • A Tinder-like service tells you who you should talk to.
  • A Fitbit-like service rewards you for certain ‘good’ behaviours.

This applies to the web and smart phones as well as wearables. However, as these systems “get out of the way” and are increasingly integrated with ourselves, they become a kind of outsourced unconscious that we become simultaneously more dependent on and less able to scrutinize.

This corporately controlled collective mind allows companies to exert a powerful and invisible influence. The algorithms seem objective – we trust them – but social assumptions, cultural values, expected norms and power structures are hidden and enforced by code.


Most of the discussion at #glassmeetup is naturally going to focus on Google Glass. However, we should not fixate on the specific technical features of generation one of one particular device. Yes, the battery life sucks, they don’t constantly record, and face recognition is currently banned; but battery life will improve, Life Logger constantly records, and Google has not committed to a permanent ban on face recognition.

Rather, we need to take a wider view and sensibly discuss the social and political implications of current and potential technologies. Technology has a powerful influence on society – yet there is typically a view that it is morally neutral, apolitical and inevitable. Nothing could be further from the truth.

This discussion should not take place in a middle class tech bubble. Not everyone is a model citizen with a perfect credit record, positive social media profile and good health. Not everyone will have an equal chance in this gamified world we are constructing. Rather we need to consider how new technologies will impact marginalised groups and individuals.

Further, we need to consider how technologies affect power structures. Do certain organisations or systems gain an unprecedented ability to monitor and influence human behaviour? Whither personal freedom when our every action is monitored and judged by social media and by powerful, un-debatable automated systems?

Some say: “It is too late. We must accept that everything will inevitably be recorded. The only response is transparency and mutual snooping.” This argument is initially seductive, but it fundamentally misunderstands what modern surveillance is.

There is a place for transparency but it is systems that need to be transparent not people. Mutual snooping against individuals magnifies power disparities and perpetuates inequalities.

Modern surveillance is not primarily about an evil big brother figure watching from above. Modern surveillance is about classifying and sorting people. It is about ‘statistically rational’ automated discrimination, nudging and enforcement of norms. Modern abuse of power is not a cop beating someone up. It is about your life chances being determined by some correlation, and that doesn’t show up on video.

Wishing you all the best for your #glassmeetup in Canberra.

Stop the Cyborgs


Annex: Defining the extended body

In a sense users of some wearable devices are claiming a cyborg identity. They are claiming that their self encompasses both their biological body and a technical device.

It seems reasonable to claim that one’s extended self can encompass technical parts. If a person has a hearing aid, a pacemaker or a prosthetic limb, then it would be unreasonable to ask them to remove their ears, their heart, or their leg.

Similarly, though a little less convincingly, it could be argued that there is no reason to make a distinction between medical devices (which restore “normal” function) and enhancements (which give people new abilities).

Thus it can be argued that excluding people because their self happens to include some technical part is a form of discrimination. Certainly we have a great deal of sympathy with this viewpoint, and we support the rights of people who use assistive devices and indeed of people who have enhancements, like Neil Harbisson.

However, while it seems reasonable to claim that a hearing aid or self-built extra sense is part of the extended self, this extended self must have a boundary for it to exist as an individual self at all. This boundary, if it is not the biological body, must be defined by agency. That is, an individual’s extended body and extended mind comprise only those systems over which that individual, and no one else, has control.

Devices which are networked or controlled by a corporation therefore cannot form part of an individual’s extended body. Indeed, a user of such a device cannot claim, for instance, that “they are not recording”, because they have no idea what their device is really doing. Certainly they have no idea what is happening to their data once it has been uploaded to the server – where it can be sold or subpoenaed.

This is of course true for phones as well as wearables or implantables. The key difference is that a compromised phone will sit in a handbag, rather than at an ideal POV vantage point (wearables) or embedded in your flesh (implantables).

#Bigdata is really automated discrimination

Oscar H. Gandy Jr. Ethics and Information Technology
March 2010, Volume 12, Issue 1, pp 29-42

“Engaging rational discrimination: exploring reasons for placing regulatory constraints on decision support systems”

In the future systems of ambient intelligence will include decision support systems that will automate the process of discrimination among people that seek entry into environments and to engage in search of the opportunities that are available there. This article argues that these systems must be subject to active and continuous assessment and regulation because of the ways in which they are likely to contribute to economic and social inequality. This regulatory constraint must involve limitations on the collection and use of information about individuals and groups. The article explores a variety of rationales or justifications for establishing these limits. It emphasizes the unintended consequences that flow from the use of these systems as the most compelling rationale.
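Gandy’s concern can be made concrete with a toy model. The sketch below is entirely our own construction, with invented numbers: a decision rule that never sees a protected group attribute can still discriminate through a correlated proxy, here a postcode.

```python
# Toy illustration of 'rational' automated discrimination via a proxy
# variable. All probabilities and group labels are invented for
# illustration only.
import random

random.seed(0)  # fixed seed so the example is reproducible

def make_person(group):
    # By construction, group B is more likely to live in postcode "Z".
    p_z = 0.7 if group == "B" else 0.2
    postcode = "Z" if random.random() < p_z else "A"
    return {"group": group, "postcode": postcode}

def approve(person):
    # The decision rule never sees 'group' -- only the proxy 'postcode'.
    return person["postcode"] != "Z"

people = [make_person("A") for _ in range(1000)] + \
         [make_person("B") for _ in range(1000)]

def approval_rate(group):
    members = [p for p in people if p["group"] == group]
    return sum(approve(p) for p in members) / len(members)

print(f"group A approval rate: {approval_rate('A'):.2f}")  # roughly 0.80
print(f"group B approval rate: {approval_rate('B'):.2f}")  # roughly 0.30
```

The rule is ‘statistically rational’ and formally blind to group membership, yet it reliably sorts one group out of the opportunity: exactly the kind of unintended, correlation-driven outcome the article argues warrants regulatory constraint.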


Download the full paper here (if you have journal access) or here (if you don’t).


Also see Big Data Is A Civil Rights Issue