Security and the Claim to Privacy
Louise Amoore
Keywords: security vs privacy
DOI: http://dx.doi.org/10.1111/ips.12044, pp. 108–112. First published online: 1 March 2014.
When US President Barack Obama publicly addressed the data
mining and analysis activities of the National Security Agency (NSA), he
appealed to a familiar sense of the weighing of the countervailing forces of
security and privacy. “The people at the NSA don't have an interest in doing
anything other than making sure that where we can prevent a terrorist attack,
where we can get information ahead of time, we can carry out that critical
task,” he stated. “Others may have different ideas,” he suggested, about the
balance between “the information we can get” and the “encroachments on privacy”
that might be incurred (Obama 2013). Conventional calculations of security weigh the probability of a future threat on the basis of information gathered on a distribution of events in the past. Obama's trading-off of security and privacy shares this calculative logic: a tolerance for the gathering of data on past events in order to prevent threats in the future. In fact, though, the very NSA programs he was addressing precisely confound the weighing of probable threat, and the conventions of security and privacy that adhere to strict probabilistic reasoning. The
contemporary mining and analysis of data for security purposes invites novel
forms of inferential reasoning such that even the least probable elements can
be incorporated and acted upon. I have elsewhere described these elements of
possible associations, links, and threats as “data derivatives” (Amoore 2011)
that are decoupled from underlying values and do not meaningfully belong to an
identifiable subject. The analysis of data derivatives for security poses
significant difficulties for the idea of a data subject with a recognizable
body of rights to privacy, to liberty, and to justice.
Consider, for example, the testimony given by the US Director of National Intelligence, James Clapper, two years before the controversies surrounding the NSA and the PRISM program, before a joint hearing of Senate committees, on the virtues of the algorithmic piecing together of fragments of data:
The most valuable national intelligence is the huge
collection of databases of routinely collected information that can be searched
by computer algorithm. An analyst may know that a potential terrorist attacker
is between 23 and 28 years old, has lived in Atlanta, Georgia, and has travelled
often to Yemen. That analyst would like to be able to very rapidly query the
travel records, the customs and border protection service, the investigative
records of the State Department … we made important progress after the December
2009 attempted bombing of an aircraft over Detroit, but there remains much more
to be done. (US Senate Committee 2011:16)
What is sought in the DNI's vision is not
the probable relationship between data on past activities and a future
terrorist attack, but more specifically, a potential terrorist, a subject who
is not yet fully in view, who may be unnamed and as yet unrecognizable. The
security action takes place on the terrain of a potential future on the
horizon, a future that can only be glimpsed through the plural relations among
data points.
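To make the logic of this query concrete, consider a minimal sketch in Python. The record sets, field names, and the trip threshold below are hypothetical illustrations, not any actual intelligence system; the point is only that the search begins from a profile of attributes rather than from a named subject.

```python
# A minimal sketch of an attribute-based query of the kind Clapper describes:
# no named suspect, only a profile of attributes joined across datasets.
# All records, field names, and thresholds here are hypothetical.

travel_records = [
    {"person_id": "p1", "destination": "Yemen", "trips": 5},
    {"person_id": "p2", "destination": "Yemen", "trips": 1},
]
residence_records = [
    {"person_id": "p1", "city": "Atlanta", "age": 25},
    {"person_id": "p2", "city": "Boston", "age": 31},
]

def matches_profile(person_id):
    """True if a person fits the analyst's profile: aged 23-28,
    has lived in Atlanta, and travels often to Yemen."""
    res = [r for r in residence_records if r["person_id"] == person_id]
    trv = [t for t in travel_records if t["person_id"] == person_id]
    return (
        any(23 <= r["age"] <= 28 and r["city"] == "Atlanta" for r in res)
        and any(t["destination"] == "Yemen" and t["trips"] >= 3 for t in trv)
    )

all_ids = {r["person_id"] for r in residence_records}
candidates = [pid for pid in all_ids if matches_profile(pid)]
print(candidates)  # ['p1'] -- a subject produced by the query, not named in advance
```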
Why does the analysis of fragments of correlated data
derivatives matter to our sense of privacy, both as a concept and as a body of
rights? Contemporary forms of security do not primarily seek out a complete data set of information on a person; rather, they seek to assemble sufficient correlated data points to make inferences about risk or threat possible. The
question of what constitutes personal or private data thus becomes more complex
than identifying the accessing of private content. In the public debate on the
PRISM program, for example, the idea of “meta-data” mobilizes a distinction
between the spoken content of a telephone conversation and the meta-elements of
call number, location, time of call, and so on. In fact, though, the complete
content of a call event does not need to be captured in order for a security
action to take place. The derivative meta-data of associative links between
elements represent highly valuable data points for social network analysis
techniques that map so-called “patterns of life.” In a sense, the content of
the “dots to be connected” matters less to this kind of analysis than the
relations between them. The absence of a complete record thus becomes a
security virtue because it makes it more difficult to assume that content is
innocent—in effect, apparently innocent content can become suspicious at the
level of meta-data because it is subsequently associated with other things. For
the individual, who is both the subject of privacy rights and the data subject
of data protection laws, contemporary security appears indifferent to the
person as such, while attentive to the multiple links and associations among
plural people and things.
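A brief sketch may show why meta-data alone carries this analytical weight. In the hypothetical Python fragment below, only caller, callee, and time of call are recorded, with no content at all, and the relations between the events are already enough to bring new subjects into association.

```python
# A minimal sketch of relational analysis on call meta-data alone.
# No call content appears anywhere: caller, callee, and hour of call
# suffice to draw associations. All parties and the seed are hypothetical.

from collections import defaultdict

# (caller, callee, hour of day): meta-data only, no spoken content
calls = [
    ("A", "B", 9), ("A", "B", 21), ("B", "C", 22),
    ("C", "D", 23), ("A", "D", 2), ("E", "B", 10),
]

# Build an undirected association graph from the call events.
neighbors = defaultdict(set)
for caller, callee, _hour in calls:
    neighbors[caller].add(callee)
    neighbors[callee].add(caller)

def within_two_hops(seed):
    """Everyone within two hops of a seed person of interest becomes
    associated, regardless of what was ever said on any call."""
    one_hop = set(neighbors[seed])
    two_hop = set().union(*(neighbors[n] for n in one_hop)) - {seed}
    return one_hop | two_hop

print(sorted(within_two_hops("C")))  # ['A', 'B', 'D', 'E'] -- E never called C
```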
Put simply, contemporary forms of security are less
interested in who a suspect might be than in what a future suspect may become;
less interested in the one-to-one match of the watch list or alerts index
database, and more interested in the signals of real-time predictive analytics.
The one-to-one match with an identified individual gives way to what Gilles
Deleuze (1992:3) calls the “dividual”—a “torn chain of variables” dividing the
subject within herself, recombining with the divided elements of others. The
signature of the dividual that is sought by the security software does not
“belong” to a subject as such—it does not sign off on past events, it signals
possibilities. Thus, for example, an agreement to “depersonalize” data in a
database after a period of six months, as in the case of the EU-US agreement on
passenger name record (PNR) data, scarcely matters when what is left is a
dividuated signature that allows for the writing of new code. So, a subject's
specific and named journey on a particular date may no longer be attributable in
the database, but it persists as a source for new algorithmic code to act on
future subjects.
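A small sketch can illustrate how a depersonalized record still writes new code. In the hypothetical Python fragment below, the name is stripped from each record after the retention period, yet the remaining attributes yield a rule that acts on a future passenger. The records, fields, and route codes are invented for illustration and describe no actual PNR scheme.

```python
# A minimal sketch of how "depersonalized" records can still generate
# rules that act on future subjects. All records, fields, and routes
# are hypothetical and describe no actual PNR arrangement.

pnr_records = [
    {"name": "J. Smith", "route": "LHR-SAH", "payment": "cash", "flagged": True},
    {"name": "A. Jones", "route": "LHR-JFK", "payment": "card", "flagged": False},
]

# Depersonalization: the name is removed after the retention period...
depersonalized = [{k: v for k, v in r.items() if k != "name"} for r in pnr_records]

# ...but the remaining dividuated signature still yields a rule.
risk_signatures = {
    (r["route"], r["payment"]) for r in depersonalized if r["flagged"]
}

def score_future_passenger(record):
    """Apply a rule written from past, now nameless fragments to a new subject."""
    return (record["route"], record["payment"]) in risk_signatures

new_passenger = {"name": "B. Khan", "route": "LHR-SAH", "payment": "cash"}
print(score_future_passenger(new_passenger))  # True -- flagged by others' traces
```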
Among the implications of the data derivative, then, is the
capacity for little pieces of past data to become mobile, to detach themselves
from identifiable data subjects, and to become active in new security
decisions. Some dividuated data elements are disaggregated from the remainder
and become drawn into association with other elements derived from other
subjects. In this process of disaggregation and reassembly, the person appears
less as a singular body than as a plural set of variables which can have an
onward life as they connect with the lives of others. For example, the 2009 New
York subway suspect, Najibullah Zazi, the 2009 Northwest Airlines Detroit bomber,
Umar Farouk Abdulmutallab, and the 2010 Times Square bomber, Faisal Shahzad,
could be offered as examples of the failure of systems of data analysis to act
upon individuals. Yet, though none of the suspects were preemptively
intercepted on the basis of the analysis of their travel or financial data, the
fragments of data did reappear as a chain of variables for future algorithmic
searches. In the analysis of the US Attorney General, they had ceased to be
identifiable subjects and had emerged as “operatives with British or American
passports or visas who had visited South Asia and had returned to the US over
(redacted) time period” (2010:4). It is the chain of variables of their data
fragments that lives on in the algorithmic rules of contemporary border security
analytics.
If the subject of data-driven security measures is a
fractionated subject whose missing elements are a resource to analytics
technologies, then can we meaningfully conceive of a right to privacy of the
dividual? Such a form of privacy would adhere not to identifiable individuals,
but to the associations and links between multiple pieces of multiple people's
lives. If what is collected, analyzed, deployed, and actioned in contemporary
security is not strictly personal data elements but a “data derivative”—a specific form of abstraction that, like a financial derivative, can have a value decoupled from, or only loosely tied to, its underlying coordinate—can privacy
have meaningful purchase on this slippery derivative? One response has been to
seek to restrict the use of personal data and to assert the right to be deleted
or forgotten (Mayer-Schönberger 2009). But the data derivative makes
associations and links that will always exceed a specified use, will travel,
and circulate with new effects and implications. Similarly, new “sovereign information sharing” systems that promise to protect the “integrity of the database” appear to protect individual privacy while allowing their derivatives to circulate (Agrawal and Srikant 2010). By
running the analytics within separate databases (airline passenger name record
data, national authorities' alerts index data, and so on), and aggregating only
the high-risk scores and positive matches in a “master calculation,” the system
uses the associations between subjects without revealing the full picture to
the data owners. The right to privacy of the individual is thus inadequate to
the task of inhibiting the spread of unmoored and dividuated pieces of data
that are only partially attributable to a subject.
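The pattern just described can be sketched in outline. In the hypothetical Python fragment below, each data owner scores its own records locally and exposes only its high-risk identifiers to a master calculation; the identifiers, scores, and threshold are invented, and the sketch follows the description above rather than any published system design.

```python
# A minimal sketch of the "sovereign information sharing" pattern:
# each owner computes risk scores inside its own database and shares
# only high-risk identifiers. All identifiers and scores are hypothetical.

THRESHOLD = 0.6

def airline_scores():
    # Raw passenger name record data stays inside the airline's database.
    local = {"p1": 0.9, "p2": 0.2, "p3": 0.7}
    return {pid: s for pid, s in local.items() if s >= THRESHOLD}

def alerts_index_scores():
    # Raw alerts data stays inside the national authority's database.
    local = {"p1": 0.8, "p3": 0.1, "p4": 0.9}
    return {pid: s for pid, s in local.items() if s >= THRESHOLD}

def master_calculation(*score_feeds):
    """Aggregate only the shared high-risk scores; a positive match is an
    identifier flagged by more than one sovereign database."""
    seen = {}
    for feed in score_feeds:
        for pid, score in feed.items():
            seen.setdefault(pid, []).append(score)
    return {pid: scores for pid, scores in seen.items() if len(scores) > 1}

print(master_calculation(airline_scores(), alerts_index_scores()))
# {'p1': [0.9, 0.8]} -- an association drawn without either owner
# revealing its full picture to the other
```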
So, what might it mean to have data protection, or to limit
use to a defined purpose in a world of abstracted data derivatives? The
gathering of evidence of terrorist activities after the fact is seeping into
the preemptive search for indicators of intent capable of anticipating future
possible infraction. In this sense, for example, what is significant about the
UK's GCHQ generating 197 leads from PRISM in 2012 is not the 197, but how the
197 are arrived at. The 197 represent persons of interest who have come to attention after a process of running large volumes of data through the
analytics software. This process is described by the industry as one of
“knowledge discovery” in which the algorithm “picks out the strongest
relationships” and partitions and segments the data on this basis. Because the
knowledge discovery process does not begin with a defined purpose or criteria,
it defies the logic of restricting the purpose of data analysis. It is the
analytics that govern what the elements of interest should be—this travel
associated with this financial transaction, associated with this presence in an
open source online community, associated with this presence on Facebook, for
example. So, put simply, the vast bulk of the filtering and analysis happens
before any named individuals or lists are identified. Can a data subject ever
give meaningful consent for their data to be analyzed in bulk, along with
terabytes of the data of other people's lives and transactions? If we take seriously the implications of contemporary data analytics, it is not strictly the privacy of the individual that is infringed by programs such as PRISM; rather, the harm lies in the violence done to associational life, to the potentiality of futures that
are as yet unknowable (de Goede 2012). In effect, the data do not have meaning
until sense is made of them by the relational structure of the analytics. One
could protect data very effectively and yet still the inferred meanings of the
analytics would limit life chances and close down potential futures.
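The knowledge discovery logic described above can be made concrete in a final sketch. The hypothetical Python fragment below begins with no defined purpose and no named target: it simply counts which pairs of attributes co-occur most strongly across bulk event data, and that discovered relationship then becomes the criterion by which future subjects are segmented. The events and attributes are invented for illustration.

```python
# A minimal sketch of purposeless "knowledge discovery": the analytics
# pick out the strongest co-occurrences in bulk data before any person
# or purpose is specified. All events here are hypothetical.

from collections import Counter
from itertools import combinations

# Bulk, unpurposed event data: (person, observed attribute)
events = [
    ("p1", "travel:yemen"), ("p1", "txn:wire"), ("p1", "forum:x"),
    ("p2", "travel:yemen"), ("p2", "txn:wire"),
    ("p3", "forum:x"), ("p3", "txn:card"),
]

# Group attributes by person, then count co-occurring attribute pairs.
per_person = {}
for person, attr in events:
    per_person.setdefault(person, set()).add(attr)

pair_counts = Counter()
for attrs in per_person.values():
    for pair in combinations(sorted(attrs), 2):
        pair_counts[pair] += 1

# The "strongest relationship" is discovered, not specified in advance;
# only afterwards does it become a criterion that segments subjects.
strongest = pair_counts.most_common(1)[0]
print(strongest)  # (('travel:yemen', 'txn:wire'), 2)
```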
Perhaps the most significant harm, though, lies in the
violence done to politics itself, and to the capacity to make a political
claim. Where politics exists because of intractability and irresolvability,
because of difficulty, as Thomas Keenan (1997) puts it, contemporary data-led
security offers resolution through computation, promising to resolve the
incalculable and the intractable. The computational and algorithmic turn in
security has no tolerance for the emergent or half-seen figure at the edges of
perception, no interest in an unfulfilled future potential. “All possible
links” are wrought in the correlative relations that are drawn between bodies
as they are disaggregated into degrees of risk. There is a challenge for law
and human rights when a juridical sensibility of evidence and evaluation meets
a set of security measures that demand the projection of future possible events
yet to take place. Where the balance of probability may discount unverified
fragments, computation lends greater weight to the associated elements, such that, once assembled, their singular uncertainty is less easy to see.
So, is privacy as a body of rights adequate to the task of
protecting the capacity to make a political claim? If the politics of privacy
is to locate space to challenge the analysis of “big data” (Boyd and Crawford 2012), then one step may be to make a distinction between privacy as a body of
rights and privacy as a claim that can be made. The body of rights, as Costas
Douzinas (2000, 2012) compellingly reminds us, embodies an imaginary unity of
the self, a clearly identifiable subject, while the rights claimant is
fractured and split. While the right to privacy itself is readily reconciled
within the technology—“anonymizing,” “masking,” or “depersonalizing” the data,
for example—the fractured subject falls beneath the register of recognizable
rights. What matters politically, then, is that claims to privacy are made,
even in the face of the impossibility of meaningful juridical redress. As the
layering of software calculations and automated security decisions becomes ever
more difficult to prise open and call to account, the claim to privacy
nonetheless illuminates the limit points of what can be made transparent or accountable.
Where the search for a definitive body of privacy rights will necessarily fall
short of marking a place for all future claims, the act of making a claim is a
political demand that can be made by those whose experiences and lives are not
registered in an existing body of rights.
© 2014 International Studies Association
References
Agrawal, Rakesh, and Ramakrishnan Srikant. (2010) Enabling Sovereign Information Sharing Using Web Services. SIGMOD, Paris, France.
Amoore, Louise. (2011) Data Derivatives: On the Emergence of a Security Risk Calculus for Our Times. Theory, Culture and Society 28(6): 24–43.
Boyd, Danah, and Kate Crawford. (2012) Critical Questions for Big Data. Information, Communication and Society 15(5): 662–679.
Deleuze, Gilles. (1992) Postscript on the Societies of Control. October 59: 3–7.
Douzinas, Costas. (2000) The End of Human Rights: Critical Legal Thought at the Fin-de-Siecle. Oxford: Hart.
Douzinas, Costas. (2012) The Paradoxes of Human Rights. Constellations 20(1): 51–67.
de Goede, Marieke. (2012) Speculative Security: The Politics of Pursuing Terrorist Monies. Minneapolis: University of Minnesota Press.
Keenan, Thomas. (1997) Fables of Responsibility: Aberrations and Predicaments in Ethics and Politics. Stanford, CA: Stanford University Press.
Mayer-Schönberger, Viktor. (2009) Delete: The Virtue of Forgetting in the Digital Age. Princeton, NJ: Princeton University Press.
Obama, Barack. (2013) Remarks at White House Press Conference, August 9, 2013. Available at http://www.whitehouse.gov/the-press-office/2013/08/09/remarks-president-press-conference. (Accessed November 29, 2013.)
US Attorney General. (2010) Review of Passenger Name Records Data. Washington, DC: Office of the Attorney General.
US Senate Committee. (2011) Ten Years After 9/11: Is Intelligence Reform Working? Washington, DC: US Senate Committee on Homeland Security and Governmental Affairs.