
[This piece originally appeared in the Full Stop Quarterly, Issue #3.]

Recently a friend related an eminently contemporary problem. Browsing Facebook one day, he received a notification. Facebook’s facial recognition algorithm had recognized someone he knew in a photo and wanted him to approve the suggested tag. Three issues immediately presented themselves: the photo was of his newborn child, he hadn’t uploaded the photo, and he didn’t want to contribute to his infant’s data trail. Yet it seemed that Facebook already “knew” who his daughter was, both in name and face.

There was a social component to this, too. Would he have to start telling friends not to post photos of his kid? What novel matters of propriety did new parents now have to negotiate? Was he giving into an uninformed, instinctual revulsion at the latest digital technology?

These questions weren’t easily dismissed, but something more complicated was at work — overlapping concerns about visibility, identity, a parent’s responsibility, the commoditization of everyday communications, and how difficult it can be to articulate the kind of harm being done here. In his dilemma, we can also see how privacy is no longer, as one popular definition has it, the ability to control what other people know about us. Instead, in recent years privacy has split along two lines: what humans know about us and what machines know about us.

The first kind of privacy, which you might call “personal privacy,” is the type with which we’re more familiar. This kind asks the question: what do other people know about me, especially people in my community? Personal privacy matters because we want to have some control over our identities, over our social lives, over our secrets and what is personal and meaningful to us. Privacy is not just about keeping information secret. It’s also about control and autonomy, about having the ability to disclose information to others when, where, and how you choose. This is a key part of intimacy, of having anything like a close relationship with another human being. A person could learn some of your most personal information by reading it in a digital profile, eavesdropping on a conversation, or rifling through your email. But this kind of information, sensitive or not, becomes more meaningful when it’s communicated between two people, in a shared moment of confidence. Disclosure is not just downloading personal data. It’s a social act that ties people together.

The other type of privacy is what I call “data privacy.” It asks the question: what do companies, governments, and their automated systems know about me? It also asks: what do they think of me? How are they judging me? These last questions are important — and while, of course, we do ask these questions in more conventional social settings, increasingly it matters what distant databases and software programs think of us, what they have on us. Whether you’re marked as a threat to law enforcement, how much you pay for plane tickets, your level of insurance coverage, whether you can get an Uber driver to show up at your location — all of these things now depend on data profiles tied to your real name and most intimate personal information. Together, these profiles make up what some scholars call your data self — the fragmented version of yourself that exists in all these data centers, spread between a thousand different interested parties and competing algorithms.

Which type of privacy matters more to you? They’re not mutually exclusive; rather, they’re closely bound together. But the first type is, understandably, the one most of us are familiar with. Even as many of us become used to exposing ourselves — whether deliberately by sharing personal information and photos online or simply by handing over access to our information for free Internet services — we still value some sense of autonomy. There’s a feeling that this hazy thing we call a self depends not just on a list of personal data and trivia but on having control over how we present ourselves, how we act as social beings.

Thanks to the surveillance society that has sprung up around us over the last twenty years, both of these types of privacy are currently imperiled. If you’re a woman or a person of color, if you’re a felon or a welfare recipient required to submit to invasive monitoring, then you’re probably already familiar with what it’s like to live in a sort of panopticon. In the West, surveillance has long been used as a tool of the powerful, often targeting the most vulnerable. In recent years, surveillance has become a method of entertainment (reality TV), of identity construction (social media), and of managing, monitoring, and influencing large populations (the vast data collection apparatus represented by the Web and the Internet of Things).

The old saw, favored by the narcissistic and the paranoid, is true: we’re being watched. If you have a smartphone, if you spend any time on a computer, if you use a metro card, a credit card, an insurance card, or anything else that connects to an electronic system, you are producing digital records of yourself that are incredibly detailed and are being bought, sold, and analyzed in secret by data brokers, insurance companies, intelligence agencies, marketers, police forces, tech companies, tax authorities, and anyone else who has a reason to be interested. Our data selves have become commodities par excellence. Surveillance is the business model of the Internet, and business is good.

So it’s safe to say that some machine is always watching, if not someone. How should this make us feel?

Surveillance Culture, Public Anxieties

The writer Astra Taylor has linked the idea of the big Other to surveillance culture. Commonly associated with the psychoanalyst Jacques Lacan, the big Other is a symbolic other, a hovering presence that watches and stands in opposition to you. “We are never totally alone,” Taylor said. “We always have this sense of a gaze.” The big Other could be any number of things — a Freudian would say that the first big Other is one’s mother. But it could also be the law, language, history, God, Yahweh, Allah, the flying spaghetti monster. Through culture, belief, and the tools of the time, the big Other changes and takes novel forms. One day it is God, the next it is Google. As Taylor explained: “we are making our mythologies manifest through technology.”

By this measure, the big Other could also be our implied audience on social media — a vague mass out there, watching you, judging you — and the murkier databases into which our personal information flows. There’s a reason why we sometimes jokingly use the language of stalking in relation to social media. We have all become voyeurs, have all made ourselves available to be looked at.

For me, this big Other has become too prominent, too insistently present. It’s drained much of my digital life of any sense of control, privacy, autonomy, or spontaneity. There is a nagging sense of inauthenticity, even though the question of what’s authentic is a trap, left behind for us by Lacan’s postmodernist colleagues in crime. I know that everything I do is permanent, leaves a record, that it all goes back into the data mining mill. I feel a blanket surveillance anxiety, knowing that every recommendation or search suggestion or “person you may know” prompt is the product of opaque systems, which are allegedly acting in my best interests, but whose operations occur in total secrecy and which are likely geared towards keeping me clicking, producing useful data. Who I think I am doesn’t matter — what matters is what the algorithms of Google, my potential employer, my health insurer, and the Department of Homeland Security say I am. Where does their influence end, and my own free will take over?

All of this has ruined social media for me — at least the vast corporatized networks that now exemplify social media. The online climate seems to have devolved from something playful and freeform, even anarchic, to something far more structured and closed — constricting in a way that remains difficult to define. There is an ambient tension, the sort you feel when walking through a major city and realizing there is a CCTV camera on every corner. And just like the larger economy, the risks of our new media environment have been socialized, able to fall upon any one of us at any time in the form of punishing shame or costly data leaks, while the profits have been privatized in the hands of a new oligarchic class.

A surveillance culture is one in which you are always assessed, always rated. That’s why our Uber drivers review us on a scale of one to five stars and why social networks provide Like, Favorite, Retweet, and re-share buttons. These aren’t tools of communication; they’re tools of assessment. (They are also tools of surveillance — the Like buttons allow Facebook to monitor traffic on every website on which they’re placed.) These buttons are an indispensable part of a social network’s surveillance architecture. Without these feedback mechanisms, Twitter wouldn’t know which bullshit meme to declare trending, and Facebook wouldn’t know whose posts it should elevate in your feed and whose it should bury.
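
The plumbing behind this is simple enough to sketch. An embedded button is just a resource fetched from a third-party server, and every fetch tells that server which page you were reading and who you are, via any cookie it has previously set. The toy Python server below is a minimal sketch of that general mechanism, not Facebook’s actual implementation; the endpoint name and the log format are invented for illustration.

```python
# A minimal sketch of how any embedded third-party widget can double as a
# tracker. Endpoint name and logging are invented for illustration; this
# shows the general mechanism, not any real platform's code.
from http.server import BaseHTTPRequestHandler, HTTPServer

class WidgetHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith("/like-button"):
            # The browser volunteers the page that embedded the widget,
            # plus any cookie previously set for this third-party domain.
            referring_page = self.headers.get("Referer", "unknown page")
            visitor_cookie = self.headers.get("Cookie", "no cookie set")
            print(f"visit logged: {referring_page} ({visitor_cookie})")

            # Serve the button itself so the host page renders normally.
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(b"<button>Like</button>")
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), WidgetHandler).serve_forever()
```

The crucial point is that the logging happens on every page load, whether or not anyone ever clicks the button.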

Measurement is the first step to instituting a system of bureaucratic control. Daniel Bell, the sociologist, wrote that industrialization “arose out of the measurement of work. It’s when work can be measured, when you can hitch a man to the job, when you can put a harness on him, and measure his output in terms of a single piece and pay him by the piece or by the hour, that you have got modern industrialization.” Today, we’re seeing novel forms of measurement being created — types of consumer scoring and reputation assessment far beyond anything contained in our credit reports. All of the data we’re producing is being used to measure and rank us in ways that will allow companies to extract maximum value from us, from our personal information, just as time-keeping and other forms of measurement helped Henry Ford boost his factories’ productivity. For workers, especially blue-collar workers, that means submitting to the Taylorism of granular, always-on surveillance. UPS trucks, for instance, are loaded with sensors that measure everything from when a driver puts on his seat belt to when he opens the door to how firmly he presses the brake.

The anxieties generated by surveillance capitalism have obviously not spread far enough to put a damper on industry monoliths like Facebook and Google. One of the notable shifts of recent years has been toward a new kind of public living. It’s not just that conceptions of privacy are changing, as Mark Zuckerberg argues. That’s partly true, though Zuckerberg himself has done more than anyone to try to make these changes come about. Public shaming, harassment campaigns, the sudden embarrassment, and long afterlife, of viral fame — together they offer evidence that the transparent society isn’t an intrinsically more equitable one.

An old expression — its origins escape me — says that when you speak, you put your business on the streets. In the UK, for example, the many common accents are tied to issues of class, geography, and culture, so when an English person speaks, she offers an impression of who she is and where she’s come from, just by the sound of her voice. A similar phenomenon now pervades daily life. When we sign up for social networks and digital services, when we use our devices or simply move around and log GPS location data, we put our business on the streets. We reveal and expose ourselves, perhaps in ways we might never realize.

All of this creates a problem for your local hermit, anti-surveillance activist, or social media malcontent. If so much of social life now occurs in the corporate commons of your preferred social network, what is left for those of us who are deeply uncomfortable with the status quo? Who don’t want invasive data mining to be the price of communicating with Aunt Edith? Who would rather not let billion-dollar companies develop faceprints of their children, essentially privatizing their identity?

As of now, there are few good choices. You can join the corporatized social networks, give in to relentless surveillance and exposure, maybe try to lock down your privacy settings or obscure personal information. Or you can opt out and deal with your FOMO in private. Sadly, for all of their talk about transparency and openness, our big social networks are fairly closed affairs, greedily confining activity to within their platforms or whatever they can surveil. It makes sense. If you’re Facebook, with more than 1.5 billion members, why would you want your users to go anywhere else? All of their data and activity should flow through you.

But this has made for a worse experience for the bulk of us. If you want to go to a small social network, one that doesn’t track you, that doesn’t fill its site with Like buttons and other metrics that only make us feel insecure about our relative popularity, then you’re more or less stranded. Facebook owns your data, which is to say that it owns you. You can download a copy of it, though it won’t be a complete record of what Facebook has on you, and you can’t take it anywhere else. Network effects dictate that a social network grows in value as it grows in members. The flip side of this arrangement is that small social networks, especially those confined to their own little walled gardens, seem useless. For a month or two in 2014, a lot of people thought Ello — with its promise not to collect user data — sounded great, and the site surged through a round of hype and media attention. It’s since withered, as users found that living in public depends on exposure. And if there aren’t many people to observe you on Ello, to offer feedback on your social broadcasts, then what’s the point of being there?

In a 2009 paper, psychology researchers from the University of Guelph, in Canada, offered one vision of how this new sociality works: “Identity is a social product created not only by what you share, but also by what others share and say about you,” they wrote. “The people who are most popular are those whose identity construction is most actively participated in by others.”

What this means is that identity is a public matter now, a form of public performance and consumption. It’s also a project, a perpetually unfinished activity that’s engaged in by many people at once. For the more liberated, identity construction is a series of flâneur-like acts of self-exposure; for those suffering from stage fright (or any kind of unwanted attention), it’s oppressive, a thing we must neurotically maintain to stave off our acquired fear of becoming invisible to our social networks.

As the psychologists described it, we need others to participate in our identity construction. Without that, our experience, our sense of self, might be impoverished. We accomplish this, then, by spending time with friends and also by producing content, information, personal data, or simply records of existence on social media. We don’t always post because we have something to say. Sometimes we just want to confirm that we exist. We want to post a photo and have someone (or many someones) click Like, which is another way of saying: I see you. I acknowledge you. Along the way, as Likes and comments accrue, we get a better sense of our popularity and how we are perceived by our peers.

According to the Dutch academic Jose van Dijck, “popularity and disclosure are two sides of the same coin.” Disclosing personal information helps generate the kind of social feedback that indicates popularity. And responding to others’ disclosures helps achieve reciprocity — you like my post, I’ll like yours. Without that, we’d simply be broadcasting into the void.

In this conception of social life, privacy is actually a potential threat to one’s own identity. As Joshua Cohen wrote in his recent novel Book of Numbers, “privacy had become loneliness again.” Preserving our privacy comes at the cost of making ourselves less available to others, so that they can’t participate in our identity construction. The philosopher Zygmunt Bauman put it a little more dramatically: “the area of privacy turns into a site of incarceration, the owner of private space being condemned and doomed to stew in his or her own juice.” The compromise that so many of us make was spelled out by the narrator of Cohen’s novel: “nothing to do but submit to conditions” — a website’s terms and conditions, its rules governing our data selves — “if the cost of privacy becomes loneliness.”

In defining something he called Zuckerberg’s Law, Mark Zuckerberg once said that every year, people share twice as much information as the year before. This was not some peer-reviewed theory based on careful study. Mr. Zuckerberg, I think, did not consult with any sociologists before articulating his law. Instead he was describing a business plan: get Facebook users to share twice as much information each year. Personal data is Facebook’s lifeblood. The more that users share, the more Facebook can grow and profit.

But Zuckerberg, to his credit, may have realized something else. As one journalist put it: “at the heart of social networking is an exchange of personal information.” Social networking, in its current form, depends on disclosure, on presenting personal information for the enjoyment or use of others. By increasing data production and sharing, one increases connectivity — at least of a certain facile type — between people.

But this paradigm has made these sites into networks, not communities. The big social sites treat information like a commodity, rather than a part of the social or cultural fabric, and they are commercialized from first to last. We may take this for granted now: certainly our relationships with those who provide communications infrastructure and technologies have long been commercial. But AT&T doesn’t monitor our phone conversations and insert ads into the middle of calls. When you buy stamps, the post office doesn’t watch you write a letter or follow you on your way to the mailbox. (The USPS does take a photograph of the outside of every letter and package sent in the United States, capturing the parcel’s metadata.)

Our social networks, then, are less about communicating than about broadcasting. They are sites to signal taste, accomplishments, status, and desires. We produce updates to be deciphered by automated software that then sells these updates back to us, and people in our network, in the form of sponsored advertisements. In the end, the network’s entire structure, all of its varied protocols, support this advertising machine. As David Lyon says, “As consumerism reigns, so-called social media are rather limitedly social.”

Meta-dating the World

In this surveillance economy, it’s not enough to just watch everyone. The records we produce must also be in a form that computers can understand. One way of describing the ultimate ambition of both Silicon Valley and the U.S. intelligence community is that both want to make the entire world machine-readable. This is why significant advances in facial recognition in recent years have come from Facebook, which has the world’s largest repository of photos, and the FBI, which has a massive biometric database and which likely partners with the NSA to hack into similar databases or collect photos and videos en masse through services like Skype.

Facebook’s facial recognition software — which is designed to make it easier to tag friends in photos — already rivals the ability of human beings to recognize faces. The company’s software can also detect people’s faces when they’re partially obscured or turned away. It apparently — as my friend’s story demonstrated — can also recognize young children and publicly link them to their parents.

We already do a lot to help make our world machine-readable. We tag photos with names of friends, products, places. We geotag posts or use hashtags. We use other labels that describe a post’s content to make it easier to search for and categorize. All of these are forms of metadata — data about data, or data that helps locate or describe other forms of data. By appending metadata to our posts, we fold them deeper into the site’s architecture and make them easier for advertising algorithms to decipher.
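
To make the idea concrete, here is a rough sketch of the metadata that can ride along with a single photo post. The field names are invented for illustration, not any platform’s actual schema; the point is that everything wrapped around the image is already structured and queryable, even while the pixels themselves remain opaque to a machine.

```python
# A schematic example of post metadata. Field names are invented for
# illustration; the pixels of the photo stay "opaque" without vision
# models, but everything around them is already machine-readable.
post = {
    "author_id": 100234987,
    "timestamp": "2016-05-14T18:22:07Z",
    "geotag": {"lat": 40.7061, "lon": -73.9969},
    "tagged_people": ["friend_id:2201", "friend_id:8473"],
    "hashtags": ["#birthday", "#brooklyn"],
    "device": "iPhone 6",
    "photo": "IMG_2047.jpg",
}

def posted_near(posts, lat, lon, radius_deg=0.01):
    """Toy proximity filter: who has posted within roughly a kilometer of a spot?"""
    return [
        p["author_id"]
        for p in posts
        if abs(p["geotag"]["lat"] - lat) < radius_deg
        and abs(p["geotag"]["lon"] - lon) < radius_deg
    ]

# Queries that would be hard to run on raw images become one-liners.
print(posted_near([post], 40.706, -73.997))  # -> [100234987]
```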

Metadata can be just as revealing as content, but a machine-readable world means that computers need to be able to understand the content of messages, too. As of now, that’s still pretty difficult, but significant progress is being made.

Marketing companies now employ various forms of machine vision and object recognition. When you post a picture on your public Instagram account, companies automatically scan the photo and look for any recognizable brands and products. This information helps them establish a profile about what you like, so that they can advertise to you more directly in the future, and it also lets companies know how their products are being used in the world.

Another important effort is being made in sentiment analysis, a branch of natural language processing. Companies look at the social media profiles of influencers with big follower counts, and they often sign up these influencers to shill their products for them. But companies and governments also want to know what the millions of everyday social media users are thinking. That’s why recent years have seen major investments in technologies that would allow computers to understand natural language as it’s written and spoken. This is still a big challenge — tone and irony are difficult for computers to decipher. An improved version of this technology would allow computers to carry on more life-like conversations, and it would allow companies like Facebook to understand every one of your posts and to mine them ever more deeply for insights into your preferences and behaviors. High-frequency trading algorithms, which already scan news wires, social networks, and other online forums for breaking information, would be optimized to an even greater degree — an automated market of ultra-fast computers that compete only against one another.
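
In its crudest form, sentiment analysis is little more than a weighted word count. The toy scorer below, built on an invented lexicon, is nowhere near a production system, but it shows both why the approach scales so cheaply and why tone and irony still defeat it.

```python
# A toy, lexicon-based sentiment scorer. The word list and weights are
# invented for illustration; production systems are far more elaborate,
# but the basic move -- turning free text into a number -- is the same.
SENTIMENT_LEXICON = {
    "love": 2, "great": 2, "happy": 1, "good": 1,
    "hate": -2, "awful": -2, "sad": -1, "bad": -1,
}

def sentiment_score(post: str) -> int:
    words = post.lower().replace(".", " ").replace(",", " ").split()
    return sum(SENTIMENT_LEXICON.get(w, 0) for w in words)

print(sentiment_score("I love this great phone"))           # 4
print(sentiment_score("Oh great, another delayed flight"))  # 2: irony reads as praise
```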

A semantic, machine-readable web, then, would surely provide some interesting gains in how websites and apps interact with one another and in the relationships between humans and their machines. Our machines would “understand” us better, at least in the purely consumerist terms set forth by platform owners.

We would also face a world of perfect, total surveillance, where mass censorship could be implemented with a few keystrokes. Consider the example of China, where, according to one study, two million people work as paid online censors, doing the work that automated systems — which might block specific URLs or lists of forbidden terms — cannot accomplish. As a result, Chinese Internet users employ VPNs and other technological tools to bypass censorship, but they’ve also developed clever linguistic defenses, using metaphors, allusions, and jokes, which can’t be parsed by computers and may be missed by a human censor sorting through thousands of Sina Weibo feeds. As computers learn to understand how people speak, they will also learn how we use metaphors and abstractions to fool one another.
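
A toy filter makes the dynamic clear. The blocklist below is invented for illustration; real systems combine term lists, URL blocking, and image matching, and still lean on human reviewers. Notice how a well-known euphemism, “May 35th” standing in for June 4th, sails straight past the keyword match.

```python
# A minimal sketch of automated keyword censorship and why it fails.
# The blocklist is invented for illustration only.
BLOCKED_TERMS = {"protest", "june 4th", "tiananmen"}

def passes_filter(post: str) -> bool:
    """Crude keyword censor: reject any post containing a blocked term."""
    text = post.lower()
    return not any(term in text for term in BLOCKED_TERMS)

# A direct reference is caught...
print(passes_filter("Meet at the square for the protest"))     # False
# ...but the familiar euphemism slips through untouched.
print(passes_filter("Big anniversary coming up on May 35th"))  # True
```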

Weaponizing Data

We can learn something about the consequences of mass surveillance, and of making the world machine-readable, by looking at the U.S. government, which has taken the Silicon Valley playbook and ported it to the world of national security and foreign policy. You probably have some awareness of the Snowden revelations, of the many programs of mass, suspicion-less surveillance going on around the world. Almost anything with a digital record passes through the systems of the United States and the members of the Five Eyes alliance. The U.S. government also records the full content of phone calls in several countries. GCHQ, the British intelligence agency and a close NSA partner, has a goal of tracking the activity of “every visible user on the Internet.” Another popular slogan sums up the intelligence community’s vast ambition: Collect it all.

The consequence of these practices? Information overload, for one thing. Military, police, and intelligence services are overwhelmed with data — haystacks upon haystacks. All of the various threat information coming in has led to thousands of innocent people being put on watch or no-fly lists with absolutely no notification or recourse. Thanks to the Patriot Act, the FBI has acquired unprecedented powers to conduct surveillance, particularly through the use of National Security Letters, which include a gag provision preventing those served from talking about ever having received the letter.

Yet the Bureau cannot cite any major plots uncovered through these programs, including the NSA’s bulk collection of Americans’ phone records. A study, funded by the Department of Homeland Security, concluded that large-scale data mining is ineffective and is “likely to return significant rates of false positives.”

Seven years after that DHS study, our drone forces now suffer from attrition, partly from being overstretched, partly from PTSD, and partly because pilots and other members of drone teams must interface with numerous informational inputs — video feeds, chats with commanding officers, sensor data, weapons systems, comm links with soldiers on the ground, and on and on. It’s an extraordinarily stressful, cognitively taxing job, monitoring and killing people from thousands of miles away.

But why are we collecting all this data? It’s not just that a permissive legal environment allows it or that we fear the specter of terrorism or that it’s easy. Bulk data collection has become the norm, in part, because we live in a new age of positivism, an almost superstitious faith in the ability to read people’s behaviors and intentions, as surely as advertisers’ computers read our status updates. The idea behind positivism, a mostly discredited, late-19th century sociological concept, is that there are certain basic, scientific facts that govern social life and culture. Any phenomenon — from social unrest to why people feel sad after rainstorms to the quality of a novel — can be understood through empirical study.

In an online Q&A last year, Mark Zuckerberg wrote that he was “curious about whether there is a fundamental mathematical law underlying human social relationships that governs the balance of who and what we all care about.” This is a classically positivist position, reducing people, human behavior, and all the associated complexities to simple, rational numbers. It’s Durkheim’s “social physics” updated for the Facebook age. Besides this example, one might see the rise of positivist thinking in the premium placed on pop-neuroscience and evolutionary biology as the ur-explainers for human behavior, or in right-wing economics, which has granted certain essential, immutable features (including a supposed capacity for self-correction) to an abstract entity called “the market.” Positivism rejects any knowledge that isn’t empirical, that can’t be measured or quantified, which turns it into a kind of fundamentalism: the blinkered, amoral theology of technocrats.

Data mining and predictive analytics are the two great positivist disciplines of today. As Nathan Jurgenson writes, “The advent of Big Data has resurrected the fantasy of a social physics, promising a new data-driven technique for ratifying social facts with sheer algorithmic processing power.” These forms of data analysis promise to answer the question of whether someone is going to click on an ad or build a bomb. It’s about anticipating and hedging the future, using the past, using the vast informational record produced by millions and billions of human beings, as an analytic guide. And if you know someone’s tendencies, especially with the mathematical certitude offered by software, then you can better monitor and manage them and guide them to correct behaviors.
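
Stripped of its scale, the logic is easy to sketch. The toy model below assumes the scikit-learn library is available; the features and the data are invented. The point is only the shape of the reasoning: fit a model to the recorded past, then let it put a number on what someone will do next.

```python
# A toy illustration of predictive analytics: use past behavior to
# estimate a future probability. The features and data are invented;
# what matters is the shape of the reasoning, not the model itself.
from sklearn.linear_model import LogisticRegression

# Each row is one past visitor: [pages viewed, minutes on site, prior purchases]
X_past = [
    [3, 2, 0], [15, 22, 1], [1, 1, 0], [25, 40, 3],
    [8, 10, 0], [30, 55, 5], [2, 3, 0], [18, 25, 2],
]
y_clicked_ad = [0, 1, 0, 1, 0, 1, 0, 1]  # did each visitor click the ad?

model = LogisticRegression().fit(X_past, y_clicked_ad)

# A new visitor arrives; the model hedges the future with a number.
new_visitor = [[12, 15, 1]]
p_click = model.predict_proba(new_visitor)[0][1]
print(f"estimated probability of clicking: {p_click:.2f}")
```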

But in our case, this new digital positivism, or whatever one might call it, has been a total failure. Nearly every jihadist terrorist attack in recent years was committed by someone already known to western law enforcement. And yet, this knowledge has done little to pre-empt attacks or generate insights about the conditions that create them. We still don’t know, for example, where the Tsarnaev brothers built their bombs (or if they even built them themselves). We do know that Russian security services had warned the FBI about Tamerlan Tsarnaev and that he had contact with the FBI, but the Bureau refuses to disclose whether he was an informant.

Whether it is Paris, San Bernardino, Brussels, or the next city to become a shorthand for the latest terrorist spectacle, the post-attack message is always the same: We knew something but not enough. More powers plz.

The FBI, which acts as the domestic surveillance agent of the NSA, has in fact been very good at vacuuming up information about Americans, but it’s been very bad at predicting or disrupting attacks. Over the last decade, the FBI has arrested dozens of people who the Bureau said were on the verge of committing terrorist attacks in the U.S. At the same time, nearly every single one of these plots was guided from beginning to end by the FBI and paid informants, who often coerce marginalized people into participating in rigged plots. When these plots are inevitably wound up and the suspect arrested, the FBI likes to emphasize how it had control all along — the bomb was a fake and all that — and no one was really in danger.

Then why are we doing it? Why are people who would never have the means or desire to commit an attack on their own being convinced to participate in these charades? The answer is simple: it makes people feel better; it gives the illusion that something is being done. It allows FBI agents to add another closed case to their scorecard and for their informants to line their pockets. Closing the feedback loop, it provides tautological evidence for the utility of mass surveillance: someone went to jail, and these surveillance programs helped put them there; ergo, the system works.

Let’s consider also how mass surveillance and the new faith in data have affected foreign policy. Drone warfare represents the weaponization of surveillance. It’s surveillance and data analysis that ends in a Hellfire missile. Drone warfare has helped turn the war on terror into something permanent, an enduring feature of the geopolitical landscape. We’ve killed the leadership of Al Qaeda ten times over now, in Pakistan, Afghanistan, Iraq, Syria, Yemen, and Somalia, and yet the kill list only grows. If drone attacks are so precise, if surveillance and sophisticated signals intelligence are supposed to let us target only the “bad guys,” shouldn’t we be done by now?

Instead, we’ve seen a massive expansion of drone warfare, far beyond anything George W. Bush proposed, and continued instability and suffering among these populations. The Obama administration has pioneered the use of signature strikes, which are drone strikes based on a pattern of life exhibited by a target. In these cases, drone pilots don’t know whom they’re killing. But the target has shown certain characteristics — switching vehicles frequently, carrying weapons — that might indicate he is a member of a terrorist group. Operators aren’t collecting data about their subjects so much as they’re collecting metadata describing the contours of a person’s life. But even this information can be revealing and put toward a lethal purpose. As Michael Hayden, formerly the head of the NSA, once said, “We kill people based on metadata.” Indeed, just as tech companies use algorithms to target individuals with advertising, the government has an algorithm for targeting someone for extrajudicial assassination. It’s called the disposition matrix and is meant to institutionalize the targeted killing process, to provide an analytical framework for determining whether someone should die. What was once a complex legal process has been reconfigured through the positivist lens; now guilt or innocence is a simple matter of quantification.

But the process here is imperfect. It can always be made faster or more efficient, goes the industry logic. In some defense circles, this is known as “compressing the kill chain” — shortening the time between collecting sensor data, processing intelligence, developing a “targeting product,” and putting a missile in someone’s window. In June 2015, an Air Force general announced that airmen in Florida had used social media posts by an Islamic State member in Syria to target him for an airstrike. “It was a post on social media. Bombs on target in 22 hours,” Gen. Hawk Carlisle said. “It was incredible work.”

The Army Times also noted that the U.S. enjoys persistent aerial surveillance over IS positions, “though the view has been muddy and ‘being able to identify the enemy is a challenge.’” This is the troubling logic of mass surveillance and big data alike: omniscience with the delusion of omnipotence.

But given that the U.S. has now taken to killing people for propagandizing online for IS, there should be no shortage of potential targets, many of them bumbling armchair jihadists who give away their positions through bad comsec but who will be described in death as top Islamic State hackers or dangerous ideologues. As an anonymous government official told ABC News, this is “a war of ideas” and, more bizarrely, “we are the angel of death.” (Who’s the Twitter tough guy now?)

The current drone program — and the vast surveillance and analytical apparatus underpinning it — arguably has its origins in the Vietnam War, where counter-insurgency and data-mining met for the first time. In a 1969 article in the New York Review of Books, John McDermott described one such bombing program:

Intelligence data is gathered from all kinds of sources, of all degrees of reliability, on all manner of subjects, and fed into a computer complex located, I believe, at Bien Hoa. From this data and using mathematical models developed for the purpose, the computer then assigns probabilities to a range of potential targets, probabilities which represent the likelihood that the latter contain enemy forces or supplies. These potential targets might include: a canal-river crossing known to be used occasionally by the NLF; a section of trail which would have to be used to attack such and such an American base, now overdue for attack; a square mile of plain rumored to contain enemy troops; a mountainside from which camp fire smoke was seen rising. Again using models developed for the purpose, the computer divides pre-programmed levels of bombardment among those potential targets which have the highest probability of containing actual targets. Following the raids, data provided by further reconnaissance is fed into the computer and conclusions are drawn (usually optimistic ones) on the effectiveness of the raids. This estimate of effectiveness then becomes part of the data governing current and future operations, and so on.

There is much to criticize about this kind of program, not least the morally vacant policy underpinning it, but McDermott notes that “this is the most rational bombing system to follow if American lives are very expensive and American weapons and Vietnamese lives very cheap.” Tinker with the algorithms and the data inputs, and you have something like today’s bombing programs in Yemen and Pakistan, where locals’ lives are cheap and terrorism is seen as something that can be periodically rolled up with the proper application of aerial firepower (the Israelis call this “mowing the lawn”). The program’s technical precision, its purported rationality and mitigation of risk, occlude any real discussion of policy or other non-technical concerns. The drone program becomes something that, while “imperfect,” is better than the “alternatives” — alternatives which are never seriously considered, partly because drone warfare offers the possibility of iterative change, of refining procedures to reduce waste, error, the messy reality of collateral damage. (And how to debate something that only nominally exists, that can barely be found beneath the layers of state secrecy and Glomar-style doublespeak?)

By bringing positivism to the realm of anti-terrorism, drone strikes have created an illusion of precise, methodical, efficient, consequence-free warfare. This illusion is so powerful that it must be preserved at all costs — even if it means changing the underlying data. As a report in The Intercept showed, the military classifies all people killed in drone strikes as EKIA (enemy killed in action), unless they are posthumously proven to be civilians. In the past, the CIA, which runs a targeted killing program in parallel to the U.S. military, has claimed that its drone strikes caused few, if any, civilian casualties — a distinction achieved only because the agency, it was later reported, classified all “military-age males” as enemy combatants.

The virtue of remote warfare is that you don’t have to be on the ground to reckon with the consequences. You don’t need to confirm that the smartphone being targeted is being held by a Haqqani network commander and not, say, his daughter. From 10,000 feet up and 10,000 miles away, you can interpret the data however you want. Distance renders all bugsplats identical.

“Databases of Suspicion”

Let’s return to the U.S., where the technologies, tactics, and ideologies of the war on terror are returning home. Here, mass surveillance and predictive policing are all the rage — monitoring vast populations, anticipating threats, cracking down on petty offenses (as with broken-windows policing), designating whole neighborhoods as gang zones, requiring curfews or other special anti-crime measures. The result, as we can see all around us, is violent, deeply militarized police forces that treat populations like foreign threats rather than citizens deserving rights, respect, and protection. We also experience a permanent state of exception, in which SWAT teams are called at the drop of a hat, political protests are confined to free-speech zones, and people can be detained without charge.

Meanwhile, the metrics used to track and describe crime are invariably politicized, bandied by politicians or police chiefs eager to prove that violent crime is falling or rising, with credit or blame following forthwith. Sometimes they are simply trade secrets. In cities like Fresno, police officers on their way to answer calls are given threat scores describing a suspect or perhaps a person living at the address in question. The criteria informing the scores are proprietary, dictated by the corporations that create the software, so not even the police officers know whether a “red” threat level means a suspect has a criminal history or simply posted something on social media that an algorithm flagged as troublesome.

During the recent Black Lives Matter protests, one thing that stands out is how many of the people of color who have been killed by police, or died in police custody, had long records of police encounters. This says little about the deceased and a lot more about how the police operate, as they seem to pick people up for the pettiest of infractions over and over again — so-called quality-of-life offenses or other ridiculous crimes, like failing to obey an order from a police officer. The problem with this form of policing is not just the daily humiliation and violence it produces. The psychic toll is confounding for anyone who has been stopped many times by cops, who can’t get between home and school without wondering if an officer is going to stop and frisk them. But another problem lies in how these incidents are tracked, how our technological systems — which embody the cultural and political environments that produce them — ensure that they will happen over and over again. Because if you’re stopped once and given a ticket — say, for something silly like drinking a beer on your front stoop — then you’re only more likely to be stopped again. Fines and infractions accumulate, compounding interest and guilt, until a person ends up in a position like Sandra Bland, who, when she was pulled over for the umpteenth time in Texas last summer, was facing thousands of dollars in fines that she could never pay. When the police run a license plate and find a laundry list of citations and fees, it only encourages them to stop the person, detain her, and further criminalize her.

Sandra Bland, like so many others, had been selected, marked for further governmental attention in what have been called “databases of suspicion.” The same problem, for a while, plagued the filmmaker Laura Poitras, who is now suing the U.S. government over being repeatedly pulled aside and questioned at border crossings, her equipment often confiscated. After he first contacted Poitras, Edward Snowden said that he chose her because she had already been “selected.” He was speaking figuratively and literally. “Selectors” is the term given to the keywords that signals intelligence analysts flag in systems like the NSA’s XKEYSCORE. Once Poitras had been selected, once her name had become a keyword demanding further investigation and suspicion, her freedom of movement had already been taken away from her. Somewhere in the opaque bureaucracies of the national security apparatus and the software it uses — software being its own kind of bureaucracy, reordering the world through code — a decision had been made to reclassify Poitras as potentially dangerous.

This is something that should scare us. Any of us can be selected — not necessarily for assassination, like the American citizens Anwar al-Awlaki and his son Abdul-Rahman. But we might simply be selected for further inspection, for further treatment that we don’t deserve. It could be at border crossings; it could be when a police officer decides to type your name into his laptop and see what pops out; it could be an automated order issued when a faulty license plate scanner mistakes your car for one that’s stolen. It could be an annoyance. It could ruin your life.

Philip K. Dick, the fabulously paranoid, drug-addled sci-fi writer, was consumed with questions of surveillance and what it meant about a person’s relationship to government and society. In his novel Flow My Tears, the Policeman Said, the main character, Jason, is a popular TV host in a police state in which everyone must carry ID and in which celebrity is the ultimate currency. Suddenly, one day, everything changes. Jason loses his ID and, consequently, his very identity and place in the world. No one seems to know who he is anymore. In this society, that’s a problem. The people whose identities aren’t immediately known are those marked for suspicion. That’s when the police start taking an interest in you.

Here’s how Philip K. Dick described it: “Once they notice you, Jason realized, they never completely close the file. You can never get back your anonymity. It is vital not to be noticed in the first place.” (The italics belong to PKD.)

This seems to perfectly encapsulate what happened to Sandra Bland, what continues to happen to anyone put through the carceral system or added to a terrorist watch list. Once you’re noticed, the file never closes. You can be stopped years from now, made to account for something in your file that may have no bearing on who you are.

“It is vital not to be noticed in the first place.” That’s why you might hear a person of color say that he dresses nicely before going to the financial district, hoping to attract less attention from authorities. This is also why we will see anonymity and various forms of obfuscation become more important in the years to come. Artists are now creating pieces of clothing and makeup that help to fool facial recognition. An FTC commissioner has argued that citizens have a “right to obscurity.” Some activists, or simply concerned consumers, create fake data profiles or online identities for themselves. Privacy, and safety, can be a matter of not being noticed, of willfully hiding who you are.

In another novel, The Man in the High Castle, Dick remarked, “Whom the gods notice they destroy. Be small . . . and you will escape the jealousy of the great.” In his own way, he was touting the benefits of obscurity, of turning one’s self away from the algorithmic gaze. Being noticed is the first step toward harm.

Digitized, Privatized, Secured?

When we digitize more of life, we privatize and commercialize it as well. As Jose van Dijck noted, “all kinds of sociality are currently moving from public to corporate space. . . . Today, Facebook, Google, Amazon, and Twitter all own algorithms that increasingly determine what we like, want, know, or find.” They also help to determine who we are and how we relate to one another.

For much of the twentieth century, one’s identity was wrapped up in questions of consumerism — what you owned, what you wore, what you used to fill your apartment. Herbert Marcuse wrote that “the people recognize themselves in their commodities.” Many of us buy clothes or gadgets or pieces of art because we like what they say about us. Just like when we post on social media, we want to signal our taste. Or we find something strangely life-affirming. That new rug somehow reflects who we are; it really ties the room together.

Now, we recognize ourselves in our media. We are all creators and publishers of media now. We are endlessly engaged in cycles of media production and interpretation, parsing our friends’ text messages or status updates for latent meaning. At the same time, the media that reaches us is increasingly filtered, targeted, curated. We are prey to what Thomas de Zengotita calls the “flattery of representation” — media designed for us, that addresses us — while closing out the rest of the world.

Marcuse also remarked that “social control is anchored in the new needs” society produces. Once these new needs were material — for many people they still are. Steve Jobs famously said that he would show customers what they wanted, which is what allowed Apple to invent (and vigorously market) whole new product categories. Now, society has created a new need, the management of what might be called the data self. It’s everything discussed here: social media profiles, corporate and government database entries, online reputation scores, search results, digital labor, basic maintenance of our gadgets. We fear being embarrassed online, going viral for the wrong reasons, but we also fear being misclassified or becoming invisible, posting on our social networks and receiving no feedback in return. We are torn between feeling addicted to our devices and finding them — and their hits of information — impossible to put away.

We tend to our profiles and gadgets and online affairs with concern and energy, rarely taking into account the fact that a decade or two ago, none of this stuff existed. No one cared then about whether they had enough Twitter followers or if their passwords had the proper mix of alphanumeric characters and symbols. They had other needs, of course, but this contrast might show how absurd some of our present concerns are, how they’ve been foisted upon us by entities that have a financial interest in doing so. One of the remarkable achievements of mass surveillance, of surveillance as a business model, is how it’s turned so much of everyday life into productive labor — labor that benefits platform owners and the security state they feed, however unwittingly.

Lately we all engage in what the computer scientist Ian Bogost calls “hyper-employment,” producing tiny pieces of work every day for Google, Twitter, Apple, credit card firms, advertising networks, tracking companies. Here our data trails seem less like “exhaust,” the natural byproduct of digital activity, than like deliberately coded mechanisms designed to extract value from us. If an action doesn’t produce data, then it can’t be monetized. Why not then make digital, make machine-readable, as much of human life as possible?

One of the social costs of this arrangement is that your data self is perpetually out of date, always in need of a new status update, a revised profile photo, a fuller list of friends, another thing to click on or share. There is never any reason not to refresh the timeline and see what’s new. We live under the tyranny of the new.

This state of affairs might continue for a while. Our world will become more fully digitized, embedded with sensors and screens and smart gadgets everywhere. We will overflow with even more data, all of it mined for insights that seem to make a select class richer and more secure while leaving the rest of us to find that we have more work to do, more things to worry about. This data will supposedly reveal more about how we live, who we are, and what we want, but any positivist insights mined will be shared, if at all, among Silicon Valley executives and Beltway securocrats.

It’s all a shame — a waste of a lot of good brainpower and interesting technology. Mark Zuckerberg likes to say that Facebook helps make the world more connected, a better and more transparent place. But Facebook more accurately resembles a line from Marcuse: “a rationally organized bureaucracy, which is, however, invisible at its vital center.” In the case of Facebook, that vital center includes its News Feed algorithm — the main decision-making engine for the Facebook empire, offering a filtered world for each and every user. In the case of our government, the invisible vital center is the scrim of secrecy that shields our officials, and their supposedly pragmatic, data-driven policies, from any real criticism or accountability.

In both cases, we see that software itself has become a new kind of bureaucracy, a form of structuring and managing our complex society. But it’s a form of bureaucracy potentially more dangerous than what we have now, since this bureaucracy is largely automated, unregulated, and cloaked in secrecy.

Percy Shelley once called poets “the unacknowledged legislators of the world.” Now that role belongs to software engineers. It’s their products that have become the great mediators of our lives, shaping everything from our friendships to our country’s security. Into these systems go our personal information, our data selves, the faces of our children, everything that matters to us. If we care about what comes out, we’ll have to find a way to shine a light at the vital center of these systems, making them legible and accountable to all.

Jacob Silverman is the author of Terms of Service: Social Media and the Price of Constant Connection, and a contributing editor for The Baffler. His work has appeared in the Los Angeles Times, New York Times, the Washington Post, and many other publications.

Illustration by Goda Trakumaite.


 
 