tl;dr As the rapid adoption of deepfake technology shows, our culture's obsession with technological innovation leaves us beholden to the tech industry’s incessant pursuit of the new. Yet it is worth reminding ourselves that technology companies are not impartial, nor are their products always created in our best interest.
“We find ourselves on the cusp of a new world — one in which it will be impossible, literally, to tell what is real from what is invented.” So writes Jennifer Finney Boylan in a New York Times piece titled “Will Deep-Fake Technology Destroy Democracy?” What do we do when what we see can no longer be trusted as reality? The recent rise of the deepfake, the melding of one person’s face onto another’s body, has caused a wave of deep public concern, followed by little regulation or action by the government to protect the public. As deepfake features proliferate across popular social media apps, they raise questions about our relationship to truth, the validity of the image, and our complacency around technology. Examining the evolution of our relationship to deepfakes reveals a pattern similar to countless other technological innovations. Deepfakes’ increasing integration into consumers’ everyday lives is emblematic of the technology industry’s common practice of integrating and rebranding historically feared innovations in pursuit of profit and power, as seen through digitally manipulated images and surveillance.
First, it is worth examining what constitutes a deepfake. As Mika Westerlund explains in “The Emergence of Deepfake Technology: A Review,” “Deepfakes are the product of artificial intelligence (AI) applications that merge, combine, replace, and superimpose images and video clips to create fake videos that appear authentic.” The term, a portmanteau of “deep learning” and “fake,” most commonly refers to the melding of one person’s face onto another’s body. The first publicly distributed use of the technology appeared in 2017, when a Reddit user called “deepfakes” superimposed celebrity faces onto other actors’ bodies in pornographic videos. Since this form of audiovisual (AV) manipulation reached the consumer market, its applications have proliferated.
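To make the mechanics concrete, here is a heavily simplified sketch, in PyTorch, of the shared-encoder, dual-decoder design behind early face-swap deepfakes. The dimensions, dataset stand-ins, and single training step are illustrative assumptions, not any particular app’s implementation.

```python
import torch
import torch.nn as nn

# One encoder is shared across both identities, so it learns pose and
# expression; each decoder learns to render one specific person's face.
encoder = nn.Sequential(
    nn.Flatten(),
    nn.Linear(64 * 64 * 3, 512), nn.ReLU(),
    nn.Linear(512, 128),  # shared latent "pose/expression" code
)

def make_decoder():
    return nn.Sequential(
        nn.Linear(128, 512), nn.ReLU(),
        nn.Linear(512, 64 * 64 * 3), nn.Sigmoid(),
    )

decoder_a = make_decoder()  # trained only on faces of person A
decoder_b = make_decoder()  # trained only on faces of person B

opt = torch.optim.Adam(
    list(encoder.parameters())
    + list(decoder_a.parameters())
    + list(decoder_b.parameters()),
    lr=1e-3,
)
loss_fn = nn.MSELoss()

# Random tensors stand in for real training datasets of each person's face.
faces_a = torch.rand(8, 3, 64, 64)
faces_b = torch.rand(8, 3, 64, 64)

# Training alternates: reconstruct A with decoder_a, B with decoder_b.
for faces, decoder in ((faces_a, decoder_a), (faces_b, decoder_b)):
    recon = decoder(encoder(faces)).view(-1, 3, 64, 64)
    loss = loss_fn(recon, faces)
    opt.zero_grad()
    loss.backward()
    opt.step()

# The "swap": encode person A's pose and expression, then render the result
# with person B's decoder, producing B's face performing A's expression.
swapped = decoder_b(encoder(faces_a)).view(-1, 3, 64, 64)
```

The trick is that the single shared encoder is forced to represent what the two faces have in common, so feeding one person’s encoding into the other person’s decoder produces the uncanny substitution that gave the technique its name.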
Among the applications of deepfake technology, none is more troubling than the spread of digitally altered videos of political figures, created to influence public perception and opinion. As Simon Parkin of The Guardian explains in “The Rise of the Deepfake and the Threat to Democracy,” one example is “a digitally altered video showing Nancy Pelosi, the Speaker of the US House of Representatives, appearing to slur drunkenly through a speech [that] was widely shared on Facebook and YouTube.” President Trump’s subsequent distribution of this false video to hundreds of thousands of viewers is a tangible example of the troubling implications of deepfakes for democracy as a whole.
As Paris and Donovan report, “Currently, technologists, policymakers, and journalists are responding to deepfakes with calls for what scholars call technical and legal closures — that is, regulations, design features, and cultural norms that will determine the role of this technology” (8). Interestingly, political deepfakes do more than plant false stories in a medium (video) previously viewed as reputable; more damaging is the distrust and doubt they sow in media’s validity, which ultimately calls into question what, if anything, can be trusted. On social platforms such as Twitter, where the Pelosi video circulated, it is important to consider the voting-age audience consuming this factually incorrect content. As Karen Hao reports in an MIT Technology Review article titled “The biggest threat of deepfakes isn’t the deepfakes themselves,” “Deepfakes do pose a risk to politics in terms of fake media appearing to be real, but right now the more tangible threat is how the idea of deepfakes can be invoked to make the real appear fake.” This context, along with the difficult-to-measure but real repercussions of incidents such as Pelosi’s, allows us to grasp the full scope of the damage done to democracy, trust, and the distribution of facts.
While the potential for deepfakes to be used for large-scale public misinformation is clear, Paris and Donovan raise a crucial point about proposals to address misinformation by labeling deepfakes: "Venture capitalists, technologists, and entrepreneurs in particular have called for new forms of technical redress, [such as] the automated identification of fakes… There is a risk that these technical and legal closures will be directed by those who already hold economic and political power" (9).
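For a sense of what that “automated identification” can look like in practice, below is a minimal, hypothetical sketch of a frame-level real-versus-fake classifier built by fine-tuning a pretrained image backbone. Real detection systems are far more elaborate; this names no actual product’s method, and the labeled frames are stand-ins.

```python
import torch
import torch.nn as nn
from torchvision import models

# Fine-tune a pretrained backbone into a two-class (real vs. fake)
# frame classifier -- the rough shape of many proposed detection tools.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

# Stand-ins for labeled, preprocessed video frames: 0 = real, 1 = fake.
frames = torch.rand(4, 3, 224, 224)
labels = torch.tensor([0, 1, 0, 1])

loss = nn.CrossEntropyLoss()(model(frames), labels)
loss.backward()  # in practice, wrapped in a full training loop

# At inference time, report a per-frame probability that the frame is fake.
probs = torch.softmax(model(frames), dim=1)[:, 1]
```

Even a sketch this small makes the underlying issue visible: such closures are built and tuned by whoever holds the data and the compute, which is exactly the concentration of power Paris and Donovan warn about.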
Paris and Donovan’s assertion reveals the hierarchy of power at work in the issue of deepfakes, while shifting our focus to those most vulnerable to the misuse of these manipulated videos: the consumer public. The public’s vulnerability lies in its inability to distinguish true media from deepfakes. Interestingly, as Paris and Donovan explain, “the most accessible forms of AV [deepfake] manipulation are not technical but contextual. By using lookalike stand-ins, or relabeling footage of one event as another, media creators can easily manipulate an audience’s interpretations” (15). Echoing this sentiment, Hao of the MIT Technology Review states that “the hype and rather sensational coverage speculating on deepfakes’ political impact has overshadowed the real cases where deepfakes have had an impact.” Given the diffuse yet resounding impact of these manipulated videos on deep-rooted institutions such as democracy, it is worth examining the pollution of public channels with digitally manipulated videos at a smaller scale.
One of the most obvious results of the distribution of politically charged deepfake images is the division it sows between people and communities. As Paris and Donovan ask, “If video can no longer be trusted as proof that someone has done something, what happens to evidence, to truth?” (5). The far-reaching effect of turning a once-trusted medium of communication, such as video, into one requiring incessant scrutiny of its validity tears at the very fabric of communities and relationships, further fueling political division. As a society and democracy, we have always feared the spread of misinformation. As Cherilyn Ireton and Julie Posetti explain in Journalism, ‘Fake News’ & Disinformation, “An early record dates back to ancient Rome, when Antony met Cleopatra and his political enemy Octavian launched a smear campaign against him with ‘short, sharp slogans written upon coins in the style of archaic Tweets’” (15). Human beings have a strong inclination to know, obtain, and seek truth in others and the world around us, an inclination that misinformation and deepfakes fly directly in the face of. Misinformation is nothing new, but the unprecedented speed and scale of our current media crisis make it more frightening.
Deepfakes exemplify not just the spread of misinformation but also a recurring pattern in technology, in which paranoia in response to innovation gives way to complacency as time passes. Writing in Slate’s “Paranoids in the Age of Digital Surveillance,” David Laporte argues that there is something to be said for each generation’s paranoid fears coming true: “There are few places you can’t go these days without being photographed or caught on security camera.”
We often seem far more afraid of the future than of the present, even when our present closely resembles our past paranoias. This phenomenon is likely the result of the breakneck pace of technological change, as evident in the meteoric rise of deepfakes and their visual realism. As our attention spans shorten, the pace of technological innovation accelerates at a rate the average consumer struggles to perceive. This dichotomy of fast and slow makes awareness of our current state a rarity and meaningful resistance almost unheard of. Thus, it is no surprise that the societal response to the emergence of frightening technologies like deepfakes is nearly nonexistent.
This transition from paranoia to societal silence is partly the cyclical outcome of a system designed to profit from our complacency, one of humanity’s most exploitable weaknesses. Today, the virality that defines deepfakes dovetails with the technology industry’s profit model, in which virality rules and content comes second. Inflammatory messages, like those Octavian stamped on coins long ago, remain the standard for achieving engagement at scale. So it should come as no surprise that divisive content such as deepfakes continues to proliferate, unobstructed by the structure or administration of the platforms that host it. We trust technology companies with our data and expect moral policing from corporations that profit from engagement and time spent online, an arrangement that only increases societal division among groups.
Meanwhile, summarizing a recent online trust study by a London-based web hosting guide, Eileen Brown of ZDNet reports that “it discovered that almost half (47%) of Americans admitted that they did not believe they could be a victim of deepfaking. But almost nine out of 10 (88%) of people thought that deepfakes could cause more harm than good.” There is an obvious disparity between the prevalence of these social media features and public awareness of deepfakes. Perhaps as a result, there is little advocacy surrounding the increasing availability of deepfake technology. However, the inextricable and highly dependent nature of our relationship to technology platforms should be considered when measuring public awareness (or the lack of it). In today’s technological climate, it is difficult for the public to voice concerns or effect meaningful action while we remain thoroughly dependent on technology’s monopoly over our attention and livelihoods.
Upon examining our evolving relationship to the deepfake, we begin to notice a pattern of adoption and complacency that is all too familiar in the world of technology. The observable consequence of this pattern so far is a society and economy firmly dependent on technological innovation for its livelihood, sense of progress, and industry. As the issue of privacy exemplifies, this pattern has a resounding impact on our pushback, or lack thereof, against the personal data being collected about us.
One might notice how the concept of privacy has passed through stages similar to deepfakes’, beginning with far-fetched paranoid visions of surveillance in films such as “Enemy of the State” (1998). Despite broad public sentiment in favor of regulating tech, the technology sector remains unchecked by our government, and issues such as the collection of personal data and the distribution of divisive viral content like deepfakes continue to be exploited to increase the profits of technology corporations.
Most concerning in the recent rise of deepfake technology is its integration into many popular social media platforms. As Michael Nuñez of Forbes reported in “Snapchat and TikTok Embrace ‘Deepfake’ Video Technology Even As Facebook Shuns It,” “TikTok… reportedly built a deepfake-maker that will let users swap faces in recorded videos and will allow them to easily create their own deepfakes by scanning faces from multiple angles, then choosing from a collection of curated videos to superimpose their face into.” With 800 million monthly active users across 154 countries and growing, according to Wallaroo (“TikTok Statistics — updated February 2020”), TikTok is uniquely positioned to normalize deepfakes globally on a public platform built to push the bounds of what counts as entertainment. Nor is this manipulative technology confined to TikTok; similar features have appeared in other popular social media apps such as Instagram and Snapchat.
The pattern of desensitization and complacency in our relationship to technology is not unique to deepfakes. Our obsession with technological innovation for its own sake allows us to disregard our fears and adopt the innovations of technology companies before they have proven themselves worthwhile or trustworthy. As a result, we are beholden to the technology industry’s incessant pursuit of the new.
Lighthearted framing from technology companies, such as Snap’s press release “Introducing Cameos,” which promises that “whether you’re feeling excited, exhausted, or just want to say hi, you can find the perfect Cameo [deepfake] for the moment,” is weak justification at best. Yet it offers just enough cover that our inclination toward complacency takes the lead, and we remain heavily biased toward welcoming the next innovation. Even though technology offers us an abundance of tools (often free of cost), we should question our eagerness to participate in, and inadvertently contribute to, innovations we do not yet fully understand. At their core, massive technology companies such as Snap, Google, and Facebook are profit engines, despite their rhetoric about the betterment of humanity. It is our data, attention, and usage patterns that are most valuable to these corporations, and this is also what we hand over most readily in exchange for the momentary pleasure of seeing what we would look like with Kim Kardashian’s face superimposed onto our own. And yet, simultaneously, some of these technological products and services also have a positive influence on many aspects of our lives.
Herein lies the complexity of the situation: it is possible for technologies to be beneficial to us as individuals and small groups while detrimental to society and democracy as a whole. Technology often seems to value the individual over the community, fostering division and a culture of small tribes, ranging from monocultures of like-minded individuals to major political disillusionment. This raises the question: whose responsibility is it to draw lines past which technology cannot cross?
As S.F. Anderson explains in Technologies of Vision, “Ultimate responsibility for the veracity of data … falls not to algorithmic processes but to cultural and social ones” (103). Is it the consumer’s job to vote with our eyes for what we value and desire? Or is that asking too much, given that the companies creating products such as deepfakes do so with the backing of vast behavioral data and psychological insight into how best to exploit our reptilian instincts?
The bottom line is this: technology companies are not impartial, nor are their products always created in our best interest. As consumers whose livelihoods are directly tied to an economy that craves innovation, our role is merely that of one data point among millions, each with spending power to be influenced through advertising. Within this frame, our data-point selves are offered incentives, in the form of connection, entertainment, instant gratification, and productivity, to justify handing over our personal information and identities to the technology industry. Our cumulative behavioral data is collected at monumental scale and stockpiled for further analysis, all in the interest of an endless pursuit of near-perfect accuracy in targeting, so that consumers can be sold the maximum amount of stuff possible.
At their core, the issues surrounding our blind adoption of new technologies are rooted in the pitfalls of America’s late-stage capitalist economy and the bleeding of capitalism’s monetary ideals into all aspects of modern life. As Holly Willis elegantly writes in Fast Forward: The Future(s) of the Cinematic Arts, we inhabit “a world in which to be human is to be culpable, to say the least; obsolete to say the worst; or posthuman, to gesture toward our suddenly seemingly mandatory ethical obligations in relation to the various sentient others around us” (9). In deepfakes and the related issues surrounding our relationship to technology, misinformation, and complacency, we are reminded of the variety of systems that make meaningful opposition to our capitalist system difficult, as well as the broad range of contexts that make up our global world.
As Meng Jing of the South China Morning Post reports in “China issues new rules to clamp down on deepfake technologies used to create and broadcast fake news,” at the start of 2020 the Cyberspace Administration of China enacted a law that “requires that providers and users of online video news and audio information services put clear labels on any content that involves new technologies such as deep learning in the process of creation, distribution and broadcast.” Somewhat ironically, it is solutions like China’s that let us imagine a world in which we can rebuild our trust in what we see. To rebuild that trust, America will need a comprehensive understanding of the systems and the broader economic context that sustain the blurring of, and tension between, visual truth and falsity. That tension is further exacerbated by readily adopted technological innovation with unproven value and little to no accountability to democracy or society as a collective. Deepfakes are a tangible example of how, in succumbing to instant gratification and novelty, we lose our ability to tell truth from falsehood, contributing to the decay of democracy and the erosion of human values.