By: Merrin Overbeck


A growing issue in today’s society is that of deep fake videos: “. . . a portmanteau of ‘deep-learning’ and ‘fake,’ [deep fake videos] are audio or visual material digitally manipulated to make it appear that a person is saying or doing something that they have not really said or done.”[1] This technology uses algorithms to create incredibly realistic videos from images or audio recordings of actual people.[2] These videos have become increasingly common on social media platforms, with Facebook being one of the primary sites where they are posted.[3]

These videos have proliferated because new technology allows average individuals to create deep fake videos without any particular technical experience or skill. For example, in January 2018, a free, user-friendly application was released that made deep fake technology available to the general public.[4] This application allowed any individual with access to the internet and images of a person’s face to create a deep fake video.[5] Not only has this technology become more easily accessible, but it has also grown so sophisticated that “an average internet user by the 2020 election could create doctored videos so realistic forensic experts will have to verify whether the content is real.”[6]

While this technology can be used in harmless ways, individuals with access to it have unfortunately found illegal uses for it. One example is “deep-fake pornography,” a form of nonconsensual pornography in which private, intimate images or videos of another person are disclosed without that individual’s consent.[7] This illegal use of deep fake technology is harmful because individuals are turned into “objects of sexual entertainment against their will, causing intense distress, humiliation, and reputational injury.”[8] Another example is its use in white collar and financial crime: criminals have used this technology to impersonate company executives in order to defraud businesses.[9] To understand how harmful this technology can be, consider a situation in which, right before a company’s initial public offering:

a deepfake video . . . show[s] the company’s CEO soliciting a child prostitute or saying something he shouldn’t say in a way that upends the initial public offering. . . If released at a crucial time, a deep-fake video could destroy the marketplace’s faith in a CEO or company. Depending on the timing of the release, a deep-fake video can hijack people’s lives and companies’ fortunes.[10]

Another example of the illegal use of deep fake technology, and the one that the rest of this post discusses, is its use by malicious groups to distribute false news and information. According to Europol, “little or no technical expertise is required to [create these videos], and recent developments in deep fake video and audio mimicking technology make it easier to spread disinformation and impersonate individuals.”[11] In this context, these videos are created in order to sway voters toward one political ideology over another.[12] This was a major issue in the 2016 presidential election, and the primary platform on which these videos were distributed was Facebook.[13] Social media platforms such as Facebook are popular venues for posting these videos because they allow users to post frequently about any topic of their choice.[14] The ease of posting such videos has led to criticism of Facebook for its complacency regarding the spread of false information.[15]

After learning of the issues that deep fake videos have caused on its platform, Facebook recently implemented a new policy that bans some deep fake videos from its website.[16] In its official announcement regarding this new policy, Facebook noted that deep fake videos that are “meant for satire are still fine, along with any that have a more serious purpose.”[17] This means that videos edited merely to enhance clarity or quality are still allowed.[18] Facebook’s announcement is largely focused on preventing videos that are part of “a misinformation campaign, what Facebook calls manipulated media.”[19] Facebook is able to enforce this policy because various technology companies have developed means of detecting videos created with this technology. For example, one way of identifying a deep fake is observing the lack of blinking by the subject of the video.[20]
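To make the blink-rate cue concrete, the sketch below shows how such a heuristic might work in principle. It uses the eye aspect ratio (EAR), a common proxy for eye closure computed from six eye landmarks; the landmark ordering, thresholds, and sample values here are illustrative assumptions, not a description of Facebook’s or any vendor’s actual detector:

```python
import math

# Illustrative blink-rate heuristic (assumptions, not any platform's real detector).
# The eye aspect ratio (EAR) is computed from six eye-contour landmarks
# p1..p6; it drops toward zero when the eye closes.

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks around the eye contour, ordered p1..p6.
    EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|)."""
    p1, p2, p3, p4, p5, p6 = eye
    return (math.dist(p2, p6) + math.dist(p3, p5)) / (2.0 * math.dist(p1, p4))

def count_blinks(ear_series, closed_thresh=0.2):
    """Count open-to-closed transitions across a sequence of per-frame EARs.
    The 0.2 threshold is a hypothetical choice for illustration."""
    blinks, closed = 0, False
    for ear in ear_series:
        if ear < closed_thresh and not closed:
            blinks += 1
            closed = True
        elif ear >= closed_thresh:
            closed = False
    return blinks

def looks_suspicious(ear_series, fps=30, min_blinks_per_min=5):
    """Flag a clip whose blink rate is implausibly low for a real speaker."""
    minutes = len(ear_series) / fps / 60
    return minutes > 0 and count_blinks(ear_series) / minutes < min_blinks_per_min
```

A real pipeline would extract the landmark coordinates per frame with a face-landmark model; this sketch only captures the decision logic, and a clip of a subject who never blinks would be flagged while normally blinking footage would pass.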

While this policy decision by Facebook partially addresses the harmful use and distribution of deep fake videos, a wide range of harms remains. To detect videos in which only a small portion is altered, technology companies will have to develop detection tools that keep pace with the rapid advancement of deep fake technology. This is an important consideration because circumvention “used to take years, [but] it can now occur in two- or three-months’ time.”[21] Unfortunately, technology developed to detect deep fake videos can also be used to create more convincing fakes.[22] For example, German researchers created an algorithm to detect “face swaps”[23] in videos, but then discovered that the same technology could be used to “improve the quality of face swaps in the first place – and that could make them harder to detect.”[24]

Therefore, while Facebook’s decision to ban harmful deep fake videos partially solves the problem on that specific platform, deep fake technology remains a growing issue that technology companies, governments, law enforcement agencies, and society as a whole must be aware of. Facebook is just one example of an entity recognizing this issue and responding.

 

[1] Mary Anne Franks & Ari Ezra Waldman, Sex, Lies, and Videotape: Deep Fakes and Free Speech Delusions, 78 Md. L. Rev. 892, 893 (2019).

[2] See id. at 894.

[3] See John Brandon, Facebook Bans Deepfake Videos That Could Sway Voters, But Is It Enough?, Forbes (Jan. 13, 2020), https://www.forbes.com/sites/johnbbrandon/2020/01/13/facebook-bans-deepfake-videos-that-could-sway-voters-but-is-it-enough/#4a6d6283476b.

[4] Nancy McKenna, Head to Head, 15 No. 7 Quinlan, Computer Crime and Technology in Law Enforcement, July 2019.

[5] See id.

[6] Olivia Beavers, House Intel to Take First Major Deep Dive into Threat of ‘Deepfakes’, The Hill (June 13, 2019), https://thehill.com/homenews/house/448278-house-intel-to-take-first-major-deep-dive-into-threat-of-deepfakes.

[7] Franks & Waldman, supra note 1, at 893.

[8] See id. at 893.

[9] Henry Kenyon, AI Creates Many New Malicious Opportunities for Crime, Europol Says, Congressional Quarterly Roll Call, 2019 WL 3244189 (July 19, 2019).

[10] Robert Chesney & Danielle Keats Citron, 21st Century-Style Truth Decay: Deep Fakes and the Challenge for Privacy, Free Expression, and National Security, 78 Md. L. Rev. 882, 887 (2019).

[11] McKenna, supra note 4.

[12] Brandon, supra note 3.

[13] See id.

[14] See id.

[15] See id.

[16] See id.

[17] See id.

[18] See id.

[19] See id.

[20] McKenna, supra note 4.

[21] See id.

[22] See id.

[23] See id.

[24] See id.

 

image source: https://pixabay.com/photos/twitter-facebook-together-292994/