By: Olivia Akl

Generative Adversarial Networks: The Tech Behind DeepFake and FaceApp

In a world that mass-produces conspiracy theories, from the moon landing being faked to black helicopters coming to bring the US under UN control, it can occasionally be hard to tell fact from fiction. That’s only getting harder thanks to technological advancements like deepfake videos,[1] hyper-realistic silicone masks,[2] and, soon, smart contact lenses.[3] When we can’t trust what we see with our own eyes, what can we trust? What does this advance in technological trickery mean for the reliability of eyewitness accounts and video evidence in courts?


A deepfake PSA produced by BuzzFeed in 2018 seemed to show President Obama warning people about the threat deepfakes presented.[4] It ended with the reveal that the person speaking was not President Obama, but rather Jordan Peele doing an impersonation of President Obama overlaid with President Obama’s image using FakeApp and After Effects CC.[5] This video opened many eyes to the power of deepfake technology and how convincing it could be.[6] However, deepfake videos are hardly the first videos to fool people into thinking one person is doing something when it is really someone else.
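Tools like FakeApp rest on the generative adversarial networks named in this article’s title: a generator learns to produce fakes while a discriminator learns to flag them, each improving against the other. A minimal sketch of that adversarial loop, on toy one-dimensional data rather than faces, with all parameter names and hyperparameters invented for illustration:

```python
import numpy as np

# Toy GAN: generator G(z) = a*z + b tries to mimic "real" samples from
# N(4, 1.25); logistic discriminator D(x) = sigmoid(w*x + c) tries to
# tell real from generated. They are trained in alternation.
rng = np.random.default_rng(0)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

a, b = 1.0, 0.0          # generator parameters (scale and shift of noise)
w, c = 0.1, 0.0          # discriminator parameters
lr, steps, batch = 0.05, 3000, 64

for _ in range(steps):
    # Discriminator step: ascend log D(real) + log(1 - D(fake))
    x_real = rng.normal(4.0, 1.25, batch)
    x_fake = a * rng.normal(0.0, 1.0, batch) + b
    p_real = sigmoid(w * x_real + c)
    p_fake = sigmoid(w * x_fake + c)
    w += lr * (np.mean((1 - p_real) * x_real) - np.mean(p_fake * x_fake))
    c += lr * (np.mean(1 - p_real) - np.mean(p_fake))

    # Generator step: ascend log D(fake) (non-saturating loss),
    # back-propagating through D by hand via the chain rule.
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b
    p_fake = sigmoid(w * x_fake + c)
    dx = (1 - p_fake) * w          # d log D(fake) / d x_fake
    a += lr * np.mean(dx * z)
    b += lr * np.mean(dx)

samples = a * rng.normal(0.0, 1.0, 10_000) + b
print(f"generated mean ~ {samples.mean():.2f} (target 4.0)")
```

Real deepfake systems replace these scalar parameters with deep convolutional networks trained on images of faces, but the adversarial back-and-forth is the same: the fakes improve precisely because a detector keeps catching them.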


Security videos of a string of robberies in San Diego from 2009 to 2010, analyzed by the FBI, led the bureau to offer a $20,000 reward for information leading to the arrest of the so-called “Geezer Bandit.”[7] While at least one witness thought the robber was wearing a “Halloween-style old man” mask, the authorities felt confident in the many other eyewitness accounts that he was a 60-to-70-year-old man,[8] and the reward notice described him as such.[9] Surveillance footage from outside the site of the Geezer Bandit’s last robbery, on December 2, 2011, showed the supposed 60-to-70-year-old sprinting across a parking lot after a dye pack exploded.[10] This led the FBI to update its reward notice to include the line: “Possibly wearing a synthetic mask and gloves to hide true physical characteristics.”[11] The “Geezer Bandit” was never caught.


These technologies may seem like something out of a Mission Impossible movie, but they are real, and they are getting both cheaper[12] and better.[13] Another device straight out of science fiction, the smart contact lens, may be only a few years away.[14] Mojo Vision, a California-based company, has been working on such a lens, the Mojo Lens, for five years.[15] While the Mojo Lens is intended as a discreet replacement for a smartphone’s screen, something like a less obvious Google Glass, it is not a big jump to see how such a device could alter the wearer’s perception of the world.


Deepfake technology has already proven capable of working in real time to overlay one person’s image over a live speech.[16] If a smart contact lens could be hacked or infected with malware that allowed access to the view that an individual sees, is it possible a deepfake could be created for the wearer’s eyes only, altering the wearer’s perception of the world in real time? Imagine a smart contact lens wearer witnesses a crime and describes the perpetrator to the police. In a world with these two technologies, smart contact lenses and live deepfakes, can that eyewitness account be trusted?


There is already worry over the reliability of eyewitness testimony today, without any potentially hacked or malware-ridden smart contact lenses to muddy the waters.[17] Human memory is fallible, and people are often not as perceptive as lawyers hope their witnesses are, yet “jurors place heavy weight on eyewitness testimony when deciding whether a suspect is guilty.”[18] In the future, smart contact lenses may present new issues with eyewitness accounts, perhaps to the point where eyewitnesses will no longer be trusted on the stand.


Another new worry for the courts will be whether video evidence can be relied upon at all, given hyper-realistic silicone masks and deepfake technology. If the technology advances beyond what forensic analysis can reveal as false, could an innocent person be framed for a crime? Even if a fake can be recognized upon analysis of the video, could that analysis be prohibitively costly for a court or a defendant to bear? If so, future courts will need to ask: can video evidence be relied upon when there is a lingering question about its veracity, and how expensive can cases relying on video evidence be allowed to become?

[1] See Daniel Thomas, Deepfakes: A Threat to Democracy or Just a Bit of Fun?, BBC (Jan. 23, 2020),

[2] Matt Simon, Gaze Into These Hyperrealistic Masks and See a Troubling Future, Wired (Jan. 6, 2020, 2:15 PM),

[3] Julian Chokkattu, The Display of the Future Might Be in Your Contact Lens, Wired (Jan. 16, 2020, 8:00 AM),

[4] BuzzFeedVideo, You Won’t Believe What Obama Says in This Video!, YouTube (Apr. 17, 2018),

[5] See id.

[6] See id.

[7] Reward of $20,000 Offered in “Geezer Bandit” Investigation, FBI San Diego (Dec. 15, 2010),

[8] FBI Still Seeking Help Catching ‘Geezer Bandit’; $20,000 Reward Offered, Los Angeles Times: L.A. Now (Dec. 15, 2010, 11:28 AM),

[9] See FBI San Diego supra note 7.

[10] Tony Perry, Geezer Bandit May Not Be a Geezer, Los Angeles Times (Dec. 23, 2011, 12:00 AM),

[11] Darrell Foxworth, Reward of $20,000 Offered in “Geezer Bandit” Investigation, FBI (Dec. 2, 2011),

[12] See Simon, supra note 2.

[13] Pakinam Amer, Deepfakes Are Getting Better. Should We Be Worried?, Boston Globe (Dec. 13, 2019, 4:07 PM),

[14] See Chokkattu, supra note 3.

[15] See id.

[16] Samantha Cole, This Program Makes it Even Easier to Make Deepfakes, Vice: Motherboard (Aug. 19, 2019, 11:50 AM),

[17] Hal Arkowitz, Why Science Tells Us Not to Rely on Eyewitness Accounts, Scientific American: Mind (Jan. 1, 2020),

[18] See id.

