Post Authored By: Natalie Elizaroff
You may think you can trust your eyes and ears, but with the rise of deepfakes, it is becoming increasingly difficult to discern reality from fiction. Deepfakes, or synthetic media created using artificial intelligence, have proliferated in recent years. While these technologies have exciting potential applications, they also pose significant challenges for the legal community. From frightening cases of misinformation to nonconsensual pornography, deepfakes are raising complex legal questions that require careful consideration. In this article, we explore some of the key legal issues surrounding synthetic media and propose steps that the legal community can take to address these challenges.
Misinformation and Deepfakes
Misinformation, defamation, and deepfakes are all interconnected issues that have become increasingly prevalent in today’s digital age. Misinformation refers to false or inaccurate information that is spread unintentionally, while defamation is the act of intentionally spreading false or damaging information about an individual or entity. Deepfakes, on the other hand, are a specific type of manipulated media that uses artificial intelligence and machine learning to create realistic but entirely fake content. Deepfakes can be used to create false or misleading information about individuals or organizations, which can be spread rapidly on social media and other digital platforms. The rise of deepfakes has raised significant challenges for the legal community, particularly when it comes to determining whether a deepfake video constitutes defamation.
For example, a deepfake video could be created to make it look like a politician said or did something scandalous, even if it never actually happened. This can be especially damaging during an election cycle or when public opinion about a person or issue is being formed. In 2018, a deepfake video of former U.S. President Barack Obama went viral on social media.[1] The video paired authentic footage of Obama with a voice impersonation by Jordan Peele, using AI to manipulate Obama's lip movements so that he appeared to deliver a speech he never actually gave. While the video was created as a public service announcement rather than to spread misinformation, it raised concerns about the potential for deepfakes to be used to spread false information or manipulate public opinion.
Nonconsensual Pornography and Deepfakes
Another significant legal issue related to deepfakes is nonconsensual pornography. Starting as a Reddit trend involving celebrities, deepfake pornography involves creating pornographic videos or images that superimpose a person's face onto someone else's body without their consent. The practice is largely unmoderated and can be particularly damaging to the individual whose face has been used in the deepfake, as well as to their reputation and privacy.
In a significant 2023 case, Patrick Carey was convicted of creating sexually explicit deepfakes of women he knew from middle school and high school. Carey was sentenced to six months in jail and required to register as a sex offender. According to court documents, Carey posted fabricated images of almost a dozen women in sexual situations, accompanied by their personal information, and encouraged other users of the pornography website to subject these women to harassment and threats of violence. Unfortunately, this conviction is a single drop in the ocean of users creating nonconsensual pornographic deepfakes. Websites such as MrDeepFakes,[2] one of the most prominent hubs for deepfake pornography, showcase just how expansive this issue is.
Legal Strategies for Approaching Deepfakes
To address these and other legal issues related to synthetic media, the legal community could take several steps.
- Develop new laws and regulations: One approach would be to develop new laws and regulations that specifically address the creation and dissemination of synthetic media. These laws could criminalize the creation or distribution of deepfakes that are intended to harm or defame individuals and establish penalties for those who violate these laws.
- Strengthen existing laws: Technology advances rapidly, while the law moves slowly and struggles to keep pace, so strengthening existing laws may be a more practical solution than writing new ones. For example, laws governing defamation, intellectual property, privacy, and fraud could be updated to reflect the unique challenges posed by deepfakes.
- Create legal guidelines: Legal guidelines could be developed to help judges and juries evaluate cases involving synthetic media. These guidelines could help clarify the legal standards for determining whether a deepfake constitutes defamation or nonconsensual pornography, for example, and could provide guidance on how to assess the authenticity of evidence that includes synthetic media.
- Collaborate with the technology industry: In a world dominated by technology, the legal community could begin to bridge the gap and work closely with the technology industry to develop tools and standards for detecting and preventing the creation and dissemination of synthetic media. For example, legal experts could work with software engineers to develop algorithms that can identify deepfakes or collaborate with social media companies to establish policies for removing deepfakes from their platforms.
- Promote public education: Finally, the legal community could help to promote public education and awareness about the risks of synthetic media. This could include developing public service campaigns that educate people about the dangers of deepfakes, and creating resources to help individuals and organizations identify, report, and respond to unlawful uses of synthetic media.
Conclusion
Deepfakes have the potential to revolutionize many aspects of our lives, from entertainment to education to journalism. However, these technologies also pose significant challenges for the legal community. To address these challenges, it will be necessary to develop laws and regulations that specifically address the creation and dissemination of deepfakes, as well as to establish legal guidelines to help the legal community evaluate cases involving synthetic media. With proactive work, the deepfake can 'stay fake', and we can ensure that these technologies are used ethically, responsibly, and safely.
[1] David Mack, This PSA About Fake News from Barack Obama Is Not What it Appears, BuzzFeed News (Apr. 17, 2018), https://www.buzzfeednews.com/article/davidmack/obama-fake-news-jordan-peele-psa-video-buzzfeed.
[2] Kat Tenbarge, Found through Google, bought with Visa and Mastercard: Inside the deepfake porn economy, NBC News (Mar. 27, 2023), https://www.nbcnews.com/tech/internet/deepfake-porn-ai-mr-deep-fake-economy-google-visa-mastercard-download-rcna75071.
About the Author:

Before pursuing a legal career, Natalie spent several years in the microbiology department at Evanston Hospital, where she conducted comparative research studies, performed quality control testing, and worked on state-of-the-art medical device technology. After doing a swift 180 and finding law as her true calling, Natalie focused her efforts on intellectual property.
Natalie received a Bachelor of Science in Molecular Biology, with a minor in Biostatistics, from Loyola University Chicago. She earned her law degree from UIC School of Law, and she is currently working as an Associate at Advitam IP LLC, where she handles a variety of IP matters including trademark litigation, copyright infringement, and other IP-related disputes.