

A deep dive on deepfakes: Considering the intersection between investigations and deepfake technology


By now, you’ve probably heard about deepfakes. Maybe you followed the story of Taylor Swift, who was targeted by deepfakes that appeared to show her endorsing Donald Trump,1 or maybe you’ve seen concerns raised about how deepfakes of politicians could impact our next election here in Canada.2 “Deepfakes” are images, videos, or audio that have been digitally manipulated using artificial intelligence.3 In essence, they can convincingly show a person doing or saying something that they have not said or done. The discussion around deepfakes is part of a broader reckoning with how we will use (or misuse) and adapt to artificial intelligence, and this includes how we as investigators will respond to allegations that involve the use of deepfakes.

This is particularly relevant for investigators like me, who frequently conduct investigations in the education sector. In recent years, cases of widespread deepfake abuse — generally consisting of students creating sexualized deepfake images or videos of their classmates — have been uncovered at high schools and universities around the world. For example, after police in South Korea began investigating the creation and sharing of deepfake pornography depicting students at two of the country’s major universities, a South Korean journalist dug deeper and discovered that groups were also targeting students at high schools and middle schools. Eventually, over 500 schools and universities were found to have been targeted.4 At a high school in New Jersey, a group of male students created and shared sexual deepfake images of more than 30 of their female classmates.5 However, these cases do not just involve students. At a high school in Texas, a student was accused of creating and sharing sexually explicit deepfake images of a teacher at their school.6

Given their increasing prevalence, we can expect to encounter allegations related to deepfakes more frequently in investigations, especially in the education sector. These types of allegations pose unique challenges that are worth considering before they land on your desk. Below are some of the challenges to keep in mind, and some strategies to address them.

First, unlike allegations of harassment or sexual harassment involving in-person interactions, where the respondent is easily identified, a complainant who alleges that they have been targeted by a deepfake may not know who created it. The image, video, or audio could be shared on social media or messaging applications using a fake profile. In cases like this, where the investigator has been asked to try to identify the respondent, they should consider what evidence they can gather from the complainant and witnesses that might point them in the right direction. For example:

    • Does the complainant have any idea who might have created the deepfake? If so, what are their suspicions based on?
    • Was the deepfake sent to or shared with anyone directly? If so, does that person have any idea who sent it?
    • Is there any indication that the material was shared over an organization’s own Wi-Fi? If so, are there any records that might assist in identifying the respondent?

Second, like any other digital evidence, the evidence related to a deepfake allegation is potentially ephemeral. Images, video, and audio shared on platforms like Snapchat or Instagram Stories can auto-delete after a set period of time, and content shared on other social media platforms can generally be removed or deleted by the person who posted it. Investigators should therefore take care to preserve digital evidence of deepfakes with screenshots, screen recordings, or downloads. It’s also important to capture any date and time stamps or other metadata related to the digital evidence. Particularly when the respondent is unknown, evidence about when an image was posted or shared could assist in identifying a respondent.
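For investigators (or the IT colleagues supporting them) who are comfortable with a small amount of scripting, the sketch below shows one way to make that preservation step more defensible: recording each captured file’s cryptographic hash and the time it was collected, so it can later be shown that the evidence has not changed since preservation. This is only an illustration, not a prescribed method; it assumes Python with the standard library, and the function and log file names (preserve_evidence, evidence_log.json) are invented for the example.

    import hashlib
    import json
    from datetime import datetime, timezone
    from pathlib import Path

    def preserve_evidence(file_path: str, log_path: str = "evidence_log.json") -> dict:
        """Record a captured file's hash and capture time, so it can later be
        shown that the evidence has not changed since it was preserved."""
        path = Path(file_path)
        entry = {
            "file": path.name,
            # SHA-256 fingerprint of the file contents at the time of capture
            "sha256": hashlib.sha256(path.read_bytes()).hexdigest(),
            # When the evidence was preserved (UTC, so times are unambiguous)
            "preserved_at": datetime.now(timezone.utc).isoformat(),
            "size_bytes": path.stat().st_size,
        }
        log = Path(log_path)
        entries = json.loads(log.read_text()) if log.exists() else []
        entries.append(entry)
        log.write_text(json.dumps(entries, indent=2))
        return entry

    # Example: log a screenshot captured from a social media post
    # preserve_evidence("screenshots/post_capture.png")

Even without scripting, the same idea applies: note when each screenshot or download was captured, and keep the original files unaltered.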

Third, investigations related to deepfakes have the potential to involve many respondents and allegations, given the relative ease with which deepfakes can be created and shared. In the cases described above, for example, deepfakes were created and shared dozens or hundreds of times. As with any large-scale investigation, this poses challenges for investigators in terms of conducting a timely investigation and managing a large volume of evidence. Where an investigation related to deepfakes is expected to be large in scope, investigators should consider how they are going to organize the evidence so that it is easy to retrieve and review as necessary for the purposes of conducting interviews and drafting the report. This could include an index, a chart, or well-labelled folders and subfolders. Investigators should also recognize the time commitment that large-scale investigations require. They should consider whether they have the capacity to conduct the investigation in a timely manner and whether they can involve colleagues, such as fellow investigators or assistants, to help manage, review, and summarize evidence in the interests of timeliness.
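As a simple illustration of the indexing idea, the sketch below (again Python, standard library only; the folder layout, function name, and column headings are hypothetical) walks a folder of collected evidence and produces a spreadsheet-style index with one row per file, which can then be sorted, searched, and annotated during review.

    import csv
    from pathlib import Path

    def build_evidence_index(evidence_dir: str, index_path: str = "evidence_index.csv") -> None:
        """Walk a folder of collected evidence and write a simple index
        (one row per file) for use while reviewing and drafting the report."""
        rows = []
        for file in sorted(Path(evidence_dir).rglob("*")):
            if file.is_file():
                rows.append({
                    # Subfolder, e.g. one per allegation or respondent
                    "folder": str(file.parent.relative_to(evidence_dir)),
                    "file": file.name,
                    "size_bytes": file.stat().st_size,
                    "notes": "",  # filled in manually as each item is reviewed
                })
        with open(index_path, "w", newline="", encoding="utf-8") as f:
            writer = csv.DictWriter(f, fieldnames=["folder", "file", "size_bytes", "notes"])
            writer.writeheader()
            writer.writerows(rows)

    # Example: index evidence organized into one folder per allegation
    # build_evidence_index("evidence/")

A spreadsheet produced this way can double as the index or chart referred to above, with the notes column completed as each item is reviewed.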

Overall, deepfakes are an example of how rapidly changing technology affects our work as investigators. By staying aware of trends and developments in technology and considering how our investigative experience can be adapted to these new situations, we can be prepared for any novel challenges that come our way.


1 Kat Tenbarge, “Taylor Swift deepfakes on X falsely depict her supporting Trump” (February 7, 2024), online (NBC News): <https://www.nbcnews.com/tech/internet/taylor-swift-deepfake-x-falsely-depict-supporting-trump-grammys-flag-rcna137620>.

2 Elizabeth Thompson, “Deepfakes, influencers will change dynamic of next election, experts say” (September 25, 2024), online (CBC News): <https://www.cbc.ca/news/politics/foreign-interference-artificial-intelligence-1.7334325>.

3 “Deepfake”, online (Merriam-Webster Dictionary): <https://www.merriam-webster.com/dictionary/deepfake>.

4 Jean Mackenzie and Leehyun Choi, “Inside the deepfake porn crisis engulfing Korean schools” (September 2, 2024), online (BBC): <https://www.bbc.com/news/articles/cpdlpj9zn9go>.

5 Kat Tenbarge, “Teen deepfake victim pushes for federal law targeting AI-generated explicit content” (January 16, 2024), online (NBC News): <https://www.nbcnews.com/tech/tech-news/deepfake-law-ai-new-jersey-high-school-teen-image-porn-rcna133706>.

6 Matthew Seedorff, “Houston-area student accused of creating ‘deep fake’ explicit photos of teacher, sharing them online” (April 13, 2023), online (Fox26): <https://www.fox26houston.com/news/houston-area-student-accused-of-creating-deep-fake-explicit-photos-of-teacher-sharing-them-online>.

