In early September 2022, more than forty deepfake experts from across industry, law, art and entertainment, technology, and human rights activism met in a virtual convening to identify critical paths to support the nascent power of synthetic media for advocacy, parody and satire, while confronting the gray areas of deception, gaslighting and disinformation.
The convening was led by the global human rights network WITNESS, convenor of the ‘Prepare Don’t Panic’ initiative on deepfakes and synthetic media, together with the Co-Creation Studio at MIT’s Open Documentary Lab.
Using co-creative methods and provocations, we addressed the choices that companies, individuals, and platforms must make as they engage with satirical deepfakes and with deception posing as humor. Based on the analysis in the in-depth report Just Joking! Deepfakes, Satire and the Politics of Synthetic Media, we grappled with questions of consent when creating and sharing deepfakes, disclosure and labeling of media, intent behind the creation and sharing process, content moderation, the impact of deepfakes in different contexts, and policy-making needs and opportunities.
From this discussion, a number of big ideas and opportunities for potential action emerged. We outline these below; they particularly relate to:
- Leaning into labeling and disclosure of how media is made as a powerful positive dimension of contemporary media creation, not a stigmatizing approach to mis/disinformation;
- Revisiting conceptions of digital dignity and likeness rights;
- Ensuring that disclosure technologies can help us know where and how media was created, while protecting privacy and fundamental human rights;
- Meaningfully mapping issues around consent from the perspective of the most vulnerable and then implementing consent protocols across the synthetic media process;
- Building robust, consolidated takedown processes for non-consensual sexual deepfakes;
- Fostering global participation in emerging EU and US legislation that will set de facto standards globally.
WITNESS and the Co-Creation Studio lay out their own next steps below and will be leading further activities to push on labeling, disclosure, consent, takedowns of non-consensual sexual deepfakes, and global participation. We encourage others to join us or to pick up on other Opportunities for Action outlined below. Contact us at firstname.lastname@example.org or email@example.com.
Based on our findings, below we summarize:
- A. Big ideas and opportunities for action
- B. Our focus for 2023
- C. Design of the September 2022 convening
For detailed notes from the discussion, gathered under the Chatham House Rule, please see here.
A. BIG IDEAS AND OPPORTUNITIES FOR ACTION
OPPORTUNITIES TO REFRAME AND RECONSIDER
- Reframe consent as an artistic opportunity in deepfakery, not a barrier.
- Think of labeling and disclosure of how media was made as an opportunity in contemporary media creation, not a stigmatizing indication of misinformation/disinformation.
- Consider disclosure of how media was made as an act of harm reduction that puts more pressure on harmful actors who try to manipulate and deceive.
- Develop a new conception of digital dignity to underpin our thinking on consent.
- Determine what it means to own your deepfake likeness/avatar and to permit its use in a range of contexts, from an opt-in perspective.
USE NEW DISCLOSURE TECHNOLOGIES AND APPROACHES
- Invest in durable open-source disclosure technologies for where media comes from and how it was made, available to synthetic media developers and commercializers and across the pipeline of creation and distribution.
- Ensure these disclosure technologies protect anonymity or pseudonymity, recognizing their role in social criticism and satire, and in protecting vulnerable individuals.
- Complement these traces from individual creators and commercial technologists with traces generated via the shared open-source libraries used for creation.
- Drive meaningful global participation in emerging European and US deepfake and synthetic media laws.
- Meaningfully map the pipeline of consent and all the points where consent needs to be considered, and center the response on the most vulnerable.
- Promote a consolidated takedown and reporting process for malicious sexual deepfakes.
- Avoid automated systems for deepfake moderation that are ill-suited to global circumstances, poor at detecting synthetic media, and do not work well at identifying intent and satire.
- Look broadly at who bears responsibility for malicious deepfakes, expanding the scope of attention to app stores, payment providers, and cloud service providers.
DON'T GET HUNG UP ON INTENT
Assessing intent is going to be hard (and intent shifts as media moves). But explore crowdsourced, decentralized, smaller community-based assessment to detect, understand, and assess both intent and consequences.
B. WITNESS & CO-CREATION STUDIO 2023 FOCUS
WITNESS and the Co-Creation Studio intend to focus our work in the Just Joking!? strand on the following areas. Please reach out to us if you are interested in collaborating, at firstname.lastname@example.org or email@example.com.
- Lean into Labeling and Showing How Media is Made: We will demonstrate via workshops and innovation how labeling and disclosure of how media is made, including synthetic elements, can be seen as an opportunity in media creation, not a stigmatizing indicator.
- Push for Disclosure: We will promote the need for durable open-source disclosure technologies for where media comes from and how it was made, available to synthetic media developers and commercializers and across the pipeline of creation and distribution. We will fight to ensure these disclosure technologies protect anonymity or pseudonymity, recognizing their role in social criticism and satire, and in protecting vulnerable individuals.
- Push for Shared Ethical Practices: We will collaborate with the Partnership on AI and other actors on a Code of Conduct for synthetic media creation and distribution, cutting across multiple concerns.
- Map Consent: We will lead a research process to map the pipeline of consent in synthetic media, centering the most vulnerable.
- Research and Alliances: In addition, we will research the most appropriate ways to support global allies working to consolidate processes for taking down and reporting malicious sexual deepfakes, and we will examine how to drive meaningful global participation in EU and US deepfake laws.
C. ABOUT THE DESIGN OF THE CONVENING
The event organizers used co-creative methods to support global, inter-sectoral discussions across the diverse expertise present, and to identify and develop concrete actions, best practices, and agreements around the areas identified as needing attention in the Just Joking! report: Consent; Intent; Labeling and Disclosure; Weaponization and Impact; and Content Moderation, Policy and the Law.
These areas formed the basis for four breakout rooms, each facilitated by two leads. Each room had 10-15 participants at a time, rotating to a different room every 20-24 minutes as they moved through three stages of conversation:
- Stage 1: Map the issues
- Stage 2: Prioritize from the issues mapped in Stage 1
- Stage 3: Identify responses to priority issues identified in Stage 2
Each stage, building on the previous one, was designed to lead to actionable next steps. The discussions were synthesized by the leads from WITNESS and Co-Creation Studio.
Co-founder, Artistic Director, Research Scientist Co-Creation Studio, MIT
Director, Programs, Strategy & Innovation, WITNESS
Media Technologist, Technology Threats & Opportunities, WITNESS
FACILITATORS & NOTETAKERS
Associate Director of Programs, Regional & Partner Engagement
Program Associate, Technology Threats and Opportunities, WITNESS
Africa Program Manager, WITNESS
Raquel Vazquez Llorente
Head of Law & Policy, Technology Threats & Opportunities, WITNESS
Archives Program Manager, WITNESS
WATCH THIS SPACE FOR UPDATES