2023 ACTION PLAN

FROM QUESTIONS TO ACTION

SUMMARY

In early September 2022, more than forty deepfake experts from across industry, law, art and entertainment, technology, and human rights activism met in a virtual convening to identify critical paths to support the nascent power of synthetic media for advocacy, parody and satire, while confronting the gray areas of deception, gaslighting and disinformation.

The convening was led by the global human rights network WITNESS, convenor of the ‘Prepare Don’t Panic’ initiative on deepfakes and synthetic media, together with the Co-Creation Studio at MIT’s Open Documentary Lab.

Using co-creative methods and provocations, we addressed the choices that companies, individuals, and platforms must make as they engage with satirical deepfakes and deception posing as humor. Based on the analysis in the in-depth report Just Joking! Deepfakes, Satire and the Politics of Synthetic Media, we grappled with questions of consent when creating and sharing deepfakes, disclosure and labeling of media, intent behind the creation and sharing process, content moderation, the impact of deepfakes in different contexts, and policy-making needs and opportunities.

From this discussion, a number of big ideas and opportunities for potential action emerged. We outline these further below. They particularly relate to:

  • Leaning into labeling and disclosure of how media is made as a powerful positive dimension of contemporary media creation, not a stigmatizing approach to mis/disinformation;
  • Revisiting conceptions of digital dignity and likeness rights;
  • Ensuring that disclosure technologies can help us know where and how media was created, while protecting privacy and fundamental human rights;
  • Meaningfully mapping issues around consent from the perspective of the most vulnerable and then implementing consent protocols across the synthetic media process;
  • Building robust, consolidated takedown processes for non-consensual sexual deepfakes;
  • Fostering global participation in emerging EU and US legislation that will set de facto standards globally.

WITNESS and the Co-Creation Studio lay out their own next steps below and will be leading further activities to push on labeling, disclosure, consent, takedowns of non-consensual sexual deepfakes, and global participation. We encourage others to join us or to pick up on other Opportunities for Action outlined below. Contact us at shirin@witness.org or kcizek@mit.edu.

Based on our findings, below we summarize:

  • A. Four big ideas and opportunities for action
  • B. Our focus for 2023
  • C. Design of the September 2022 convening

For detailed notes from the discussion, gathered under the Chatham House Rule, please see here.

A. BIG IDEAS AND OPPORTUNITIES FOR ACTION

OPPORTUNITIES TO REFRAME AND RECONSIDER

  • Reframe consent as an artistic opportunity in deepfakery, not a barrier.
  • Think of labeling and disclosure of how media was made as an opportunity in contemporary media creation, not a stigmatizing indication of misinformation/disinformation.
  • Consider disclosure of how media was made as an act of harm reduction that puts more pressure on harmful actors who try to manipulate and deceive.
  • Develop a new conception of digital dignity to underpin our thinking on consent.
  • Determine what it means to own your deepfake likeness/avatar and to permit its use in a range of contexts, from an opt-in perspective.

USE NEW DISCLOSURE TECHNOLOGIES AND APPROACHES

  • Invest in durable open-source disclosure technologies for where media comes from and how it was made, available to synthetic media developers and commercializers and across the pipeline of creation and distribution (a minimal illustrative sketch of a disclosure record follows this list).
  • Ensure these disclosure technologies protect anonymity or pseudonymity, recognizing their role in social criticism and satire, and in protecting vulnerable individuals.
  • Complement these traces from individual creators and commercial technologists with traces generated via the shared open-source libraries used for creation.
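
To make the idea of a disclosure record concrete, below is a minimal, hypothetical sketch in Python using only the standard library. The file name, tool name and field names are illustrative assumptions, not any existing specification; real provenance efforts such as C2PA define far richer, cryptographically signed manifests. The sketch binds a "how this was made" claim to a specific file via a hash, and deliberately omits any creator-identity field.

    import hashlib
    import json
    from datetime import datetime, timezone

    def build_disclosure_record(media_path: str, tool: str, synthetic: bool) -> dict:
        """Sketch of a 'how this media was made' record bound to one file."""
        # Hash the file so the record can be checked against the exact asset.
        with open(media_path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        return {
            "asset_sha256": digest,
            "created_utc": datetime.now(timezone.utc).isoformat(),
            "generator": tool,                      # e.g. the synthesis tool used
            "contains_synthetic_media": synthetic,  # disclosure of process, not intent
            # No creator-identity field: disclosing HOW media was made need not
            # reveal WHO made it, preserving anonymity and pseudonymity.
        }

    if __name__ == "__main__":
        # "parody_clip.mp4" and the tool name are hypothetical placeholders.
        record = build_disclosure_record("parody_clip.mp4", "hypothetical-voice-tool", True)
        print(json.dumps(record, indent=2))

Omitting identity by design mirrors the bullet above: disclosure technologies can describe process while still protecting anonymous and pseudonymous creators.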

ADDRESS EXCLUSION

  • Drive meaningful global participation in emerging European and US deepfake and synthetic media laws.
  • Meaningfully map the pipeline of consent and all the places we need to consider, and center the response on the most vulnerable.
  • Promote a consolidated takedown and reporting process for malicious sexual deepfakes.
  • Avoid automated systems for deepfake moderation that are ill-suited to global circumstances, poor at detecting synthetic media, and do not work well at identifying intent and satire.
  • Look broadly at who bears responsibility for malicious deepfakes, expanding the scope of attention to app stores, payment providers and cloud service providers.

DON'T GET HUNG UP ON INTENT

Assessing intent is going to be hard (and intent shifts as media moves between contexts). But we should explore crowdsourced, decentralized and smaller community-based assessment to detect, understand and weigh intent as well as consequences.

B. WITNESS & CO-CREATION STUDIO 2023 FOCUS

WITNESS and the Co-Creation Studio intend to focus their own work in the Just Joking! strand on the following areas. Please reach out to us if you are interested in collaborating, at shirin@witness.org or kcizek@mit.edu.

  • Lean into Labeling and Showing How Media is Made: We will demonstrate, through workshops and innovation, how labeling and disclosure of how media is made, including synthetic elements, can be an opportunity in media creation rather than a stigmatizing indicator.
  • Push for Disclosure: We will promote the need for durable open-source disclosure technologies for where media comes from and how it was made, available to synthetic media developers and commercializers and across the pipeline of creation and distribution. We will fight to ensure these disclosure technologies protect anonymity or pseudonymity, recognizing their role in social criticism and satire, and in protecting vulnerable individuals.
  • Push for Shared Ethical Practices: We will collaborate with the Partnership on AI and other actors on a Code of Conduct for synthetic media creation and distribution, cutting across multiple concerns.
  • Map Consent: We will lead a research process to map the pipeline of consent in synthetic media, centering the most vulnerable.
  • Research and Alliances: In addition, we will research the most appropriate ways to support global allies working on solutions to consolidate processes for taking down and reporting malicious sexual deepfakes, and we will look at how to drive meaningful global participation in EU and US deepfake laws.

C. ABOUT THE DESIGN OF THE CONVENING

The event organizers used co-creative methods to support global, inter-sectoral discussions across the diverse expertise present, and to identify and develop concrete actions, best practices, and agreements around the areas identified as needing attention in the Just Joking! report: Consent; Intent; Labeling and Disclosure; Weaponization and Impact; and Content Moderation, Policy and the Law.

These areas formed the basis for four breakout rooms, each facilitated by two leads. Each room had 10-15 participants at a time, rotating to a different room every 20-24 minutes as they moved through three stages of conversation:

  • Stage 1: Map the issues
  • Stage 2: Prioritize from the issues mapped in Stage 1
  • Stage 3: Identify responses to priority issues identified in Stage 2

Each stage, building on the previous one, was designed to lead to actionable next steps. The discussions were synthesized by the leads from WITNESS and Co-Creation Studio.

CONVENING LEADS

Kat Cizek
Co-founder, Artistic Director, Research Scientist Co-Creation Studio, MIT

Sam Gregory
Director, Programs, Strategy & Innovation, WITNESS

shirin anlen
Media Technologist, Technology Threats & Opportunities, WITNESS

FACILITATORS & NOTETAKERS

Adebayo Okeowo
Associate Director of Programs, Regional & Partner Engagement, WITNESS

Jacobo Castellanos
Program Associate, Technology Threats and Opportunities, WITNESS

Nkemakonam Agunwa
Africa Program Manager, WITNESS

Raquel Vazquez Llorente
Head of Law & Policy, Technology Threats & Opportunities, WITNESS

Yvonne Ng
Archives Program Manager, WITNESS

PARTICIPANTS AT THE SEPTEMBER 8 CONVENING

Amelia Winger-Bearskin
Associate Professor of AI and the Arts, University of Florida

Ania Catherine and Dejha Ti
Artist duo and Co-Founders, Operator

Anna Bulakh
Head of Ethics and Partnerships, Respeecher

Ben Lee
VP and General Counsel, Reddit

Brian Clarke
Senior Manager of Misinformation Policy, Twitter

Bruno Sartori
Artist and Deepfakes Creator

Carl Bogan
Executive Producer, Mystery Giraffe

Catalina Moreno Arocha
Social Inclusion Coordinator, Fundación Karisma

Claire Leibowicz
Head of AI and Media Integrity, Partnership on AI

Chris Bregler
Director / Principal Scientist, Google

Damar Juniarto
Executive Director, SAFEnet

David Luebke
VP Graphics Research, NVIDIA

Dima Shvets
CEO and Co-Founder, Reface

Halsey Burgund
Artist and Research Fellow, MIT Open Documentary Lab

Henry Ajder
Independent Advisor and Researcher

Jepchumba
African Digital Art

Jessica Smith
Compositing and AI Lead, Metaphysics AI

Jennifer McDonald
Public Policy Manager, Twitter

Joshua Glick
Visiting Associate Professor of Film and Electronic Arts, Bard College

Justin Hendrix
Editor, Tech Policy Press

Francesca Panetta
Artist and Alternate Realities Curator, Sheffield DocFest

Karen Hao
China Tech Reporter, The Wall Street Journal

Kelsey Farish
Media and Tech Lawyer, DAC Beachcroft LLP

Kip Wainscott
Head of Platform Policy, Snap Inc.

Karen Rebelo
Deputy Editor, BOOM Live

Lama Ahmad
Policy Research, OpenAI

Matthew Ferraro
Counsel, Wilmer Cutler Pickering Hale and Dorr LLP

Manuel Parra Yagnam
Deputy Head, Data & Implementation, Facebook Oversight Board

Marcus Bösch
TikTok Researcher

Michael Vincent Co
Misinformation Policy Lead, Twitter JAPAC

Moira Whelan
Director, Democracy and Technology, National Democratic Institute

Morgan Quinn
External Relations Coordinator, Modulate

Mutale Nkonde
Leader, AI for the People

Nina Schick
Author and Deepfakes Expert

Samaruddin “Sam” Stewart
Manager Information Quality, Google

Sara Giusto
Producer, Aww Inc

Sarah Wolozin
Director, MIT Open Documentary Lab

Shiran Mlamdovsky Somech
VP Impact, D-ID

Stephanie Lepp
Executive Producer, Center for Humane Technology

Ryan Khurana
Chief of Staff, WOMBO Studios Inc

Victor Riparbelli
Co-Founder and CEO, Synthesia

Wathshlah Naidu
Executive Director, Centre for Independent Journalism (CIJ)

Will Carlson
Health Misinformation Policy Lead, Meta

William Uricchio
Professor, MIT Open Documentary Lab

Zoe Williams
Misinformation and Platform Policy, TikTok