# Examining Meta's Oversight Board Response to AI-Generated Deepfake Images of Female Public Figures

Meta's Oversight Board has announced that it will examine how explicit AI-generated images of female public figures are handled on Facebook and Instagram. The move comes after two deepfake nude images surfaced on the platforms, raising concerns about the spread of manipulated content and its impact on online safety and privacy.

## The Incident

The deepfake images in question depicted female public figures from India and the United States, though Meta did not disclose their identities. A user reported the Indian woman's image as pornographic, but Meta automatically closed the report after it went unreviewed for 48 hours. Subsequent appeals were also closed automatically, leading the user to escalate the issue to the Oversight Board.

Following the Oversight Board's intervention, Meta acknowledged that it had erred in leaving the explicit image visible and removed it. By contrast, the deepfake image of the American celebrity, which depicted a sexual assault, was swiftly taken down and added to Meta's automated enforcement system for violating content.

## Addressing the Challenge of Deepfakes

The Oversight Board's upcoming assessment will focus on evaluating the effectiveness of current policies and enforcement practices in combating explicit AI images across Facebook and Instagram. Of particular concern is the need to ensure consistent enforcement of these policies globally to mitigate the proliferation of deepfake content on the platforms.

Notably, deepfakes are not limited to public figures. A Channel 4 investigation in the U.K. uncovered thousands of AI-generated images of celebrities, and a 2019 report by DeepTrace Labs documented how deepfake technology has been weaponized against women, with female actors and South Korean K-pop singers among the most frequent targets.

## Public Engagement and Accountability

In a bid to gather insights and perspectives from the public, Meta's Oversight Board is soliciting feedback on various issues related to deepfake content, including the harms posed by nude and pornographic images, strategies for addressing the issue, and the challenges associated with automated review systems closing appeals prematurely.

One notable example of the impact of deepfakes on social media occurred in January, when explicit manipulated images of Taylor Swift went viral on X (formerly Twitter), prompting the platform to temporarily block certain searches for the singer's name. The incident underscores the need for proactive measures to combat the spread of misleading or harmful content online.

## Conclusion

The emergence of AI-generated deepfake images poses a significant challenge for social media platforms in safeguarding user safety and integrity. As Meta's Oversight Board conducts its review and solicits public feedback on addressing deepfake content, the broader conversation around online misinformation and digital manipulation continues to evolve.

For more information on this issue, please refer to the original article featured on Fortune.com.

| Country | Public Figure | AI Image Description |
|---------------|---------------|--------------------------|
| India | Undisclosed | Deepfake nude image |
| United States | Undisclosed | Sexual assault depiction |