Meta’s Oversight Board has issued a critical report highlighting shortcomings in the company’s handling of sexually explicit AI-generated images. The board, which operates independently despite being funded by Meta, found that the company’s current rules are inadequate to prevent and address such harmful content.
The report scrutinised two cases involving pornographic fake images of well-known women, which were posted on Meta's Facebook and Instagram platforms. These images, generated using artificial intelligence, were deemed to violate Meta's rules against “derogatory sexualised photoshop,” categorised as a form of bullying and harassment.
In one instance, involving a public figure from India, Meta failed to review a user report within the required 48-hour window, and the ticket was closed without action. Although the user appealed, Meta reversed its stance only after the board intervened. By contrast, the image of an American celebrity was promptly removed by Meta’s automated systems.
The board criticised Meta for its narrow definition of prohibited content, suggesting that the term “photoshop” was too restrictive and that the policy should encompass a wider range of editing techniques, including generative AI.
The board also condemned Meta’s reliance on media coverage to populate its removal databases, arguing that this approach leaves many victims, particularly those outside the public eye, with little recourse against the spread of non-consensual explicit images.
The board’s recommendations include broadening the scope of Meta’s content removal policies and improving practices for adding problematic images to automatic removal databases. Meta has stated it will review the board’s recommendations and provide an update on any policy changes.
(Inputs from Reuters)