New York: Meta’s Oversight Board announced on Thursday that the company’s guidelines are “not sufficiently clear” regarding the prohibition of sexually explicit AI-generated depictions of real people. The board called for amendments to prevent such imagery from circulating on Meta’s platforms.
The independent board, funded by the social media giant, issued this ruling after reviewing two AI-generated pornographic images of well-known women that had been posted on Facebook and Instagram.
Meta has stated it will review the board’s recommendations and will provide an update on any changes implemented.
In its report, the board identified the two women only as female public figures from India and the United States, citing privacy concerns. The board found that both images violated Meta’s rule against “derogatory sexualized photoshop”, which the company classifies as a form of bullying and harassment. The board criticized Meta for not removing the images promptly.
In the case of the Indian woman, Meta did not review a user report of the image within 48 hours, leading to the ticket being closed automatically with no action taken. The user appealed, but the company again declined to act, only reversing its decision after the board took up the case.
For the American celebrity, Meta’s systems automatically removed the image. “Restrictions on this content are legitimate,” the board stated. “Given the severity of harms, removing the content is the only effective way to protect the people impacted.”
The board recommended that Meta update its rules to clarify their scope, noting that the term “photoshop” is “too narrow”. It suggested the prohibition should encompass a broad range of editing techniques, including generative AI.
Additionally, the board criticized Meta for declining to add the Indian woman’s image to a database that facilitates automatic removals, similar to the process used in the American woman’s case. According to the report, Meta relies on media coverage to determine when to add images to the database, a practice the board called “worrying”.
“Many victims of deepfake intimate images are not in the public eye and are forced to either accept the spread of their non-consensual depictions or search for and report every instance,” the board said.