Meta-funded regulator of AI disinformation on Meta's platform is criticized: 'You're not any sort of check and balance, you're just a bit of PR spin'
A few years ago, it was easy to tell at a glance that an AI-generated image wasn't real. Edges of objects blended into each other, proportions were off, people had too many fingers, and it never got cats right. Now it's getting harder to tell. TechCrunch held a discussion with AI experts about AI disinformation, that is, misinformation that is malicious and deliberate. Meta's self-regulation policies were also in the firing line.
The conversation about disinformation shifted to Meta's practices because Pamela San Martin, co-chair of Meta's Oversight Board, was one of the main speakers.
According to its FAQ, the Oversight Board "is a group of experts from around the world who exercise independent judgment and make binding decisions on what should be allowed on Facebook or Instagram."
Just a few entries down, the same FAQ states that Meta has funded the board directly, to the tune of $280 million over the last five years. That declaration of independence, coupled with the knowledge of where the funding comes from, implies a tension that the other members of the panel picked up on.
While acknowledging the problems with AI and Meta's need to learn from them, San Martin praised AI's effectiveness in combating AI misinformation.
"Most social content is moderated using automation, and automation uses AI to either flag certain content for human review, or flag certain content that needs to be acted upon."
She also said that the best way to combat disinformation is not always to remove it but sometimes to properly label it or add context; think of X's Community Notes feature. She added that public reports on disinformation are a good tool for informing public figures but do little to prevent harm to private individuals.
She met pushback, however, when she spoke about self-regulation, and specifically regulation by boards like hers.
"Regulation is needed." San Martin said, "I'm concerned about speech but I'm all for regulation when it comes to transparency and accountability."
Brandie Nonnecke, founder of the CITRIS Policy Lab, responded to that claim by saying, "I don't believe these transparency reports actually do anything."
Her argument is that, with the amount of AI disinformation out there, a report could show thousands of examples of disinformation being caught without any real understanding of what was left out. Such reports can give a "false sense that they're actually doing due diligence," and it is difficult to judge their intent when they are created internally.
Imran Ahmed, CEO of the Center for Countering Digital Hate (CCDH), also criticized Meta and its Oversight Board, as well as the board's incentives.
"Self-regulation does not constitute regulation because the oversight board cannot answer the five basic questions that you should ask anyone with power. What power do have, who gave it to you, in whose interest do you use that power? To whom are you accountable? And how can we get rid you if you don't do a good job?" If you answer (Meta), you are not a check and balance. You are just a bit PR spin."
San Martin was challenged again when she said that Meta could not fire her over her reports.
Kyle Wiggers noted that the Oversight Board laid off staff in April of this year. She assured the panel, however, that the funding for her job is overseen by an irrevocable trust into which Meta has paid.
Meta can, however, choose not to extend the terms of those paid through that trust, so while board members cannot be fired, they could stop receiving funding. This touches on some of the same wariness surrounding transparency reports and self-regulation.
As the "Goodbye Meta AI" chain mail showed, Meta's approach to AI has been widely criticized, and self-regulation is not the best way to combat misuse of AI.
Nonnecke suggests that transparency reports can, ironically, obfuscate the very problems they intend to tackle. Questioning Meta's incentives for self-regulation feels necessary for an intelligent and safe approach to AI on its platforms.