
Meta’s Automated Moderation Is Raising Serious Concerns With Its Oversight Board After Controversial Instagram Post Was Allowed To Remain

Meta’s reliance on automated moderation to handle a wide array of controversial decisions across its apps is being questioned by its own Oversight Board.


The news comes after the tech giant was slammed for leaving up an Instagram post denying the Holocaust, which many found shocking, including the board itself. Under the firm’s own policies, such denial is considered hate speech.
The post in question showed Squidward, a character from SpongeBob SquarePants, presenting supposed ‘true facts’ about the Holocaust, and many were appalled that it remained up despite plainly misrepresenting historical fact.

Users reported the post repeatedly over the years, and the fact that it had been up since 2020 with Meta doing nothing about it raised serious questions.

Meta’s systems repeatedly determined that the post did not violate company rules, and the reports were closed through automated means.

In May of last year, one user appealed Meta’s decision to leave the post up, questioning why Instagram was allowing it. That appeal was also closed automatically under the company’s COVID-19 automation policies, which is when the case was escalated to Meta’s Oversight Board.


The board commissioned an assessment of Holocaust denial content across Meta’s apps, which revealed that the Squidward meme format was repeatedly being used to spread antisemitic narratives. It also found that some users were deliberately evading detection in order to spread denial content far and wide.

They did so through tactics such as using alternate spellings of certain terms and disguising denial within cartoons and memes.

The board added that it is concerned Meta was still applying its COVID-19 automation policies as of May of last year, long after the circumstances that justified them had passed.

It feels those policies are no longer effective or useful and should therefore be wound down. The board is also troubled that human reviewers cannot label violating content specifically as Holocaust denial; instead, such posts are simply filtered into the broader hate speech category.

The board said this lack of granularity limits its ability to assess how accurately hate speech policies are being enforced, a concern that grows as Meta leans ever more heavily on AI to moderate content.

So now, the Oversight Board wants Meta to take the technical steps needed to measure the accuracy of its enforcement against Holocaust denial content, which entails collecting more granular data.
The board also asked Meta to confirm publicly whether it has ended all of the automation policies it put into place at the start of the pandemic.

Gathering that kind of granular information, the board added, would be in Meta’s own best interests.

When reached for comment, Meta pointed Engadget to its formal response on its transparency site. The company says it has removed the offending content, acknowledging that leaving it up had been a mistake.

Meta also vowed to get to the bottom of what went wrong and said it would conduct a review of identical content with parallel context.
If that review shows further action is needed, the company says it will act promptly, and it plans to issue updates as the investigation proceeds.
