Inside horror Facebook bug that led to MORE dangerous posts being shown to users for 6 MONTHS

A FACEBOOK bug led to the platform mistakenly showing users more harmful content for six months.

According to The Verge, content identified as misleading or problematic was prioritized in users’ feeds when it should have been hidden.

[Image: A Facebook bug led to the platform mistakenly showing users more harmful content. Credit: AFP]

Internal documents show that the software bug was identified by engineers and took half a year to fix.

Facebook disputed the report, which was published Thursday, saying that it “vastly overstated what this bug was.”

The glitch ultimately had “no meaningful, long-term impact on problematic content,” according to Joe Osborne, a spokesman for parent company Meta.

But it was serious enough for a group of Facebook employees to draft an internal report referring to a “massive ranking failure” of content.


In October, the employees noticed that some content that had been marked as questionable was nevertheless being favoured by the algorithm to be widely distributed in users’ News Feeds.

The content had been flagged by external media organisations that are members of Facebook’s third-party fact-checking program.

“Unable to find the root cause, the engineers watched the surge subside a few weeks later and then flare up repeatedly until the ranking issue was fixed on March 11,” The Verge reported.

But according to Osborne, the bug affected “only a very small number of views” of content.


That’s because “the overwhelming majority of posts in Feed are not eligible to be down-ranked in the first place,” Osborne explained.

He added that other mechanisms designed to limit views of “harmful” content remained in place, “including other demotions, fact-checking labels and violating content removals.”
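For readers curious how a ranking demotion works in practice, here is a minimal, purely illustrative Python sketch. The post fields, flags and multiplier values are all invented for the example and are not Facebook’s actual code; the point is that a demotion scales a flagged post’s feed score down, so a bug that effectively skips the demotion step leaves flagged posts ranking as high as anything else.

```python
# Purely illustrative sketch of a feed-ranking demotion. Every name and
# number here is invented; this is not Facebook's actual code or algorithm.

from dataclasses import dataclass, field

@dataclass
class Post:
    post_id: str
    base_score: float                                    # hypothetical engagement score
    demotions: list[str] = field(default_factory=list)  # e.g. ["fact_checked_false"]

# Hypothetical demotion multipliers: each flag scales the score down.
DEMOTION_MULTIPLIERS = {
    "fact_checked_false": 0.2,   # invented value
    "borderline_content": 0.5,   # invented value
}

def rank_score(post: Post, apply_demotions: bool = True) -> float:
    """Return the post's feed score; demotions shrink it so fewer users see it."""
    score = post.base_score
    if apply_demotions:          # a bug that skips this step boosts flagged posts
        for flag in post.demotions:
            score *= DEMOTION_MULTIPLIERS.get(flag, 1.0)
    return score

flagged = Post("p1", base_score=10.0, demotions=["fact_checked_false"])
print(rank_score(flagged))                         # 2.0  -> suppressed, as intended
print(rank_score(flagged, apply_demotions=False))  # 10.0 -> flagged post ranks high
```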

Facebook’s fact-checking program launched in 2018 and aims to identify content that is harmful and misleading.

Under the program, Facebook pays to use fact checks from around 80 organisations, including media outlets and specialized fact-checkers, on its main platform as well as on WhatsApp and Instagram.

Content rated “false” is downgraded in news feeds so fewer people will see it.

If someone tries to share that post, they are presented with an article explaining why it is misleading.


Those who still choose to share the post receive a notification with a link to the article. No posts are taken down.
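Put together, the sharing flow described above amounts to a simple gate. The sketch below is a rough, self-contained illustration (every name in it is an invented stand-in, not Facebook’s actual implementation): show the fact-checkers’ article first, notify the user if they share anyway, and never remove the post.

```python
# Minimal, self-contained sketch of the sharing flow described above.
# All function and field names are invented; this is a simplification,
# not Facebook's actual implementation.

def show_interstitial(user: str, article: str) -> None:
    print(f"[{user}] Before you share: why fact-checkers rated this false: {article}")

def user_confirms_share(user: str) -> bool:
    return True  # stand-in for the user's choice in a real UI

def notify(user: str, link: str) -> None:
    print(f"[{user}] You shared a post rated false; more context: {link}")

def attempt_share(post: dict, user: str) -> None:
    if post.get("fact_check_rating") == "false":
        show_interstitial(user, post["fact_check_article"])  # explain before sharing
        if not user_confirms_share(user):
            return                                           # user backed out
        notify(user, post["fact_check_article"])             # shared anyway: notify
    print(f"[{user}] Post shared.")                          # the post is never removed

attempt_share(
    {"fact_check_rating": "false",
     "fact_check_article": "https://example.com/why-misleading"},
    user="alice",
)
```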

Fact-checkers are free to choose how and what they wish to investigate.
