In the wake of YouTube’s announcement this week that it would ban all vaccine-related misinformation, pressure mounted on other technology platforms that have allowed erroneous and potentially damaging information to flourish.
In a blog post, YouTube announced it was expanding its misinformation policies to cover all vaccines, not just those for COVID-19. Though anti-vax sentiment existed in the U.S. long before the pandemic, the past 20 months have shifted how policymakers and large tech companies view misinformation and its effects.
“We’ve steadily seen false claims about the coronavirus vaccines spill over into misinformation about vaccines in general, and we’re now at a point where it’s more important than ever to expand the work we started with COVID-19 to other vaccines,” YouTube wrote in the blog post.
The platform pledged to remove videos claiming that approved vaccines, such as those that protect against measles or hepatitis B, are ineffective or actively harm recipients. It will also take down videos that falsely claim vaccines cause autism, cancer or infertility. As a result, the channels of several well-known vaccine misinformation spreaders, including Joseph Mercola, Erin Elizabeth, Sherri Tenpenny and Robert F. Kennedy Jr., have been banned.
“People are searching and using online platforms for health information more than ever before,” said Real Chemistry president Elyse Margolis. “Factual and trusted health information has never been more important, and decisions like YouTube’s are another tool in the fight against misinformation.”
YouTube’s policy shift builds on a July announcement that the platform would add features to bolster credible health information. It also follows calls from lawmakers for tech behemoths like Facebook and Twitter to crack down more strictly on misinformation. While the government does not currently require social media platforms to regulate the flow of misinformation, Sen. Amy Klobuchar (D-Minnesota) recently proposed a bill that would strip liability protections from platforms that allow inaccurate health information to spread.
Facebook announced in February that it would remove posts containing false vaccine information, but misinformation remains a significant problem on that platform and several others. Recent studies show it continues to disproportionately affect vulnerable communities.
“Active targeting of audiences, especially young people, with misinformation is a real and urgent threat to public health,” Margolis said. “Online platforms, health companies, healthcare providers and governments all have a responsibility to join together to solve this growing challenge.”