YouTube relaxes rules about what creators can and can’t say in videos

YouTube is reportedly giving creators more leeway in what they say in videos, easing up on some of the rules it has set in the past.
The user-generated video platform, owned by Alphabet, has adjusted its exceptions policy, allowing videos that might have been removed nine months ago for promoting misinformation to remain on the platform. The New York Times reports that if a video is considered to be in the public interest or has EDSA (educational, documentary, scientific, artistic) context, up to 50% of it can violate YouTube’s guidelines on misinformation or violence, versus 25% before the policy change.
That change, reportedly made about a month after Donald Trump was elected but never publicly announced, follows pandemic-era rules under which a video of Florida Governor Ron DeSantis was removed from YouTube for sharing some Covid misinformation.
The rule change could benefit creators whose videos blend news and opinion. In a statement to Fast Company, YouTube spokesperson Nicole Bell said, “These exceptions apply to a small fraction of the videos on YouTube, but are vital for ensuring important content remains available. This practice allows us to prevent, for example, an hours-long news podcast from being removed for showing one short clip of violence. We regularly update our guidance for these exceptions to reflect the new types of discussion and content (for example emergence of long, podcast content) that we see on the platform, and the feedback of our global creator community. Our goal remains the same: to protect free expression on YouTube.”
Free expression is the reason other social media companies have given for relaxing or eliminating their content moderation programs in recent months. X long ago handed over the responsibility of flagging inaccurate content to its users. Meta eliminated its fact-checking program in January, shortly after Trump took office.
Trump and other conservatives have long accused social media sites of “censoring” conservative content, saying content moderation, as practiced by social media companies, was a violation of their First Amendment rights to free speech.
YouTube said it regularly updates its Community Guidelines to adapt to content on the site. Earlier this year, it sunsetted all remaining COVID-19 policies and added new ones surrounding gambling content. Changes, it said, are reflected in its Community Guidelines Transparency Report.
The new rules largely revolve around content considered to be in the public interest, defined as videos in which creators discuss issues including elections, movements, race, gender, sexuality, abortion, immigration, and censorship.
The Times reported that it had reviewed training material containing examples of videos that might previously have been flagged and taken offline but are now allowed. Included among those was one that incorrectly claimed COVID vaccines alter people’s genes but mentioned several political figures, increasing its “newsworthiness.” (That video has since been removed for unclear reasons.) Another video, from South Korea, involved a commentator saying they imagined former president Yoon Suk Yeol turned upside down in a guillotine so that the politician “can see the knife is going down.”
The training material said the risk for harm was low because “the wish for execution by guillotine is not feasible.”
The policy change is, in some ways, a big shift for YouTube, which less than two years ago announced a crackdown on health misinformation. That same year, though, it also said it would stop removing misinformation about past elections, saying the policy “could have the unintended effect of curtailing political speech.”
YouTube has been criticized in the past for not doing enough to curb the spread of misinformation, on everything from 9/11 “truthers” to false-flag conspiracy theories tied to mass shootings. Some reports have even suggested its algorithm can lead some users “down a rabbit hole of extremist political content.”
YouTube says it still actively monitors posts. In the first quarter, removals were up 22% compared to the year prior, with 192,856 videos removed for violating its hateful and abusive content policies. The number of videos removed for misinformation, however, was down 61% in the first quarter, in part because of the retirement of its COVID-19 policies.