YouTube mandates disclosure for realistic AI-generated videos

The platform has introduced a tool in Creator Studio that requires creators to disclose whether content that could be mistaken for genuine has been altered or produced using synthetic media, including generative AI.


YouTube has unveiled a new policy requiring creators to disclose when their content, particularly if it appears realistic, has been generated using AI technology. This move aims to prevent viewers from being misled by synthetic videos that mimic real people, places, or events, which can be increasingly difficult to distinguish from authentic footage due to advancements in generative AI.

The new Creator Studio tool requires creators to disclose when content that could be mistaken for genuine has been altered or produced using synthetic media, including generative AI. The initiative comes amid concerns raised by experts about the risks posed by AI-generated deepfakes, especially in the run-up to the U.S. presidential election.


YouTube clarified that the policy doesn't apply to obviously unrealistic or animated content, such as fantasy scenarios, nor does it require disclosure for content utilising generative AI for script writing or automatic captioning. Instead, the focus is on content featuring realistic individuals, events, or locations that have been digitally manipulated, such as replacing faces or fabricating voices. 

Creators will need to disclose alterations that create the illusion of real events or places, such as simulating a building fire or depicting fictional major events as though they actually occurred. Labels indicating AI involvement will appear in video descriptions in most cases, with more prominent labels for sensitive topics like health or news. The labels will be rolled out across all YouTube platforms, beginning with the mobile app and expanding to desktop and TV.

YouTube plans to enforce the policy and will consider measures against creators who consistently fail to apply the required labels. Where creators don't add labels themselves, YouTube may add them, particularly to content that could confuse or mislead viewers.
