YouTube now asks creators to indicate whether their videos contain AI-generated or altered content that could be mistaken for genuine footage.
Google announced on its blog that disclosure labels will appear in the video description or on the player itself, with exceptions for clearly unrealistic content or minor alterations.
The change is intended to promote transparency and trust and to help viewers understand AI's expanding role in content creation.
“And while we want to give our community time to adjust to the new process and features, in the future we’ll look at enforcement measures for creators who consistently choose not to disclose this information,” Google stated.
“In some cases, YouTube may add a label even when a creator hasn’t disclosed it, especially if the altered or synthetic content has the potential to confuse or mislead people,” the company added.
In addition, as previously disclosed, Google is working on a new privacy policy that would allow anyone to request the removal of AI-generated or other synthetic or altered content that resembles an identifiable human, such as their voice or face.
Ariel Ben Solomon is the Growth and Strategy manager at Ecomhunt and the host of the Ecomhunt Podcast. He can be followed on Twitter at @ArielBenSolomon.