YouTube is giving with one hand and taking with the other, with recent strategy statements from the company sending contradictory messages about the use of AI on the platform.
In a message to the YouTube community on Wednesday, YouTube’s chief executive, Neal Mohan, said that a key part of the company’s strategy for 2026 is an initiative to reduce the amount of low-quality, AI-generated content that appears in users’ feeds. “As an open platform, we allow for a broad range of free expression while ensuring YouTube stays a place where people feel good spending their time,” he said.
The company’s hands-off approach to content moderation is to change when it comes to AI-generated content, in an effort to preserve the quality of videos posted, he said, thereby safeguarding viewers’ experiences. Despite what YouTube describes as its ‘openness’, Mohan said the company will be “actively building on our established systems that have been very successful in combating spam and clickbait, and reducing the spread of low-quality, repetitive content.”
Yet at the same time, YouTube is investing in AI-assisted content creation, with creator-focused features planned for 2026 including tools allowing creators to generate Shorts using AI models of themselves.
“AI will act as a bridge between curiosity and understanding. Ultimately, we’re focused on ensuring AI serves the people who make YouTube great: the creators, artists, partners, and billions of viewers seeking to capture, experience, and share a deeper connection to the world around them.”
Creators will still be required to declare when they publish artificially generated or altered media, and YouTube will label content produced using its own in-house AI tools.
More than a million channel owners used on-platform AI creation tools in December 2025, and there were over 20 million uses of YouTube Ask by viewers in the same month. In 2026, YouTube will roll out two new features alongside the creator likenesses generated for Shorts: creating games from text prompts, and experimental music production tools.
The company is also doubling down on protecting itself from criticism over copyright-infringing content on its platform. “We remain committed to protecting creative integrity by supporting critical legislation like the NO FAKES Act,” Mohan said.
The company’s strategy therefore seems to be to offer AI content creation tools to creators, and to flag the resulting ‘creations’ as AI-generated so viewers are aware of their provenance. Creators using external software are expected to declare whether they used AI in their work.
The company’s strategy, as embodied in the software that influences viewers’ choices, will become apparent over time. It’s not clear whether YouTube’s algorithms will promote creators who use or don’t use AI, or favour on-platform AI tools over alternatives. YouTube appears to be hoping that viewers reject what Mohan termed “AI slop” but accept media created by YouTube-endorsed AI tools. The latter videos, presumably, won’t be regarded as “AI slop,” despite the fact that they will be routinely flagged as AI-generated onscreen.










