Google Trained Veo 3 Using YouTube Videos, Without Telling Creators

Google faces strong criticism from content creators after confirming that it used YouTube videos to train Veo 3, its new AI video generator. The company says it drew on only a small portion of YouTube's data. Still, the disclosure has raised concerns about data transparency and whether creators consented to their work being used this way. Many fear that such AI systems could eventually replace the very creators they learn from.

Veo 3 Trained Using YouTube Data: Google Confirms

In a statement to a news outlet, Google admitted that it used some of YouTube’s content to develop Veo 3, announced last month at the I/O 2025 developer conference.

Google clarified that it did not use every video on the platform, only a selected portion of its library.

Creators Can Opt Out of Third-Party Training, But Not Google's

However, YouTube creators say they were never told their videos might be used to train an AI that could compete with them. YouTube's policy lets creators opt out of third-party AI training, and companies such as OpenAI, Meta, and Apple respect those choices. But the policy places no limits on Google's own use of creator videos for internal projects.

This loophole has angered many creators, who feel they have little control over how their content is used by Google. They see it as an unfair advantage that favors Google’s AI development at their expense.

What Is Veo 3 and Why It Matters

Veo 3 is Google's latest AI video tool. It can create high-quality, 8-second videos from text or images, and it now generates synchronized audio, a feature missing from rival systems such as OpenAI's Sora. The model is part of Google's Gemini AI suite. Users on the cheaper plan, Gemini Pro, get limited access.

Those on the higher tier, Gemini Ultra, pay $249 a month for full access, including early access to updates and higher generation limits. Veo 3 was recently made available in more than 70 countries, expanding well beyond the U.S.

Ethical and Legal Questions Loom

The incident revives long-running debates about the ethics of using publicly available data for AI training. In the US, tech firms often cite fair use or their terms of service to justify the practice. Still, the moral question remains open: just because they can, should they?

Some experts warn that lawsuits could follow if creators can show they were harmed by the unauthorized use of their work. Others say tech firms will have to become more open about their policies as regulation tightens worldwide.

The Bigger Picture: A Changing Content Economy

At its core, this dispute pits creators against the rapid advance of generative AI. From music and voices to art and video, AI can now reproduce human work with striking realism. Creators fear these tools threaten their futures, especially when the platforms that once promised to amplify their work are the ones building them.

This isn't the first time a tech company has used content in questionable ways. Last April, OpenAI faced criticism for training its models on copyrighted books and news stories without clear permission, which led to several lawsuits. Now, Google finds itself in similar trouble.

As creators push back and regulators begin to act, the question is simple: Can platforms like YouTube keep creators’ trust while building AI that may ultimately surpass their efforts?

What Happens Next?

Google has yet to say whether it will change its policies or compensate creators whose videos were used. Still, the pressure is growing. Some YouTubers want clearer opt-out options, not only from third-party developers but also from Google itself. Others want to be notified when their videos are used as training data, or to earn money from that use.

