Despite what science fiction might tell you, artificial intelligence doesn’t just spring into existence fully formed. It needs to be trained, taught and shown the world—what a horse or camel looks like, what Tokyo looks like, what a princess looks like. And only by analyzing countless examples and vast amounts of data can it gain the ability to generate accurate representations. The source of that data represents both a challenge and an opportunity for the kids TV industry.
Until recently, most AI models were trained using—let’s call it what it is—stolen content scraped from the web. After all, nearly everything can be found online if you know where to look. But that’s not the way it has to be. AI training could be a huge boon for content owners. The tech’s voracious need for data represents a potential new revenue stream—in other words, a market is springing up where there wasn’t one before.
Rise of the data brokers
Dave Davis is GM of media at New York-based data provider Protege, which bought up his data company Calliope Networks in December. Protege works with hundreds of AI clients, providing the data they need to train models. The company’s video catalogue has grown to 300,000 hours of movies, TV series, news, sports, YouTube content and more in the last few months, and Davis predicts that it should top a million hours by year’s end.
Like many members of his team, Davis comes from Hollywood studios—the content world—and says one of the main motivators for starting Calliope was his concern over copyright infringement. AI presents a stark choice for content owners, he adds. “They can ignore it, which doesn’t seem like a very good idea. They can litigate, which is OK for some, but that’s not the path we’re taking. Or they can engage. We’re in the engagement business.”
Davis sees engagement as the best way to ensure that content owners get paid and have some influence over how AI develops. “It’s going to be disruptive,” he acknowledges, “but whether they share their catalogues or not isn’t going to have any material impact on how disruptive it’s going to be.”
He compares it to trying to stop the massive boulder in Indiana Jones. “The question is: ‘Should I license my content to the boulder coming towards me?’ Licensing the content isn’t going to change the fact that the boulder is still coming, but at least you can get paid for it—and maybe, through engagement, you can alter the boulder’s course ever so slightly.”
The exact nature of the need varies depending on the client, Davis adds. For example, some AI models are designed for Hollywood and focus on lighting and camera angles, while others prioritize social media, robotics training or even autonomous driving. “There are a lot of companies that you wouldn’t think of, or that you might not have heard of, that are building AI models.”
While early conversations were often about big-volume deals (picking up 50,000 to 100,000 hours of footage to train general models), Davis foresees more specific requests characterizing the future, such as footage of people talking to camera, people moving in particular ways, or content shot in certain locations. He sees the market starting to specialize, and expects that trend to continue.
Do the dollars make sense?
While Davis declined to share specific pricing, Bloomberg reported last year that Adobe was paying around US$3 a minute for video content, which falls in line with average industry rates.
Live action and animation are generally priced differently: live-action footage starts at US$1 to US$2 per usable minute, while animation typically ranges from US$2 to US$3 per minute.
The premium paid for animation reflects two realities—first, that fewer animation libraries have been willing to participate to date; and second, that the rights are simpler to clear.
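At those per-minute rates, the rough math for a catalogue is easy to sketch. The snippet below is a purely illustrative back-of-envelope calculation; the 500-hour library size is a made-up example, and only the US$2 to US$3 per-minute range comes from the rates quoted above.

```python
def catalogue_value(hours, rate_per_minute):
    """Gross licensing estimate at a flat per-usable-minute rate."""
    return hours * 60 * rate_per_minute

# A hypothetical 500-hour animation library, priced at the low and
# high ends of the US$2-3/minute range cited in the article:
low = catalogue_value(500, 2.00)   # 500 h x 60 min x $2
high = catalogue_value(500, 3.00)  # 500 h x 60 min x $3
print(f"Estimated range: US${low:,.0f} to US${high:,.0f}")
```

In other words, even a sizable animation library would gross somewhere in the tens of thousands to low six figures under these rates, which helps explain why some rights holders question whether a one-time payout justifies a perpetual grant.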
And rights are a concern. The biggest challenge when it comes to AI training isn’t about money—it’s about trust.
Of course, there are safeguards for those who choose to engage. Agreements usually include restrictions—no exact frame or scene duplication, no output of character likenesses, and no access to music rights for training, for example. But while AI companies aren’t repurposing entire shows, they are absorbing the content’s DNA—and that’s a concern for rights holders.
“Initially, there was a lot of excitement, thinking this is the new gold rush,” says Sinking Ship partner Matt Bishop. “For people sitting on catalogues in a time and place when there are fewer broadcast pickups, this could be very beneficial.”
But he sees red flags. “Our content [features] kids. We feel like we need to do right by the people who were in our shows,” he explains. “Something you will probably hear me talk about a lot is Name, Image and Likeness rights, [which the AI companies are] not interested in. But at the end of the day, these are our actors. These are our kids. These are our individuals who are on our shows. And so, as a custodian of that, it’s incredibly important for us to make sure that if our content is ever used, it’s done in an ethical way.”
With these concerns in mind, Sinking Ship is taking a cautious approach. “I want to know where those models are going,” says Bishop. “Are they open source? Are they closed source? What safety rails are in place? Without those assurances, I don’t know what our material is being trained for. Maybe we’re control freaks, but we want to know. We want to ensure that we don’t have any regrets about a decision, just to get a paycheque.”
After all, the decision is bigger than a single payday, because there’s no going back once that cheque gets cashed.
“The way I look at it, evolutions like 4K and HD were new costs imposed on all of us that we had to figure out,” says CAKE Entertainment CEO Ed Galton, adding that his company has been approached by a number of intermediaries offering small payouts for its content.

CAKE Entertainment has been approached by third parties looking to train AI from its library, but the company has refused because too many questions about rights and usage remain unanswered
“At least with this evolution, there’s a revenue stream that comes with it, so that’s a positive. But I worry that the market is going to be damaged by people who work with middlemen who are going to be selling their content really cheap. For me, that is the biggest issue, because if you’re selling this stuff, you can’t put a term limit on it. Once it’s learned, it’s learned. So it needs to be at a price point that makes sense for us to do it.”
As a distributor, Galton says CAKE has to balance the rewards against its responsibility to its production partners, but the landscape just isn’t resolved enough yet. “We have to find a comfortable way of explaining to the people we work with how this could be an interesting opportunity, and that it doesn’t really damage what they’re doing as a business. We’re still in that exploratory phase.”
You can’t stop progress
Hundreds of entertainment industry creatives recently voiced their concerns over AI and copyright to the highest levels of the US government, submitting an open letter to the White House Office of Science and Technology Policy (OSTP) as part of the US AI Action Plan, in response to claims by OpenAI and Google that US law allows AI training on copyrighted works without permission or compensation.
Prominent figures from film, television and music signed the letter, which argued for stronger protections for creators in the face of AI’s rapid expansion. Signatories included Guillermo del Toro, Phil Lord and Chris Miller (The LEGO Movie, Clone High), Paul McCartney, Aubrey Plaza, Ron Howard, Taika Waititi and many more. Their message was clear: AI should not be allowed to exploit creative works without proper acknowledgment and compensation.
But while the debate continues, time is passing, and the technology is progressing. Like it or not, the time to act is now.
Galton wonders, “If I don’t end up doing these deals, are we going to be left out of the race?” Surprisingly, he says his producers haven’t raised the issue—at least not yet—but that’s sure to change, and the market will have to adapt. Regardless of where individual producers stand, however, rights negotiations are always evolving, and licenses and agreements will change as AI continues to reshape the industry.
Bishop is also looking at the bigger picture, noting that the recent industry downturn helped push this latest evolution. While the downturn is “incredibly unfortunate,” the rise of AI has coincided with the industry’s latest lull, so Bishop decided to seize the opportunity.
“We had more and more systems that were idle,” he recalls. “So we thought, ‘How do we make the best of a bad situation?’” With 150 high-powered workstations sitting unused, Sinking Ship pivoted, leveraging its own infrastructure to train models using its existing library.
The results have been striking. The company developed two productions this year using AI for concept creation, which cut costs by 40% and reduced the carbon footprints of both by 60%, says Bishop. While AI’s environmental impact has raised concerns—especially the carbon footprint of the large-scale data centers it requires—he argues that properly trained, industry-specific models can be incredibly efficient.
“For example, we’re doing a dinosaur movie. We’ve done 11 seasons of dinosaur animation, so we had a plethora of information to pull from,” says Bishop. This allowed the studio’s custom AI to generate images directly in the established style of the franchise, accelerating the entire process.
“Each progressive element got faster and faster. Normally, the computer has to process everything. Now the AI was processing everything, and we were getting phenomenal results in a fraction of the time, and at a fraction of the cost.”
While AI may not be a perfect fit for brand-new concepts, Bishop sees its impact growing. And with budgets continuing to shrink—even as audience expectations remain high—efficiency isn’t just an advantage; it’s a necessity. “We have to continue to create innovative content and also develop new workflows that are better for the world and better for production,” says Bishop. “That’s what excites me about this phase more than just getting a cheque tomorrow.”
Regardless of how the technology progresses and how the rights issues are resolved, the boulder that Davis describes is already rolling. It’s up to the industry to decide how to react.
CAKE’s Galton concurs: “I think the worst thing for us to do would be to stick our heads in the sand and just say, ‘AI doesn’t exist and we’re going to resist it.’ We’ve got to figure out how to work with AI and use it to our advantage.”
Studio Ghibli, AI and the ethics of imitation
A controversy erupted around the use of artificial intelligence when OpenAI’s image generator started producing visuals that closely mimicked the Studio Ghibli aesthetic. The situation struck a nerve with fans and artists alike—especially because Ghibli’s co-founder, Hayao Miyazaki, has been outspoken in his criticism of AI-generated art. (In a widely circulated 2016 NHK documentary, Miyazaki dismissed the use of the technology, saying, “I am utterly disgusted…I strongly feel that this is an insult to life itself.”)
The backlash prompted OpenAI to tweak its free-tier access and restrict prompts referencing the Japanese studio’s style.

AI users quickly jumped on the Ghibli style image train—including Glitch Art Gallery founder Jon Cates, who used it to show how Hayao Miyazaki might feel about this trend, based on previous comments regarding the tech
According to Business Insider, OpenAI CEO Sam Altman defended the tool, arguing that AI-generated art represents a “net win” for society. Altman also suggested that AI increases creative access and lowers barriers for those who might not have the traditional skills to produce visual work.
This debate has led to a broader conversation about consent, ownership and the future of creative industries in an age where machines can convincingly imitate the world’s most beloved artists.
This story originally appeared in Kidscreen’s Q2 2025 magazine issue.