COMMENTARY: AI’s free ride on creative labour is undermining the marketplace
Summary
Gerard Scimeca argues that major AI developers are training powerful models on copyrighted films, TV and other creative works without permission, using OpenAI’s Sora 2 — which can generate movie‑quality video from text prompts — as a recent example. He contends this practice amounts to creative arbitrage: companies reap commercial value from creators’ labour without paying for the inputs, eroding the market incentives that sustain artists, writers and independent studios.
The piece warns of broader harms beyond Hollywood: identity and reputation risks from easy face and voice cloning, a collapse of licensing markets, and a drift towards monopolies if creators can no longer monetise their work. Scimeca calls for accountability rather than heavy‑handed tech bans: enforce copyright law against AI systems as it is enforced against people, require licences or payment for training on protected works, and demand transparency about model inputs.
Key Points
- AI models such as OpenAI’s Sora 2 were reportedly trained on vast libraries of copyrighted film and TV without permission.
- Training on protected creative work without licence undermines copyright, devalues creative labour and weakens market incentives.
- Industry groups such as the Motion Picture Association and Creative Artists Agency warn that Sora‑type models threaten jobs and revenues for creators.
- Face and voice cloning tools raise personal harm risks — impersonation, embarrassment or reputational damage for anyone, not just celebrities.
- Scimeca argues that existing copyright law should apply to AI training and outputs; developers should pay for access to protected works.
- He calls for transparency about training data and for treating unlicensed model outputs as derivative of copied material, not ‘just learning’.
- Unchecked appropriation of creative content by AI risks entrenching monopolies and a collapse in quality as creators lose any expectation of reward.
Content summary
The column opens with the core claim that “you can’t take what isn’t yours and sell it as your own,” and frames Sora 2 as symptomatic of a wider practice in which AI firms mine copyrighted media to teach models ‘style’ and then monetise the outputs. Scimeca rejects the tech industry’s distinction between ‘learning’ and ‘copying’, arguing that ingesting copyrighted works is replication without permission.
He outlines tangible market effects: small studios and independent creators cannot compete with free, unlicensed copies; licensing revenue evaporates; and quality for consumers declines once the incentives disappear. The author likens the behaviour to theft in other industries (pharma, car design) to stress that the same logic would not be accepted elsewhere.
The recommended remedy is policy clarity and accountability: ensure copyright law covers AI training and outputs, require licences or payment for use of protected works, and enforce transparency so consumers and creators know when outputs rely on unlicensed inputs.
Context and relevance
This commentary sits squarely in ongoing debates about AI governance, copyright reform and creator rights. Courts and legislatures around the world are already grappling with questions about dataset provenance, fair use, and whether model training constitutes infringement. The piece is relevant to creators, legal teams, policymakers and platform operators — anyone with a stake in how AI is trained, monetised and regulated.
Recent high‑profile announcements (multimodal models that generate images and video, voice and face synthesis) make the piece timely: technical capability is racing ahead of clear legal and commercial frameworks, and that mismatch is the author’s central concern.
Why should I read this?
Short version: if you care about who gets paid for creative work — or about whether your image or voice can be cloned by an app overnight — this is worth two minutes. The column cuts through the tech hype and explains why unlicensed training matters, who loses, and what simple accountability measures could look like. It’s a brisk, pointed take that saves you reading a dozen legal briefs.
Author style
Punchy: Scimeca uses blunt analogies and industry quotes to make a forceful case that this isn’t nostalgia for old rules but a defence of market fundamentals. If you’re tracking AI policy or creator economics, the piece’s urgency is deliberate: it argues that the next steps will determine whether creators can still earn a living.