
Sora’s Infinite Scroll: Why AI-Generated Video Breaks the Model (and the Law)
By Alexander Cole
OpenAI’s Sora—an invite‑only app that stitches endless 10‑second AI videos into a TikTok‑style feed—surged to the top of Apple’s U.S. App Store in October 2025. It’s not just another chatbot experiment; it recalibrates the technical, legal, and environmental calculus of generative AI and forces urgent questions about who pays, who’s harmed, and who governs.
Sora makes a hard bet: users will trade real video for synthetic novelty, and economic and regulatory systems will catch up. That wager matters because Sora combines hyperreal cameos, copyrighted characters, and unlimited, high‑compute video generation at scale—creating a vector for copyright suits, privacy harms, and a sudden surge in energy demand. The next moves by OpenAI, rights holders, and regulators will decide whether Sora becomes a new medium or a regulatory, financial, and ethical minefield.
How Sora changes the AI math
Sora’s technical leap is deceptively simple: instead of text or static images, it serves a continuous feed of short, fully synthetic videos—each up to 10 seconds long—rendered on demand. OpenAI’s blog and reporting in MIT Technology Review explain that the app also offers “cameos,” photorealistic avatars of real people that can speak in their voice and be inserted into other users’ clips (see OpenAI’s October blog post and MIT Technology Review’s coverage).
Video is not a marginal increment on text; it multiplies compute and storage requirements. Rendering a 10‑second clip at 24–30 frames per second means generating hundreds of frames, synchronizing audio, and enforcing temporal coherence across the sequence—work that typically demands orders of magnitude more FLOPs and memory than a single ChatGPT query. OpenAI has acknowledged those costs: CEO Sam Altman wrote on October 3 that “we are going to have to somehow make money for video generation.”
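The gap is easy to see with back‑of‑envelope arithmetic. Every figure below is an illustrative assumption, not a number OpenAI has published—the point is only that per‑frame costs multiply across a clip:

```python
# Back-of-envelope: why one 10-second clip costs far more than a chat reply.
# All constants are illustrative assumptions, not OpenAI's numbers.

def clip_frames(seconds: float, fps: int) -> int:
    """Frames that must be generated for one clip."""
    return int(seconds * fps)

def relative_cost(frames: int, flops_per_frame: float,
                  flops_per_text_query: float) -> float:
    """Ratio of video-clip compute to a single text query."""
    return frames * flops_per_frame / flops_per_text_query

frames = clip_frames(10, 30)  # 300 frames for a 10 s clip at 30 fps
ratio = relative_cost(
    frames,
    flops_per_frame=5e12,        # assumed cost to generate one frame
    flops_per_text_query=1e12,   # assumed cost of one chat completion
)
print(frames, f"{ratio:.0f}x")  # 300 frames, 1500x a text query here
```

Swap in your own estimates and the multiplier moves, but the structure of the cost—frames times per‑frame compute—does not.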
That compute intensity has practical consequences. OpenAI is already investing heavily in data centers and energy supply, and video’s energy footprint makes Sora materially different from text‑first products. If Sora scales to millions of daily users, the aggregate emissions and electricity demand could be large enough to influence data‑center planning, corporate emissions reporting, and public scrutiny of AI’s carbon accounting.
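A fleet‑level sketch shows how quickly those per‑clip costs aggregate. Again, every constant here is an assumption chosen for illustration, not a reported figure:

```python
# Fleet-level sketch: electricity demand if Sora-style generation scales.
# All inputs are assumptions for illustration, not reported figures.

def daily_energy_mwh(users: int, clips_per_user: int,
                     wh_per_clip: float) -> float:
    """Total daily generation energy in megawatt-hours."""
    return users * clips_per_user * wh_per_clip / 1e6

# Assumed: 5 million daily users, 20 clips each, 100 Wh of data-center
# energy per rendered clip (GPU time plus cooling overhead).
energy = daily_energy_mwh(5_000_000, 20, 100.0)
print(f"{energy:,.0f} MWh/day")  # 10,000 MWh/day under these assumptions
```

Even if the per‑clip figure is off by an order of magnitude, demand at this scale is the kind of load that shows up in data‑center planning and corporate emissions reports.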
The legal chain reaction: copyright, deepfakes, and cameos
Sora’s permission model is novel and legally combustible. Instead of asking rights holders to opt in, OpenAI reportedly told many copyright owners they must opt out if they don’t want their characters included—an inversion that, according to reporting, has triggered pushback and increases the odds of litigation.
Copyright is only the opening act. Sora makes it easy to deepfake public figures and deceased celebrities, and to insert another user’s cameo into new contexts unless the cameo owner has locked down permissions. OpenAI moved quickly after early complaints: on October 5 the head of Sora, Bill Peebles, announced new granular controls allowing cameo owners to forbid political uses or certain words. But those controls are reactive, and technical limits on misuse remain porous.
Expect lawsuits and regulatory complaints. Rights holders will test whether an opt‑out regime satisfies existing copyright law. Individuals whose likenesses are used in harmful or defamatory ways will press false‑light and right‑of‑publicity claims. And privacy regulators in the U.S. and EU are likely to frame cameo misuse as a consent and biometric‑data problem, expanding the legal stakes beyond copyright into data‑protection territory.
Who pays for the compute—and who profits?
OpenAI is not yet profitable, and generating and serving an endless stream of AI video is expensive. Sam Altman has acknowledged the company must find monetization for video generation; options include subscription tiers, in‑app purchases for premium models or cameo rights, and personalized advertising. The last option risks making Sora a highly targeted ad channel powered by synthetic content calibrated to engagement.
The economics are also geopolitical. Sora increases demand for GPUs and accelerator capacity, tightening the market dominated by Nvidia and its partners. Industry observers have raised concerns about circular deals and market concentration—where model developers, chipmakers, and cloud providers mutually reinforce one another’s valuations—potentially inflating infrastructure costs and creating opaque vendor lock‑in.
For creators and rights holders the revenue split is uncertain. If OpenAI monetizes Sora with ads or commerce, the company will face pressure to share revenue with parties whose IP drives engagement. Absent clear licensing, though, the platform may default into a model where platform value accrues to the infrastructure owner while third‑party creators and performers receive little compensation—reproducing familiar platform dynamics in a new, synthetic medium.
Fairness, detection, and technical defenses
Sora surfaces classical fairness failures in a new guise. Cameos and synthesized personas can entrench stereotyping: a model trained on biased data will produce skewed accents, mannerisms, or gendered behaviors when generating “characters,” amplifying representational harms. These are not hypothetical; image and speech generation already show demographic gaps in output quality and plausibility.
Detection and provenance tools are immature. Digital watermarking, model provenance headers, and cryptographic attestations are the leading technical proposals, but they require industry coordination and legal backing to be effective. Watermarks can be stripped or lost through transcoding; provenance systems need incentives for adoption and interoperable standards—things regulators might force but industry has resisted so far.
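The fragility of provenance schemes is visible even in a toy version. The sketch below binds an attestation to the exact bytes of an output file; the key handling and tag format are assumptions, and real proposals (C2PA‑style manifests, for instance) are far richer—but the core weakness is the same: any re‑encoding of the bytes invalidates the proof.

```python
# Minimal provenance-attestation sketch: an HMAC over the output bytes.
# Illustrative only; the key scheme and tag format are assumptions.
import hashlib
import hmac

SIGNING_KEY = b"generator-held secret"  # assumed: held by the generating service

def attest(video_bytes: bytes) -> str:
    """Produce a provenance tag binding the exact output bytes."""
    return hmac.new(SIGNING_KEY, video_bytes, hashlib.sha256).hexdigest()

def verify(video_bytes: bytes, tag: str) -> bool:
    """Check a tag; any re-encoding of the bytes fails verification."""
    return hmac.compare_digest(attest(video_bytes), tag)

original = b"\x00fake video payload"
tag = attest(original)
print(verify(original, tag))            # True: bytes unchanged
print(verify(original + b"\x01", tag))  # False: transcoded/altered bytes
```

That byte‑exactness is why hash‑based attestation only survives lossless distribution—and why watermarking and provenance need interoperable standards, not just cryptography, to withstand ordinary transcoding pipelines.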
Sources
- MIT Technology Review — The three big unanswered questions about Sora (2025-10-07)
- MIT Technology Review — AI toys are all the rage in China—and now they’re appearing on shelves in the US too (2025-10-07)
- MIT Technology Review — The Download: carbon removal factories’ funding cuts, and AI toys (2025-10-08)