
Sora’s Infinite Scroll: Why OpenAI’s AI-Video Feed Is a Tech, Legal, and Carbon Test
By Alexander Cole
OpenAI’s Sora — an invite-only app that serves an endless stream of exclusively AI-generated videos — shot to the top of Apple’s US App Store days after its October 2025 debut. The app stitches short, hyperreal clips (each up to 10 seconds), including deepfake-style “cameos” of real people and copyrighted characters, into a scrollable feed, and it is already straining three fragile seams: compute costs, copyright law, and human trust.
Sora arrives at a volatile moment for large-model AI: companies are chasing new user attention while shouldering vast infrastructure bills and mounting legal risk. The app’s popularity forces a simple question into the open — can a platform that generates video on demand scale economically, ethically, and environmentally? The answer will shape how companies price creative labor, where data centers are built, and whether courts or regulators rein in automated visual fabrication.
A product designed for endless engagement
OpenAI released Sora in early October 2025. The app lets users create short, loopable clips that can include a “cameo” of a real person — an avatar that mimics their voice and appearance. Bill Peebles, head of Sora, announced on October 5 that users could set restrictions on their cameos — blocking political content or certain words, for example — but cautioned that this was an early control layer as the app rolled out by invite only.
The format is pure attention engineering: vertical, snackable, and infinite. OpenAI has also been explicit that monetization is coming: “we are going to have to somehow make money for video generation,” Sam Altman wrote in an October 3 blog post. The company has already tied Sora to broader platform plays such as in-app purchases and ads.
That product strategy helps explain why Sora climbed to the top of Apple’s charts within days: the novelty of fully synthetic short-form video is a strong initial driver. But novelty does not equal sustainability. Sora’s feed model converts compute into session length: more bespoke, higher-fidelity generations lead to more time spent and more server cycles consumed.
Why video is a different class of compute
Text and single-image generation are comparatively cheap at inference; video is orders of magnitude heavier because it requires frame-by-frame synthesis, temporal coherence, and often audio alignment. MIT Technology Review’s analysis highlighted that video generation “dwarfs” the energy required for image or text models, and OpenAI’s internal roadmap includes investments in data centers and new power infrastructure to support heavier workloads.
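For a rough sense of scale, consider the frame count alone. This is a back-of-envelope sketch that assumes a conventional 24 frames per second; Sora’s actual output rate is not specified in the reporting.

  10 s × 24 frames/s = 240 frames per clip

Even before accounting for temporal coherence or audio alignment, a single 10-second generation implies hundreds of image-scale syntheses, each consuming server cycles.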
Past studies have shown how large-model compute scales into substantial carbon footprints: researcher Emma Strubell and colleagues estimated in 2019 that training certain large NLP models could emit hundreds of thousands of pounds of CO2. Training and inference have different footprints, but the paper’s headline figure — 626,000 pounds of CO2 for one training run — illustrates the scale involved as models and datasets grow.
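For readers who think in metric units, that headline number converts as follows (a plain unit conversion, not a figure from the cited reporting):

  626,000 lb × 0.4536 kg/lb ≈ 284,000 kg ≈ 284 metric tons of CO2

a quantity the study’s authors compared to roughly five times the lifetime emissions of an average American car, including the car’s manufacture.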
OpenAI has not published an explicit per-video energy or emissions metric for Sora. That opacity matters: as the company experiments with free, unlimited generation, each additional user session multiplies electricity demand, and that demand is concentrated where data centers and power plants are being commissioned — moves that have direct regulatory and political consequences for host communities.
Copyright, cameos, and the legal pressure cooker
Sora’s feed is rich territory for rights disputes. The app permits AI-generated uses of trademarked characters, copyrighted music, and deceased celebrities; OpenAI reportedly told rights holders they would need to opt out if they did not want their material included, a step that inverts common practice and has already provoked pushback from content owners.
Sam Altman acknowledged in his October blog post that rights holders were demanding “more granular control” and warned of “edge cases” that might slip through. The company’s approach — default inclusion with opt-out — raises the odds of litigation because it places the burden on rights holders after the fact rather than preventing unauthorized reproductions in advance.
Cameos add a personal-rights vector. Although users can restrict who may insert their cameo and in what contexts, the enforcement model depends on policy filters and reactive takedowns. That creates predictable harm scenarios: political manipulation, defamation, sexualized misuse, or commercial exploitation of a person’s likeness that could prompt class actions or new state laws.
Business incentives, concentration, and the climate of investment
Sources
- MIT Technology Review — The three big unanswered questions about Sora (2025-10-07)
- MIT Technology Review — The Download: carbon removal factories’ funding cuts, and AI toys (2025-10-08)
- Emma Strubell, Ananya Ganesh, and Andrew McCallum — Energy and Policy Considerations for Deep Learning in NLP, arXiv (2019-06-14)
- OpenAI — Sora launch and product notes (2025-10-03)