OpenAI’s New Sora App Lets You Deepfake Yourself for Entertainment
Summary
OpenAI has launched Sora, an invite-only iOS app powered by its Sora 2 video model that generates short, TikTok-style AI videos featuring realistic faces and voices. Users can create a digital likeness of themselves by recording a short head-turn and reading a short sequence of numbers aloud; that likeness can then be used in nine-second AI clips that include generated visuals and audio. Sora emphasises a social feed of bite-size, user-driven deepfakes and includes guardrails to block certain sexual, violent, self-harm, extremist and impersonation content, plus settings to control who can use your likeness.
Key Points
- Sora is powered by OpenAI’s Sora 2 video model and currently available on iOS by invite only.
- Users can create a personal digital likeness (head-turn + audio sample) that the app uses to generate short AI videos.
- The app produces nine-second clips with AI-generated script, visuals and sound, surfaced in a TikTok-like “For You” feed.
- Creators can add other people as “cameos” so multiple likenesses appear in a generated video.
- OpenAI built safety guardrails restricting sexual content, graphic violence, extremist material, self-harm content and some impersonations.
- Users can set who may use their likeness (everyone, only you, approved people, or mutual connections) and can view clips featuring their likeness from their account page.
- Sora blocks some public-figure and third-party-similarity prompts, but in testing allowed some copyrighted or fictional characters (e.g. Pokémon), raising rights-management questions.
- Outputs can be strikingly convincing but still show rough edges; the app’s ease of use and realism raise clear risks around bullying, deception and misuse.
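The four likeness-permission tiers listed above amount to a simple consent check. A minimal sketch of that logic in Python follows; the names (`LikenessPermission`, `may_use_likeness`) are hypothetical illustrations, not OpenAI's actual API or implementation.

```python
from enum import Enum

class LikenessPermission(Enum):
    """Hypothetical names for the four likeness settings described above."""
    EVERYONE = "everyone"
    ONLY_ME = "only_me"
    APPROVED = "approved_people"
    MUTUALS = "mutual_connections"

def may_use_likeness(owner: str, requester: str,
                     approved: set[str], mutuals: set[str],
                     setting: LikenessPermission) -> bool:
    """Return True if `requester` may put `owner`'s likeness in a clip."""
    if requester == owner:
        return True  # you can always use your own likeness
    if setting is LikenessPermission.EVERYONE:
        return True
    if setting is LikenessPermission.ONLY_ME:
        return False
    if setting is LikenessPermission.APPROVED:
        return requester in approved
    return requester in mutuals  # MUTUALS: both users follow each other
```

However Sora implements this internally, the user-facing effect is the same: each tier strictly narrows who can generate clips with your face and voice.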
Context and Relevance
Sora arrives amid a rush of AI-generated short-video features from big platforms. It marks a step beyond static-image deepfakes by adding convincing audio and an endless scroll of personalised clips — turning synthetic likenesses into a mainstream entertainment format. That increases pressure on content moderation, consent controls and copyright handling, and it heightens social risks (harassment, misinformation and identity misuse) even as it creates new creative possibilities. Regulators, rights holders and platform operators will be watching closely.
Why should I read this?
Because if you scroll social apps, this is the next thing that could be clogging your feed — and it’s shockingly easy to make. Sora shows where AI entertainment is going: quick, personal deepfakes that look and sound real. Read this to know what the tech does, what it blocks, how you can control your own likeness, and why you might want to worry (or play) before everyone else does.