
Captions Rebrands to Mirage to Lead AI Short-Form Video

Captions, a VC-backed AI video startup valued at $500M, is rebranding as Mirage to focus on multimodal foundation models for short-form video. The new identity brings together its creator app and Mirage Studio, a tool for brand ads that generates video from audio files and selfies. The company touts natural-looking avatars but faces deepfake and labor-impact concerns.

Published September 4, 2025 at 12:09 PM EDT in Artificial Intelligence (AI)

Captions becomes Mirage in push to redefine short-form AI video

Captions, the AI-powered video app known for helping creators edit and produce short clips, announced a rebrand to Mirage as it expands into research and model development for short-form video platforms like TikTok, Reels, and Shorts.

Backed by more than $100 million in venture capital at a $500 million valuation, Mirage says the new identity reflects a broader ambition: building multimodal foundation models tailored specifically to the constraints and formats of bite-sized social video. Key elements of the pivot include:

  • Multimodal models focused on short-form video formats and native platform behavior.
  • Mirage Studio: generate ads from an audio file, create AI avatars from selfies, and synthesize backgrounds without stock clips.
  • Business plan at $399/month for 8,000 credits, with a 50% first-month offer for new users.

Mirage positions itself against competitors such as D-ID, Synthesia, and Hour One by emphasizing models trained for short attention spans and native motion and expression, and by avoiding reliance on stock footage or simple lip-sync tricks.

“The real race for AI video hasn’t begun,” CEO Gaurav Misra told TechCrunch, framing Mirage as both a product company and a frontier AI research lab aimed at redefining the video category.

  • Brands can scale short ads quickly and without large production budgets.
  • Creators gain new tools for rapid content iteration and avatar-driven storytelling.
  • Advertisers can experiment with localized, personalized spots at a fraction of traditional costs.
  • Creative workers face displacement risks as AI reduces the need for on-set talent and low-budget production crews.
  • Deepfake and misinformation risks grow as synthetic faces and speech become harder to distinguish from real footage.
  • Ethical and legal questions about consent, likeness rights, and ad transparency will increase pressure on platforms and brands.

Mirage has acknowledged these risks in a company blog post, saying it enforces moderation rules to prevent impersonation and requires consent for likeness use. It also argues that product design alone won’t solve the problem, calling for a new kind of media literacy in which audiences treat videos with the same skepticism they apply to news headlines.

The rapid adoption of generative video tools is a cross-cutting challenge: marketers want scale and efficiency, creators worry about jobs, platforms must police misuse, and governments will look to regulation. Think of Mirage as compressing a full production house into a single upload—efficient, powerful, and also a potential vector for misuse if safeguards lag behind innovation.

For organizations deciding how to engage, the practical first step is to run measurable pilots and risk assessments: small campaigns that model creative outcomes and test moderation before scaling. That balance between experimentation and guardrails is exactly the kind of systems thinking QuarkyByte applies when helping teams translate AI capability into responsible business outcomes.

As Mirage pivots from captioning tools to foundational multimodal models, the industry gets a fresh reminder: AI video is moving fast, and the winners will be those who combine creative flexibility with technical controls and clear policies. Expect more startups and incumbents to follow—and more debate about what synthetic media should look like in a world where seeing is no longer believing.

QuarkyByte can run model-risk audits and simulate ad pilots that use synthetic avatars to predict cost, reach, and brand safety. We help brands and regulators design detection pipelines, moderation policies, and media-literacy playbooks so AI video scales responsibly and measurably.