Moments Lab Secures $24 Million to Redefine Video Discovery With Agentic AI


Moments Lab, the AI company redefining how organizations work with video, has raised $24 million in new funding, led by Oxx with participation from Orange Ventures, Kadmos, Supernova Invest, and Elaia Partners. The funding will accelerate the company's U.S. expansion and support continued development of its agentic AI platform, a system designed to turn massive video archives into instantly searchable and monetizable assets.

At the core of Moments Lab is MXT-2, a multimodal video-understanding AI that watches, hears, and interprets video with context-aware precision. It doesn't just label content; it narrates it, identifying people, places, logos, and even cinematographic elements like shot types and pacing. This natural-language metadata turns hours of footage into structured, searchable intelligence, usable across creative, editorial, marketing, and monetization workflows.

But the real leap forward is the introduction of agentic AI, an autonomous system that can plan, reason, and adapt to a user's intent. Instead of merely executing instructions, it understands prompts like "generate a highlight reel for social" and takes action: pulling scenes, suggesting titles, selecting formats, and aligning outputs with a brand's voice or platform requirements.

"With MXT, we already index video faster than any human ever could," said Philippe Petitpont, CEO and co-founder of Moments Lab. "But with agentic AI, we're building the next layer: AI that acts as a teammate, doing everything from crafting rough cuts to uncovering storylines hidden deep in the archive."

From Search to Storytelling: A Platform Built for Speed and Scale

Moments Lab is more than an indexing engine. It's a full-stack platform that lets media professionals move at the speed of story. That starts with search, arguably the most painful part of working with video today.

Most production teams still rely on filenames, folders, and tribal knowledge to locate content. Moments Lab changes that with plain-text search that works like Google for your video library. Users simply type what they're looking for, such as "CEO talking about sustainability" or "crowd cheering at sunset," and retrieve the exact clips within seconds.

Key features include:

  • AI video intelligence: MXT-2 doesn't just tag content; it describes it using time-coded natural language, capturing what is seen, heard, and implied.
  • Search anyone can use: Designed for accessibility, the platform lets non-technical users search across thousands of hours of footage in everyday language.
  • Instant clipping and export: Once a moment is found, it can be clipped, trimmed, and exported or shared in seconds, with no need for timecode handoffs or third-party tools.
  • Metadata-rich discovery: Filter by people, events, dates, locations, rights status, or any custom facet your workflow requires.
  • Quote and soundbite detection: Automatically transcribes audio and highlights the most impactful segments, ideal for interview footage and press conferences.
  • Content classification: Train the system to sort footage by theme, tone, or use case, from trailers to corporate reels to social clips.
  • Translation and multilingual support: Transcribes and translates speech, even in multilingual settings, making content globally usable.

This end-to-end functionality has made Moments Lab an indispensable partner for TV networks, sports rights holders, ad agencies, and global brands. Current clients include Thomson Reuters, Amazon Ads, Sinclair, Hearst, and Banijay, all grappling with increasingly complex content libraries and growing demands for speed, personalization, and monetization.

Built for Integration, Trained for Precision

MXT-2 is trained on more than 1.5 billion data points, reducing hallucinations and delivering high-confidence outputs that teams can rely on. Unlike proprietary AI stacks that lock metadata in unreadable formats, Moments Lab keeps everything in open text, ensuring full compatibility with downstream tools like Adobe Premiere, Final Cut Pro, Brightcove, YouTube, and enterprise MAM/CMS platforms via API or no-code integrations.

"The real power of our system isn't just speed, but adaptability," said Fred Petitpont, co-founder and CTO. "Whether you're a broadcaster clipping sports highlights or a brand licensing footage to partners, our AI works the way your team already does, just 100x faster."

The platform is already being used to power everything from archive migration to live event clipping, editorial research, and content licensing. Users can share secure links with collaborators, sell footage to external buyers, and even train the system to match niche editorial styles or compliance guidelines.

From Startup to Standard-Setter

Founded in 2016 by twin brothers Frederic Petitpont and Phil Petitpont, Moments Lab began with a simple question: What if you could Google your video library? Today it is answering that question, and more, with a platform that redefines how creative and editorial teams work with media. It has become the most awarded indexing AI in the video industry since 2023 and shows no signs of slowing down.

"When we first saw MXT in action, it felt like magic," said Gökçe Ceylan, Principal at Oxx. "This is exactly the kind of product and team we look for: technically brilliant, customer-obsessed, and solving a real, growing need."

With this new round of funding, Moments Lab is poised to lead a category that didn't exist five years ago, agentic AI for video, and to define the future of content discovery.
