The second wave of AI in media: What VCs should know

In this article, Kabir Kochhar of Audacity Venture Capital explores how generative AI is evolving into agentic media workflows, driving faster production, smarter monetisation and new opportunities in voice, video and orchestration across MediaTech.


Generative AI has already changed how media is made and consumed. On the consumer side, it shows up as quick news explainers, search helpers, talking game characters, chat companions, and creator tools that turn ideas into social content. Inside media companies, AI now touches editing, captioning, translation, quality control, and compliance. That first wave proved speed and scale. It showed that AI can move work faster and widen its reach.

Opportunities of the present

The shift underway is from single features to agentic workflows. These are systems that plan steps, call the right tools, and finish tasks with little or no hand-holding. They work only when they can use a company’s data and software safely and in a structured way. Secure interfaces and policy layers are turning AI from demos into dependable B2B products.

Across MediaTech, the change is visible. Adtech and martech use AI to generate new ad variations, find the right audiences, and improve yield. Production uses AI to prep footage, edit, localise, and version content. Discovery uses AI to tag assets and improve recommendations. Data teams use AI to clean messy catalogues and spot patterns worth programming against. Sports workflows use AI to auto-clip highlights and surface coaching insights. These are line-item improvements that add up.

Market potential is meaningful. Estimates suggest about 176 billion dollars of opportunity tied to AI in MediaTech so far, with room to grow toward 1.3 trillion dollars as more deployments demonstrate clear returns. These numbers indicate direction, not guaranteed revenue, and they justify a more disciplined approach to capital.

Two areas are especially mature today. Voice is ready. Dubbing and localisation have moved from months to days, opening up new audiences and new ad inventory. Monetisation is the filter. In ads and marketing, tools that increase fill, improve relevance, and speed up creative testing get budgets. The same rule applies to subscriptions and commerce, where conversion and churn are the metrics a CFO already tracks.

Specialist teams are winning deals because they know where value sits when AI is everywhere. They focus on asset-level profitability, measurable personalisation, and versioning that passes creative and legal checks. They also pick problems AI can finally solve now and design systems end-to-end.

What “agentic” looks like in practice is a closed loop anchored to outcomes. Ingest a season’s footage, auto-tag shots, and map scenes to brand-safe categories. Generate multilingual cuts and ad variants that meet local rules. Test across audiences, shift spend in near real time, and report lift in yield and conversion. Feed results back so the next batch starts smarter. No single model does all this. Orchestration layers for routing, policy, and human-in-the-loop controls make the pieces work together. The investable product is the workflow and the proof it delivers, not the model alone.

Trust and safety determine adoption. Role-based access, audit logs, safe data connectors, brand and compliance guardrails, and clear pricing against outcomes unlock budgets. Where policy risk is high, explainability, provenance, and watermarking are becoming standard.

What does the future hold?

Video generation is the next step. As models improve and the cost per minute falls, tools will cross from demo to daily use. The opportunity is to back products that turn this capability into revenue. Think automatic trailers, short-form cutdowns, and localised versions that meet creative and compliance needs without extra headcount. Synthetic presenters for news, sports roundups, and commerce demos can fill time slots that do not justify full crews and can run around the clock.

Spatial computing is another curve to watch. Labs are working on wearables, AR and VR, and interactive media. If a breakout moment arrives, new formats will open quickly: glasses that overlay live stats at a match, a kitchen assistant that watches and guides while you cook, an interactive concert that blends the room with the stage. Teams that already handle localisation, rights, and dynamic rendering will be best placed to commercialise these formats. Look for practical monetisation: sponsorship overlays, shoppable scenes, and premium access.

For investors, the lens should stay simple. Back teams that use voice and video generation to create new revenue lines, not only to cut costs. Back orchestration tools that make enterprises trust and adopt these systems. Back agentic workflows in the monetisation stack where the KPI is clear. Lift in yield. Lift in conversion. Lower time to version. Faster localisation. These proof points win the budget inside media companies.

The first wave produced eye-catching moments. The second wave must show steady improvements that appear on a P&L. Consumer tools and enterprise tools are both real. Agentic workflows are the bridge between them. Spatial computing will add new surfaces later. Value will flow to products that either create new revenue or move a core metric that the finance team already believes in. That is where venture capital should focus now, and that is where the next leaders in MediaTech will be built.

This article is penned by Kabir Kochhar, Managing Partner at Audacity Venture Capital.

Disclaimer: The article features the opinion of the author and does not necessarily reflect the stance of the publication.
