Imagine sitting in a darkened cinema, the smell of popcorn in the air, waiting for the latest thriller to unfold. The lights dim, the opening credits roll, and there he is: Val Kilmer. He looks exactly as you remember him: the sharp jawline, the piercing eyes, that effortless charisma that made him a global icon in the eighties and nineties. But there is a catch. Val Kilmer passed away before he could set foot on this particular set. What you are watching isn't a long-lost reel found in a dusty vault; it is a sophisticated, generative AI recreation of a man who is no longer with us.
This isn't science fiction anymore. It is the reality of the modern entertainment industry. The news that Val Kilmer’s likeness would be used to complete his role in the upcoming film As Deep as the Grave has sent shockwaves through Hollywood and beyond. While we have seen digital de-ageing and brief posthumous cameos before, this marks a significant turning point: the first time a major actor has been "resurrected" using generative AI for a substantial role in a feature film. And it raises a massive, complex question that we are only just beginning to grapple with: just because we can bring our icons back from the dead, should we?
At NowPWR, we spend a lot of time thinking about how technology reshapes the way we create and consume content. This latest development feels like a threshold moment. It’s about more than clever pixels and voice cloning; it’s about legacy, consent, and what we even mean by “a performance” in 2026.
The ‘posthumous’ return of a Hollywood star
The headline idea is simple and a bit mind-bending: a new film can feature Val Kilmer even if he never stepped on set for it. Not a blink-and-you-miss-it cameo, but something closer to a full role built from generative AI, existing footage, and the kind of production pipeline that didn’t really exist a few years ago.
Supporters frame it as finishing what Kilmer started. If he’d agreed to a project and the tech can help deliver it, why shouldn’t the audience get to see that last turn? In that version of events, AI is more like a digital stunt double: a tool used to complete a creative intention rather than rewrite it.
The uncomfortable bit is that the cinema experience doesn’t come with a warning label. Audiences walk in expecting acting — human choices, human timing, human chemistry — and instead they may be watching a performance assembled from data. If it looks right and sounds right, most people won’t even notice in the moment. That’s exactly the point, and exactly the problem.
The ethics of AI in our cinematic legacy
Consent sits at the centre of the entire debate. It’s one thing for an actor to approve a de-ageing shot or a bit of motion-capture while they’re alive. It’s another for a studio to treat a person’s likeness as an asset that can be re-used, re-mixed, and re-deployed long after they’re gone.
Even when families and estates sign off, it raises a bigger question: whose decision is it, really — the performer’s, the heirs’, or the rights-holder’s? And what happens when the incentives get messy? An estate might be under financial pressure. A studio might push for “just one more scene” because the model can do it cheaply. The line between tribute and exploitation can move fast once the tech is good enough.
There’s also an industry-wide knock-on effect. During recent disputes in Hollywood, performers have been clear about the fear: today it’s “honouring legends”, tomorrow it’s scanning background artists, and next week it’s using a digital double because it’s quicker than scheduling reshoots. AI doesn’t just change how films are made; it changes who gets paid, who gets credited, and who gets replaced.
Where does the human actor end and the code begin?
Here’s the tech-savvy question that sits under all the drama: what is the “actor” in an AI-built performance? Is it the original person whose face and voice trained the model? The body actor on set supplying movement? The editor shaping timing? The prompts and guardrails? Or the model itself, spitting out choices that look like instinct?
Acting has always had a bit of illusion baked in — lighting, lenses, ADR, stunt doubles, even deepfake-style face replacement in limited cases. But generative AI makes the illusion scalable. Once a studio has a convincing model, it can iterate endlessly: different line readings, different facial micro-expressions, different emotional intensity — like turning performance into a settings menu.
The risk isn’t only the uncanny valley. The bigger risk is that the valley disappears, and nobody can tell where the human ends and the code begins. If audiences stop being able to distinguish between a living performer and a synthetic reconstruction, the cultural meaning of “star power” changes. The industry may gain a new kind of control, but it could lose something harder to measure: the messy, unrepeatable humanity that makes a scene land. The next few productions will show whether this is a one-off exception — or the start of a new normal.