
Most agencies treat AI as a shortcut. We treated it as a cinematographer.
The brief was simple and enormous in equal measure: create a cinematic documentary about David Attenborough: his childhood, his BBC beginnings, his decades of discovery, his warning to the world, and his legacy. No camera crew. No location shoot. Not a single frame of real footage.
Six minutes. One life. Thirty-nine shots. Every image generated from a prompt.
The subject demanded nothing less than BBC Planet Earth production quality. Anything short of that would have been an insult to the story. So that became the only acceptable standard.
We built the documentary the way a real director builds a film, before a single image was generated.
We wrote a visual grammar first. Every section of the documentary was assigned its own camera language, colour world, and emotional register. The Childhood section shoots on simulated 16mm grain with warm amber fields. The BBC section drains to cold institutional black-and-white. The Discovery section floods back into saturated jungle colour. The Warning section progressively bleaches toward near-monochrome. The ending returns to warmth, but tentatively, as if something fragile is trying to grow back. The colour grade tells the entire emotional story before a word of narration lands.
We designed a 39-shot arc with deliberate visual callbacks. Shot 01, Earth from space, cold and vast, returns as Shot 29, and again as Shot 38. Same composition. Three completely different emotional meanings, earned by everything between them. The child's muddy hand holding a fossil in Shot 09 echoes in the old man's hand on ancient oak bark in Legacy Shot B. The vibrant coral reef of Shot 26 returns as the ghost-white bleached reef of Shot 33, same architecture, all colour drained. Every callback was designed months before generation began.

We engineered prompts at the level of a cinematographer's brief. Not descriptions but instructions. Every prompt specifies lens focal length, depth of field choice, colour grade temperature, sensory anchors (what the image should smell and sound like), camera movement for Kling 2.6 animation, and the precise emotional register required. The result is images that don't just look beautiful; they mean something specific in the sequence they inhabit.
We used Omni Reference character locking for the two Legacy shots featuring Attenborough himself. A single portrait reference anchored his face geometry, skin tone, silver-white hair, and distinctive clothing across both shots, placing him at the edge of an Atlantic cliff and at the treeline of an ancient English oak forest, in images that have never existed.
We built a 35-second mystery-reveal teaser that withholds his identity completely: fossil, silhouette, hands, cropped eyes only, until his name appears over Earth from space in complete silence. No music. No fanfare. Just the name and the planet.
The entire production (39 primary images, 10 b-roll frames, 2 Omni Reference character shots, a full teaser trailer, and complete Kling 2.6 animation direction for every shot) was delivered without a single flight, location permit, or camera hire.

A six-minute cinematic documentary that moves through eight decades of one man's life, from a 1930s English field to the deep ocean to the edge of the Antarctic ice shelf, with a visual language sophisticated enough to carry the story without leaning on the subject's fame to do the work.
A trailer that makes people ask "who is this?" for 28 seconds before the answer makes them want to watch the full film.
A production methodology that proves AI image generation is not a replacement for creative direction; it is a new instrument that rewards exactly the same skills: a clear visual mind, a strong emotional point of view, and the discipline to know what every single frame is for.
The work exists. It is six minutes long. Watch it from the beginning.