

As a film house exploring new methods of production, we continually reflect on our role as human creators working with emerging technologies.
The AI Code of Conduct below grows out of these reflections. It is not a final statement, but an evolving set of principles that will develop alongside our practice.
The principles are: Control, Consent, Clarity, Credit, Confidentiality & Climate.
1. CONTROL
AI is used as a tool for dialogue and co-creation, not as a replacement for human creativity or authorship. At every stage of development, we ensure that human creators set the direction, make key decisions, and approve the final result. AI may assist, inspire, or help iterate, but it never decides. We take active steps to identify potential bias in the models we use and apply known debiasing methods to balance outputs.
Humans always create the first draft and have the final cut.
2. CONSENT
Whenever we clone, simulate, or modify the contribution of a performer or any creative, we always secure informed consent and ensure fair compensation. No likeness, voice, or creative work is used without clear permission. We respect the legal and moral rights of individuals, and we aim to go beyond the minimum legal requirements to uphold trust and fairness.
We actively protect personal rights and integrity.
3. CLARITY
All performers, collaborators, and contributors are informed and consulted about any use of AI in the production. We clearly explain the methods, intentions, and potential outcomes.
We document all our processes and techniques throughout development and production. Where permitted, we share this documentation with relevant stakeholders — ensuring accountability and enabling meaningful dialogue.
We believe in openness and transparency.
4. CREDIT
We are clear about who did the work, including when and how AI tools were involved.
Human contributors who guide or supervise AI processes are credited appropriately, such as through the role of “AI Supervisor” or similar titles. We acknowledge both human creativity and the tools used, fostering a culture of transparency rather than mystification.
We make authorship visible.
5. CONFIDENTIALITY
We enter into a data management agreement for every project, specifying how data is stored, secured, and accessed. We take active steps to ensure that our final outputs are not released as training material to third parties without explicit permission. By controlling the data lifecycle, we protect contributors, maintain creative integrity, and safeguard against misuse.
We manage data responsibly and with care.
6. CLIMATE
We treat carbon as a design constraint: where a GenAI solution can replace travel, builds, or reshoots with no loss of integrity, we choose the lower-impact path. When AI is used, we prefer energy-efficient models, greener cloud regions, and providers with strong clean-energy sourcing; we measure and cap compute so experimentation doesn't sprawl. We report through the Green Producers Tool.
We minimise environmental impact, proportionate to creative need.