At the end of 2024, we embarked on a year-long journey to explore the possibilities and responsibilities of adopting Artificial Intelligence (AI): developing solutions that support our work, enrich the datasets we are trusted with, and help organisations and individuals in the cultural sector with their own AI adoption. We began from a place of care and caution, choosing to experiment thoughtfully rather than rushing to implement. For us, the responsible adoption of AI isn’t just a governance issue (we have our AI Impact Assessment for that); it’s a human one. It asks us to question our assumptions, face uncomfortable conversations, and be comfortable working through ethical AI adoption together, step by step.
Starting with Ethics, Not Hype
As we work through our approach, we keep returning to four simple, human-centred questions:
- Why are we doing this? To support our work and that of the cultural sector with access, insight, and creativity whilst not losing what makes our work human.
- Are we doing it the right way? With a foundation of fairness, inclusivity, and transparency, we are developing curated tools, in collaboration with our teams and external practitioners, that will support their work.
- What could go wrong? We acknowledge not only legal and data risks, but also emotional, reputational, environmental, and cultural impacts.
- Who is responsible? We prioritise shared governance and “human-in-the-loop” systems, ensuring decisions about AI are not made in silos.
These principles have quietly shaped the tools we are developing and the partnerships we are building. We believe that innovation must start with integrity if it’s to be genuinely useful, and that ethics and experimentation can work together. Below are a few of our projects to date:
Let’s Get Real: AI – Supporting Ethical AI Adoption
Let’s Get Real: AI is the twelfth edition of this sector-leading programme, supporting a fantastic cohort of cultural organisations in adopting AI and building their confidence in using it. Running from April to December 2025, it blends hands-on experimentation with expert guidance on ethics, governance, and strategy. Participants explore real-world use cases, trial AI tools in their own settings, and adapt policies for responsible adoption, supported by experts including Jocelyn Burnham, AMA, and Bloomberg Connects. The programme culminates in a final report and conference in early 2026.
AS Genie Beta: A Support Tool for Audience Spectrum
One of the first outcomes of this journey is AS Genie, a GPT-powered support chat tool designed to make Audience Spectrum insights more usable in the day-to-day work of cultural professionals. It’s built on our own knowledge base and refined through extensive internal testing.
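For readers curious about the underlying pattern, here is a minimal, hypothetical sketch of how a GPT-powered chat tool grounded in a curated knowledge base can hang together: retrieve the most relevant notes for a question, then ask the model to answer only from them. The knowledge-base entries, the keyword-overlap retrieval, the model name, and the prompt below are all illustrative assumptions, not AS Genie’s actual implementation.

```python
# Minimal sketch of a retrieval-grounded support chat (illustrative only).
# The notes, scoring, model name, and prompt are assumptions, not AS Genie's code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical knowledge base: short Audience Spectrum-style reference notes.
KNOWLEDGE_BASE = [
    "Audience Spectrum segments the UK population by attitudes to culture.",
    "Each segment profile describes engagement levels and media habits.",
    "Profiles can be mapped to postcodes to understand local audiences.",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    """Rank notes by naive keyword overlap and return the top k."""
    words = set(question.lower().split())
    ranked = sorted(
        KNOWLEDGE_BASE,
        key=lambda note: len(words & set(note.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def answer(question: str) -> str:
    """Ask the model to answer strictly from the retrieved notes."""
    context = "\n".join(retrieve(question))
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {
                "role": "system",
                "content": "Answer using only the notes provided. "
                           "If the notes don't cover it, say so.\n\nNotes:\n" + context,
            },
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(answer("How does Audience Spectrum relate to postcodes?"))
```

The design choice worth noticing is the constraint in the system prompt: grounding answers in curated content, and admitting when that content runs out, is what keeps a tool like this supportive rather than speculative. A production version would typically swap the keyword overlap for embedding-based retrieval.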
But instead of launching big, we’re launching small and listening to our users. We’re asking how tools like this fit into existing workflows, where they’re useful, and where they fall short. This isn’t just user testing; it’s values testing. It’s a commitment to shaping tools with the people who will actually use them, and never mistaking convenience for care.
EU Horizon HAMLET: Embedding Creative Values in Digital Transition
Internationally, our involvement in the EU Horizon HAMLET project brings these same principles to a European stage. Together with 11 partners, we’re co-developing AI “enablers” designed specifically for the Cultural and Creative Industries, focusing on collaboration, accessibility, and sustainability.
Our role includes piloting tools in the UK, supporting communications, and anchoring the work in the needs of real practitioners. HAMLET’s ambition is not just to produce tech, but to protect and promote the values that matter most to the sector: creative autonomy, equity, and connection. The HAMLET Collaborative Community Hub within the project is a space to share learning and lower the barriers to AI adoption, especially for small or under-resourced organisations.
From Ethical Frameworks to Everyday Practice
Throughout our work, it is clear that ethical AI isn’t something you adopt once and move on from. It’s an ongoing conversation, a daily decision, and a conscious choice in how we approach our use of this technology. It asks us to listen more, rush less, and be honest and transparent about risks.