Let's Get Real: AI | Exploring issues and opportunities around AI in a supported, playful and collaborative environment.
In 2011 we ran the first LGR collaborative action research programme. Since then, 150 organisations have taken part across 11 cohorts. In that time the pace of technological change has not slowed, and even those of us who feel digitally confident are struggling to know what AI means for the cultural sector, both in practice and strategically.
For this phase, which started in April 2025, we focus on AI by exploring tools, use cases, governance and strategy, all in the context of what it means to be a cultural organisation trying to get the best from technology. We are exploring AI and the cultural sector with playfulness and vulnerability, in ways that offer genuine room for innovation. By taking part in the programme, organisations are supported on a journey of positive change and transformation.
As this cohort nears its completion, Jocelyn Burnham, Stephen Miller and Alec Ward offer their thoughts and reflections on the programme so far >>
Jocelyn Burnham, LGR Collaborator:
I’d reflect that a central theme of the cohort is the practice of becoming comfortable with a broad range of outcomes for AI experiments, especially when those experiments ‘fail’ in some significant way. The ambition to develop a working, helpful prototype or process on the first try is of course a natural one, but it often turns out that we learn a tremendous amount from the ‘failure’ cases. In fact, a well-documented experimental failure can sometimes provide more richness, discussion and ideas for future experimentation than a ‘working’ process, which we might feel reluctant to keep evolving once we have evidenced its value.
Ultimately, AI experimentation is a practice, and one which becomes more effective and more specific to the context of your organisation the more often you do it, as lots of smaller experiments build on one another and point towards a particular direction. By becoming comfortable with experiments giving unexpected results, we can begin to build up an objective set of measurable indicators of where AI may, or may not, provide value or innovation for our work. That gives us assurance that our decisions are backed by good data and, ideally, are less influenced by our initial biases (positive and negative) about its capabilities.
Stephen Miller, Chief Technology Officer:
Looking back at the programme so far, LGR AI has become a practical and safe space for cultural organisations to explore artificial intelligence together. It’s not about hype; it’s about learning in a structured environment where governing boards, leadership teams and staff have a safe place to try things out, ask questions and experiment without pressure. That freedom to test, play and sometimes get things wrong has been important, because it’s often how useful ideas emerge and anxiety is reduced.
We’ve seen a shift from simply trying to understand what AI is to thinking seriously about how to use it responsibly and in a way that supports people and purpose. Participants are now more confident in having internal conversations about ethical adoption and about when AI is, and isn’t, appropriate. The focus has been on real organisational needs, not chasing trends.
One of my main reflections from the programme is that working with AI brings out human reactions: excitement, doubt, fear of change, curiosity. Acknowledging this is essential to support our learning and transparency. AI should support our work, not override judgement or erase the context and emotion that sit behind real insight.
Overall, LGR AI is building AI-literate organisations that approach innovation thoughtfully, test ideas safely, and stay rooted in integrity and people.
Alec Ward, Consultant Lead for Digital Content and Skills:
People within your organisation are likely already using AI, probably through personal accounts on platforms like ChatGPT, or through work accounts on Gemini or Copilot. That has been a recurring reflection from conversations with members of the Let’s Get Real cohort about organisational use of AI.
You can’t bury your head in the sand when it comes to AI use. You and your colleagues may have the best of intentions, but it only takes one spreadsheet with the wrong email addresses being uploaded and that data is in the AI for good. So if your organisation doesn’t have guidelines in place, be it a full policy or a set of guardrails, it’s definitely time to start the conversation.
Don’t forget that AI is more than just large language models like ChatGPT: it’s in image editing tools, transcription tools and assistants. So when you’re talking about it, try to think about the bigger picture. It’s that bigger picture we’ve been exploring in Let’s Get Real, focusing on both the strategy and the practice. So get talking! Bring it up at your next team meeting and keep banging the drum for guidance and agreement on how your organisation will, and will not, use AI tools.