A woman sits atop a computer console from the 1950s in a vintage computer lab. One window reveals columns from Nevile’s Court at Trinity College, Cambridge, and the other depicts six potted flowers and sunlight. The background is black and white, while the woman and the sunny window are rendered in warm tones.

Hanna Barakat & Cambridge Diversity Fund / Better Images of AI / Pas(t)imes in the Computer Lab / CC-BY 4.0

Jocelyn Burnham is working with The Audience Agency on the upcoming Let’s Get Real: AI programme, running from April to December 2025. She brings her expertise and approach to this collaborative action research, supporting the 20 organisations that sign up to carry out AI experiments internally and consider the implications for governance and business development.

The nature of vulnerability

Vulnerability is usually uncomfortable to think about. 

To me, being vulnerable implies an acceptance of ourselves as emotional creatures, capable of being hurt, confused and overwhelmed. It implies an appreciation that we won't effortlessly grasp certain ideas, or books, or pieces of technology without asking openly and repeatedly for help. And most of all, to me, it implies that we're probably going to get a lot of things wrong; maybe most things, in fact, before we stumble on a good idea.

While some find it cathartic, vulnerability is not necessarily a pleasant thing to experience or practise if you aren't used to it. Being the only person in a meeting to raise your hand and admit that you aren't following an idea being expressed can make you feel isolated, or as if you've just publicly identified yourself as someone who's not particularly bright. You don't always walk away afterwards feeling full of confidence. It costs something to be vulnerable. 

There’s a payoff

But with that said, there is an enormous benefit to it. When discussing innovation, vulnerability enables you to bring your full human self into the process, rather than a very particular version of yourself fine-tuned for a professional environment. It allows for richer, more honest discussions, and invites us to reconsider processes, tools and professional norms we'd otherwise assume were inevitable and unchangeable. This process, while sometimes awkward, unlocks insights we might never otherwise reach.

Vulnerability-led AI innovation

In my own independent work with arts and culture organisations, I'm often involved in discussions about how we might innovate with AI to create (or better share) meaningful value within the sector. While there are many perspectives on this (and on AI more generally), I have noticed a recurring theme among those who have become most comfortable experimenting with AI and adapting it to their own projects: they don't begin with what AI is already recognised as being able to do and then apply it wholesale to their use case. Instead, they begin by recognising a vulnerable human experience, and then investigate how AI could help explore it further.

Real-world applications

This is an approach I have found effective in my own projects. An AI project I am working on with the Arts Marketing Association (which recently secured £250,000 in development funding from The National Lottery Heritage Innovation Fund) was conceptualised through a year-long process of innovation workshops with multiple cohort groups. That process often centred on vulnerable conversations with our co-creators about what made work feel most meaningful, and what contributed to unhappiness in their current working lives. We discovered many things along the way, but one of the most pivotal moments came when we discussed loneliness, which some participants reported as being on the rise following Covid and the shift towards remote working. Following those discussions, we reimagined the project to give greater weight to how it might better connect people within the sector, and so address a real, vulnerable human experience.

Taking back agency in AI development

When we see AI products and "solutions" being released by major technology companies, I believe it serves us well to remember that those creating them likely don't know our sector particularly well, and certainly don't know our individual emotional context for the work that we do. Some of these products may (or may not) benefit us with their 'off-the-shelf' features, but to truly leverage this technology for our own benefit, we must accept that our professional and personal context is unique; and that means becoming comfortable adapting the technology in a similarly unique way, one which responds to our full, human selves.

In this sense, I see AI innovation in culture as a form of taking back control and taking back agency. It's a process of declaring that we understand our own needs best, and that we are therefore best placed to use the ever-changing tools at our disposal to meet those needs and build what we believe should be built.

It’s also not limited to AI. I believe that a continuing practice of vulnerability, collaboration and innovation has tremendous benefits for our organisations more widely, extending far beyond any individual gain. When our teams feel empowered to continually reimagine existing concepts and tools in ways that better reflect the unique values and goals of our organisations, we inspire a culture of confidence and a culture of creation. This energy is a potent one – it’s exciting, and it inspires tangible ambition towards projects and possibilities beyond AI.

It can feel messy. It can feel rebellious. And above all, it can feel vulnerable. But when we begin practising this type of human-centred AI innovation, it can also feel incredibly liberating and exciting. 

Go further

I’m particularly looking forward to exploring these ideas much further in The Audience Agency’s Let's Get Real: AI programme, which runs from April to December 2025. In this programme, we’ll be discussing and testing how we might work with AI to address the different needs and motivations emerging from our own unique positions within the culture sector, and I suspect that many of our perspectives will only be enriched by embracing our individual vulnerability and using it as a catalyst for deeper innovative practice.