Even the most confirmed Luddite would have struggled to miss the AI Summit taking place at Bletchley Park in the UK this week. It's featured acres of media coverage, big political and technology hitters in town, the Prime Minister hosting Elon Musk for a one-to-one and even King Charles recording an address.
But did the Summit deliver?
Well, yes and no.
Let’s start with the yes.
Whether or not you believe that AI really is one of the biggest threats to humanity, it's clear that the riskiest forms of AI need extremely careful management – and that requires the most powerful nations working collaboratively to ensure progress continues safely. So the Bletchley Declaration, which establishes a shared understanding of the opportunities and risks posed by frontier AI, is very welcome.
One of the Summit's goals has undoubtedly been to strengthen the UK's position on the global AI stage. The calibre of the Summit's attendees is undeniably impressive, made up of global leaders and senior figures from the world's biggest tech firms – the ones creating game-changing AI tools and putting them in the hands of the masses. Bringing these names together on British soil to shape the future of AI and to try to tackle some of the biggest challenges ahead represents a real coup – a statement that we mean business on a hugely important issue. And, despite vocal calls to the contrary, the decision to invite China was in my opinion the right one. As Rishi Sunak said, "There can't really be a substantive conversation about AI without involving the world's leading AI nations. China is indisputably one of those."
It's also been, indirectly, a chance to shine a light on the UK as a centre of AI innovation, with some highly inventive homegrown businesses developing AI technologies.
But it could have been better.
The discussions were far too centred on the existential risks from super-sentient AI; not nearly enough attention was paid to the communities and workers most affected by AI. More than any other technology of the last 30 years, AI is fundamentally disrupting the lives of people across the globe. Clearly the pace of change is hitting some industries, jobs and economies faster than others. But we are all being affected in some way and, in my view, this is where the focus really needs to be. What types of jobs will AI destroy? What new jobs will AI create? How will certain jobs evolve through AI?
"It's more likely to replace you in your job or discriminate against you in an insurance quote than it is to try to kill you (at the moment!)" tweeted the BBC's Zoe Kleinman, and she's absolutely right. A declaration on AI's riskiest potential was certainly required, but the current concerns are far more pressing. They weren't totally ignored – Rishi Sunak talked in one of Thursday's sessions about how education reforms, including boosting adult education, will help build the skills required to work with AI as a co-pilot. But there was little tangible output.
While there are very valid reasons that AI's march demands careful attention and guardrails, so it doesn't become more than we can handle, in the short term its use should be encouraged and accelerated. This is where the UK can stand out. Because, as BT's Daniel Wilson points out, while the UK government rightly champions the fact that our AI startups secured more funding than their peers in Germany and France combined, actual deployment of AI by UK firms is lower than in both of those countries (and in Italy and Spain too).
Though the presence of most of the attendees made sense, in the view of many people the guest list was too heavily weighted towards frontier AI and the large tech firms developing generative AI tools. A letter signed by more than 100 individuals and civil society organisations from across the political spectrum, including the Alan Turing Institute, Amnesty International and several universities, expressed the very valid concern that "Small businesses and artists are being squeezed out, and innovation smothered as a handful of big tech companies capture even more power and influence."
Another area the event didn't cover was the environmental impact of AI – it was mentioned in the discussion paper, but it's a struggle to find any meaningful outputs. Tackling concerns around bias was also noticeably absent.
From a purely comms perspective, however, it's gone well. While the announcement of the US's executive order took away a little of the attention, the summit has undoubtedly strengthened the UK's association with AI. It's been a chance for Rishi Sunak – who recognised relatively early that AI will drive the next wave of innovation and growth for tech firms – to put his name to a good news story. And, by and large, none of the visitors derailed the focus on the Declaration with outlandish predictions or controversial one-liners, bar perhaps Elon Musk's remark that "there will come a point where no job is needed" and his comments on Joe Rogan's podcast about the environmental movement having gone way too far. The message has been well controlled throughout, even in Rishi Sunak's interview with Elon Musk. The interview itself was a slightly odd spectacle – Politico's Tom Bristow and Dan Bloom dubbed it a "love-in" and Sky's Sam Coates called it "one of the maddest interviews I've ever covered". But maybe a political leader and a tech titan coming together in this way shouldn't be that surprising in an era when several senior government figures host their own shows on a news channel.
This week has been a crucial one in moving our regulation of AI forward. But arguably it’s the US’s executive order and the forthcoming EU AI Act that will have a bigger and more tangible impact on our use of AI in the near term. And that’s what’s really needed.
—
André Labadie, Executive Chair, Business and Technology