The new AI Safety Institute – how and why you should be engaging now

The Government has now provided more detail about the new AI Safety Institute, set up in the wake of the Bletchley Park Summit in November 2023.

With a mission to “minimise surprise to the UK and humanity from rapid and unexpected advances in AI”, the Institute is led by a former senior adviser to the Prime Minister. Its creation signals an intent from Downing Street to demonstrate that there is substance behind its desire to position the UK as a global leader on AI ethics and regulation.

That intent makes it critical that AI businesses, and those operating in related sectors, know how and when to engage with the new Institute.

Read on to understand what your organisation should be doing to share your expertise and guidance with the AI Safety Institute as it begins its work, and in doing so, help shape the UK’s approach to AI in the years ahead.

Building a safe foundation

The Institute will initially have three functions:

  • To develop and conduct evaluations on advanced AI systems;
  • To drive foundational AI safety research;
  • To facilitate information exchange between the Institute and other partners both at home and abroad.

Until now, the UK’s approach to AI regulation has been to task existing sector regulators with responsibility for overseeing AI within their industries. But the Institute’s creation suggests that a more cross-sector, collaborative approach is now on the agenda – good news for those looking to engage holistically on AI regulation.

The Institute will be led by a former close adviser to the current and two former Prime Ministers – Oliver Ilott was Deputy Principal Private Secretary to the PM for a year and a half, spanning the premierships of Boris Johnson, Liz Truss and Rishi Sunak. A small but important team of civil servants has taken shape under him, with more appointments in the works.

Taking action now

As the full shape of the Institute becomes clear, businesses will be wondering what it means for their work on AI and their ability to operate in the UK market – and, if they wish to influence the direction the Institute takes, what they can do at this stage.

There are three things we recommend all businesses with an interest in AI development build into their communications and stakeholder strategies:

  1. Immediate engagement
  2. Shaping the Institute’s paradigm for AI safety by being a model of best practice
  3. Being a thought leader on AI and AI safety

Let’s look at each in turn.

1. Immediate engagement

New in role, the Institute’s leaders will be keen to meet with industry and other stakeholders to begin forming the relationships that will underpin much of their work. This makes it the perfect time to begin a dialogue with those leaders. Meetings at this stage should focus on establishing your and your organisation’s credibility, and positioning yourself in the Institute’s mind as a valuable source of insight and expertise as its work programme develops.

Having a clear narrative about your organisation and its approach to AI will be key to making that meeting a success, and will lay the groundwork for future engagement around your industry’s needs.

2. Shaping the Institute’s paradigm for AI safety by being a model of best practice

As part of your engagement with the Institute, you should share a model of what you consider best practice in balancing AI safety with the need to drive innovation. Not only will this underline your position as a source of expertise, but at this nascent stage of the Institute’s development, it gives you an opportunity to help it cut through the noise and identify the issues on which it really needs to focus. The model of best practice should work for your business, but it should also offer a tangible and realistic route to achieving the Institute’s aims.

3. Being a thought leader on AI and AI safety

While direct engagement with the Institute is crucial, you will inevitably be one voice amongst many. By sharing thought leadership on AI – in the media, online and through speaking at events – you not only reinforce your core message and ideas, but you influence the views of others, who can in turn become advocates for your position. That way, the messages that you deliver to the Institute, and the politicians who oversee it, in private, will be reinforced in public as well.

The earlier you engage, the better chance you have of building strong relationships and helping to set the direction that the Institute takes. Now is the time to be considering your approach.

Lots of organisations will be planning their engagement with the AI Safety Institute in the weeks ahead. For an informal conversation about how Brands2Life Public Affairs can help you shape your message, develop thought leadership and facilitate engagement with the AI Safety Institute, or more widely with stakeholders driving AI policy and regulation, email [email protected].