Three takeaways from AI@Oxford

Last week’s AI@Oxford event brought together some of the UK’s hottest AI companies to have spun out of the University of Oxford. From driverless cars through to healthcare and software development, the range of ways in which AI was being put to use was diverse and fascinating.

Despite the wide range of industries represented at the event, some clear themes stood out that will no doubt ring true for anyone involved in AI, whether they work in communications or product development.

Here are some of the takeaways from the sessions we attended:

We need to be realistic about what AI is and what we are trying to achieve

As Michael Wooldridge, Professor of Computer Science at the University of Oxford, discussed in the opening session, a lot of media attention focuses on Artificial General Intelligence: conscious machines and a level of superintelligence beyond the human. But there has been no real progress in this area. In his view, most of the discussion here is controversial and, in many instances, hype.

The industry needs to focus on feasible, near-future use cases of AI, i.e. Narrow AI. Getting machines to carry out single tasks that currently require human brains is a reality, and the best use of AI today. A robot that can think, feel and respond like a human being is not.

AI should always augment people, not replace them

Sir Adrian Smith, Institute Director at The Alan Turing Institute, emphasised this point. Narrow AI, whether that’s Siri on your iPhone, IBM Watson, or AI-driven data analytics, should focus on reducing the cognitive load on people, taking over the more mundane tasks, and supporting critical decision making. It should not replace human decision making altogether.

The event showcased dozens of examples of this form of AI in practice: from Oxford Brain Diagnostics, which is starting to use AI and machine learning to build a robust diagnostic tool that will help clinicians diagnose Alzheimer’s earlier, through to Sensyne Health, a digital healthcare organisation now listed on the London Stock Exchange, which analyses anonymised patient data from the NHS to inform decisions around drug development.

AI can – and will – make mistakes, at least for now

Nathan Korda, Research Director at Mind Foundry, asked what it would take for us to totally trust a piece of software. The answer: quite a lot. We are not there yet.

Korda pointed out that even though AI is not a person, it is still fallible. Algorithms make assumptions, and sometimes those assumptions are wrong; unlike a human, a computer will not tell you which assumptions it has made. It can also be very difficult to identify biases in your data, and those biases can significantly skew the results of an AI system.
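To make that point concrete, here is a minimal sketch of how a hidden sampling bias can quietly skew results. It is our own toy illustration, not code from the talk: the groups, numbers and thresholding rule are entirely hypothetical, and it assumes Python with NumPy installed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two groups in the real population, with different feature distributions.
group_a = rng.normal(loc=0.0, scale=1.0, size=10_000)
group_b = rng.normal(loc=2.0, scale=1.0, size=10_000)

# Biased training sample: 95% group A, only 5% group B.
sample = np.concatenate([group_a[:9_500], group_b[:500]])

# Naive "model": flag anything above the sample's 99th percentile as unusual.
threshold = np.percentile(sample, 99)

# The unstated assumption (that the sample looks like the whole population)
# drives the outcome: group B is flagged far more often than group A.
print(f"threshold = {threshold:.2f}")
print(f"group A flagged: {np.mean(group_a > threshold):.1%}")
print(f"group B flagged: {np.mean(group_b > threshold):.1%}")
```

Nothing in the code announces the assumption it is making; you only discover it by checking the outputs against the group the sample under-represented, which is exactly why such biases are hard to spot.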

It was fascinating to hear from, and speak to, so many organisations making huge breakthroughs in the industry. You can read more about the themes from the event in a guide published by the University, featuring insights from many of the event attendees.

As a communications professional, what the event emphasised for me was the importance of communicating clear and compelling stories around AI: stories that articulate the value proposition for ‘narrow AI’ and cut through the ‘sci-fi’ narrative; stories that help build trust by demonstrating real-life use cases of AI; and stories that show how AI can enhance people’s jobs rather than remove them altogether. We’re working to unpick the full scope of this challenge and will be sharing some new insight on it soon. Watch this space.

Written by Kate Smith, Practice Director
