The AI Summit – a range of perspectives
One of the great things about the conference was the variety of perspectives on offer, from the technology vendors and integrators, through a range of businesses across sectors, to the public sector and legislators. So, what kinds of things did all these different people talk about when they got together?
In short, regulation, responsibility, application and adaptation; but before answering the question more fully, it is probably useful to take a step back and consider how such a conference in 2024 might differ from one held in 2023. A year ago, the AI community was still grappling with the enormous interest in ChatGPT that had arisen since its release in November 2022. There was plenty of excitement about this new technology, with the public suddenly becoming aware of previously rather niche technologies such as (large) language models, but within the community itself there was often a lack of depth to the understanding.
Therefore, one of the striking things about this latest conference compared to similar events held last year was the sense of consolidation of understanding, combined with taking a step back to evaluate AI (and in particular generative AI) technologies in a more measured and cool-headed manner.
What is it good for? How does it perform? Where is it going? What are the challenges? And, most importantly, what are the risks? This is where the different perspectives proved to be particularly useful.
Responsibility
A related topic that was covered by both technology practitioners and policy experts was the responsibility of organisations for the AI models they use in their systems, even when those models have been developed by a third party such as OpenAI or Google. Their point was that the business remains fully liable for any behaviour (expected, unanticipated, or otherwise), and that it is the business's responsibility to check and validate that behaviour in production.
After noting the difference in approaches to regulation in the UK/EU (standards enforced by regulation) versus the USA (standards enforced by litigation or class action lawsuits), the possibility was raised of organisations being held responsible for any shortcomings of a third party model that their solution uses (e.g. it being trained on unauthorised material, employment conditions of data annotators, etc.).
Class actions have been used against companies in other contexts, and so may well be applied to generative AI technologies – certainly a sobering prospect for anyone engaged in developing applications that use them. For more detail on this topic, take a look at this Point-of-View from GFT's Simon Thompson. A similar point was made by the UK advertising regulator: ultimately, an advert is the responsibility of the agency publishing it, whether or not it was generated by AI.
Applications
Turning to the application of AI technologies, a number of sessions presented lessons learned by practitioners from developing AI systems in their own businesses. These ranged from incorporating image generation in a cosmetics business, and how to do so ethically (e.g. not using generated images to demonstrate the benefits of your product), to improving a banking chatbot by focusing on rarer customer questions. In this respect, we believe our GFT Intelligent Banking Assistant offers even greater benefits, as it can safely and securely interact with the customer's account data.
Adaptation
A consensus seems to have formed that generative AI models such as LLMs are rarely suitable for use 'out of the box'. One approach, and certainly the most feasible, is to apply various techniques to tune an 'off the shelf' model; this was described in a discussion covering applications in financial services. Another talk covered the process of creating a large language model (LLM) from scratch, which yields a model optimised for a specific use case, as well as complete control over the process and the data used to create it. However, this comes at a very substantial cost and may therefore be beyond the reach of many, if not most, organisations.
To sum up…
In summary, the AI Summit London was a refreshing event which presented a range of perspectives on AI. It was particularly encouraging to see the field mature as it has deepened its experience and grappled with some of the challenges of implementing and evaluating recent AI technologies. I'm already looking forward to next year's event, and would encourage you to come along too if you want to learn more about the latest developments in this innovative area.