December 5, 2023

AI: Robots in the boardroom

By James Sterling, Head of Claims

In today's fast-paced, technology-driven world, management teams already face the daunting task of keeping up with existing demands. Now comes the new challenge of navigating the complex landscape of the AI (Artificial Intelligence) revolution. In this blog, I aim to simplify what Directors and Officers (D&O) need to be attuned to, including from an insurance purchasing and risk profile perspective.

What is AI?

AI traces its origins back to 1956, when computer systems were first used to simulate human intelligence, such as learning and problem-solving. This means AI has been around for nearly 70 years, with manufacturing businesses in particular increasingly benefiting from robotic assistance on the factory floor, and AI also supporting significant advances in the automobile industry. My personal highlights over the decades include a (Garry Kasparov endorsed) ‘super-computer’ chess game and the driving saviour that is the automobile Sat-Nav system!

Until recently, AI design progress had been slow. However, the last decade witnessed a funding surge following key advancements in deep learning. We are now seeing a shift in focus from conventional Administrative AI to Generative AI, with the majority of funding flowing in this direction.

Generative AI typically involves the creation of text, audio or images, usually in response to human requests, which can then credibly assist with data-based decision-making (i.e., a step up from purely administrative tasks). Funders and developers aspire to create a fully automated Generative solution: in theory, an AI platform that can reliably operate in a company’s workplace, independently of humans and even participate on the board.

While a few known instances exist of companies incorporating autonomous robots into their executive team, such occurrences are rare. In reality, I suspect the day when robots consistently appear on a company’s board is decades rather than years away. Until consumers can fully trust the data outputs, human input and oversight will remain integral. Instead, the primary focus remains on maximising output from Administrative AI, while simultaneously extracting value from Generative AI (including assisting a human board with their decision-making).

A Generative AI D&O example

To give an example of how a human would interact with Generative AI in the D&O context, I downloaded the ChatGPT app onto my smartphone (one of a multitude of “chatbot apps” currently available). I then asked ChatGPT to list the advantages and disadvantages of a company embracing AI. The results, distilled below, appeared credible, albeit voluminous and somewhat duplicative:  

Competitive Advantage:

  • Pros: streamlining operations, improving the customer experience, attracting talent, and developing innovative products/services.
  • Cons: initial implementation costs can be high, no guarantee of reducing the long-term expense base (especially if dependent on third-party IT providers), lack of human interaction can upset staff and customers, and unforeseen consequences when rolling out new products/services (such as AI unintentionally writing code).


Decision-Making:

  • Pros: analysis of large data-sets to assist with speedy, accurate and informed decisions, making best use of big data, and due diligence tools on new potential deals.
  • Cons: legacy human bias built into AI decision-tools, unreliable data, lack of executive buy-in, and watch out for AI ‘hallucinations’ (false information, such as the lawyers who recently relied in a court filing on incorrect case law generated by ChatGPT).


Risk and Security:

  • Pros: a risk management tool, helping to mitigate security threats such as cyber-attacks and data breaches.
  • Cons: mis-using data can lead to privacy breaches and legal consequences (damaging a company’s reputation), and the overarching regulatory and legal environment risks being outpaced (for instance, most Regulators do not currently allow robots to sit on a company’s board).

D&O implications

As with most things, there is no one size fits all. Whilst investing funds in a cutting-edge AI strategy may work for some corporates (e.g., certain start-ups and larger public companies), this may not be the answer for others, such as more established smaller outfits. Some companies may look to target niche AI opportunities, whilst a lot may also depend on the type of industry the company is operating in (e.g., manufacturing/healthcare vs. professional services/sport) and where they sit in the AI product chain (e.g., designer vs. intermediary vs. buyer).

From a D&O risk profile perspective, as we alluded to in our recent ESG article, it can sometimes be a case of damned if you do and damned if you don’t. For example:

  • Missing an AI opportunity vs. misrepresenting/wasting costs on an ill-advised AI foray.
  • Making an ill-advised board decision in the absence of AI-supported data vs. relying on erroneous AI-generated data.
  • Failing to stave off a cyber-attack vs. misusing private/company sensitive data.

Painting such a gloomy picture is partly deliberate, to underscore the point that we are some way off a trustworthy, finished AI solution (according to ChatGPT, Queen Elizabeth II is the current monarch of England, caveated with the comment that the latest version of ChatGPT’s ‘knowledge [of current affairs] only goes up until September 2021’).

Also, some solace can potentially be taken from ChatGPT which advises, “To mitigate these pitfalls, companies should have a well-defined AI strategy, ensure transparency in AI decision-making, prioritise ethical considerations, invest in data governance, provide ongoing training for employees, and carefully assess the costs and benefits of AI adoption.”

Clearly, there is a need for proper governance of a company’s AI strategy, whatever its size and industry. However, what ChatGPT appears to overlook is my point above that there is no one size fits all, i.e., the need for a proportionate approach depending on the company’s profile. This again aligns with the conclusion that AI remains some distance away from being the finished, fully automated article. Indeed, for some companies, it may never be the right decision to allow robots into the boardroom (albeit perhaps ChatGPT’s own management team, at its parent company OpenAI, could have benefited from the input of robots during its recent high-profile example of humans falling short when making key decisions in the boardroom).

Insurance implications

So, what does all this mean from the perspective of D&Os looking to ensure their company is purchasing the right scope of insurance cover, both D&O and other products? Fortunately, my view is that the evolving AI landscape should not have a material impact on insurance purchasing decisions, so long as the company in question already purchases a suitably broad set of Cyber, Media, Casualty and First Party coverages to sit alongside its D&O, Crime and EPL products. That said, clients should increasingly expect pre-placement questions from underwriters regarding an insured’s use of AI. Customers should also keep a close eye on any applicable exclusions: for example, any that seek to distinguish between losses caused directly by humans and those linked to AI (noting the lines are becoming increasingly blurred). And let’s not forget there may be some other culpable parties when defending any claims, not least the AI designer/supplier.

Kayzen’s concluding thoughts

At Kayzen, we underwrite each risk based on its merits. We embrace value-add technology as part of our continuous improvement journey and proportionately scrutinise each risk presented to ensure the best customer outcomes. Contact us if you have any questions arising from this article, including regarding your insurance needs.
