Jaymin Kim
Senior Vice President of Emerging Technologies with Marsh's Global Cyber Insurance Center
United States
Harnessing technological progress means balancing potential risks and rewards. In technology, 2023 has been the year of generative artificial intelligence (AI), with ChatGPT, Bard, and DALL-E becoming household names with remarkable speed and scale of adoption. For example, ChatGPT acquired some 57 million monthly active users in its first month; by comparison, TikTok took nine months to reach the same user base.
Generative AI is a type of artificial intelligence capable of creating new and believable content, such as highly technical text, realistic audio, and lifelike images. Some estimates suggest that by 2025, 10% of all data will be generated by AI.
With generative AI’s rise come a number of questions for risk and insurance professionals and their organisations. What unique risks and opportunities does generative AI pose, if any at all? Why is there heightened attention around generative AI, when other forms of AI have been around for decades? What risks will insureds, brokers, and insurers need to navigate as generative AI evolves and interacts with other emerging technologies?
While headlines tout both generative AI’s extreme risks and opportunities, a balanced perspective is critical to making informed decisions and managing risks responsibly. To stay relevant and competitive, companies will need to learn how and when to leverage generative AI to optimally achieve objectives, such as realising operational efficiencies, increasing customer satisfaction, and developing new products and services.
Companies will need to strategically assess how and when to adopt generative AI systems, partner with vendors, implement appropriate governance and risk management protocols, and train employees with new skillsets, such as prompt engineering.
Many risks associated with generative AI are extensions of existing, familiar risks, such as data privacy, which has been a concern for decades. Misuse of technology to generate harmful content has long been associated with social media platforms. Potential intellectual property rights infringement from content generation is a familiar risk that many industries, from music and publishing to software development, have grappled with historically. Technological errors have existed since the advent of technology.
These risks may become more concentrated or surface in new circumstances as generative AI is applied to increasing and diverse use cases, but they remain extensions of existing, familiar risks, which may generally be addressed by existing casualty, media, cyber, and first party insurance products, among others.
However, new risks may emerge from generative AI in two primary ways: through the technology's own evolving capabilities, and through its convergence with other emerging technologies.
As generative AI continues to develop, its creators, service providers, and users need to determine how and when to use the technology, and to proactively anticipate and manage its risks. Companies should continue to verify any outputs when using generative AI, as with all technologies. For instance, when generative AI models produce nonsensical, erroneous outputs — popularly referred to as “hallucinations” — the burden remains, as ever, on human users to verify the accuracy and contextual relevance of such outputs before using them.
As new categories of risks emerge from evolving capabilities and technological convergence, the insurance sector should take a thoughtful, methodical approach to underwriting, pricing, and developing products with the end customer in mind. The lack of historical claims data and legal precedent creates a need to develop proxies to inform product development. It will also be important to develop feedback loops to monitor and anticipate the risks as they emerge and evolve.
The insurance sector will play an indispensable role in shaping how companies balance the unique risks and rewards of generative AI. This includes providing companies with coverage analysis to help understand what risks associated with generative AI may be covered under their current insurance policies, or where coverage may be limited.
Generative AI’s opportunities and risks, while complex, are within our control. Our human agency to understand and navigate them should be at the heart of all discussions about generative AI's future, including how to manage its risks and benefit from its opportunities.
This publication is not intended to be taken as advice regarding any individual situation and should not be relied upon as such. The information contained herein is based on sources we believe reliable, but we make no representation or warranty as to its accuracy. Marsh shall have no obligation to update this publication and shall have no liability to you or any other party arising out of this publication or any matter contained herein. Any statements concerning actuarial, tax, accounting, or legal matters are based solely on our experience as insurance brokers and risk consultants and are not to be relied upon as actuarial, accounting, tax, or legal advice, for which you should consult your own professional advisors. Any modelling, analytics, or projections are subject to inherent uncertainty, and any analysis could be materially affected if any underlying assumptions, conditions, information, or factors are inaccurate or incomplete or should change. LCPA 23/329