There is no escaping the adoption of Artificial Intelligence (AI) to perform daily business tasks across a range of corporate industries. ChatGPT reportedly reached 100 million users in a mere two months, compared with other beloved platforms such as Netflix (10 years), Google Translate (six-and-a-half years), Instagram (two-and-a-half years), and TikTok (nine months). However, the increased use of such technology, particularly in business, introduces significant risks that inevitably raise questions of potential liability.
The next thought for many is how to defray those risks, including by relying on current insurance policies to provide coverage. Despite the critical role AI plays in modern business operations, policyholders may need to be prepared to explain why AI-related losses and claims are covered. Most existing policies do not mention AI, nor do they mention other types of software. Some insurance companies have rolled out language specifically addressing AI, as if it were not covered unless specifically mentioned. This tactic is a bit akin to an auto insurance company arguing that a car accident is not covered because it involved a blue car, and the policy does not expressly state that it covers accidents involving “blue cars.”
The reality is that, depending on the cause, occurrence, claim or other basis for a loss or liability, there may be coverage under existing policies, including where AI is involved. Such policies include Commercial General Liability (CGL), Directors and Officers (D&O), Errors and Omissions (E&O), Employment Practices Liability Insurance (EPLI), and Media Liability and Privacy policies.
The extremely rapid adoption of AI, and specifically next-generation AI, coupled with its less understood risks, raises potentially significant loss and liability exposure. The following discusses some of those uses, the risks, and the insurance responsive thereto.
AI-Based Products
AI has revolutionized products across industries. Running with the auto theme, auto manufacturers have garnered media attention with self-driving cars. A number of them have deployed vehicles with AI-enhanced transportation systems that use route optimization and predictive tracking, resulting in vehicles that transport passengers fully autonomously. Several rideshare apps have already begun beta testing services that offer passengers a driverless experience, generally marketed at no additional cost.
Among the commonly cited risks are system malfunctions or technical errors that may result in accidents, injuring passengers or damaging property. Such accidents would raise multiple layers of coverage issues, including whether the resulting bodily injury and property damage are covered by a CGL policy or by product liability policies.
If the driverless auto is provided through a rideshare app whose operator provides rideshare services but may or may not own the vehicle, there likely are additional layers of legal duties, potential liability and insurance policies. Because the claims still would likely allege bodily injury and property damage, CGL policies most likely would respond. A key takeaway from this example is that the nature of the harm, bodily injury and property damage, is more determinative of which policy applies than the fact that AI was involved. The same concept applies to other scenarios and other policy types.
Decision Making
Financial institutions have reportedly considered, and in some cases deployed, AI in several ways, including for customer engagement, credit scoring, lending decisions and predictive analytics. A growing controversy, however, concerns banks' use of AI to select which loan applications to approve. AI models evaluate a potential borrower's behavior based on the data input into the system, and may be relied upon to reach conclusions about lending risks, employment prospects and creditworthiness. There are arguments that AI enhances both the accuracy and the efficiency of processes like loan decisions.
But AI systems can perpetuate, and in some cases exacerbate, existing biases if trained on biased data, which could lead to the unfair treatment of certain groups of applicants. Whether the bias is perceived or actual, claims and regulatory investigations could follow. Claims in such a scenario would likely come from applicants and possibly customers. E&O liability coverage, or for banks, Banking Practices Liability (BPL) coverage, would be the most likely to respond. Other policies, such as EPLI, may provide coverage for third-party claims of discrimination. The involvement of AI does not eliminate coverage: the allegations of bias and discrimination fall within the policy's terms, and whether the policyholder was using AI or other software does not define coverage.
Beyond the financial industry, an array of companies reportedly use AI-powered tools to screen and select job applicants and make other personnel decisions. Many companies build the tool around a model applicant for a given position. The AI tool then is supposed to recognize similar traits or qualities in other candidates or personnel during screening questions and interviews. Unfortunately, the “model candidate” can be limited to reflecting a specific group of people, and to the extent the elimination of candidates treads over legal boundaries, there could be liability. Again, the policies most likely to respond are EPLI policies. They typically provide coverage for acts or omissions alleged to have been discriminatory within the employment context, and claims by employees, prospective employees and former employees often fall under them. The use or involvement of AI should not create an impediment to coverage.
Advertisement and Marketing
Business use of AI for advertising and marketing has increased drastically. Generative AI provides additional means to create more targeted advertisements that reach specific audiences. Generative AI systems can create content, including images, music or videos, based on input, and the output can mimic human creativity. Businesses also use generative AI to develop “contextual advertising,” which uses data gathered over time to create and present advertisements based on a customer's browsing history or preferences. Additionally, generative AI can be used to predict customer responses to various marketing strategies, allowing businesses to adapt to consumer response in real time.
To accomplish these targeted advertising approaches, the generative AI system is both gathering information from third parties and generating output directed at third parties. Moving in real time, the output directed to one third party may have been gathered from another third-party customer. The output may also consist of data input by the entity that developed the AI tool and the entity deploying it, which may not be the same entity. Gathering and using data this way risks violating data privacy laws, and AI systems could also violate those laws by repurposing personal data or using it inappropriately.
Similarly, a risk of AI is that the output is based on input that was not entirely accurate. Such inaccuracy-related output problems could lead to claims of false statements or representations. Misinformation risks are a growing theme in generative AI. In the same vein, relying on inaccurate information gathered about a third party could itself be problematic, and the likely incorrect output would compound the problem. Misinformation can arise in a variety of ways and lead to output that creates potential liability.
Where the gathering or the output leads to data privacy issues or claims, a cyber policy likely provides coverage for such claims. Many cyber policies provide coverage for claims of privacy violations and data privacy violations. Similarly, Tech E&O policies, designed to provide coverage for companies providing high-tech services, may also provide coverage.
In addition, where the output involves advertising or other published materials, a media liability policy may provide coverage. Such policies cover statements made in the public context, including on social media and other platforms, and alleged harm from such publications, including claims for defamation and other harm, is covered.
Also, CGL policies contain a coverage part for “Personal and Advertising Injury.” It commonly covers a host of tort claims, including defamation and certain statutory violations. A number of courts have already found that such coverage applies to digital publications and social media, and a number have also found coverage for statutory data privacy violations, such as violations of the Illinois Biometric Information Privacy Act (BIPA). It is no stretch to apply that coverage to AI-related publications that may violate data privacy laws.
Given the complexity of generative AI, when problems arise they can be expected to be confusing and require detailed investigations. Policyholders should remember that cyber policies provide coverage for costs associated with investigating incidents of data breach, failed security measures, data manipulation and data privacy failures.
Conclusion
There are many other instances of generative AI being put to use, including publicized uses in healthcare, law firms and retail. Other risks include regulatory compliance, intellectual property infringement and malpractice, to name a few. Businesses cannot afford to ignore generative AI, and it brings significant new and complicated risks; the good news for policyholders is that insurance policies already are designed to respond to a number of those risks. Insurance should help defray those risks and, in some cases, transfer them entirely to insurance companies. Policyholders may need to be prepared to overcome initial resistance and press for coverage.
Sources:
- Ethan Lee, “Top 8 Examples of Profitable AI Business Ideas,” Tech Bullion, January 27, 2025, https://techbullion.com/top-8-examples-of-profitable-ai-business-ideas/.
- Pedro Palandrani, “Generative AI, Explained,” Global X, March 3, 2023, https://www.globalxetfs.com/generative-ai-explained/.