ChatGPT and its successor, GPT-4, are having a moment. Every college student is using them for research, and every naturally occurring intelligence is talking about them. Their creator, OpenAI, has secured a massive funding stream from tech giant Microsoft, and millions upon millions of people have signed up for the service. The system has even found a dedicated userbase who are substituting ChatGPT for therapy.
That success, however, is only part of the story. Behind the headlines lie some alarming, largely unaddressed issues, which threaten to undermine public confidence in AI and to invite substantial regulatory scrutiny of professional users.
Late last month, ChatGPT suffered a substantial data breach: a system bug allowed users to view one another's private conversations with the chatbot. This prompted Italy's data protection authority to ban ChatGPT from use in the country, and other European data regulators are now taking a very close look at the operation of the system. It cannot be denied that these are substantial setbacks for the nascent system.
In the European context, ChatGPT is a processor to which all elements of the GDPR and associated Member State legislation apply, as does the U.K. GDPR. Article 32 requires it to "implement appropriate technical and organisational measures to ensure a level of security appropriate to the risk." Given the recent breach, the Italian authorities are, at the very least, unhappy that this obligation has not been met. However, the obligations placed upon controllers and processors go beyond that: there must also be a lawful basis for the processing of Personal Data. It is this latter point that has raised the most concern.
Many users of ChatGPT are deploying the system for content creation in all kinds of fields, including journalism, law and marketing to name but a few. These uses, as well as more casual uses, may result in the processing of Personal Data and Sensitive Personal Data.
The GDPR defines Personal Data as any information that relates to an identified or identifiable living individual. The innocuous request to ChatGPT "Create invitations to my CFO, John Smith's, retirement party, in Paris" would necessitate the processing of Personal Data. If John Smith has not consented to this processing, and no other lawful basis applies, a GDPR breach has occurred. This breach, of course, is trivial and unlikely to cause any person trouble. It merely gives a flavor of how easy it is to accidentally process data unlawfully using an AI platform.
However, the real concern arises in the case of Sensitive Personal Data. This is data which reveals racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, as well as data concerning health or data concerning a natural person's sex life or sexual orientation. Any attempt to use any AI system to deal with data such as this should be avoided at all costs, until the regulatory regime has caught up.
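One practical precaution, short of avoiding these tools entirely, is to scrub obvious personal identifiers from a prompt before it ever leaves your systems. The sketch below is purely illustrative: the regex patterns, the `scrub_prompt` function, and the placeholder tokens are all assumptions of this example, not features of any real product, and pattern-matching of this kind will miss many identifiers. Redaction is a risk-reduction measure, not a substitute for a lawful basis for processing.

```python
import re

# Illustrative only: strip obvious personal identifiers from a prompt
# before sending it to a third-party AI service. Real compliance requires
# a lawful basis for processing, not just client-side redaction.

# Simple patterns for emails and phone-like number runs (hypothetical,
# deliberately naive -- they will both over- and under-match).
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+(?:\.[\w-]+)+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def scrub_prompt(prompt: str, known_names: list[str]) -> str:
    """Replace emails, phone numbers, and known names with placeholders."""
    text = EMAIL_RE.sub("[EMAIL]", prompt)
    text = PHONE_RE.sub("[PHONE]", text)
    for name in known_names:
        text = text.replace(name, "[NAME]")
    return text

prompt = ("Create invitations to my CFO, John Smith's, retirement party. "
          "RSVP to john.smith@example.com.")
print(scrub_prompt(prompt, ["John Smith"]))
```

Even a crude filter like this makes the point: the identifying details in the retirement-party example add nothing to the quality of the generated invitation, so there is rarely a reason to send them at all.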
These issues should raise red flags for any business using an AI system for content creation, particularly medical professionals, lawyers, financial services businesses, and any industry that routinely processes Personal Data and Sensitive Personal Data. While these concerns may seem EU- and U.K.-centric, there are U.S. analogues that may cause similar problems at the federal and state level, including HIPAA, FERPA, the CPRA, the CCPA, and the Bank Secrecy Act, to name a few. None of this even touches on the potential intellectual property concerns that exist within the system.
AI technology has advanced very quickly over the past six months, and it is undoubtedly true that the regulatory regime hasn't yet found its footing. While these tools may seem like huge time savers, the potential regulatory downside is massive. GDPR fines are no joke, and the increased U.S. privacy enforcement by the FTC and other federal and state agencies should give anyone cause for concern. It will not be sufficient to lay the blame on the AI system for failures: users themselves, as controllers of the data, will bear responsibility for requesting its processing by these untested systems.
As the technology matures, so too will the legislation and regulation relating to our new robot overlords. Until then, however, approach all such systems with caution, and hope that the ghost of Isaac Asimov will protect you.