Review data protection, security and secondary use of user data

Generative AI

Beyond the unlawful collection of training data identified above, data protection and security in respect of user input data is a major concern with generative AI, since user data is often stored and reused for model training. Misuse of publicly available data is nothing new, but generative AI has shone a spotlight on the issue. In particular, there is a concern that data entered by one user could be reused to answer a prompt entered by someone else, potentially exposing confidential or proprietary information to the public.

This concern was a key factor behind the Italian data protection regulator's decision to temporarily ban ChatGPT in April 2023.

It is of course possible to implement generative AI solutions that do not expose user data to such risks, but doing so may limit your options. Data protection and confidentiality considerations are likely to heavily influence the purposes for which generative AI solutions can be deployed within a company. Companies should ensure they have implemented operational measures to prevent personal, confidential or proprietary information from being accidentally disclosed, and should ensure personnel are aware of, and have received training on, those measures.