Emma Woollcott, Partner in Mishcon Private and Head of the Reputation Protection and Crisis Management group, was quoted in an article in The Times discussing the reputational risks that inaccurate AI-generated content poses to individuals.
Speaking to The Times in response to news that ChatGPT falsely accused a law professor of sexually harassing one of his students, Emma warned that online readers may wrongly assume that ChatGPT's content is always accurate.
She commented: “Because it is generating content from sources that are available online, readers may well assume it is true. Whilst OpenAI warns that its new technology may generate ‘inaccurate or misleading’ results, there is a real risk that people will rely on it, and that defamatory allegations created by AI will cause serious harm to reputations.”
Emma further noted that, as a British citizen, the professor would face difficulties if he were to sue. "The SPEECH Act prevents the enforcement of foreign defamation proceedings which are not consistent with the US's First Amendment," she explained.
Read the full piece (subscription required).