Fake news stories are increasingly common. With growing awareness that readers cannot believe everything they see in the news and on social media feeds, new initiatives are being devised to tackle the problem. In Germany, hefty fines are now levied on social media companies that fail to remove fake news and hate speech. Facebook, meanwhile, has announced a new system to fact-check content and warn users against sharing it.
Lawyers are being proactive in protecting their clients from the consequences of fake news that crosses the line from mere misinformation into defamatory content with the potential to cause serious reputational harm.
Although they may seem like the obvious legal tool, injunctions against defamatory content are exceptionally rare. Whereas the courts will act quickly to prevent the threatened publication of private or confidential information, judges take the view that, as a general rule, publishers should only be forced to take down false and defamatory stories after lengthy legal proceedings, on the basis that damages and a corrective statement after the event are an effective remedy. It is almost impossible to persuade a court that the need for an urgent interim injunction outweighs the public interest in freedom of speech. There is an added challenge where the source of the news is, for example, an anonymous blogger who may be difficult to identify and target with legal action.
Nevertheless, in July, Mishcon de Reya stopped a concerted campaign of false and doctored stories targeting a British businessman, posted on social media and fuelled by targeted emails sent to key shareholders. Even though the author had taken steps to avoid detection, we secured a defamation injunction against 'persons unknown'. Once in place, the injunction – which was served and acknowledged by email – made it a potential criminal offence for anyone to share the material going forward and, crucially, was secured before the false and malicious content went viral.
This recent success, thought to be a legal first, opens a new front against online abuse and suggests the possible revival of a tool that, in the right circumstances, has a place in our digital age. Action to prevent reputational harm, particularly in relation to fake news and misinformation, must be swift and decisive in order to be effective.