
Using AI in workplace investigations: What employers need to know

Posted on 5 March 2025

The greater focus on culture and governance in the workplace in recent years, along with enhanced regulatory scrutiny around whistleblowing, has led to a sharp increase in investigations into employee complaints and conduct. 

At the same time, AI has started to infiltrate the world of work, due to its ability to increase productivity. 

Employers are therefore naturally starting to deploy AI and automation when conducting workplace investigations, exploiting their ability to streamline administrative tasks and provide decision-making support. But as with any use of AI, the potential benefits also carry legal risk, which is amplified in the context of a workplace investigation, as investigations are often a precursor to employment claims.

The key ways in which AI can be used to assist with a workplace investigation are as follows: 

Automated notetaking 

Documenting interviews is central to the evidence gathering process in an investigation. Whilst investigation meetings will typically be attended by a notetaker, it is not a legal requirement to have a notetaker present and we are increasingly seeing clients using automated notetaking technology for these meetings. The potential benefits of this include: 

  • Freeing up resources and administrative time, as only one individual needs to attend the meeting and the note is automatically transcribed. 
  • The note can be distributed automatically, removing the need for the ancillary process of agreeing the content of the note and inviting the interviewee's comments. This can be very time consuming, particularly on large investigations involving numerous interviews. 
  • An automated note may be perceived as more objective than a human notetaker, lowering the level of antagonism in the investigation process. As a 'verbatim' note, it is likely to reduce disputes over the content of the note and allegations of bias by the notetaker. It should also remove the need to deal with employee requests to record the meeting. 

However, it is important to bear in mind the following: 

  • Automated notetakers may misinterpret speech, particularly if a speaker has a strong accent, there is background noise, or technical jargon has been used. 
  • Depending on the quality of the AI technology, the automated note will always require a degree of human review to amend any errors. Beyond simple spelling mistakes, fundamental details such as names are often mis-transcribed, which significantly reduces the usefulness of the note as an accurate record. 
  • As with all workplace recording, employers will need to obtain consent from the employee before starting any recording. The usual considerations around data privacy and confidentiality also apply to any transcription. 

Document review 

Workplace investigations tend to be document heavy, requiring an employer to collate, review and manage a large volume of documents in a short timeframe. AI and automation technologies offer unparalleled efficiency in being able to sift through large volumes of data, including: 

  • Reviewing documents to extract information by keyword, date range or any combination of factors, so that relevant data can be identified quickly. 
  • On large investigations an employer may want to conduct data and trend analysis, as a way of investigating particular themes or issues. 

However, using AI to locate relevant documents has inherent limitations. We have found that where an investigation is complex and involves multiple different issues, a review platform tends to be less successful at identifying relevant documents automatically through trends or in a nuanced way. Employers should therefore consider at the outset of an investigation, in conjunction with the terms of reference, how and when review platforms can be used effectively, and should be prepared to adapt their approach, as there is no 'one size fits all' solution. 

Generative AI 

Increasingly, employers have their own in-house generative AI tools. 

While all investigations will be fact specific, generative AI can be used effectively to provide template documents (such as invitations to investigatory meetings) or to summarise information, such as condensing an employee's grievance within the investigation report, or to assist with formulating the terms of reference. 

However, employers should also note that: 

  • Personal data and confidential information should never be entered into a publicly available generative AI tool, as this could expose confidential or proprietary information to the public domain. 
  • Any organisation that uses (or proposes to allow the use of) generative AI tools should have an appropriate policy in place for staff, so that the tools are not misused. 
  • While AI can be used to assist with the investigative process, it cannot substitute for human decision-making when reaching an outcome. The investigator/chair must be able to show that they have considered the evidence and reached a clear and objective outcome based on their assessment of the facts, so that they can defend their decision-making process, if challenged. 

With no sign of investigations slowing down in the workplace, AI is undeniably a valuable tool for managing investigations efficiently and avoiding unnecessary duplication. However, as with all AI and automation, it needs to be deployed carefully, and human oversight remains critical to ensure fair, accurate and robust outcomes. 

If you would like more information on how best to manage workplace investigations in your business, please get in touch with your usual Mishcon contact or with a member of the Employment team.
