Algorithms play an increasingly fundamental role in our lives, and the workplace is no exception. Accelerated by the Covid-19 crisis, algorithmic management systems, previously associated mainly with the gig economy, are increasingly being adopted in other sectors. The result is a more widespread restructuring of workplace behaviours as technologies influence how, when and where we work.
“Algorithmic management” describes a diverse set of technology-based tools and techniques for managing workforces remotely. It relies on data collection, and sometimes the surveillance of workers, to enable fully automated decisions (made without human involvement) or semi-automated decisions (where human decision-making is shaped and informed by algorithms). This article focuses on the practical risks that arise from using these systems, as opposed to the more general data protection and discrimination risks often associated with AI technologies.
Current uses of algorithmic management
Algorithmic management can be used to manage performance, allocate tasks and shifts, determine pay and working time, and set terms and conditions of employment. Access to better terms of employment, including employee benefits, may also be determined by algorithmically predicted performance.
Algorithmic scheduling is increasingly common in the retail and hospitality sectors, where algorithms forecast the flow of customers using a wide range of data – anything from traffic history and point-of-sale data to weather forecasts. These forecasts are then analysed alongside employee skillsets and performance data to decide which employees should be scheduled on any given day. Automating shift scheduling also potentially removes the ability of line managers to exercise personal favouritism and bias when allocating work.
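As a rough illustration of how such a system might work (not a description of any real product), the sketch below combines a naive customer-demand forecast with basic employee data to propose a shift list. The footfall figures, staffing ratio, skill labels and performance scores are all assumptions invented for the example.

```python
# Illustrative sketch only: a toy demand-forecast-driven shift scheduler.
# All inputs (footfall figures, skill labels, scoring rule, staffing ratio)
# are invented for this example and do not reflect any real product.

from dataclasses import dataclass


@dataclass
class Employee:
    name: str
    skills: set[str]
    performance_score: float  # e.g. derived from past reviews, 0.0 to 1.0


def forecast_demand(footfall_history: list[int], weather_factor: float) -> int:
    """Naively predict tomorrow's customer numbers from recent footfall,
    adjusted by a weather factor (a stand-in for a real forecasting model)."""
    baseline = sum(footfall_history) / len(footfall_history)
    return round(baseline * weather_factor)


def staff_needed(predicted_customers: int, customers_per_worker: int = 50) -> int:
    """Convert the demand forecast into a headcount requirement (ceiling division)."""
    return max(1, -(-predicted_customers // customers_per_worker))


def schedule_shift(employees: list[Employee], required_skill: str, headcount: int) -> list[str]:
    """Pick the highest-scoring employees who hold the required skill."""
    eligible = [e for e in employees if required_skill in e.skills]
    eligible.sort(key=lambda e: e.performance_score, reverse=True)
    return [e.name for e in eligible[:headcount]]


if __name__ == "__main__":
    staff = [
        Employee("Asha", {"till", "barista"}, 0.92),
        Employee("Ben", {"till"}, 0.78),
        Employee("Chloe", {"barista", "kitchen"}, 0.85),
    ]
    demand = forecast_demand(footfall_history=[180, 210, 195], weather_factor=1.1)
    needed = staff_needed(demand)
    print(schedule_shift(staff, required_skill="till", headcount=needed))
```

Even in this toy version, the scheduling outcome depends entirely on how performance is scored and what the staffing ratio is set to, which is precisely the kind of design choice that is usually invisible to the workers it affects.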
Similarly, delivery drivers may spend their entire working day following algorithmically generated instructions on what route to take to their destination. In doing so, they are fulfilling jobs that are themselves being allocated to them by an algorithm. They are given an algorithmically calculated target time that they are under pressure to beat, or see their performance downgraded by yet another algorithm.
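To make that pressure concrete, here is a minimal sketch of how a target time and a rating adjustment could be derived. The assumed average speed, handover buffer and penalty values are invented for illustration and are not any platform's actual rules.

```python
# Illustrative sketch only: how an algorithmically set target time might feed a
# driver performance score. The target formula, thresholds and penalties are
# assumptions made for this example, not any platform's actual rules.

def target_minutes(route_km: float, avg_speed_kmh: float = 30.0, stop_buffer: float = 5.0) -> float:
    """Derive a delivery target from route length, an assumed average speed,
    and a fixed handover buffer."""
    return (route_km / avg_speed_kmh) * 60 + stop_buffer


def updated_rating(current_rating: float, actual_minutes: float, target: float) -> float:
    """Nudge the driver's rating up or down depending on whether the target
    was beaten or missed (toy rule, bounded to the 0-5 range)."""
    delta = 0.1 if actual_minutes <= target else -0.2  # misses penalised more heavily
    return min(5.0, max(0.0, current_rating + delta))


if __name__ == "__main__":
    target = target_minutes(route_km=12.5)
    print(f"Target: {target:.1f} min")
    print("New rating:", updated_rating(current_rating=4.6, actual_minutes=target + 4))
```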
The power of these systems means that much of the day-to-day allocation of tasks and assessment of performance against key metrics, work traditionally done by managers, can be handed over to algorithms.
This has several potential advantages for employers. It results in lower costs (as time-consuming tasks can be completed in seconds), greater efficiency, decisions driven by data rather than gut instinct, and reduced workplace tension, since decisions are based on performance data rather than human bias or favouritism.
However, it also comes with potentially significant drawbacks.
Job satisfaction and trust in algorithmic management
The replacement of managerial trust and dialogue with algorithmic management may impact staff job satisfaction.
A 2019 study of Uber drivers in New York and London found that many driver complaints could be classified into three major categories:
- Surveillance – Drivers were aware that they were being constantly tracked by the app, which monitored indicators such as their speed, GPS location and acceptance rate for new ride requests. Taking the “wrong” actions could lead to penalties or even a permanent ban from the platform. This was found to result in lower driver engagement and morale.
- Lack of transparency – Many drivers felt that there was a power imbalance between the app itself (which is constantly monitoring their performance) and the workers who receive very little insight into how it operates. This affected motivation and employee loyalty.
- Dehumanisation – Without building a relationship with a human supervisor, many drivers felt it was difficult to understand how they were performing, or to feel that they were doing meaningful work.
These concerns highlight the drawbacks of algorithmic management. With this in mind, Barclays recently scrapped plans to install tracking software that would have monitored how long employees spent at their desks and what proportion of their time they spent on each task.
Concerns have also been raised over whether these technologies could be used to control and exploit workers on zero-hours contracts. For example, they might give workers just enough shifts to keep them with the organisation, or auto-schedule shifts in patterns designed to avoid workers acquiring employee status, thereby depriving them of full employment rights.
What can employers do?
The Institute for the Future of Work has identified a number of recommendations for employers using algorithmic management systems. These include:
- Using algorithms to advise and work alongside human line managers, but not to replace them – a human manager should always have final responsibility for any workplace decisions.
- Line manager training on how to understand algorithms and how to handle an ever-increasing amount of data responsibly.
- Greater transparency for employees (and prospective employees) about when algorithms are being used and how they can be challenged, particularly in assignment of work and performance management.
- Employee contracts, collective agreements, technology agreements and employee privacy notices to include explicit statements about the employer’s collection and use of employee data through algorithmic systems.
- Reporting on equality and diversity, such as around the gender pay gap, to include information on any use of relevant algorithms in recruitment or pay decisions and how they are programmed to minimise bias.
Looking forward
There are persuasive arguments for making greater use of algorithms across employment. They reduce favouritism and, if constructed properly, should lead to better matches between tasks and the skills and attributes of the people performing them.
A fundamental problem with algorithms, however, is that they take decisions and power away from supervisors, who in the past used that discretion in part to build the relationships that made the workplace operate effectively.
Whether algorithms actually prove to be more effective once we start observing employee responses to them, and the reduction of supervisor control that comes with them, is an open question. Ideally, we will reach a point where it is possible to strike an acceptable balance between the optimisation norms of algorithms and the fairness and transparency concerns of those who are subject to them.
You can find out more about the EU’s draft AI Act here. And if you have a specific query about the use of AI in recruiting or any other employment concerns, please talk to our Employment and HR team.