James Boyle
Partner, Data Team, Mishcon de Reya
Hello everyone and welcome to today's Mishcon Retail Academy session. Today we are looking at facial recognition, and the session is called 'Facing the future – Lawful use of facial recognition in retail'. So who are we? I'm James Boyle, a partner in the Data Team here at Mishcon, and I am joined today by Louise Schofield, one of our incredible associates in the Data Team. We are really pleased to be here, thank you for joining. In terms of the structure that we'll run through in the 45 minutes we have, we are going to start by working through the two types of facial recognition that we tend to come across in our day-to-day practice. Then, for those of you here today who perhaps aren't super familiar with key data protection topics, we'll also introduce some of the really foundational concepts that apply to facial recognition, so we'll look briefly at things like what we mean when we say personal data, what a controller is and what a processor is. We'll then apply all of those concepts to two case studies: one will apply facial recognition for the purposes of, effectively, fraud protection and helping your employees arrive on time, and the second will apply facial recognition in more of a marketing context. We'll close the presentation today with a look at how different facial recognition technologies are being used globally, and then there should be some time for Q&A afterwards.
Louise Schofield
Associate, Data Team, Mishcon de Reya
So a good place to start is with what facial recognition is. It is a technology that uses algorithms, quite often AI, to identify faces in images, and the images can be static or moving. The technology will create a map of each face, and these are often referred to as face maps or face prints; throughout this presentation we will generally refer to them as face prints. In terms of how the technology actually works, it takes measurements between identifiable points on a face and creates a map that can be converted into code, and it is the code that is compared. Different facial recognition models use different numbers of identifiable points, but usually it is upwards of 30, and the output, which is the set of numbers you see in the nice little animation we have on screen, is what's compared. So there are quite a few stages in this, and it is much more processing than the simple image-to-image comparison you might be familiar with, which has been around for a long time. There are two different kinds of facial recognition technology: standard facial recognition and live facial recognition. Standard facial recognition is a one-to-one process, very much like you saw on the last slide with my face and James' face. It's also the kind of thing that is used to unlock your phone, where the face unlocking the phone is compared to the face print stored on the phone. It tends to be something that participants are aware of and actively consent to, and there's very selective processing when it happens. Live facial recognition, on the other hand, is usually deployed through an existing CCTV system or through cameras set up in a space. There is constant comparison and screening of faces happening, and it can be done without participants realising it, so it is a much more mass-scale collection and processing of data.
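To make the comparison step concrete, here is a minimal sketch in Python of the idea just described: a face print reduced to a numeric code, with two codes compared by distance against a threshold. All names and numbers here are hypothetical; real systems derive the print with a trained model and use far more than the four measurements shown.

```python
import math

def compare_face_prints(print_a, print_b, threshold=0.6):
    """Return True if two face prints are close enough to count as a match."""
    if len(print_a) != len(print_b):
        raise ValueError("face prints must use the same number of points")
    # The "code" being compared: a distance between the numeric
    # measurements taken between identifiable points on each face.
    distance = math.sqrt(sum((a - b) ** 2 for a, b in zip(print_a, print_b)))
    return distance <= threshold

# One-to-one verification, e.g. unlocking a phone against a stored print.
stored_print = [0.42, 0.31, 0.77, 0.15]  # hypothetical 4-point print;
probe_print  = [0.44, 0.30, 0.75, 0.16]  # real models use 30+ points
print(compare_face_prints(stored_print, probe_print))  # True
```

Live facial recognition differs mainly in scale: the same comparison runs constantly, matching each probe print against a whole database of stored prints.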
James Boyle
Partner, Data Team, Mishcon de Reya
And so, as we move on to look at the data protection compliance considerations that apply to the deployment of facial recognition technology, I wanted to make sure that we are all on the same page in terms of the key concepts and, effectively, the jargon that we'll be discussing through the rest of this section. Data protection law regulates something called personal data, and a useful question to ask about whether something is or isn't personal data is: does the information relate to a living individual you can identify? Classic examples of personal data are those you can see on screen, things like a name, an ID number and an IP address, as well as someone's employer and financial details. The term is really, really broadly defined, so this isn't an exhaustive list. Another test I find quite helpful for working out whether something is personal data is to ask whether you hold enough information to distinguish someone from a group. An example: there is someone visible on this webinar wearing glasses and, at the moment, I am the only visible person on this webinar wearing glasses. What we're not saying is that you should treat everything as personal data, but I use that example to illustrate quite how broadly the definition of personal data is interpreted. And because we're talking about facial recognition, it's also worth mentioning that in addition to the typical types of personal data I mentioned on the last slide, there's an additional type called special category data, which is highly relevant to facial recognition. You can see on the slide that these data types are the much more sensitive ones: things like racial or ethnic origin, political opinions, data about someone's sex life or sexual orientation, data relating to their health, and genetic or biometric data. What is particularly relevant to facial recognition and face prints is biometric data, and I think it is worth pausing for a moment to really think about what a face print is. It captures the intrinsically private character of a face. Changing your face is a lot more difficult than changing various other personal data points; it is often significantly more expensive and painful. Face prints can also allow identification with surprising accuracy given their uniqueness; in that sense a face print is very much like a fingerprint or DNA, and so face prints are considered biometric data. One of the tricky things about biometric data is that it will only be special category data if it is used for the purpose of uniquely identifying a natural person. We've flagged that because in some circumstances biometric data may not be special category data, but for the purposes of facial recognition, all face prints are taken for use in identification, whether that's live or static, going back to the two types that Louise mentioned earlier, so they should be treated as special category data. So, if you're still with me, what we are going to do now is walk through a couple more of the key legal concepts to be aware of, and then we will start applying them to the case studies.
So once we've established that the face print is special category data, the natural next step is to consider whether you are acting as a controller or a processor of that data. The answer to this question is critical because it hugely alters your compliance obligations. As a very rough rule of thumb, if you are a retail business deploying facial recognition you are much more likely to be a controller than a processor, whereas if you are a technology provider, perhaps licensing your facial recognition technology to a customer such as a supermarket, you are more likely to be a processor. You can see on the slide the kinds of indicators and questions that can help you work out whether you are a controller or a processor. The other key thing to mention here, and where it can get more complex, particularly in a large retail environment, is where the owner and operator of particular retail sites are different, or where a landlord wants to be able to share face prints across a couple of different estates they have. In those circumstances it is possible that a relationship called joint controllership will spring up. This is effectively where you have more than one party deciding how and why the data is used and how it is shared between them. It is not something we are going to go into in a huge amount of detail today because I am keen to get on to the practical case studies, but it is worth flagging that if you are in a franchise arrangement, or a kind of owner/operator/landlord-style arrangement, and you are looking at facial recognition technology, joint controllership may well be relevant.
And so, once we've established that we are processing personal data and special category data, and that we are a data controller of that information, the key next step is to identify what's called a lawful basis for that processing. I find it helpful to think of the lawful bases almost like a menu or a shopping list: effectively, unless you can identify at least one of them, your processing or use of that data won't be lawful. These are the Article 6 bases you can see listed here. I won't repeat them all for you, but the one we see our clients rely on most often is legitimate interests. The other one, which is more contentious to rely on and which we'll also cover today, is consent.
And I mentioned earlier that if you are processing special category data it can be a bit more difficult, and additional rules apply. One of those additional rules is that you also need to identify an Article 9 lawful basis for the processing. The two we see being particularly common for our clients to rely on when they use facial recognition are explicit consent, which again is generally quite a problematic one to rely on, and the processing being in the substantial public interest.
Louise Schofield
Associate, Data Team, Mishcon de Reya
So the Information Commissioner, who is the regulator for data protection in the UK, has issued guidance about live facial recognition, and we have extracted some of the issues identified in that guidance, added some of our own, and split them into categories. One of the issues is the automatic collection of data at scale, and there are lots of things to consider with this, particularly some of the data protection principles that you need to abide by. It is very difficult to restrict processing, particularly if you are using live facial recognition in a public space, and that can lead to things like indiscriminate processing. There are also considerations around the collection of data from children and vulnerable people, because a higher level of protection is afforded to those groups. Individuals also cannot control whether their data is being collected and processed in public spaces, so the requirements surrounding transparency and necessity are particularly topical for the ICO: you need to make sure that you are being transparent and explaining how and why data is being processed, and we will go into this in a little more detail in the case studies.
Other things to consider, which are perhaps more practical, concern how effective live facial recognition technology actually is. Many programmes that use facial recognition are not statistically accurate and can produce false positives and negatives, so you need to consider the impact of that on the purpose you are trying to achieve, and also make sure that you are scrutinising the technology you are implementing appropriately (a short sketch after this paragraph illustrates how those error rates might be measured). AI systems that run facial recognition are notorious for racial and gender bias, and this holds regardless of where the model comes from: Chinese models often misidentify Western faces, and Western models out of Silicon Valley have been known to struggle with accuracy for non-white faces, so it is an issue wherever you get your software from and you need to make sure you are interrogating it. Then, finally, function creep. If you are implementing facial recognition for security purposes, you need to make sure that you are not then using the technology for something else just because you can; in the example of, say, supermarket security, you need to make sure you are not then using it to track footfall in and out of your store.
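To make the false positive/negative point concrete, here is a minimal sketch of how a deployer might measure both error rates on a labelled trial set before trusting a matching threshold. The scores and the threshold are entirely hypothetical; a real assessment would use the vendor's own similarity scores and a much larger sample, broken down by demographic group.

```python
def error_rates(match_scores, non_match_scores, threshold):
    """Fraction of known non-matches wrongly accepted (false positives)
    and known matches wrongly rejected (false negatives) at a threshold."""
    false_positives = sum(s >= threshold for s in non_match_scores)
    false_negatives = sum(s < threshold for s in match_scores)
    return (false_positives / len(non_match_scores),
            false_negatives / len(match_scores))

# Illustrative similarity scores from a labelled trial
matches     = [0.91, 0.88, 0.56, 0.95, 0.83]  # same-person pairs
non_matches = [0.12, 0.65, 0.30, 0.08, 0.22]  # different-person pairs

fpr, fnr = error_rates(matches, non_matches, threshold=0.6)
print(f"false positive rate: {fpr:.0%}, false negative rate: {fnr:.0%}")
# Raising the threshold cuts false positives but misses more genuine
# matches; checking the rates per demographic group helps surface bias.
```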
The ICO has also summarised some of the legal requirements, which we've included on the slide. I don't propose to go through them all, but they are very important and they are of a theme: compliance with the data protection principles, identifying and meeting the lawful bases James explained earlier in the presentation, and ensuring that data subjects can exercise their rights. A Data Protection Impact Assessment (DPIA) will almost certainly be needed, and we'll talk a little about that later; you may also need to consult with the ICO, and they are very open to that in their guidance.
I also wanted to flag some points from the EU AI Act. We are not EU lawyers, but it does perhaps suggest the direction of travel for regulation in this area: the EU AI Act places quite strict restrictions, banning the use of facial recognition technology except for very specific purposes. I've summarised a little of what the EU AI Act says here, but what's interesting is that since the Act was published, interest groups like Amnesty International have actually said it doesn't go far enough and have called for more comprehensive bans, so it is an area where we might see a bit of push and pull on the regulation.
James Boyle
Partner, Data Team, Mishcon de Reya
So what we're going to do now is anchor some of the key concepts in a couple of case studies, and I'd like to introduce you to two scenarios. In the first, facial recognition technology is being introduced to improve warehouse security at Boyle Store Co, which has approximately 200 employees, a mix of permanent and short-term staff. We're talking about a fixed location here, and the only use is to enable entry to and exit from the warehouse. The second scenario, as part of this case study, looks at theft deterrence: implementing facial recognition to check whether a person coming into Schofield's Corner Shop is someone who has stolen from that shop before. It's powered by a third-party provider and will involve monitoring the public as well as engagement with internal and external watch lists.
Louise Schofield
Associate, Data Team, Mishcon de Reya
And one thing to know about watch lists before we move on is that the ICO addresses them specifically in their guidance. They ask that, when using them, you consider the social stigma that attaches to people placed on watch lists; that if you're using shared watch lists, or contributing to law enforcement watch lists, you consider whether that data sharing relationship is compliant, which would include things like the controllership questions James was talking about earlier; and that you limit retention to prevent people staying on watch lists for an unnecessary period. They also specify things it wouldn't be appropriate to place someone on a watch list for, which include minor offences, face prints captured where people would have had no expectation that facial recognition was taking place, and face prints extracted from third-party sources like social media, other online resources and some third-party services. They say it is unlikely that any of those sources or reasons for adding people to a watch list would satisfy the requirements of necessity and proportionality.
James Boyle
Partner, Data Team, Mishcon de Reya
So if we look at the benefits cited for facial recognition technology, some of the things our clients will often think about when implementing it are that it can help secure the site and its staff, remove the risk associated with stolen passes, and that it's cheaper than hiring 24/7 security guards. If we compare this to the implementation at Schofield's Corner Shop, it potentially increases the ability to prosecute and acts as a general deterrent as well, and the watch list means that crime can be prevented before it happens. One of the problems here, though, is that data protection law is really anchored in proportionality and fairness, and it doesn't strike me that the technology being cheaper, for example, or the ability to prevent crime before it happens, are particularly compelling ways of satisfying the legal requirements under those data protection principles. So, as we work through the issues associated with implementing this tech, let's take the lawful bases for warehouse security at Boyle Store Co first, as this one concerns employees. Typically, the lawful bases we would see our clients rely on in this space are legitimate interests with the substantial public interest condition, or going down the consent route. The consent route is particularly difficult in an employer-employee context. The starting point, I would say, is that consent shouldn't be relied on as a lawful basis unless you can show that you have genuinely offered employees a range of alternative, far less invasive options, that choosing any of those options won't impact their career progression, and that they won't be at all pressured to choose a particular option. It is quite rarely used by our clients and it's more difficult to implement, but I would say that if very careful thought is given to the various alternative mechanisms being offered, consent in an employee context could be viable, subject to all of those caveats. The other issues we then look at around employee monitoring are, in particular: is there a less invasive option here, how are employees likely to react to being monitored at work or having face prints taken of them, and what is the public reputational angle. If we were to draw a conclusion on this one, it's that the warehouse security use is viable in principle, provided it isn't forced on employees; it requires a detailed set of compliance considerations to be made, including how to minimise the invasive privacy element of the technology for employees, which Louise will cover a little later. If we then look at the theft deterrence example, here we have much broader scanning of a user base: we are not limiting this to 200 employees, each of whom is fully informed and given a genuine choice. On this implementation, I think the lawful bases that would apply are likely to be the same as for warehouse security, with the caveat that consent is effectively non-viable here, because it's essentially impossible to collect consent to the standard the legislation would require for this technology. It is also quite clear that public support for this is fairly low.
We also come up against the same problem here: less invasive options exist. Certainly, in my view, on the theft deterrence side, with its general scanning of members of the public, it's extremely difficult to justify this approach from a data protection perspective.
Louise Schofield
Associate, Data Team, Mishcon de Reya
It's also something that is being challenged. Big Brother Watch is one of the big organisations challenging businesses that are trying to implement facial recognition. They recently announced that they had successfully negotiated with a gym group to remove facial recognition from 55 of its locations, and they've also written complaints to the ICO, including one in relation to the Co-op's use of facial recognition in its stores, which is available on Big Brother Watch's website. I encourage you to take a look at it if you are considering implementing this technology, just to see the kinds of challenges they raise there, which include comments around transparency and fairness as well as whether the lawful bases relied on were the appropriate ones for the stores' use of live facial recognition.
So, in order to mitigate some of these issues, there are a few things you can do. The first is definitely to complete a data protection impact assessment, and it can be quite difficult to start from scratch with those, particularly when it comes to identifying the risks, on which I think James had something he wanted to add?
James Boyle
Partner, Data Team, Mishcon de Reya
The point I would make here around DPIAs is that if you are someone tasked with completing a DPIA, I often find that the hardest part comes at the start of the process: working out what the risks could be. It is really difficult; you effectively almost have to lock yourself in a room and just think about how all of this could go wrong, how people might object to it, and the more general moral, ethical and social issues associated with it. So if you are in the position of having to complete a DPIA or a compliance assessment in connection with facial recognition, the resource I would suggest looking at is an academic paper by Daniel Solove called 'A Taxonomy of Privacy'. If any of you would like a copy or a link, I am happy to send it to you after the session. The reason I tend to use it as my touchstone when completing DPIAs is that it gives a really broad overview of a huge range of privacy risks that could come up in different scenarios. I was looking at it this morning in preparation for today, and the ones that struck me as particularly relevant for facial recognition DPIAs are the risks of aggregation, insecurity, surveillance, secondary use, intrusion, exclusion, exposure and distress. So I really encourage anyone in this space struggling to work through a DPIA or a compliance assessment to dip into some of that academic text. I find it consistently incredibly helpful.
Louise Schofield
Associate, Data Team, Mishcon de Reya
And then, in addition to that, there are some other things you can do. You need to interrogate your suppliers to see what they are doing with the data: anyone storing the face prints needs to have appropriate technical and security measures in place. As we have already discussed, controllership will be a key issue to consider when entering into agreements with suppliers. If you are using third-party software or a third-party CCTV or camera system, you also need to review those agreements and make sure, again, that the data protection clauses and security measures are in place. We've already briefly mentioned watch lists, but if you are contributing to a watch list that is supplied by several different controllers, you may need to consider how the relationships and data flows work there. We don't suggest reusing the data, but if you are reusing data that you collect for something like watch lists, you may also want to consider whether that has any implications for the contracts you have in place with your suppliers. We also suggest you review your processes, and that could include requiring human intervention before any action is taken when a face print is matched with an individual who might be on a watch list; that has been used in other trials of live facial recognition as a way of reducing statistical imbalance and false positives. You also want to consider what an appropriate risk level is for your business: your business may have more or less appetite for risks like ICO enforcement or public relations issues, so when completing your DPIA you might want to consider that too. We do suggest you run a trial; this might help you assess how your customers or your employees, depending on where you are implementing the technology, feel about its use, and it runs alongside engaging with your customers. If they are really against it, it won't be of any benefit to you to suffer a reduction in customer support. And then perhaps the most important thing, after the DPIA, is to consider whether there are other, less invasive options that are more suitable. For these two case studies there are a variety of alternatives: for the employee example, security guards, doormen, electronic security passes or ID card swipe systems are very useful. If you are in more of a retail environment, then things like electronic security tagging, which a lot of shops already have in place, can be quite useful alongside other measures like CCTV and security guards, and in some scenarios, if you have luxury goods, some kind of interlocking or mantrap door may be more suitable.
James Boyle
Partner, Data Team, Mishcon de Reya
And so what we'll do now is come on to a second series of case studies, applying facial recognition to marketing. Let's say there is a range of digital marketing displays out on the streets and on the escalators down to tube stations. The objective of facial recognition here is to assess and profile people passing by, to make sure the adverts people see on these displays are tailored to them. The displays also have the ability to track eye movement, checking whether the individual is actually engaging with what's being displayed, and can infer emotions based on the facial recognition. From the organisation's perspective, the thinking behind implementing this tech could be that these kinds of real-time statistics and metrics on engagement are incredibly useful, the sort of data you don't usually have access to, and that they help ensure advertisers can reach users who are more directly engaged and build a more detailed profile of those individuals than ever before.
The real issue with this is that, to cut a long story short, I don't think it is possible to do this in a compliant way. There is incredibly low public support for this kind of ad tracking and deployment of facial recognition technology, and the emotion recognition side of things has previously been called out by the ICO and other regulators as technology that is simply not accurate enough and can't be deployed legally. For this kind of implementation it seems extremely difficult to find a lawful basis to rely on: I don't think there is any way you could argue that this is in the substantial public interest, and I also think it's extremely difficult to collect consent from people as they walk past these displays, because I don't think anybody would give that consent. It also strikes me that there is a range of much less invasive options that could be considered instead.
As we move on to look at the mitigation and compliance considerations that would apply if this kind of tech were being introduced, you'll see that the checklist or to-do list here is largely the same as for the previous case studies. But the real thing to keep in mind, particularly when you start to think about less invasive options, is that there simply are much less invasive options here. They might not be, strictly speaking, as effective as facial recognition technology, but as you go through the various balancing tests that data protection requires, I certainly can't see a way that completing those tests properly would give you the conclusion that this use of facial recognition technology is compliant and appropriate. The option that leaps out at me here is Bluetooth tracking, which is often done by retailers: as you walk past particular stores, if you have the app installed, you will often receive a notification from those stores. This strikes me as a still fairly invasive, but much less invasive, option than facial recognition technology, and it also allows that detailed advertiser tracking.
Louise Schofield
Associate, Data Team, Mishcon de Reya
So finally we're just going to move on to some more general points about facial recognition and how it is used around the world. The first thing is that it's not new technology. We think of it as new, but the first trial I was able to find records of actually took place in London in 1998. I will say, though, that it has never really been very popular when trialled, and a lot of the media surrounding it has been fairly critical. When facial recognition was implemented at the Super Bowl in 2001, interestingly pre-9/11, there was a widespread backlash against it and it earned itself the nickname 'Snooper Bowl'. Commercial systems have been available since 2007, so it has slowly been gaining traction in the surveillance and security space, and more recently the Las Vegas Police have been consulting with the Las Vegas Raiders, an NFL team, about introducing facial recognition into their stadium as part of their security. There are ongoing discussions in the media and among the public about whether that is appropriate and whether it will be rolled out to other NFL stadiums.
Facial recognition is very topical. A lot of people are aware of it because it does get a lot of press; China's social scoring is a very well-known example, which I think gained a little more recognition after the Black Mirror episodes that highlight facial recognition. But it's also something that's likely to have happened to everyone who is online, as Clearview AI has been in the press for the past few years for scraping images online to look for faces to train its algorithm, and in some cases Clearview AI's programme is being deployed by law enforcement, so it is something that impacts us all. Many people will be aware of Face ID on Apple iPhones. When it was introduced, I think in 2017, there was quite a lot of pushback; people didn't like the idea of facial recognition being used on a mobile device, which, interestingly, echoed a similar backlash when Touch ID, the fingerprint-based unlocking system, was introduced. So you can see there is a general public dissatisfaction with the use of biometrics, whether that's fingerprints or face prints, but even so the technology is being used. In the UK it is being tested in the Premier League and at some rugby matches, but there are challenges in place from organisations like Big Brother Watch and also Liberty, who brought the only facial recognition case in the UK, against South Wales Police. That case concerns law enforcement use rather than commercial use, but it is definitely something at the forefront of the public's mind.
Something else to consider is how facial recognition fits more widely into big data, and there are two examples on screen of how other tracking, through loyalty schemes, identified information about individuals. I particularly remember when the Target example came out, because a lot of people were outraged that shopping habits could be used to find out that someone was pregnant before their family members did. So when you are deploying this technology it's good to think about what I like to call the creepy factor, and how your deployment of facial recognition will fit in with your other tracking and data collection.
I am quickly going to talk about what's happening in Australia, because two retailers there, Kmart and Bunnings, implemented facial recognition technology very much in the way you saw in our first case study, relying on signposting. Kmart is a kind of grocery and home goods store, and Bunnings, which sells hardware goods, is similar to stores here like Screwfix or Homebase. Now, the law is slightly different in Australia, but following a challenge from CHOICE, an advocacy group, the Australian Information Commissioner essentially said that it wasn't appropriate and really pushed back on its use by these two retailers, because consent should be required for such sensitive information, given that you can't change your face, and people wouldn't have been aware that facial recognition was happening. On the public relations point, this was very widely reported in the press both in Australia and in the UK; the article on the right-hand side of your screen about Bunnings was published in The Guardian in the UK. So it is something to consider when deploying this technology: what would your plan be from a public relations perspective if you did get pushback?
And then, finally, the use of facial recognition is being challenged everywhere, but it is being used. On the slide there are a few examples of different approaches taken by different regulators. It is worth bearing in mind that the lawful bases in each of these countries are slightly different, but it is interesting to see that places like San Francisco have enough support to ban facial recognition across the whole city, which is particularly interesting when you consider the number of people from Silicon Valley likely to be in San Francisco. The Council of Europe point at the bottom of the slide is also quite interesting, particularly in light of the EU AI Act, because it does suggest that more regulation is likely in this area.
James Boyle
Partner, Data Team, Mishcon de Reya
And so, in terms of some key takeaways from today's session: when deploying facial recognition technology, it is always worth keeping in mind that less invasive methods almost always exist, and the principles of data protection law put great pressure on deployers of these systems to implement those less invasive methods, or to justify in detail why they have decided to go down the facial recognition route instead. Generally speaking, our view is that it is extremely difficult to justify the use of facial recognition at the moment. It's possible that as the technology moves on and cultural attitudes change, the legal position will flex around this as well. One of the key things to do if you are implementing facial recognition technology is to complete a data protection impact assessment, or DPIA, and I think many of you listening will be familiar with the requirement that if you don't feel you can satisfactorily mitigate the risks in a DPIA, you need to consult with the ICO. It's likely that if you are deploying facial recognition technology you may well also need to appoint a data protection officer. We've also covered, in a decent amount of detail, the requirements around the data protection principles, the lawful bases in Article 6 and Article 9, and ensuring data subjects can exercise their rights, so effectively making sure that an individual can write in and say, 'I want you to erase my data' or 'I want a copy of the data you hold about me', and exercise the various other rights they have, as well as taking a data protection by design and default approach. The real challenge here for retailers looking to deploy facial recognition technology is that you are almost invariably reliant on your supplier doing these things, because your compliance will partially be driven by how the supplier has developed their technology, so due diligence of the supplier is critical. Those really were the key takeaways, and with two minutes to spare, that brings us to the end of the presentation. I hope those of you watching today have found it useful, and if you have any follow-up questions, please feel free to reach out to us over email, or drop them in the chat.
I can see we've got two questions at the moment. The first asks: 'You referred, on the lawful bases, to explicit consent and the substantial public interest condition in Article 9. Which substantial public interest condition were you referring to?' That's a great question. For those of you who aren't as familiar with data protection law as the person who asked the question, before you can rely on the substantial public interest condition you essentially have to go through a list of pre-defined examples of what is in the substantial public interest. The two we see relied on most often are preventing or detecting unlawful acts, and protecting the public. It is also worth mentioning that as you go down that route, it is likely you will also need to complete something called an appropriate policy document, which helps underpin your argument that your processing is in the substantial public interest.
The second question asks: 'What would count as a minor crime? Would pickpocketing or shoplifting count as a petty crime, and does this not conflict with the intention of the shops, given the crimes likely to take place in shops?' I think that question highlights a tension around facial recognition generally, which is that the legislation puts the onus on the data controller to justify its decision-making: given how invasive facial recognition technology is, is it justified by the harm being dealt with? So I could see, for example, how a shop that is really struggling with petty crime may be more inclined to introduce facial recognition technology, and may be more successful in doing so, than another shop that has essentially no evidence of that crime taking place and no evidence of having tried much less invasive measures without success.
We've also had a question asking for the link to the paper on the taxonomy of privacy – yes, absolutely, we'll make that available to you. Thank you for asking.
I think we are now over time, so we'll close here, but again, if anyone has any follow-up questions, please do reach out to us. I really hope you've found this engaging and useful, and thank you very much.
Louise Schofield
Associate, Data Team, Mishcon de Reya
Thank you for joining.
Mishcon de Reya
It’s business. But it’s personal