Licence to Operate: Mapping the Public Acceptability of Facial Recognition Technology

Author: Dynon, Nicholas1

Published in National Security Journal, 20 October 2024

DOI: 10.36878/nsj20241020.06 


Abstract

Rapid developments in facial recognition technology (FRT) have made its use in contemporary surveillance-oriented security technology (SOST) systems, such as CCTV, increasingly widespread. An artificial intelligence-based technology, FRT is a force multiplier for these systems, delivering security, efficiency and business intelligence gains for organisations that deploy it. At the same time, it is a controversial technology, but unevenly so. Publics tend to accept that the technology has become part of the process of passing through customs at airports, for example, yet its use by retailers has sparked frequent backlash. The frequency of these controversies suggests that security consultants and other practitioners responsible for providing advice to organisations in relation to the suitability of security systems are failing to incorporate the ‘public acceptability’ of potential FRT deployments within their advice. Existing research on FRT public acceptability demonstrates that some deployments of FRT are more publicly acceptable than others. This paper collates the data from existing FRT public acceptability research in order to (i) identify deployment-specific patterns of acceptability, and (ii) develop a model for mapping the acceptability of potential deployments based on a ‘reward proximity’ versus ‘perceived risk’ trade-off. This model may assist actors within the FRT supply chain to make more informed choices in relation to the appropriateness of facial recognition technology in specific deployment scenarios.

Keywords: Facial Recognition Technology, biometrics, live facial recognition, surveillance-oriented security technologies, video surveillance, analytics, CCTV, emerging technology.


Introduction

Deployments of live Facial Recognition Technology2 (FRT) by retailers in New Zealand and Australia have in recent years elicited national media attention – often for the wrong reasons. In a few short years, rapid developments in CCTV3 video analytics have led to a proliferation of FRT deployments amid concerns over its intrusiveness, its accuracy, apparent biases in relation to women and minorities, the lack of transparency around its growth, and the absence of safeguards and legislation regulating its use. Internationally, communities are largely unclear as to exactly what FRT is capable of, and are broadly split down the middle in their acceptance of it. What makes the technology all the more controversial is that, despite this, its deployment is proliferating across more of the spaces we frequent in the course of our daily lives.

Debates in New Zealand and Australia around FRT – and particularly live FRT – are recent relative to comparable jurisdictions internationally. Public discourse on FRT in the UK and US, for example, has been longer running, wider ranging, and higher in profile. In these jurisdictions, significant FRT deployments by law enforcement, government agencies, and the private sector have occurred in the absence of legislation specifically allowing, prohibiting, or identifying limits to them. In this void, limited numbers of sub-state actors, including some municipal authorities and universities, have stepped in to limit or ban the deployment of FRT within their jurisdictions, and many lobby groups have acted to mobilise opposition to what they perceive as a disproportionately intrusive surveillance-oriented security technology (SOST).4 The technological innovations that underpin FRT have been developed in the absence of public awareness, political debate, and legislative accommodation, and – echoing international experience – this has led to recent controversies in both New Zealand and Australia.

New Zealand supermarket cooperative Foodstuffs North Island Limited commenced a six-month trial of FRT across 25 of its New World and Pak’n Save supermarkets in February 2024. Citing historically high rates of retail crime across its stores, Foodstuffs looked to the technology’s ability to identify Persons of Interest from among shoppers entering its stores.1 This followed a reported 29-store trial in late 2022 in which the company refused to confirm which of its stores were involved,2 in addition to earlier discrete deployments that had made it to the media as far back as 2018.3 Only weeks into the 2024 trial, news reports emerged of a Māori mother feeling “racially discriminated” against after being misidentified as a trespassed thief at a participating Rotorua New World supermarket.4 In the wake of the incident, University of Canterbury lecturer Mark Rickerby commented that the company’s response – that it was a “genuine case of human error” – failed to address “deeper questions about such use of AI and automated systems.”5 Given the procedures listed for the trial (two authorised store personnel must verify the accuracy of the FRT system match), it is likely that the ‘human error’ occurred only after the facial recognition software had already flagged the individual as a match (a false positive)5 – and that therefore the error originated with the (90% accurate) algorithm.6 It was not the first time the company had invoked the ‘human error’ line in its communications, suggesting a reluctance to blame the technology. In an unprecedented move, the Privacy Commissioner initiated an inquiry into the trial.7 The other member of New Zealand’s supermarket duopoly, Woolworths New Zealand (formerly Progressive Enterprises Limited), has looked past FRT to alternative technologies to stem crime and antisocial behaviours in its stores, deploying body-worn cameras (BWCs) to its staff in April 2024.8 Concerns that its BWCs may include FRT prompted the company to issue a media release confirming that it does not use facial recognition technology in any of its stores.9
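Why would a verified match nonetheless be a false positive? The arithmetic of one-to-many screening offers a likely explanation: when persons of interest are rare among the thousands of shoppers scanned, even a highly accurate algorithm will generate more false matches than true ones. As a minimal illustration via Bayes’ theorem – assuming, purely for illustration and not from trial data, that ‘90% accurate’ means both a 90% true-match rate and a 90% true-non-match rate, and that one shopper in a thousand is a person of interest:

$$
P(\text{PoI} \mid \text{match}) = \frac{0.9 \times 0.001}{(0.9 \times 0.001) + (0.1 \times 0.999)} \approx 0.009
$$

Under these assumed figures, fewer than one in a hundred flagged shoppers would actually be a person of interest – which is why the human verification step carries so much of the system’s practical burden, and why ‘human error’ and algorithmic error are difficult to disentangle.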

In Australia, a finding by consumer watchdog CHOICE that Bunnings, Kmart, and The Good Guys may have breached the Privacy Act with their use of FRT resulted in the retailers pausing their use of the technology in July 2022 after a public backlash. It also prompted the Office of the Australian Information Commissioner (OAIC) to open an investigation into their use of the technology.10 The probe into The Good Guys was dropped when the company “suspended their use of facial recognition technology and indicated that they weren’t intending to reinstate it”.11 Earlier, in an October 2021 FRT-related investigation, Australian Information Commissioner and Privacy Commissioner Angelene Falk found that convenience store group 7-Eleven had “interfered with customers’ privacy by collecting sensitive biometric information that was not reasonably necessary for its functions and without adequate notice or consent”.12

In both New Zealand and Australia, police deployment of FRT hit the headlines in 2020 when their respective uses of Clearview AI were revealed. There was uproar when it was found that New Zealand Police had conducted a trial of the controversial software without consulting either its own leadership or the Privacy Commissioner.13 In December 2021, responding to an independent expert review into the matter, Police publicly stated that “it will not use live Facial Recognition technology without further detailed analysis, taking account of legal, privacy and human rights concerns – with a particular focus on the New Zealand context.”14 More recently, New Zealand Police published their first-ever policy on facial recognition, placing a stop on police deployment of live FRT in all but rare and extreme circumstances, stating that “in the New Zealand context, it is considered that the overall risks of live FRT outweigh the potential benefits”. The policy, released in August, places safeguards on a range of other authorised police uses of FRT.15 Controversy similarly plagued the Australian Federal Police’s (AFP) deployment of the technology in 2020 with its use of Clearview AI and Auror analytics.16 The AFP suspended its use of Auror (a retail crime intelligence platform with FRT functionality) in 2023 only after a freedom of information (FOI) request revealed that more than 100 of its staff had used the platform without considering privacy or security implications.17

In apparent contradiction, many other FRT deployments raise far fewer concerns. “Facial recognition is widely accepted in some forms,” note Doberstein et al., “like in playful social media apps or when sorting photos into automated digital albums, and resisted in other forms, such as when police forces use it.”18 The public is more familiar and ‘okay’ with certain FRT deployments, such as when unlocking one’s own smart phone or passing through eGates/SmartGates at airport passport control, while other deployments – although less familiar – just seem to make inherent sense, such as in the post-incident investigation of a mass shooting or in verifying the identity of an individual who has lost their documents in conflict or natural disaster. In short, some FRT deployments are more controversial than others, and where a specific deployment creates significant controversy it suggests a failure of the FRT operator and their supply chain to have adequately assessed the potential (i) level of public acceptability of their intended deployment, or (ii) reputational risks stemming from a deployment type known to attract low levels of public acceptability (or high levels of non-acceptance).

This has important implications for all parties within the FRT supply chain, from CCTV manufacturers and video analytics developers to security system hardware and software distributors, security consultants, security integrators/installers, and the organisations that purchase and operate the technology. Fearing public backlash, purchasing organisations may be scared away from considering FRT deployments altogether, or, not anticipating controversy, they may invest in wide-scale FRT systems only to inadvertently trigger a major backlash and expose themselves to a range of unintended consequences.19 Yet the number of deployments that have resulted in media controversy suggests that in the absence of regulation, FRT vendors remain overwhelmingly driven by sales imperatives while security consultants and purchasers remain underwhelmingly knowledgeable in relation to FRT public acceptability factors.

The deployment of live FRT by casinos to identify and prohibit entry to self-declared problem gamblers is a case in point.20 On the one hand, this deployment type receives strong support from many governments and broad public acceptance; on the other hand, surveillance technology suppliers tend to promote to casinos the ability of FRT tracking to support marketing and player incentive schemes – uses that the majority of the public find unacceptable, and which might fall into the trope of ‘surveillance capitalism’ articulated by Shoshana Zuboff in her seminal work.21 As recently as 27 March 2024, for example, an Australian Capital Territory Legislative Assembly inquiry into cashless gambling noted that there is an “abundance of industry trade papers and promotional materials for facial recognition technology that makes clear the marketing and profit maximisation benefits such technology offers for casino and other gambling venues”.22 Such dual-use messaging by vendors creates dissonance, and raises the spectre of ‘function creep’. While casino operators may see the added value of FRT for marketing and customer loyalty purposes, the majority of their customers are likely to be opposed to their facial image being collected for such purposes. Additionally, a casino using its FRT for player incentivisation – in addition to problem gambler prohibition – may unwittingly take on reputational risk if it does so while ignorant of the extent of likely negative public sentiment. A resulting controversy may lead not only to brand damage but also to revenue impacts and the loss of investment in an FRT system that it may ultimately be required to shut down.

There appears to be a significant mismatch between the proliferating functionalities and use cases of FRT promoted by suppliers and the varying levels of public awareness and acceptability of these – and a corresponding gap in the ability of the FRT supply chain to mediate between the two. What may be extolled by an FRT marketer as a revolutionary crime prevention capability that can also collect data for business improvement and profitability may be viewed by significant segments of the population as technological overreach and a dystopic threat to individual privacy and freedoms. Additionally, the apparent success of FRT in one type of deployment may be used as part of a justification for a system’s use in an altogether disparate type of deployment. In the case of the aforementioned Foodstuffs North Island Limited FRT trial, for example, the FRT system used is touted as having been evaluated and endorsed by the South Australian Attorney-General’s Department “as an approved FRT system to identify previous barred patrons in gaming venues to prevent the recurrence of problem gambling”.23 From an operator perspective the use of FRT in identifying Persons of Interest in supermarkets may appear consistent with its use in identifying barred patrons in gaming venues, but research data indicates that from a public acceptability perspective these two scenarios are inconsistent (refer to Table 12 and Table 14).

_______________
1 The author is Innovation & Risk Manager at Optic Security Group. Address for correspondence: nicholas.dynon@opticsecuritygroup.com. He wishes to acknowledge two anonymous reviewers, NSJ Managing Editor Dr John Battersby, and Andrew Thorburn, for their comments on drafts.