Facial Recognition for Access Control: Efficient, Convenient and Accurate

Systems that identify enrolled individuals are the most effective and least controversial

Steve Reinharz is the president of Robotic Assistance Devices.

Like so many other sectors of the economy, the security industry cannot find workers. Even steeply rising wages are not attracting enough candidates to fill access-related security positions. Fortunately, help is here, in the form of facial recognition (FR) solutions. With the ability to provide safe, effective and cost-efficient identity verification, FR technology can close the security industry’s labor gap.

FR has come a long way fast. In the early 2000s, the Los Angeles Police Department tested very early handheld devices. Officers would stand a suspect against a wall and take a picture, then wait as long as five minutes while the software tried to match the image against a database of fewer than 150 people who had injunctions against them. It was kludgy and ineffective, but the promise was evident.

In 2006, the first Face Recognition Grand Challenge, an event created to encourage innovation within the industry, found the best algorithms at that time to be 10 times more accurate than just four years earlier and 100 times more accurate than in 1995. Fast forward to today, and the National Institute of Standards and Technology (NIST) reports that current high-performing algorithms are “close to perfect” in matching faces against a database of 12 million people.

Facial recognition technology is having an impact at the federal, state and local levels across the United States. At last count, 17 state police departments and 744 local police departments utilize FR, and U.S. Customs and Border Protection's Global Entry system uses facial recognition at airports to verify the identity of enrolled travelers returning to the United States.

FR’s lower accuracy in identifying the faces of nonwhite individuals, however, has driven some public distrust of its use as a policing tool. Innocent Black men have been arrested and accused of crimes based on mistaken FR matches.

But using surveillance video in conjunction with FR applications – where individuals have not agreed to share their biometric data – is quite different from deploying biometric identity solutions in access control environments. The many state laws restricting the technology’s use do not pertain to the ways that the security industry leverages it for worker/visitor/resident/traveler/guest/you-name-it screening purposes.

FR access control relies on “cooperative subjects.” Individuals agree to enroll in programs that use their faces to enter a workplace, visit a hospital, board a plane, use a hotel elevator or gain access to some other restricted area. They “cooperate” by submitting to a benchmark reading (ideally, a digitized, 3D “mug shot” encrypted with proprietary algorithms), then, when seeking entry to a location, look at an FR-enabled camera to provide a match. Under these conditions, error rates of the best FR solutions are near zero. Furthermore, the top 20 algorithms identify Black faces as accurately as white faces.
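For readers who want to see the mechanics, the sketch below (in Python) illustrates that enroll-then-verify loop. Everything in it is an assumption for illustration: extract_embedding stands in for a vendor's proprietary FR model, the 0.6 match threshold is arbitrary, and a production system would encrypt the stored template and tune the threshold against false-accept/false-reject targets.

    # Minimal sketch of the cooperative enroll-then-verify loop. All names here
    # (extract_embedding, MATCH_THRESHOLD) are illustrative assumptions, not any
    # vendor's API; a real deployment would use an FR SDK's embedding model and
    # encrypt the stored template at rest.
    import numpy as np

    MATCH_THRESHOLD = 0.6  # illustrative; tuned per site from error-rate targets

    def extract_embedding(face_image: np.ndarray) -> np.ndarray:
        # Stand-in for a vendor FR model that maps a face image to a vector.
        # Flattening, centering and normalizing the pixels lets the sketch run
        # end to end; a real model would return a learned identity embedding.
        vec = face_image.astype(np.float64).ravel()
        vec -= vec.mean()  # center so unrelated images score near zero
        return vec / (np.linalg.norm(vec) + 1e-12)

    def enroll(face_image: np.ndarray) -> np.ndarray:
        # The "benchmark reading": captured once, with the subject's consent,
        # and stored as the reference template (encrypted at rest in practice).
        return extract_embedding(face_image)

    def verify(template: np.ndarray, live_image: np.ndarray) -> bool:
        # At the door: compare the live capture to the enrolled template.
        similarity = float(np.dot(template, extract_embedding(live_image)))
        return similarity >= MATCH_THRESHOLD  # cosine similarity of unit vectors

    # Toy check: a slightly noisy recapture of the enrolled face matches;
    # an unrelated face does not.
    rng = np.random.default_rng(0)
    enrolled_face = rng.random((112, 112))
    template = enroll(enrolled_face)
    print(verify(template, enrolled_face + rng.normal(0.0, 0.01, (112, 112))))  # True
    print(verify(template, rng.random((112, 112))))  # False

The property that matters is the consented, one-to-one comparison: the live face is checked against the single template the subject chose to enroll, not searched against a large gallery of people who never opted in.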

Much of the bad rap associated with FR comes from applications that use “noncooperative” subjects. These are individuals whose faces have been captured surreptitiously by surveillance cameras, often in less-than-ideal lighting and from an awkward angle. A 2020 study of a top FR vendor found that its system’s error rates jumped from 0.1% when matching faces to a high-quality headshot to 9.3% when comparing them to surveillance video in which the subject was not looking directly at the camera or was obscured by shadows – a nearly 100-fold difference. As the security industry expands the use of FR solutions in access control applications, it needs to educate the public on this vital distinction.

Convenience will also play a role in adoption. Companies hoping to win public acceptance should make participation in FR systems optional. When people waiting in long lines to interact with an overworked security officer see others whizzing by, barely pausing to verify their identity at a biometrics-enabled camera, many will likely be persuaded.

Finally, the public needs to feel confident that, when enrolling in FR systems, their biometric data will be used only for the designated applications. The European Union is already taking an aggressive stance in establishing oversight. Its proposed “AI Act” would restrict the use of biometric recognition systems to networks in which participants have expressly agreed to share their data. Furthermore, biometrics could not be used to profile users, such as by scoring trustworthiness or predicting the potential for criminality. While the AI Act is an EU initiative, there will undoubtedly be a spillover effect beyond member countries, just as has occurred with the union’s General Data Protection Regulation.

The AI Act would still permit matching a consenting individual’s face to a stored, encrypted record to verify identity and grant access at a point of entry. There is no ethical ambiguity for such applications. Acceptance will come by informing the public about how opt-in FR systems work, winning them over with attendant conveniences and creating sufficient oversight mechanisms to ensure proper system use.