Why SIA Opposes Massachusetts’ Far-Reaching Facial Recognition Technology Prohibition Bill

Massachusetts’ Joint Committee on the Judiciary held a hearing on Oct. 22, 2019, to consider S. 1385/H. 1538, an extreme ban on facial recognition technology that would prohibit any government entity or employee at the state and local level throughout the commonwealth from “acquiring, accessing or using the technology or any information derived from it by another entity.”

The Security Industry Association (SIA) believes all technology products, including facial recognition technology, must only be used for beneficial purposes that are lawful, ethical and non-discriminatory. We also acknowledge that any technology tool has the potential for misuse by those who wield it. Taking steps to ensure every technology is used in ways that benefit Americans is a legitimate policy objective.

However, the proposal in Massachusetts is deeply flawed. As stated in SIA’s Oct. 22 testimony before the Joint Committee, here are several concerns regarding the legislation:

The costs to public safety of imposing such a ban have not been considered. While most concerns have centered on law enforcement, this ban would go far beyond law enforcement to cover every use for any purpose, including access control to secured areas of government buildings, security measures protecting visitors and employees at government facilities, identifying missing children or disoriented adults, catching driver license application fraud, detecting stolen identity documents and protecting critical infrastructure. And unlike a ban enacted in San Francisco earlier this year, there’s no exception for aviation security. Boston Logan International Airport, through which more than 40 million travelers pass each year, has a well-established facial recognition program.

On Oct. 16, SIA joined several other leading national technology and trade associations in writing to the U.S. Congress to outline concerns with the serious consequences of potential bans on public-sector uses of facial recognition technology. This coalition includes the U.S. Chamber of Commerce, the Consumer Technology Association, Airports Council International, the Global Business Travel Association and other groups. Similar concerns have been raised by the Information Technology and Innovation Foundation and other respected groups.

Facial recognition technology is not new. It has been used effectively in our country by law enforcement for a decade. A ban would immediately take this proven investigative tool off the table for all law enforcement throughout Massachusetts, putting the safety of every resident at risk. For example, since 2015 the nonprofit group Thorn has provided a facial recognition tool to help investigators find underage sex trafficking victims in online ads – helping rescue 9,000 children and identify over 10,000 traffickers. If the legislation were enacted today, no law enforcement official in Massachusetts would be permitted to use this tool to identify and rescue such children, despite its proven effectiveness.

Facial recognition technology is not inherently flawed. The justification provided for a complete ban lacks thorough analysis and does not reflect a clear understanding of how facial recognition technology works and is currently used in the U.S. The legislation finds that the benefits of the technology “are few and speculative” and “outweighed by their harms,” when, in fact, the opposite is true. The benefits of facial recognition technology are well established, while evidence of any significant unlawful use, misuse or abuse in the U.S. is lacking.

Far from a “rules-free environment,” use of this technology is subject to our existing constitutional framework and laws, regulations, evidentiary rules and best practices that address many privacy and civil liberties concerns. The Bureau of Justice Assistance, the U.S. Department of Homeland Security and other law enforcement stakeholders have developed a model policy development template for facial recognition that is used by many law enforcement agencies, and use cases with best practices across the country have been detailed by the Integrated Justice Information Systems Institute. Clear and consistent parameters for law enforcement use are in place in many communities, including a policy recently adopted by the Detroit Police Department.

The legislation also asserts that the technology “has a history of being far less accurate in identifying the faces of women, young people, and dark-skinned people” that “leads to harmful ‘false positive’ identifications.” These are unproven assertions. While facial recognition is not infallible, errors affect all demographic groups. In experimental tests, some algorithms have demonstrated lower performance in matching photos of women and minorities. But the blanket assertion that the technology performs less effectively across the board for these groups simply isn’t factual. The most recent research suggests that newer algorithms have accuracy rates for African Americans and women of color equal to or even higher than those for other groups.

Experiments purporting to show discrepancies – and the many media reports about them – deserve close scrutiny. For example, one of the most-cited studies evaluated “facial analysis” technology, in which a computer system predicts features such as age or gender from a photo, not “facial recognition,” which is used to help identify a person. Conflating the two attributed to facial recognition disparities that actually involved facial analysis, a different technology with little application to law enforcement. Both this study and “tests” conducted by the American Civil Liberties Union (ACLU) of off-the-shelf facial recognition technology, using photos of lawmakers in California, Massachusetts and Washington, D.C., ignored the key role of the confidence scores associated with each potential match returned by such systems. The ACLU tests used a lowered confidence threshold, which causes a system to return more potential matches – counted as “false matches” by the ACLU.
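To make the role of confidence thresholds concrete, here is a minimal, purely illustrative Python sketch. The function, names and scores are all hypothetical assumptions for illustration, not any vendor’s API or the ACLU’s actual methodology; the point is simply that lowering the threshold sweeps weaker candidates into the “potential match” list.

```python
# Illustrative sketch only: a toy candidate search showing how the
# confidence threshold controls how many "potential matches" a query
# returns. All names and scores are hypothetical.

def candidate_matches(probe_scores, threshold):
    """Return gallery entries whose similarity to the probe meets the threshold."""
    return [
        (name, score)
        for name, score in sorted(probe_scores.items(), key=lambda kv: -kv[1])
        if score >= threshold
    ]

# Hypothetical similarity scores between one probe photo and a gallery.
scores = {"person_a": 0.99, "person_b": 0.93, "person_c": 0.82, "person_d": 0.76}

# A high (recommended) threshold returns few, high-confidence candidates...
print(candidate_matches(scores, threshold=0.95))  # [('person_a', 0.99)]

# ...while a lowered threshold sweeps in weaker candidates, which a test
# can then count as "false matches".
print(candidate_matches(scores, threshold=0.80))
# [('person_a', 0.99), ('person_b', 0.93), ('person_c', 0.82)]
```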

Consistent performance across all demographic variables and levels of image quality is a key objective of facial recognition technology developers. In fact, the basis for concerns about “bias” is fast disappearing as facial recognition technology has achieved revolutionary accuracy improvements in the past few years. The National Institute of Standards and Technology (NIST), the world’s leading authority on facial recognition performance, released its most recent report after testing 203 algorithms from 51 developers against a data set of more than 12 million individuals – with “close to perfect” performance by high-performing algorithms and miss rates averaging 0.1 percent.

While the performance of different systems does vary, the real-world impact of instances where a system does not work as well as intended depends heavily on the specific purpose for which, and the method by which, it is employed. A related misconception involves the alleged risks and implications of “misidentification” by law enforcement. But in all known U.S. law enforcement uses as an identification tool, a human investigator must confirm whether any of the computer-provided photos is the person in a submitted image. Moreover, search results are not used to positively identify a person or as the sole basis for an arrest. In law enforcement, the technology itself does not identify a person and therefore cannot “misidentify” a person as someone they are not. Importantly, a “false positive” in this case is not misidentification; it is part of how the process works: the system creates a group of potential matches ranked by similarity score. For more information, see SIA’s recent paper Face Facts: Dispelling Common Myths Associated With Facial Recognition Technology.
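The workflow just described can be sketched in a few lines of hypothetical Python. The data, function names and structure are assumptions for illustration, not any agency’s actual system; the sketch shows that the software only ranks candidate leads by similarity score, while identification remains a human decision.

```python
# Minimal sketch of the investigative workflow described above, with
# hypothetical data and function names (not an actual agency system):
# the software produces only a ranked candidate list; a human examiner
# makes any identification decision.

from typing import List, Tuple

def top_candidates(similarities: List[Tuple[str, float]],
                   k: int = 5) -> List[Tuple[str, float]]:
    """Rank gallery records by similarity score and return the top k leads."""
    return sorted(similarities, key=lambda rec: rec[1], reverse=True)[:k]

def human_review(candidates: List[Tuple[str, float]]) -> None:
    """Placeholder for the mandatory human step: an investigator compares
    each candidate photo to the probe image. No candidate is treated as a
    positive identification or as the sole basis for an arrest."""
    for record_id, score in candidates:
        print(f"Review lead {record_id} (similarity {score:.2f}) against probe image")

# Hypothetical search results from a probe image.
leads = top_candidates([("rec_102", 0.91), ("rec_077", 0.88), ("rec_310", 0.64)])
human_review(leads)
```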

Beyond its problematic justification, the terms and definitions in the legislation are ultimately unworkable. Under the legislation, the ban applies to “biometric surveillance systems” – a new term defined as “any computer software that performs face recognition or other remote biometric recognition,” regardless of whether its use has anything to do with surveillance. These terms and definitions are so broad and carelessly crafted that analysts believe the legislation could “effectively prevent any government official from using any technology with facial-recognition capabilities, including Facebook, iPhones, and more.”

“Face recognition” is further defined so broadly that it could include a wide range of other video and image analytics technologies that may not be used for identification at all, such as simply grouping together and comparing images of similar people or objects. These technologies can be critical in emergency situations where time is limited, and facial recognition can enhance these capabilities further. For example, in the aftermath of the Boston Marathon bombings, with very limited video analytics and facial recognition tools available, it took several days to identify the suspects. Using the technology tools available today, leads could have been generated in minutes rather than days, possibly avoiding the human costs of the lengthy police chase and public shootout that ensued.

Americans support the responsible use of facial recognition. While some media narratives would lead you to believe there is widespread support for sweeping restrictions on the technology, in fact, recent national polls have found that fewer than one in five Americans would support strict limits if they came at the expense of public safety, and more than half of Americans trust law enforcement to use the technology responsibly. In an August poll of Massachusetts residents, 66 percent said law enforcement should not be precluded from using new technologies such as facial recognition, 64 percent believed facial recognition technology has the potential to enhance safety and only 15 percent would limit law enforcement’s use of the technology at the expense of public safety.

The rush to ban the most significant uses of this important and beneficial technology is unjustified and premature at a time when it is being integrated successfully throughout a wide range of commercial and government applications. Instead, we should be talking about ways to provide reassurance that facial recognition technology is being used responsibly, without eliminating all of its proven benefits. Collaboration and communication between law enforcement agencies, industry and the public can address concerns with facial recognition, and these efforts should be based on a clear understanding of the technology, not misconceptions about it. SIA and its members stand ready to work with policymakers on this important issue.