Walking into the local bodega, passing through the subway turnstiles and going to a show at Madison Square Garden are all regular New York City occurrences.
They’re also situations in which New Yorkers’ faces could end up on a computer screen or get scanned for biometrics — all without their consent.
In recent weeks, Madison Square Garden owner James Dolan’s use of facial recognition at the venue has drawn new scrutiny to the technology. Dolan has been using it to bar attorneys involved in litigation against his company from entering the arena.
It’s not just privacy or accountability issues, however, that are raising red flags for critics of the technology. Opponents say there are racial disparities, not only in where this technology is used, but in the rate at which it misidentifies people of color.
Now, activists and lawmakers are trying to sound the alarm on the technology’s potential to cause harm, particularly to brown and Black New Yorkers, before its use becomes even more widespread.
Manhattan state Sen. Brad Hoylman-Sigal, a Democrat, is working on legislation in Albany that would put a pause on the use of facial recognition technology by law enforcement until usage guidelines are established, including a framework for the disposal of the collected images and data.
“With facial recognition technology, essentially everyone is a suspect,” Hoylman-Sigal said in an interview with NY1. “When you walk into a venue like Madison Square Garden, your face is scanned. Not just your face, though, I should add; also your expression, your feelings, some facial recognition technology can actually lip read. All of that information is stored.”
“Where is it used after, say, Jim Dolan screens you as you walk into a Knicks game?” he asked. “That really is concerning to me and other public officials.”
'Automating the bias'
The NYPD, for its part, has deployed facial recognition technology since at least 2011. But details on how that technology is used remain murky, and its accuracy has been called into question by critics.
Researchers have found that the technology is more likely to misidentify people of color, trans people and women than it is middle-aged white men.
One 2019 study, titled “Racial Faces in the Wild: Reducing Racial Bias by Information Maximization Adaptation Network,” found that four major commercial facial recognition tools and four state-of-the-art algorithms misidentified white faces in pair matching about 10% of the time, while Black and Asian faces were misidentified nearly twice as often.
A National Institute of Standards and Technology study, also released in 2019, found higher pair-matching error rates for Black and Asian faces, with error rates up to 100 times higher than for white people on certain facial recognition systems.
And a 2018 Massachusetts Institute of Technology study found the error rate was highest for dark-skinned Black women.
“Essentially, for those women, the system might as well have been guessing gender at random,” MIT said in a news release about the study.
Opponents argue that in a city with a history of targeting Muslims after the Sept. 11, 2001, terror attacks and of racist law enforcement practices like “stop and frisk,” facial recognition technology is essentially a way to codify prejudice.
In 2013, a federal judge ruled that the NYPD’s use of “stop and frisk” was racially discriminatory and unconstitutional under the Fourth and 14th amendments.
“When we think about the average human who has a certain level of bias, we can think about how creating technology — especially facial recognition technology — [it] is a man-made creation and so it's automating the bias that is already there,” said Attiya Latif, an NYC-based staff organizer with Amnesty International.
New Yorkers living in areas at greater risk of stop-and-frisk policing are more likely to be exposed to facial recognition technology, according to research conducted by Amnesty International.
The human rights organization also found that in the Bronx, Brooklyn and Queens, there was a higher concentration of facial recognition-compatible CCTV surveillance cameras in areas with a higher proportion of nonwhite residents.
In 2019, Gothamist reported that Staten Island District Attorney Michael McMahon purchased Clearview AI technology.
The facial recognition software differs from traditional tools by allowing users to search for potential face matches in images scraped from sites such as YouTube, Facebook, LinkedIn and Venmo, as opposed to relying solely on government databases of mugshots and driver’s license photos.
Records obtained through a Freedom of Information Law request by the Legal Aid Society showed that the Staten Island DA’s Office paid $10,000 in May 2019 for 11 employees to use Clearview’s services for one year.
Elected officials condemned the software and called for an immediate halt to its use.
“Facial recognition technology like Clearview AI has the capacity to be not a tool for public safety, but a threat to it,” Public Advocate Jumaane Williams wrote in a statement.
Cost concerns
Critics have also raised concerns about how much money the NYPD spends on the technology.
Privacy advocates like Albert Fox Cahn, the executive director of the Surveillance Technology Oversight Project, a nonprofit advocacy organization hosted by the Urban Justice Center, say that the costs of these surveillance tools are often buried within city budgets.
In 2020, the City Council, working with civil liberties groups including S.T.O.P., passed the Public Oversight of Surveillance Technology (POST) Act, which requires more transparency around surveillance expenditures.
Nearly $3 billion in NYPD surveillance contracts spanning 10 years were uncovered under the act, $2.5 billion more than previously reported. The contracts included more than $400 million spent on the Domain Awareness System, an opaque surveillance system that collects tens of thousands of camera feeds from around the city.
Cahn says that despite the passage of the POST Act, getting the NYPD to disclose its records on the technology remains an uphill battle.
The group is currently in litigation with the NYPD under New York’s Freedom of Information Law to compel the department to produce requested documents about its use of facial recognition surveillance in Times Square. The two sides are expected to be back in court next month, according to Cahn.
Police officials did not respond to a request for comment.
Political action
Last month, New York Attorney General Letitia James sent a letter to the corporation that owns Madison Square Garden asking it to explain how it uses facial recognition technology to prohibit ticket holders from entering its venues.
In the letter, James called on MSG Entertainment to report the steps the company is taking to comply with the state’s civil and human rights laws, and ensure that its technology will not lead to discrimination.
“Our policy does not unlawfully prohibit anyone from entering our venues, and it is not our intent to dissuade attorneys from representing plaintiffs in litigation against us,” an MSG Entertainment spokesperson wrote in a statement. “We are merely excluding a small percentage of lawyers, and only during active litigation.”
Currently, there are no federal or New York state laws regulating the technology’s use. But lawmakers like Hoylman-Sigal are trying to change that.
Along with his bill that would effectively pause law enforcement’s use of facial recognition technology, the state senator is sponsoring a bill that would bar landlords from using facial recognition systems on residential premises. Both bills remain in committee.
On the city level, Councilmember Carlina Rivera, whose Manhattan district includes the East Village, Gramercy Park and the Lower East Side, has introduced a bill that would require city agencies to submit annual reports about surveillance technology data, including but not limited to the collection, use, purchase and sale of the data. Under the current iteration of the bill, the NYPD would be excluded from this requirement.
Another bill sponsored by Rivera would define how the technology can be used in residential spaces. Councilmember Jennifer Gutiérrez, whose district includes Williamsburg, Bushwick and Ridgewood, has also introduced a bill that would require building owners to register when they use biometric recognition technology.
There’s also a bill sponsored by Williams that would prohibit GPS surveillance of a person without their consent.
“We’re hoping that with this legislation, and a hearing at the City Council level, we can have the NYPD there in public speaking on the record as to what they're doing with the data, how they intend to use it and expand it,” Rivera said. “We want to get those answers to the public immediately.”