A bipartisan group in Congress is working on legislation that could regulate the use of facial recognition by the private sector, federal government, and law enforcement.
“We have a responsibility to not only encourage innovation, but to protect the privacy and safety of American consumers,” Rep. Carolyn Maloney (D — NY) said today, while acknowledging a need to educate others in Congress and explore consumer privacy and data security protections currently in place.
“In that vein, I would like to announce today that our committee is committed to introducing and marking up common sense facial recognition legislation in the very near future,” said Maloney, chairwoman of the House Oversight and Reform Committee.
The House Oversight and Reform Committee held its third hearing in less than a year today about facial recognition, this time to explore its use in the private sector. Facial recognition is already being used in job interviews, by Delta Air Lines in airports, to replace time clocks in some workplaces, to unlock doors in a housing complex, and to unlock smartphones like Apple’s iPhone X and Google’s Pixel 4.
Facial recognition was found to be one of the biggest areas of investment in 2019 among AI startups and businesses, according to the 2019 AI Index report.
Solutions offered in testimony and questioning today included requiring opt-in consent before using a person’s photo to train a model, requiring systems to meet high performance standards before governmental deployment, regulation to prevent facial recognition from being used at political rallies or protests, and protections against the overpolicing of schools or deployment by law enforcement to make arrests.
“There should be something in our civil rights law and our justice system that does not allow a person to be persecuted based on the fact that we know this data is not adequate and it has biases,” said Congresswoman Brenda Lawrence (D — MI), who said the majority of her constituents are people of color. Last year she introduced a bill in support of the development of ethical AI.
Though cities like San Francisco and San Diego have passed bans or moratoriums on the use of the tech by city agencies or law enforcement, and companies like Amazon and Microsoft have asked for guidelines, Congress has yet to produce regulation over the use of facial recognition.
Today’s hearing was the third in a series that started last year. In House Oversight and Reform hearings in spring 2019, members of Congress agreed on a need for action, and examined facial recognition’s potential to violate the First and Fourth Amendments due to the technology’s tendency to discriminate.
An analysis of facial recognition released last month by the Department of Commerce’s National Institute of Standards and Technology (NIST) examined algorithms from nearly 100 businesses, including leaders like SenseTime and Microsoft. The report found evidence of racial bias in identifying people who are not white men, with false positive rates 10 to 100 times higher for women of color and for people of African or Asian descent.
Higher rates of false positive identification were also found for the elderly, children, and women overall compared to white men. Facial recognition is also considered less effective on people who are transgender or do not conform to gender norms.
The likelihood that facial recognition will work best on a white man and worst on children and women of color today is a reflection of the role of power in AI ethics.
Rep. Alexandria Ocasio-Cortez (D — NY) also called for protections against automation of injustice or biases that “compound on the lack of diversity in Silicon Valley as well.”
“This is some real-life Black Mirror stuff that we’re seeing here, and I think it’s really important that everyone really understand what’s happening because this, as you pointed out, is happening secretly,” she said.
This is the first hearing with Maloney as chairwoman of the Oversight and Reform Committee following the death of Rep. Elijah Cummings (D — MD) last fall.
Both in hearings last year and in the one held today, Democrats and Republicans agreed about the need for policy to regulate the use of facial recognition software in society.
“I think that this is where conservatives and progressives come together, and it’s on defending our civil liberties, it’s on defending our Fourth Amendment rights, and it is that right to privacy. And I agree with the chairwoman, and the ranking member, [Rep. Jimmy] Gomez and others on the other side of the aisle — we’ve had really good conversations about addressing this issue,” Rep. Mark Meadows (R — NC) said.
Cummings and ranking member Rep. Jim Jordan (R — OH) were reportedly working on bipartisan facial recognition reform legislation, but the future of a substantial bill was in question late last year due to Cummings’ death and division over issues like impeachment.
Last week, the Trump administration released regulatory AI guidelines to steer the actions of federal agencies creating policy for the private sector and to encourage adoption of similar guidelines by nations around the world.
In contrast to a warning against overregulation former Google CEO Eric Schmidt made recently, experts testifying before Congress in both meetings urged lawmakers to regulate use of facial recognition.
In opening remarks today, Jordan — a Republican well known for his defense of President Trump against impeachment — said he has “no intention of hampering technological advancements of the private sector” but called facial recognition regulation an issue that transcends politics. He said action must be taken to ensure protection of individual rights, and that, for example, its use should be unlawful at protests or political rallies.
“It doesn’t matter if it’s a President Trump rally or a Bernie Sanders rally, the idea of American citizens being tracked and cataloged for merely showing their faces in public is deeply troubling,” he said.
“The urgent issue we must tackle is reining in the government’s unchecked use of this technology when it impairs our freedoms and our liberties. Our late chairman Elijah Cummings became concerned about government use of facial recognition technology after learning it was used to surveil protests in his district related to Freddie Gray. He saw this as a deeply inappropriate encroachment on the freedom of speech and association, and I couldn’t agree more,” Jordan said.
Jordan went on to say that facial recognition regulation should begin with an evaluation of how the technology is used by federal government entities and a halt to any expansion of its use by the federal government.
Companies like Microsoft and Amazon have asked lawmakers to regulate use of the technology. However, Amazon shareholders voted last May to reject a proposal to halt sales of Amazon’s facial analysis software Rekognition to governments. Weeks later, AWS CEO Andy Jassy said Amazon would sell facial recognition technology to any government.
In another hearing last spring, experts testified that the majority of state and local law enforcement agencies have no standards for the use of facial recognition to identify suspects. The FBI received criticism from members of Congress in a hearing in June for its failure to comply with Government Accountability Office (GAO) recommendations first made in 2016 to adopt auditing and assessment standards for its system that uses driver’s license photos from dozens of states across the country.
The first item chairwoman Maloney placed on the record in the meeting today was an ACLU study in which Amazon’s Rekognition misidentified members of Congress as criminals. One of those misidentified in the ACLU exercise was Rep. Gomez (D — CA). Amazon came up more than any other tech giant during the hearing, and Gomez said Amazon’s aggressive promotion of the tech and the vote by Amazon shareholders last year strengthened his belief in the need for regulation.
“This technology is fundamentally flawed. For somebody who gets pulled over by the police, in certain areas it’s not a big deal. In other areas, it could mean life or death if the people think you are a violent felon,” he said.
Rep. Meadows (R — NC) urged the committee not to focus on inaccuracy rates but instead to recognize and define where it is appropriate for facial recognition technology to be used at all.
“To focus only on the false positives, I think, is a major problem for us though,” he said, recognizing the speed of technological progress. “So I’m here to say that if we’re only focusing on the fact that they’re not getting it right with facial recognition, we’ve missed the whole argument. … My concern is not that they improperly identify Mr. Gomez, my concern is that they will properly identify Mr. Gomez and use it in the wrong manner.”
Rep. Rashida Tlaib (D — MI) and Rep. Gerry Connolly (D — VA) agreed that a focus on false positives is misguided, and that how and why the technology is deployed must be considered.
Members of the committee today heard from five experts, including AI Now Institute cofounder Meredith Whittaker, who noted that most facial recognition technology used by government is developed by the private sector and then licensed to governments and businesses. She spoke to the role facial recognition is playing in power dynamics throughout society.
“Facial recognition is usually deployed by those who already have power — say employers, landlords, and the police — to surveil, control, and in some cases oppress those who don’t,” she said.
Whittaker called audits and standards a step in the right direction, but not enough to ensure the safe deployment of facial recognition software. She warned that if standards alone are used as the check on facial recognition, they could mask harm instead of preventing it.
Security Industry Association (SIA) senior director of government relations Jake Parker urged the acknowledgment of positive uses of facial recognition, such as to identify sexual traffickers or victims of sexual trafficking, for fraud detection, or in hospitals to verify patient identity.
Whether or not experts support expansive use of facial recognition technology, many agreed on the need for privacy regulation that could address a number of issues the technology presents.
AI is receiving growing attention from members of Congress and state and local governments as lawmakers consider how tech like facial recognition can be used in society.
Legislation in the past year has been proposed to limit use of facial recognition in public housing, establish a national AI strategy, or require businesses to receive opt-in approval from an individual to allow their image to be used to train a facial recognition model. The use of facial recognition in China against minority groups like Uighur Muslims is often mentioned as a motivator for lawmakers to move fast.
“I think it is a model for authoritarian social control that is backstopped by extraordinarily powerful technology,” Whittaker said about China’s use of facial recognition software. “I think one of the differences between China and the U.S. is that there the technology is announced as state policy. In the U.S., this is primarily corporate technology that is being secretly threaded through our core infrastructures without that kind of acknowledgment.”
In part due to concern about surveillance like the kind seen in China, facial recognition legislation has also been proposed in the United Kingdom, and the European Union is expected to release further protections against facial recognition later this year.