ICE and CBP’s facial recognition app cannot actually verify people’s identity


Mobile Fortify, the facial recognition app now used by immigration agents in towns and cities across the United States, was not designed to reliably identify people on the streets and was deployed without the scrutiny that has historically governed the rollout of technologies that impact people’s privacy, according to records reviewed by WIRED.

The Department of Homeland Security launched Mobile Fortify in the spring of 2025 to “identify or verify” the identities of individuals stopped or detained by DHS officers during federal operations, records show. DHS explicitly linked the rollout to an executive order signed by President Donald Trump on his first day in office, which called for a “comprehensive and effective” crackdown on illegal immigration through expedited deportations, expanded detention, and funding pressure on states, among other tactics.

Although DHS has repeatedly framed Mobile Fortify as a tool to identify people through facial recognition, the app does not actually “verify” the identities of people stopped by federal immigration agents — a well-known limitation of the technology and a function of how Mobile Fortify is designed and used.

“Every manufacturer of this technology, every police department has a policy that clearly states that facial recognition technology is incapable of providing positive identification, that it makes mistakes, and that it is only for generating leads,” says Nathan Wessler, deputy director of the American Civil Liberties Union’s Speech, Privacy, and Technology Project.
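
The distinction Wessler draws maps onto how these systems work under the hood: a face search compares a probe photo against a gallery of enrolled images and returns ranked candidates with similarity scores, not a confirmed identity. The sketch below is a generic, hypothetical illustration of that pipeline; the embedding model, the threshold, and the gallery are assumptions for demonstration, not details of Mobile Fortify.

```python
# Hypothetical sketch of a one-to-many (1:N) face search. This is not Mobile
# Fortify's code; it illustrates why the output is a ranked list of leads with
# similarity scores rather than a "verified" identity.
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, roughly in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def search_gallery(probe: np.ndarray, gallery: dict, threshold: float = 0.6):
    """Return gallery candidates scoring above an arbitrary threshold,
    best-first. Scores near the threshold are ambiguous: two photos of the
    same person can land on opposite sides of it."""
    scored = [(name, cosine_similarity(probe, emb)) for name, emb in gallery.items()]
    return sorted((s for s in scored if s[1] >= threshold),
                  key=lambda s: s[1], reverse=True)


# Toy data: random 128-dimensional vectors standing in for a real face encoder.
rng = np.random.default_rng(0)
gallery = {f"record_{i}": rng.normal(size=128) for i in range(1000)}
probe = gallery["record_42"] + rng.normal(scale=0.8, size=128)  # a noisy second photo

for name, score in search_gallery(probe, gallery)[:3]:
    print(f"candidate: {name}  similarity: {score:.2f}  (a lead, not a verified ID)")
```

Even in this toy setup, the system can only say how similar two images are; whether a “possible” score counts as a match is a human judgment, which is why vendors and police policies treat the results as investigative leads.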

Records reviewed by WIRED also show that DHS’s hasty approval of Fortify last May was enabled by dismantling central privacy reviews and quietly removing restrictions on department-level facial recognition — changes overseen by a former Heritage Foundation attorney and Project 2025 contributor, who now works in a senior privacy role at DHS.

The Department of Homeland Security, which has refused to provide details on the methods and tools used by agents despite repeated calls from oversight officials and nonprofit privacy watchdogs, has used Mobile Fortify to scan the faces not only of “target individuals” but also of people later confirmed to be US citizens, as well as others who were observing or protesting enforcement activity.

Reports have documented federal agents telling citizens that they were being recorded through facial recognition and that their faces would be added to a database without their consent. Other accounts describe agents treating accent, perceived ethnicity, or skin color as a basis for escalating confrontations, then using facial scanning as a next step once a person is stopped. Together, the cases illustrate a broader shift in DHS enforcement toward low-level street encounters followed by biometric capture such as facial scans, with limited transparency around the tool’s operation and use.

Mobile Fortify enables facial capture hundreds of miles from the US border, allowing the Department of Homeland Security to create nonconsensual faceprints of people who, according to the department’s Privacy Office, “could conceivably be US citizens or lawful permanent residents.” As with the circumstances surrounding its distribution to CBP and ICE agents, Mobile Fortify’s functionality is primarily visible today through court filings and the testimony of sworn agents.

In a federal lawsuit this month, lawyers for the state of Illinois and the city of Chicago said the app had been used “in the field more than 100,000 times” since its launch.

In Oregon testimony last year, an agent said two photos of a detained woman, taken using his facial recognition app, returned different identities. The agent said the woman was handcuffed and looking down, which prompted him to physically reposition her to get the first photo. He testified that the movement made her scream in pain. The app returned a name and photo of a woman named Maria; the agent rated the match as a “maybe.”

Agents called her “Maria, Maria” to gauge her reaction. When she failed to respond, they took another photo. The agent testified that the second result was “possible,” but added, “I don’t know.” When asked what supported probable cause, the agent pointed to the woman speaking Spanish, her presence with others who appeared to be noncitizens, and a “possible match” via facial recognition. The agent testified that the application did not indicate how confident the system was in the match. “It’s just a picture, Your Honor. You have to look at the eyes, the nose, the mouth, and the lips.”
