CBP signs Clearview AI agreement to use facial recognition in ‘tactical targeting’

United States Customs and Border Protection plans to spend $225,000 for a year of access to Clearview AI, a facial recognition tool that compares uploaded images against billions of photos scraped from the internet.

The deal expands access to Clearview’s tools to Border Patrol Headquarters’ Intelligence Division (INTEL) and the National Targeting Center, units that collect and analyze data as part of what CBP calls a coordinated effort to “disable and dismantle” people and networks perceived as security threats.

The contract states that Clearview provides access to “more than 60 billion publicly available images” and will be used for “tactical targeting” and “strategic counter-network analysis,” suggesting that the service is intended to be an integral part of analysts’ daily intelligence work rather than being reserved for isolated investigations. CBP says its intelligence units rely on a “variety of sources,” including commercially available tools and publicly available data, to identify people and map their connections to national security and immigration operations.

The agreement anticipates that analysts will handle sensitive personal data, including biometric identifiers such as facial images, and requires nondisclosure agreements for contractors with access. It does not specify the types of images agents will upload, whether searches may include U.S. citizens, or how long uploaded images or search results will be retained.

The Clearview contract comes as the Department of Homeland Security faces increasing scrutiny over how facial recognition is used in federal enforcement operations beyond the border, including large-scale actions in U.S. cities that have swept up American citizens. Civil liberties groups and lawmakers have questioned whether face-scanning tools are being deployed as routine intelligence infrastructure rather than as a limited investigative aid, and whether safeguards are keeping pace with the expansion.

Last week, Sen. Ed Markey introduced legislation that would prevent ICE and CBP from using facial recognition technology altogether, citing concerns about biometric surveillance conducted without clear boundaries, transparency, or public consent.

CBP did not immediately respond to questions about how Clearview is integrated into its systems, what types of images agents are allowed to upload, and whether searches might include U.S. citizens.

Clearview’s business model has come under scrutiny because it relies on scraping images from public websites at scale, converting the photos into biometric templates without the knowledge or consent of the people pictured.

Clearview also appears in DHS’s recently released artificial intelligence inventory, linked to a CBP pilot program that began in October 2025. The inventory entry ties the pilot to CBP’s traveler verification system, which conducts facial comparisons at ports of entry and in other border-related screening processes.

CBP states in its public privacy documents that the traveler verification system does not use information from “commercial sources or publicly available data.” Instead, the new Clearview access will likely be linked to CBP’s Automated Targeting System, which ties together biometric galleries, watch lists, and enforcement records, including files associated with recent Immigration and Customs Enforcement operations in parts of the United States far from any border.

Clearview AI did not immediately respond to a request for comment.

Recent tests by the National Institute of Standards and Technology (NIST), which evaluated Clearview AI among other vendors, found that face search systems can perform well on “high-quality, visa-like images” but falter in less controlled settings. Images taken at border crossings that “were not originally intended for automated facial recognition” produced error rates that were “much higher, often in excess of 20 percent, even with the most accurate algorithms,” federal scientists say.

The testing underscores a central limitation of the technology: NIST found that face search systems cannot reduce false matches without also increasing the risk of failing to identify the correct person.
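To make that trade-off concrete, here is a minimal sketch in Python using invented similarity-score distributions; the numbers are illustrative assumptions, not NIST measurements or Clearview’s actual scoring:

```python
# Minimal sketch of the face-search threshold trade-off described above.
# The score distributions are invented for illustration; they are not
# NIST data and do not model Clearview's algorithm.
import random

random.seed(0)

# Similarity scores: genuine pairs (same person) score higher on average
# than impostor pairs (different people), but the distributions overlap.
genuine = [random.gauss(0.70, 0.10) for _ in range(10_000)]
impostor = [random.gauss(0.45, 0.10) for _ in range(10_000)]

for threshold in (0.50, 0.60, 0.70):
    # False match rate: impostor pairs scoring at or above the threshold.
    fmr = sum(s >= threshold for s in impostor) / len(impostor)
    # False non-match rate: genuine pairs scoring below the threshold.
    fnmr = sum(s < threshold for s in genuine) / len(genuine)
    print(f"threshold={threshold:.2f}  false matches={fmr:.1%}  missed matches={fnmr:.1%}")
```

With these made-up distributions, raising the threshold from 0.50 to 0.70 drives false matches toward zero while the share of missed genuine matches climbs sharply, which is exactly the trade-off NIST describes.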

As a result, NIST says, agencies may run such software in an “investigational” mode that presents a ranked list of candidates for human review rather than a single confirmed match. But when a system is configured to always return candidates, a search for someone who is not in the database will still generate “matches” for review. In those cases, the results will always be 100 percent wrong.
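The failure mode for people who are not in the database follows directly from that design. A rough sketch, with an invented top_k_candidates helper and made-up gallery names and scores:

```python
# Sketch of an "investigational" rank-based search: the system always
# returns its top-k most similar gallery entries, whether or not the
# person searched for is actually enrolled. Names and scores are made up.
def top_k_candidates(probe_scores: dict[str, float], k: int = 3) -> list[tuple[str, float]]:
    """Return the k gallery identities most similar to the probe image."""
    return sorted(probe_scores.items(), key=lambda kv: kv[1], reverse=True)[:k]

# Similarity of one probe image against a small enrolled gallery.
scores = {"person_a": 0.41, "person_b": 0.58, "person_c": 0.33, "person_d": 0.52}

# If the probe depicts someone who is NOT in the gallery, every candidate
# below is a false match -- yet the list looks the same as a real hit.
for identity, score in top_k_candidates(scores):
    print(f"candidate {identity}: similarity {score:.2f}")
```

Because the search is configured to always return its k best candidates, the list for an unenrolled person looks just as orderly as a genuine hit, and nothing in the output itself tells a human reviewer that every entry is wrong.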
