Anthropic Rolls Out Identity Verification for Select Claude Users

Anthropic has begun requiring identity verification for certain Claude AI users, prompting them to verify who they are before accessing specific capabilities. Verification requires a valid, government-issued photo ID and a selfie taken with a phone or computer camera; the system then compares the selfie against the provided ID.

The company did not disclose the exact use cases for the verification in its announcement. However, an Anthropic spokesperson clarified in an update that the requirement applies only to a small number of cases involving activity that "indicates potentially fraudulent or abusive behavior, which violates [Anthropic's] usage policy."

User Backlash and Privacy Concerns

Many users have criticized the new verification process, questioning its necessity—particularly for paying subscribers who already have credit card details on file. Critics also raised concerns about Anthropic's choice of Persona Identities for handling the verification process. Persona provides age verification services for companies like OpenAI and Roblox.

Persona's major investors include Founders Fund, a venture capital firm co-founded by Peter Thiel. Thiel is also the co-founder and chairman of Palantir, a surveillance company whose customers include federal agencies such as the FBI, CIA, and US Immigration and Customs Enforcement (ICE).

Criticism of Palantir centers on its use of facial recognition and AI technologies to expand government surveillance capabilities. Although Persona is not a subsidiary of Palantir, and the connection between the two runs only through their shared backer, the association has fueled concerns about how Persona might handle sensitive user data.

Anthropic's Data Privacy Assurances

In its announcement, Anthropic emphasized that while Persona will handle user IDs and selfies, it will not store or copy the images. The company stated that Persona is "contractually limited" in how it can use the data and that all information processed is "encrypted in transit and at rest."

Anthropic also assured users that identity data will not be used to train its AI models and will not be shared with third parties.

Update: Verification Targets Fraudulent or Abusive Behavior

Update April 16, 2026, 11:35 AM ET: Following inquiries, an Anthropic spokesperson told Engadget that the identity verification applies only to cases where activity suggests "potentially fraudulent or abusive behavior, which violates [Anthropic's] usage policy."

Source: Engadget