signzy

API Marketplace


Face Match

Overview

Face match compares a live face (selfie or video frame) to the face on an identity document or an enrolled template to confirm it is the same person. In compliance workflows, it links document authenticity to user ownership, reducing impersonation and synthetic identity risk. Modern systems combine feature embeddings, liveness detection, and presentation attack detection (PAD) to resist spoofs (printed photos, masks, deepfakes). Thresholds are tuned to balance the False Accept Rate (FAR) and False Reject Rate (FRR) according to product risk.
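The core comparison above can be sketched as a similarity check between two embedding vectors. This is a minimal illustration, not Signzy's implementation; the threshold value and function names are assumptions for the example.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two face embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_match(selfie_emb, document_emb, threshold=0.6):
    # Declare a match only if similarity clears the tuned threshold;
    # raising the threshold lowers FAR at the cost of a higher FRR.
    return cosine_similarity(selfie_emb, document_emb) >= threshold
```

In production, embeddings come from a trained face model and the threshold is calibrated per use case on labeled genuine and impostor pairs.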
Regulated onboarding often requires human review for borderline scores and clear audit trails. Quality controls (lighting, framing, glare) and bias testing improve accuracy across demographics. De-duplication prevents multi-accounting by comparing each new enrollment against existing galleries. Paired with document verification and registry checks, face match provides strong assurance for remote KYC, step-up authentication, and high-risk transactions.
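De-duplication is a 1:N search: the new enrollment is compared against every embedding already in the gallery. A minimal sketch, assuming an in-memory gallery keyed by user ID (real systems use approximate nearest-neighbor indexes at scale):

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

def find_duplicates(candidate_emb, gallery, threshold=0.8):
    # Return IDs of existing users whose stored embedding is
    # suspiciously similar to the new enrollment.
    return [user_id for user_id, emb in gallery.items()
            if cosine_similarity(candidate_emb, emb) >= threshold]
```

Any hit routes the enrollment to review rather than auto-rejecting, since near-duplicates can also be legitimate re-enrollments.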

FAQ

How does face match work?

Models convert faces into numerical embeddings and compare similarity against a reference (ID photo or prior enrollment). Liveness and PAD ensure a real person is present, preventing replay or presentation attacks during remote verification.

What affects accuracy most?

Image quality (focus, glare, pose), demographic coverage of training data, and robust liveness. Tuning thresholds per use case (onboarding vs. login) reduces false decisions and downstream manual reviews.
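Threshold tuning of the kind described above can be made concrete by measuring FAR and FRR over labeled score sets and picking the lowest threshold that satisfies the product's risk budget. This is a sketch with an assumed 0.01-step sweep, not a vendor-specific procedure:

```python
def far_frr(threshold, impostor_scores, genuine_scores):
    # FAR: fraction of impostor comparisons wrongly accepted.
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    # FRR: fraction of genuine comparisons wrongly rejected.
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr

def pick_threshold(impostor_scores, genuine_scores, max_far=0.01):
    # Sweep candidate thresholds and return the lowest one whose
    # FAR stays within the risk budget (minimizing FRR as a side effect).
    for t in [i / 100 for i in range(0, 101)]:
        far, _ = far_frr(t, impostor_scores, genuine_scores)
        if far <= max_far:
            return t
    return 1.0
```

A stricter `max_far` suits onboarding; a looser one may be acceptable for low-risk login, trading fewer manual reviews for slightly more false accepts.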

Why add liveness/PAD?

Matching alone can be fooled by printed images or screens. Liveness confirms biological motion; PAD targets known spoofs. Together, they significantly lower successful attacks without adding excessive user friction.
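The combination described above is typically a gate: liveness and PAD must pass before the match score is even considered, and a borderline score band routes to human review rather than auto-deciding. A minimal sketch with assumed band boundaries:

```python
def decide(match_score, liveness_ok, pad_ok,
           accept_at=0.75, review_at=0.55):
    # Spoof suspected: never auto-accept, regardless of match score.
    if not (liveness_ok and pad_ok):
        return "reject"
    if match_score >= accept_at:
        return "accept"
    if match_score >= review_at:
        return "review"  # borderline: route to human review
    return "reject"
```

Keeping the review band explicit is what produces the human-review queue that regulated onboarding expects for borderline cases.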

How should we handle failures?

Provide guided recapture, switch to assisted/video KYC, or step up with additional evidence (registry hit, NFC chip read). Maintain audit logs of attempts, decisions, and thresholds for regulatory review.
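The escalation ladder and audit requirement above can be sketched as two small helpers. The retry limit and record fields are illustrative assumptions, not a mandated schema:

```python
import json
import time

def next_step(attempts, max_retries=2):
    # Escalation ladder for failed verifications: guided recapture
    # first, then assisted/video KYC, then step-up evidence
    # (e.g. registry check or NFC chip read).
    if attempts < max_retries:
        return "guided_recapture"
    if attempts == max_retries:
        return "assisted_video_kyc"
    return "step_up_evidence"

def audit_record(user_id, attempt, decision, score, threshold):
    # Append-only audit entry capturing everything needed to
    # reconstruct the decision for regulatory review.
    return json.dumps({
        "user_id": user_id,
        "attempt": attempt,
        "decision": decision,
        "score": score,
        "threshold": threshold,
        "ts": time.time(),
    })
```

Emitting one audit record per attempt, including the threshold in force at decision time, keeps the trail meaningful even after thresholds are retuned.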