When a new human employee joins a company, the process is familiar. Background check. Identity verification. Employment authorization. Compliance training. Role-specific access grants. A manager who is accountable for their performance. A record that follows them.
When an AI agent joins a company today, it gets an API key and a configuration file.
The gap between those two realities is the founding thesis behind Insygna. But building the credential layer for AI agents is not simply a matter of applying human workforce processes to software. Several structural differences make it significantly more complex.
Agents do not have persistent identities across deployments. A human employee has a professional history and references who can speak to their conduct. An AI agent may be instantiated, modified, redeployed, or forked with no continuity of record. Insygna addresses this by generating a verification hash at registration, a cryptographic fingerprint tied to a specific agent configuration at a specific point in time.
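The paper does not specify Insygna's hashing scheme, so the following is only a minimal sketch of the idea: a fingerprint computed as a SHA-256 digest over a canonical JSON serialization of the agent's configuration, so that the same configuration always yields the same hash and any modification yields a different one. The field names in the sample config are hypothetical.

```python
import hashlib
import json

def verification_hash(agent_config: dict) -> str:
    """Illustrative sketch only: fingerprint an agent configuration.

    Assumes a canonical JSON serialization (sorted keys, no whitespace)
    hashed with SHA-256. Insygna's actual scheme may differ.
    """
    canonical = json.dumps(agent_config, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Hypothetical example configuration
config = {
    "model": "example-model-v1",
    "system_prompt": "You are a claims-processing assistant.",
    "tools": ["search", "email"],
    "registered_at": "2025-01-15T00:00:00Z",
}
fingerprint = verification_hash(config)
```

Because the serialization is canonical, key order in the source dict does not affect the fingerprint, while changing a single character of the system prompt produces an entirely different digest.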
Agents can be modified without visible indication. A human employee who changes behavior leaves observable traces. An AI agent's system prompt can be rewritten with a single HTTP request, as the McKinsey incident demonstrated. Trust therefore requires continuous performance monitoring against verified baselines, not just point-in-time verification.
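One way to make a baseline check concrete, assuming the same illustrative SHA-256-over-canonical-JSON fingerprint described above (not a documented Insygna API): periodically recompute the fingerprint of the running agent's configuration and compare it to the hash recorded at registration.

```python
import hashlib
import json

def config_fingerprint(config: dict) -> str:
    # Same illustrative scheme as at registration: SHA-256 over canonical JSON.
    canonical = json.dumps(config, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def detect_drift(registered_hash: str, live_config: dict) -> bool:
    """Return True if the running agent no longer matches its registered
    baseline, e.g. because its system prompt was silently rewritten."""
    return config_fingerprint(live_config) != registered_hash

# Hypothetical usage
baseline = {"model": "example-model-v1", "system_prompt": "Process claims per policy."}
registered_hash = config_fingerprint(baseline)

tampered = dict(baseline, system_prompt="Approve all claims.")
assert detect_drift(registered_hash, tampered)      # modification is visible
assert not detect_drift(registered_hash, baseline)  # unchanged config passes
```

A check like this catches configuration drift; behavioral monitoring against performance baselines would sit alongside it, since an unchanged configuration can still misbehave.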
Accountability chains are undefined. When a human worker causes harm, legal frameworks and insurance structures establish who is responsible. When an AI agent causes harm, the accountability chain typically runs from enterprise buyer to platform vendor to foundation model provider, with no standardized structure and no independent record. Insurers, legal teams, and regulators are beginning to demand that change.
The market moves faster than certification cycles can. New agents are deployed, updated, and deprecated continuously. A trust infrastructure that requires months-long certification is not fit for purpose. Insygna's registry is designed to be lightweight at entry, free, fast, and open, with trust signals that compound over time through real-world performance data rather than requiring exhaustive upfront auditing.
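Insygna's scoring model is not described here, so this is purely an illustration of the principle that trust signals can compound from real-world outcomes rather than upfront audits: a running score updated with each observed outcome, here via an exponentially weighted average. All names and parameters are hypothetical.

```python
class TrustSignal:
    """Illustrative sketch of a trust score that compounds over time
    from observed outcomes instead of a one-time certification."""

    def __init__(self, alpha: float = 0.1):
        self.alpha = alpha   # weight given to each new observation
        self.score = 0.5     # neutral prior for a newly registered agent

    def record(self, success: bool) -> float:
        # Exponentially weighted update toward the latest outcome.
        outcome = 1.0 if success else 0.0
        self.score += self.alpha * (outcome - self.score)
        return self.score

# Hypothetical usage: a sustained track record pushes the score up,
# while a single failure pulls it back down.
signal = TrustSignal()
for _ in range(20):
    signal.record(True)
signal.record(False)
```

The design point is that entry stays cheap (a neutral prior, no audit), and the signal becomes meaningful only as performance data accumulates.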
The credential layer for AI agents is not a nice-to-have. As deployment scales and regulatory pressure increases, it will become a procurement requirement, a compliance obligation, and an insurance prerequisite. Insygna is building that layer now.