Vault is excellent at what it was built for. Static secrets management, database credential rotation, PKI infrastructure — Vault handles these well. AI agents have a different credential lifecycle, and putting Vault in the middle does not fix the structural gap.
What Vault does well
HashiCorp Vault was designed around a specific problem: organizations were storing secrets in environment variables, config files, and code repositories. Vault provided a centralized, encrypted, audited store for secrets with fine-grained access policies and multiple authentication methods. It solved the "where do secrets live" problem cleanly.
Vault's dynamic secrets feature is genuinely impressive: instead of storing a database password, Vault generates a unique set of credentials for each request, with a TTL that expires them automatically. The database never stores a persistent password. When the application is done, the credentials expire or are revoked. This is a significant improvement over static database passwords shared across services.
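The lease model behind dynamic secrets can be sketched in a few lines. This is an illustrative Python model of a Vault-style lease, not Vault's actual API; the names `Lease` and `issue_db_credentials` are invented here for clarity.

```python
import secrets
import time
from dataclasses import dataclass, field

@dataclass
class Lease:
    """Models a Vault dynamic-secret lease: unique credentials plus a TTL."""
    username: str
    password: str
    ttl_seconds: int
    issued_at: float = field(default_factory=time.time)

    def expired(self, now=None):
        now = time.time() if now is None else now
        return now >= self.issued_at + self.ttl_seconds

def issue_db_credentials(role, ttl_seconds=3600):
    # Each request gets a unique credential pair, never shared across services.
    return Lease(
        username=f"v-{role}-{secrets.token_hex(4)}",
        password=secrets.token_urlsafe(24),
        ttl_seconds=ttl_seconds,
    )

lease = issue_db_credentials("readonly", ttl_seconds=3600)
print(lease.expired(now=lease.issued_at + 10))    # False: still within TTL
print(lease.expired(now=lease.issued_at + 7200))  # True: lease has lapsed
```

The key property is that the credential is born with an expiry: revocation is the default outcome, not an extra step someone has to remember.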
Vault also handles PKI management, key-value secrets storage, and response wrapping for secure secret delivery. If your stack needs any of these capabilities, Vault is a reasonable choice, and its integrations with Kubernetes, AWS IAM, and other infrastructure are mature.
The gap: OAuth is not a static secret
The fundamental mismatch is that Vault treats OAuth credentials as static secrets to be stored and retrieved. An OAuth access token goes into Vault, and Vault hands it to whatever service requests it. That is storing a secret, not managing a credential lifecycle.
OAuth credential management for AI agents requires a different set of operations:
First, OAuth tokens must be minted on demand, not pre-stored. A token stored in Vault and handed to an agent could be days old by the time the agent uses it. If the token was rotated at the provider level since it was stored, the stored token is invalid. Vault knows nothing about this — it stores what it is given.
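The difference between the two models is easy to show. In this sketch (all names hypothetical), a staleness check on a stored token is best-effort at most, while a minted token is fresh by construction:

```python
import time

# A token retrieved from a store carries no guarantee it is still valid:
# the provider may have rotated or revoked it since it was written.
stored_token = {"value": "gho_example", "stored_at": time.time() - 3 * 86400}

def is_plausibly_fresh(entry, max_age_seconds=3600):
    """Best-effort age check; cannot detect provider-side rotation or revocation."""
    return time.time() - entry["stored_at"] < max_age_seconds

print(is_plausibly_fresh(stored_token))  # False: written three days ago

# Mint-on-demand sidesteps the problem entirely: the token is created at use time.
def mint_token(provider_mint_fn, scopes):
    return provider_mint_fn(scopes)

token = mint_token(
    lambda s: {"value": "fresh", "scopes": s, "minted_at": time.time()},
    ["repo:read"],
)
print(time.time() - token["minted_at"] < 1)  # True: seconds old at use time
```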
Second, OAuth tokens have scope, and scope enforcement requires understanding the OAuth protocol. Vault can store a token tagged with arbitrary metadata, but it has no mechanism to enforce that a token with repo:write scope is not handed to an agent whose policy says it should only have repo:read. Vault can check if the requesting service has access to the secret path — but it cannot downscope the secret based on the requester's policy.
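The missing downscoping step is simple to state in code. Below is a minimal sketch assuming a hypothetical per-agent scope-ceiling table; nothing like this exists in Vault's policy engine, which reasons about secret paths, not OAuth scopes.

```python
# Hypothetical policy table: the maximum scopes each agent may ever receive.
AGENT_SCOPE_CEILINGS = {
    "release-notes-agent": {"repo:read"},
    "ci-fixer-agent": {"repo:read", "repo:write"},
}

def downscope(agent_id, requested_scopes):
    """Intersect the request with the agent's ceiling; deny if nothing survives."""
    ceiling = AGENT_SCOPE_CEILINGS.get(agent_id, set())
    granted = set(requested_scopes) & ceiling
    if not granted:
        raise PermissionError(f"{agent_id}: no grantable scopes in {sorted(requested_scopes)}")
    return granted

# The agent asks for more than policy allows; it gets the intersection.
print(downscope("release-notes-agent", {"repo:read", "repo:write"}))  # {'repo:read'}
```

The enforcement has to happen at mint time, by a component that understands both the agent's identity and the OAuth scope model.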
Third, OAuth token lifecycle events — mint, use, scope downgrade, revocation — need to be attributable to specific agent runs. Vault's audit log records who accessed which secret path. It does not record which API calls were made with the token after it left Vault, because Vault is not in the request path after the token is issued.
The Vault pattern that teams try
The most common pattern we see when teams try to use Vault for agent credentials is something like this: a background job refreshes OAuth tokens before they expire and stores the fresh tokens in Vault. Agents retrieve tokens from Vault when they start a task. A TTL on the Vault path forces agents to re-fetch periodically.
This works, and it is better than storing tokens in environment variables. But it has three significant gaps. First, the refresh job is a new piece of infrastructure you have to build, maintain, and monitor. It is not a Vault-native capability. Second, the token sitting in Vault between the refresh and the agent's retrieval is a static credential at rest — it is a target. If Vault is compromised or misconfigured, the stored tokens are accessible to anyone with the right path permissions. Third, the token in Vault has the scope it was created with — broad, because the refresh job typically uses a long-lived refresh token that grants broad access. There is no mechanism in this pattern to downscope the token for the specific agent task.
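The at-rest window in this pattern is easiest to see in code. `FakeVaultKV` below is a stand-in for a KV path, not the real Vault client; the point is that the stored token carries its full, broad scope to any reader of the path.

```python
import time

class FakeVaultKV:
    """Stand-in for a Vault KV path; illustrates the pattern, not Vault's API."""
    def __init__(self):
        self._data = {}
    def write(self, path, **payload):
        self._data[path] = {**payload, "written_at": time.time()}
    def read(self, path):
        return self._data[path]

vault = FakeVaultKV()

def refresh_job(vault):
    # Refresh-token exchange elided. The minted token is broad-scope because
    # the long-lived refresh token it came from grants broad access.
    vault.write("secret/agents/github",
                token="gho_broad",
                scopes=["repo:read", "repo:write", "admin:org"])

refresh_job(vault)

# Later, ANY agent with read access to the path gets the full-scope token.
entry = vault.read("secret/agents/github")
print(entry["scopes"])  # broad; no per-agent downscoping happened
```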
Vault's OAuth secrets engine
There is an OAuth secrets engine for Vault (a community-maintained plugin rather than an engine HashiCorp ships in core), and it is worth discussing specifically. The OAuth secrets engine can store OAuth client credentials and generate tokens on demand. This is closer to what agents need.
The limitations are in the implementation. The OAuth secrets engine was designed for user-based OAuth flows — it stores a client ID, client secret, and refresh token, and uses them to request a fresh access token when asked. For agents that need to operate as themselves (client credentials flow), this is a reasonable fit. For agents that need delegated user permissions (authorization code flow), the secrets engine requires a human to authorize once and store the resulting refresh token, which it then uses to generate new access tokens. This works for some agent patterns but breaks down for agents that need to work with multiple users' permissions.
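The two flows differ in what must be stored up front, which is where the multi-user pattern breaks down. A sketch of the token-request payloads (field values are placeholders):

```python
# Client-credentials flow: the agent authenticates as itself.
# Only the client ID and secret need to be stored.
client_credentials_request = {
    "grant_type": "client_credentials",
    "client_id": "agent-app",
    "client_secret": "<stored secret>",
    "scope": "repo:read",
}

# Authorization-code flow (delegated): a human authorizes once, and the
# resulting refresh token is stored and exchanged for access tokens later.
# One refresh token covers exactly one user's grant, so acting on behalf
# of N users means storing and managing N refresh tokens.
refresh_request = {
    "grant_type": "refresh_token",
    "refresh_token": "<stored after one-time human authorization>",
    "client_id": "agent-app",
}

print(client_credentials_request["grant_type"])  # client_credentials
print(refresh_request["grant_type"])             # refresh_token
```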
More critically, Vault's OAuth secrets engine has no scope enforcement layer. You can store multiple client configurations with different scopes, but there is no policy engine that maps agent identity to allowed scope ceiling and enforces it at request time. The enforcement logic has to live in your application code or your Vault policy rules — both of which require significant custom work to implement correctly.
What agent-native credential management looks like
The architecture that solves the AI agent credential problem is a proxy, not a store. Instead of storing a token and handing it to the agent, the proxy intercepts the agent's OAuth request, applies the agent's scope policy, mints a token from the provider with the appropriate scope, and returns it — without the token ever being stored at rest between mint and use.
The proxy has visibility into the full token lifecycle because it is in the request path: it sees the mint, it can see subsequent token use if it also proxies API calls, and it executes the revocation when the token expires. This is fundamentally different from Vault's "store and retrieve" model. The proxy is active in the credential lifecycle; Vault is passive.
The audit log that results from proxy-based management is per-event rather than per-access. Vault logs who accessed the secret path. A proxy logs each token event: mint (with scope and agent attribution), each subsequent use (with the API endpoint called), and revocation (with timestamp). For post-incident attribution in an agentic system, the proxy log is orders of magnitude more useful.
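Putting the pieces together, a proxy's per-event log might look like the following sketch. All names here (`proxy_mint`, `AUDIT_LOG`, the ceiling table) are illustrative, not Alter's API; the shape of the log is the point.

```python
import time

AUDIT_LOG = []  # per-event, append-only

def audit(event, **fields):
    AUDIT_LOG.append({"event": event, "ts": time.time(), **fields})

CEILINGS = {"release-notes-agent": {"repo:read"}}

def proxy_mint(agent_id, run_id, requested_scopes, provider_mint):
    """Intercept the agent's token request: enforce policy, mint, log.
    The token is returned directly and never written to a store."""
    granted = set(requested_scopes) & CEILINGS.get(agent_id, set())
    if not granted:
        audit("mint_denied", agent=agent_id, run=run_id,
              requested=sorted(requested_scopes))
        raise PermissionError(agent_id)
    token = provider_mint(granted)
    audit("mint", agent=agent_id, run=run_id, scopes=sorted(granted))
    return token

def proxy_call(agent_id, run_id, token, endpoint):
    audit("use", agent=agent_id, run=run_id, endpoint=endpoint)
    # ... forward the call with the token attached ...

token = proxy_mint("release-notes-agent", "run-42", {"repo:read", "repo:write"},
                   lambda scopes: {"value": "tok", "scopes": scopes})
proxy_call("release-notes-agent", "run-42", token, "GET /repos/acme/api/commits")
audit("revoke", agent="release-notes-agent", run="run-42")

print([e["event"] for e in AUDIT_LOG])  # ['mint', 'use', 'revoke']
```

Every entry is attributable to an agent and a run, which is exactly what a post-incident investigation needs and what a path-access log cannot provide.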
When Vault still makes sense alongside agent credential management
This is not an argument to remove Vault from your stack. If you are already using Vault for database credentials, PKI, or static secret storage, keep using it. Those are the problems Vault was built for.
The argument is that adding OAuth credential management for AI agents on top of Vault requires significant custom work and still leaves scope enforcement and per-event audit as gaps you have to build yourself. Using a purpose-built agent credential proxy for the OAuth layer, while keeping Vault for what it is actually good at, gives you the right tool for each problem.
Alter and Vault are not in competition — most of our customers use both. Alter handles the agent OAuth lifecycle. Vault handles the database credentials, the API keys that are not OAuth-based, and the PKI infrastructure. Each tool does what it was designed for. That is a better outcome than trying to stretch Vault into a problem space it was not designed for and ending up with partial coverage of both.