
Getting Useful OAuth Audit Logs Into Your SIEM


Ask any security engineer what their SIEM shows them about their AI agent OAuth activity and you will get one of two answers: nothing, or log entries that say "app authorized" with no indication of which agent, which task, or which human initiated the action. Neither answer is useful for incident detection or forensic investigation.

What providers actually log

Every major OAuth provider maintains an audit log. The quality and accessibility of those logs varies significantly. GitHub's audit log exports are comprehensive for repository-level actions but coarse for token events — you can see that a token was used to push a commit, but not the full chain of events that led to that push. Slack's audit log API requires an Enterprise Grid subscription. Google Workspace's Admin audit log is reasonably detailed but requires configuring log export to BigQuery or Cloud Storage before it is accessible outside Google's console. Salesforce's event monitoring is excellent but only available on Enterprise and Unlimited edition licenses.

The deeper problem is attribution. Every provider logs at the application level. The audit entry records that your OAuth application made a call. It does not record which of your agents used that application's token, which task the agent was executing, or which user's workflow triggered the agent. That attribution gap is the fundamental reason why OAuth audit logs are not actionable for AI agent security monitoring.

What a useful audit event looks like

A useful OAuth audit event for an AI agent deployment contains six fields that standard provider logs do not include.

Agent ID: A stable identifier for the specific agent that requested the token. Not the OAuth application ID — that is the same for all agents using the same OAuth app. The agent ID distinguishes between your issue triage agent, your PR review agent, and your deployment agent, even if all three use the same GitHub OAuth application.

Run ID: A unique identifier for the specific agent execution. The agent might run 50 times per day, and each run is a distinct unit of investigation. A single run ID lets you reconstruct exactly what happened during one specific execution.

Task context: What the agent was doing when it requested the token. This does not need to be verbose — a structured field like task_type: pr_review or task_type: issue_label is sufficient. It gives you the "why" context that turns a token event from a meaningless database entry into an understandable security record.

Scope requested vs scope granted: Log both. The delta between what the agent asked for and what it got is useful signal. An agent consistently requesting scopes it does not have is either misconfigured or behaving unexpectedly. An agent whose requests stay within policy gives you a baseline, and deviations from that baseline are exactly what you want to alert on.

Initiating user: When an agent runs in response to a human action — a developer triggering a pipeline, a user invoking a chatbot — log that user's identity alongside the agent event. This closes the attribution chain from human action to agent behavior to API call.

Token lifecycle events: Log token mint, last-used, and revocation separately. A token that was minted but never used is different from a token that was used 400 times. A token revoked before its TTL expired may indicate an emergency response. Each lifecycle event carries different security meaning.
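Putting the six fields together, an enriched token-mint event might look like the sketch below. The field names and values are illustrative, not a fixed schema:

```python
import json
from datetime import datetime, timezone

def build_enriched_event(agent_id, run_id, task_type, scopes_requested,
                         scopes_granted, initiating_user, lifecycle,
                         provider, token_id):
    """Assemble one enriched OAuth audit event (illustrative schema)."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "provider": provider,
        "token_id": token_id,
        "agent_id": agent_id,                  # which agent, not which OAuth app
        "run_id": run_id,                      # one event chain per execution
        "task_type": task_type,               # the "why" context
        "scope_requested": scopes_requested,
        "scope_granted": scopes_granted,
        # the requested-vs-granted delta, precomputed for alerting
        "scope_delta": sorted(set(scopes_requested) - set(scopes_granted)),
        "initiating_user": initiating_user,    # closes the attribution chain
        "lifecycle": lifecycle,                # mint | last_used | revoked
    }

event = build_enriched_event(
    agent_id="pr-review-agent", run_id="run-8f31", task_type="pr_review",
    scopes_requested=["repo:read", "repo:write"], scopes_granted=["repo:read"],
    initiating_user="dev@example.com", lifecycle="mint",
    provider="github", token_id="tok-42",
)
print(json.dumps(event, indent=2))
```

Precomputing the scope delta at event time means your detection rules can match on a single field instead of diffing two arrays at query time.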

Building the enriched event pipeline

The most practical architecture for enriched OAuth audit logs uses a proxy as the event source. The proxy sits between your agents and the OAuth providers, sees every credential request, and can attach all six enrichment fields at the time the event occurs — before the token is issued, while you still have the full request context.

The proxy emits events in a structured format (JSON is standard) to a log aggregation endpoint. That endpoint forwards to your SIEM. The entire pipeline typically adds 5-15 ms of latency to the credential request, which is acceptable for all but the most latency-sensitive agent workloads.
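The forwarding hop can be very small. A minimal sketch, assuming a hypothetical internal aggregation endpoint and newline-delimited JSON as the wire format:

```python
import json
import urllib.request

AGG_ENDPOINT = "https://logs.internal.example/ingest"  # hypothetical aggregator

def encode_event(event: dict) -> bytes:
    """Serialize one enriched event; NDJSON is a common aggregator wire format."""
    return (json.dumps(event, separators=(",", ":")) + "\n").encode()

def forward_event(event: dict, endpoint: str = AGG_ENDPOINT):
    """POST the event to the aggregation endpoint, which relays it to the SIEM.
    Keep the timeout tight so this hop stays inside the latency budget."""
    req = urllib.request.Request(
        endpoint,
        data=encode_event(event),
        headers={"Content-Type": "application/x-ndjson"},
        method="POST",
    )
    return urllib.request.urlopen(req, timeout=2)
```

In production you would emit asynchronously or batch events so a slow aggregator never blocks token issuance.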

If you are using Alter, this pipeline is built in. Events emit as JSON webhooks to a target you specify. The default payload includes all six enrichment fields plus timestamp, provider, and token ID. The event schema is documented and versioned — schema changes are backwards-compatible for at least two major versions.

If you are building this yourself, the key decision is where to attach the enrichment fields. Doing it at the agent side (having each agent add metadata to its token requests) requires changes to every agent. Doing it at the proxy side (the proxy reads the agent ID from the request routing and attaches it to the log event) is a one-time infrastructure change. The proxy approach is always preferable for large agent fleets.
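To make the proxy-side approach concrete, here is a sketch of how a proxy might derive the agent ID from request routing and pull run context from headers set by the agent runtime. The routing table and header names are assumptions for illustration:

```python
# Hypothetical routing table: each agent gets its own credential-request path,
# so the proxy can attribute requests without any agent code changes.
AGENT_ROUTES = {
    "/agents/issue-triage/token": "issue-triage-agent",
    "/agents/pr-review/token": "pr-review-agent",
    "/agents/deploy/token": "deploy-agent",
}

def enrich_from_routing(path, headers, base_event):
    """Attach agent_id (from routing) plus run/task context (from assumed
    X-Run-Id / X-Task-Type headers set by the agent runtime, not the agent)."""
    enriched = dict(base_event)
    enriched["agent_id"] = AGENT_ROUTES.get(path, "unknown-agent")
    enriched["run_id"] = headers.get("X-Run-Id", "unknown")
    enriched["task_type"] = headers.get("X-Task-Type", "unspecified")
    return enriched

event = enrich_from_routing(
    "/agents/pr-review/token",
    {"X-Run-Id": "run-8f31", "X-Task-Type": "pr_review"},
    {"provider": "github", "lifecycle": "mint"},
)
```

Falling back to "unknown-agent" rather than dropping the event preserves the audit trail even when routing is misconfigured, and the fallback value itself is a useful thing to alert on.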

SIEM-specific implementation notes

Splunk. Alter's Splunk integration uses HTTP Event Collector (HEC). Configure a new HEC token with a dedicated index for agent OAuth events. The index should have a separate retention policy from your general security logs — agent OAuth events are high-volume and you may want to reduce retention to 30-90 days to manage storage costs. The built-in Splunk dashboard for Alter events includes panels for scope request rate, scope denial rate, and active token count per agent.
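HEC accepts JSON envelopes POSTed to the collector endpoint with the token in an Authorization header. A minimal sketch of building that envelope, where the index and sourcetype names are placeholders for whatever you configure:

```python
def hec_payload(event, index="agent_oauth", sourcetype="alter:oauth"):
    """Wrap an enriched event in a Splunk HEC envelope. The index here is
    the dedicated agent-OAuth index with its own retention policy."""
    return {
        "index": index,
        "sourcetype": sourcetype,
        "event": event,
    }

# POST json.dumps(hec_payload(e)) to
#   https://<splunk-host>:8088/services/collector/event
# with the header:  Authorization: Splunk <hec-token>
payload = hec_payload({"agent_id": "pr-review-agent", "lifecycle": "mint"})
```

Routing these events to their own index is what makes the shorter 30-90 day retention policy possible without touching your general security logs.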

Datadog. Alter integrates with Datadog via the Logs API. Configure a custom source tag of source:alter-oauth for all events. In Datadog Log Management, create a facet on agent_id and task_type to enable filtering. The most useful alerts to set up in Datadog: scope escalation rate above baseline (indicates an agent is behaving unusually) and token mint rate above baseline (indicates an agent may be running more frequently than expected).
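Datadog's Logs API accepts an array of log entries where `ddsource` and `ddtags` drive source attribution and tagging, and top-level attributes become facet-able fields. A sketch of shaping one event, assuming the field names from the enriched schema above:

```python
import json

def datadog_log_entry(event):
    """Shape one enriched event for Datadog's Logs API v2. Top-level
    attributes (agent_id, task_type, ...) are what the facets filter on."""
    return {
        "ddsource": "alter-oauth",  # matches the source:alter-oauth tag
        "ddtags": f"agent_id:{event['agent_id']},task_type:{event['task_type']}",
        "message": json.dumps(event),
        **event,  # spread fields to top level for faceting
    }

entry = datadog_log_entry({"agent_id": "pr-review-agent",
                           "task_type": "pr_review",
                           "lifecycle": "mint"})
```

Entries are POSTed in batches to the logs intake endpoint with a DD-API-KEY header; batching keeps you well under Datadog's per-request payload limits at this event volume.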

Elastic SIEM. Alter's Elastic integration uses the Elasticsearch REST API with a custom index template that maps the six enrichment fields to Elastic Common Schema (ECS) fields. Token mint events map to ECS event.category: authentication. Token revocation events map to event.category: session. This mapping ensures that Alter events appear correctly in Elastic's built-in security workflows and detection rules.
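The lifecycle-to-ECS mapping described above can be sketched as a small transform. The fallback category for other lifecycle events is an assumption, as are the exact label field names:

```python
def to_ecs(event):
    """Map an enriched event onto ECS fields. Per the mapping in the text:
    mint -> event.category: authentication, revoked -> event.category: session.
    The "iam" fallback for other lifecycle events is an assumed default."""
    category = {"mint": "authentication", "revoked": "session"}.get(
        event["lifecycle"], "iam")
    return {
        "@timestamp": event["timestamp"],
        "event": {
            "category": category,
            "action": f"token_{event['lifecycle']}",
        },
        "user": {"name": event.get("initiating_user")},
        "labels": {
            "agent_id": event["agent_id"],
            "task_type": event["task_type"],
        },
    }

doc = to_ecs({"timestamp": "2025-01-01T00:00:00Z", "lifecycle": "mint",
              "agent_id": "pr-review-agent", "task_type": "pr_review",
              "initiating_user": "dev@example.com"})
```

Mapping onto standard ECS categories, rather than inventing custom fields, is what lets Elastic's prebuilt detection rules match these events without modification.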

Detection rules that actually fire

With enriched events in your SIEM, you can write detection rules that are specific to agent OAuth behavior rather than generic identity rules that generate noise.

Four rules that provide real signal:

Scope creep detection: Alert when an agent's average requested scope count increases by more than 25% week over week. This indicates the agent's behavior is expanding — which might be legitimate (new features deployed) or concerning (the agent is being used for purposes outside its original design).

Off-hours token activity: Alert when an agent mints tokens outside its expected operational window. A scheduled batch agent that normally runs at 2am but mints tokens at 2pm is either being invoked manually, has been reconfigured, or is running in response to an unexpected trigger. All three warrant investigation.

Revoked token reuse attempt: Alert when an agent attempts to use a token that Alter has already revoked. This indicates either a bug (the agent is not handling token revocation correctly) or something more concerning (the token was extracted from the agent's storage and is being replayed by a different process).

Unusual provider for agent: Alert when an agent makes a credential request to a provider it has never used before. A PR review agent that suddenly requests Salesforce credentials has either been reconfigured significantly or has been compromised and is being used as a pivot point to access systems it was not designed to touch.
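Two of these rules reduce to a few lines of logic each. A sketch of the scope creep and off-hours checks, with the thresholds and window values from the rules above (the exact window per agent is something you would configure):

```python
def scope_creep_alert(this_week_counts, last_week_counts, threshold=0.25):
    """Fire when the mean requested-scope count per token request rises
    more than 25% week over week."""
    if not this_week_counts or not last_week_counts:
        return False
    prev = sum(last_week_counts) / len(last_week_counts)
    curr = sum(this_week_counts) / len(this_week_counts)
    return prev > 0 and (curr - prev) / prev > threshold

def off_hours_alert(mint_hour_utc, window=(1, 4)):
    """Fire when a token mint falls outside the agent's expected operational
    window, e.g. a 2am batch agent minting a token at 2pm."""
    start, end = window
    return not (start <= mint_hour_utc < end)
```

In practice the per-agent baselines (last week's scope counts, the operational window) would come from a scheduled SIEM query rather than being hardcoded.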

The retention question

Agent OAuth events are high-volume. A fleet of 20 agents making 100 API calls per day generates 2,000 token events daily. Over a year, that is 730,000 events. At typical SIEM storage costs, annual retention for that volume is manageable — but only if you are deliberate about it.
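The volume arithmetic above, plus a rough hot-storage estimate under an assumed event size of about 1 KB per enriched JSON event:

```python
agents = 20
calls_per_agent_per_day = 100

events_per_day = agents * calls_per_agent_per_day   # 2,000 events/day
events_per_year = events_per_day * 365              # 730,000 events/year

# Assumed ~1 KB per enriched JSON event (illustrative, not measured):
hot_window_mb = events_per_day * 90 / 1024          # ~176 MB for a 90-day hot window
```

Even at ten times this fleet size, the hot window stays in single-digit gigabytes, which is why the cost question is more about SIEM ingest pricing than raw storage.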

Our recommendation: keep 90 days of full-fidelity events (all six enrichment fields, raw payload). Archive events older than 90 days to cold storage (S3, GCS, Azure Blob) with a structured format that enables re-ingestion for forensic investigations. Purge after 2 years unless your compliance requirements specify longer retention.

The 90-day hot window is chosen because most incident investigations work backward from detection. If your median time to detect is two weeks (which is typical for security incidents involving insider threats or slow-burn credential abuse), 90 days gives you six times that buffer to investigate before the relevant events expire.