
Connecting LangChain Agents to Alter in 15 Minutes


A concrete walkthrough of wiring a LangChain tool agent to use Alter for all OAuth credential requests. One config change, no changes to your agent code.

What we're building

By the end of this walkthrough, you will have a LangChain agent that requests all OAuth credentials through Alter's proxy. The agent code itself does not change — LangChain's tool calling mechanism uses standard HTTP headers for authentication, which means swapping the credential source requires only updating the token endpoint the tools point to.

We will use a concrete example: a LangChain agent with three tools — a GitHub tool that reads PR descriptions, a Slack tool that posts messages, and a Google Calendar tool that reads upcoming events. All three use OAuth. By the end, all three will get their tokens from Alter instead of directly from the providers.

Prerequisites: Python 3.11+, LangChain 0.2+, an Alter account (free tier works), and existing OAuth app credentials for GitHub, Slack, and Google registered with Alter.

Step 1: Register your agent with Alter

Log into the Alter dashboard and navigate to Agents → Register New Agent. Give the agent a name that matches how you think about it in your codebase — "pr-review-agent" or "daily-standup-bot", something that makes the audit log readable. Alter generates an agent ID and a registration token.

The agent ID is a UUID you will pass in HTTP headers so Alter can attribute token events to this specific agent. The registration token is used once to complete the setup — after that, the agent authenticates with a short-lived client credential that Alter rotates automatically.

# Store these in your secrets manager, not in code
ALTER_AGENT_ID=agt_01hx4k7m9n2p3q5r6s7t8u9v0w
ALTER_AGENT_SECRET=sec_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
ALTER_PROXY_URL=https://proxy.alterai.org

Step 2: Define your scope policy

In the Alter dashboard under Policies, create a policy for this agent. You can also define it in YAML and push it via the API if you prefer infrastructure-as-code. For this walkthrough we'll use the dashboard.

# alter-policy.yaml — pushed via API or defined in dashboard
agent_policy:
  agent_id: agt_01hx4k7m9n2p3q5r6s7t8u9v0w
  integrations:
    github:
      max_scope: [repo:read, pr:read]
      deny_scope: [repo:write, admin:org, delete_repo]
    slack:
      max_scope: [channels:read, chat:write]
      deny_scope: [admin, channels:manage, users:read]
    google:
      max_scope:
        - https://www.googleapis.com/auth/calendar.readonly
      deny_scope:
        - https://www.googleapis.com/auth/calendar
        - https://mail.google.com/
  token_ttl: 3600
  per_task_downscope: false

The max_scope is the ceiling — even if your tool code requests write access, Alter caps it at read. The deny_scope list is the hard block. The token_ttl: 3600 means tokens expire after one hour regardless of whether they have been used. Save the policy. It activates immediately.
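To make the ceiling-and-blocklist semantics concrete, here is a minimal sketch of how a scope request could be evaluated against this policy. The function name and the evaluation order (deny first, then cap at the ceiling) are illustrative assumptions, not Alter's actual implementation.

```python
# Illustrative sketch of max_scope / deny_scope evaluation.
# The deny-first-then-cap order is an assumption, not Alter's
# documented algorithm.

def downscope(requested: list[str],
              max_scope: list[str],
              deny_scope: list[str]) -> list[str]:
    """Return the scopes the proxy would actually grant."""
    allowed = []
    for scope in requested:
        if scope in deny_scope:
            continue                   # hard block: never granted
        if scope in max_scope:
            allowed.append(scope)      # within the ceiling
        # anything above the ceiling is silently dropped
    return allowed

# A tool that over-asks gets capped at the policy ceiling:
granted = downscope(
    requested=["repo:read", "pr:read", "repo:write"],
    max_scope=["repo:read", "pr:read"],
    deny_scope=["repo:write", "admin:org", "delete_repo"],
)
# granted == ["repo:read", "pr:read"]
```

This is also the behavior behind the scope_downscoped: true flag you will see in Step 5: the requested scope and the granted scope differ, and both are logged.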

Step 3: Install the Alter SDK (optional)

Alter ships a thin Python SDK that wraps the proxy token endpoint. It is optional — the proxy is a standard OAuth token endpoint, so you can use any OAuth client library. The SDK adds automatic retry on 401, token caching to avoid unnecessary proxy calls within the TTL window, and a context manager that associates tokens with your current task context.

pip install alter-sdk

import os

from alter import AlterClient

alter = AlterClient(
    agent_id=os.environ["ALTER_AGENT_ID"],
    agent_secret=os.environ["ALTER_AGENT_SECRET"],
    proxy_url=os.environ["ALTER_PROXY_URL"],
)
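If you skip the SDK, any OAuth client works, since the proxy speaks the standard OAuth 2.0 token protocol. The sketch below builds a client-credentials request with only the standard library; the /oauth/token path and the provider and task_context parameter names are assumptions about Alter's API, so check the proxy docs before relying on them.

```python
import json
import os
import urllib.parse
import urllib.request

def build_token_request(proxy_url: str, provider: str,
                        scope: list[str],
                        task_context: str) -> tuple[str, bytes]:
    """Build a client-credentials token request for the Alter proxy.

    The /oauth/token path and the provider/task_context parameter
    names are illustrative assumptions, not documented API.
    """
    url = f"{proxy_url}/oauth/token"
    body = urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": os.environ["ALTER_AGENT_ID"],
        "client_secret": os.environ["ALTER_AGENT_SECRET"],
        "provider": provider,
        "scope": " ".join(scope),   # space-delimited per RFC 6749
        "task_context": task_context,
    }).encode()
    return url, body

def get_token_raw(proxy_url: str, provider: str,
                  scope: list[str], task_context: str) -> str:
    """POST the request and return the access token from the JSON reply."""
    url, body = build_token_request(proxy_url, provider, scope, task_context)
    with urllib.request.urlopen(urllib.request.Request(url, data=body)) as resp:
        return json.load(resp)["access_token"]
```

Everything the SDK adds on top of this (401 retry, TTL-window caching, task-context association) is convenience, not protocol.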

Step 4: Replace token fetching in your LangChain tools

The change is localized to wherever your tools fetch their OAuth tokens. LangChain's standard GitHub tool, for example, takes a github_access_token parameter. Instead of reading that from an environment variable or a secrets manager directly, you fetch it from Alter.

# Before: token from environment
github_token = os.environ["GITHUB_ACCESS_TOKEN"]

# After: token from Alter proxy
github_token = alter.get_token(
    provider="github",
    scope=["repo:read", "pr:read"],
    task_context=f"review-pr-{pr_number}"
)

The task_context parameter is optional but valuable. It attaches a human-readable label to every token event in the audit log. When you look at the audit log later, you see "token minted for task review-pr-4521" rather than just a UUID. It takes 30 seconds to add and makes post-incident analysis dramatically easier.

The same pattern applies to the Slack and Google Calendar tools:

slack_token = alter.get_token(
    provider="slack",
    scope=["channels:read", "chat:write"],
    task_context="post-standup-summary"
)

calendar_token = alter.get_token(
    provider="google",
    scope=["https://www.googleapis.com/auth/calendar.readonly"],
    task_context="fetch-todays-events"
)
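One detail worth getting right: fetch the token inside the tool function, not at module import time, so every invocation picks up a fresh (or TTL-cached) token. The sketch below keeps the LangChain dependency out so it stays self-contained; make_github_pr_reader, its token_getter parameter, and the stubbed tool body are illustrative, and in a real agent you would wrap the inner function with LangChain's @tool decorator or pass the token into the toolkit you use.

```python
from typing import Callable

def make_github_pr_reader(token_getter: Callable[[], str]):
    """Return a tool function that fetches its OAuth token lazily.

    `token_getter` stands in for a call like
    alter.get_token(provider="github", scope=["repo:read", "pr:read"]);
    the tool body is stubbed so the lazy-fetch pattern stays visible.
    """
    def read_pr_description(pr_number: int) -> str:
        token = token_getter()   # minted (or served from cache) per call
        # ...call the GitHub API with `token` here...
        return f"fetched PR #{pr_number} with token {token[:8]}..."
    return read_pr_description

# Wiring it to Alter (alter.get_token as in Step 4):
# tool_fn = make_github_pr_reader(
#     lambda: alter.get_token(provider="github", scope=["repo:read", "pr:read"])
# )
```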

Step 5: Verify the proxy in action

Run your agent on a test task and watch the Alter dashboard's Live Events view. You should see token mint events appearing as each tool executes — one event per provider, each showing the agent ID, the task context, the scope granted, and the TTL. If you see a scope broader than what your policy allows, the proxy has already downscoped it — the event will show a scope_downscoped: true flag and log both the requested scope and the granted scope.

To confirm the TTL is working, wait 61 minutes after your test run and check the Tokens view in the dashboard. The tokens from the test run should show a status of expired and are no longer valid for API calls. If you try to use one manually via curl, the provider returns a 401.

Step 6: Wire up the audit log export

Once the proxy is working, connect Alter's audit event stream to wherever your security logs live. In Settings → Integrations, configure a webhook to your SIEM or logging endpoint. Alter sends a JSON event for every token mint, token use, and token revocation.

If you use Datadog, the native Datadog integration sends events directly to your Datadog account using the Logs API — no webhook middleware required. For Splunk, configure an HEC (HTTP Event Collector) token and paste it into the Splunk integration settings. Events start flowing within seconds.

The event schema is consistent across all providers, which means you can build a single dashboard that shows "all token activity for agent X" regardless of whether the activity is against GitHub, Slack, or Google. That was not possible before because each provider has its own audit log format and access requirements.
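That consistency is what makes a single cross-provider dashboard cheap to build. The sketch below normalizes a token event into one flat record for a SIEM or logging pipeline; the field names (event_type, scope_granted, and so on) are plausible guesses at Alter's schema, not the documented one, so adjust them to the payloads you actually see on your webhook.

```python
def normalize_event(event: dict) -> dict:
    """Flatten an Alter token event for a SIEM/dashboard pipeline.

    Field names are assumptions, not Alter's documented schema.
    """
    return {
        "when": event["timestamp"],
        "what": event["event_type"],            # mint | use | revoke
        "agent": event["agent_id"],
        "provider": event["provider"],
        "task": event.get("task_context", ""),
        "scope": " ".join(event.get("scope_granted", [])),
        "downscoped": event.get("scope_downscoped", False),
    }

# A hypothetical mint event, matching the fields described in Step 5:
sample = {
    "timestamp": "2024-06-01T12:00:00Z",
    "event_type": "mint",
    "agent_id": "agt_01hx4k7m9n2p3q5r6s7t8u9v0w",
    "provider": "github",
    "task_context": "review-pr-4521",
    "scope_granted": ["repo:read", "pr:read"],
    "scope_downscoped": True,
}
row = normalize_event(sample)
```

The same normalize_event handles GitHub, Slack, and Google events identically, which is exactly the property you lose when reading each provider's native audit log.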

Common questions from the first integration

What if the proxy is down? Alter's proxy runs in three availability zones with automatic failover. The SLA is 99.95%. If the proxy is unavailable, token requests will fail with a 503 — your agent will see an authentication error. For workloads that cannot tolerate even brief interruptions, we support a local proxy mode that caches a set of rotating short-lived tokens on your infrastructure, refreshed every 30 minutes from the Alter cloud.

Does this add latency? The proxy adds an average of 18ms per token request in our benchmarks. Since tokens are cached for the TTL duration, most agent runs involve one token request per provider — typically three to five requests per agent invocation. The total latency overhead is 60-90ms per run. If your agent is doing any network I/O at all, this is not measurable in end-to-end task duration.
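The caching behavior is what keeps the overhead to one proxy round trip per provider per run. A minimal TTL cache looks like the following; it mirrors the caching the SDK is described as doing, but the cache-per-(provider, scope) keying is an assumption, and the clock parameter exists only so the expiry logic is testable.

```python
import time
from typing import Callable

class TokenCache:
    """Cache tokens per (provider, scope) for the policy TTL.

    A sketch of the SDK's described caching, not its source. `clock`
    is injectable so expiry can be exercised without waiting an hour.
    """
    def __init__(self, mint: Callable[[str, tuple], str],
                 ttl: int = 3600,
                 clock: Callable[[], float] = time.monotonic):
        self._mint, self._ttl, self._clock = mint, ttl, clock
        self._cache: dict[tuple, tuple[str, float]] = {}

    def get(self, provider: str, scope: list[str]) -> str:
        key = (provider, tuple(sorted(scope)))
        hit = self._cache.get(key)
        if hit and self._clock() - hit[1] < self._ttl:
            return hit[0]                     # still within TTL: no proxy call
        token = self._mint(provider, key[1])  # cold or expired: mint via proxy
        self._cache[key] = (token, self._clock())
        return token
```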

What happens when a token expires mid-task? The SDK handles automatic re-mint. If the provider rejects a token with 401, the SDK detects the 401, requests a fresh token from the proxy, and retries the original request transparently. Your tool code sees a successful API call. The re-mint is logged as a separate event in the audit log, distinct from the original mint.
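The re-mint-on-401 behavior can be sketched as a small wrapper: try once, refresh, retry once, and let a second failure propagate. This is the typical structure of such clients and a sketch of the behavior the SDK is described as having, not its source; the Unauthorized exception stands in for whatever your HTTP layer raises on a 401.

```python
from typing import Callable

class Unauthorized(Exception):
    """Stand-in for a provider 401 response."""

def with_auto_remint(call: Callable[[str], object],
                     mint: Callable[[], str]) -> object:
    """Run `call(token)`, re-minting the token once on a 401.

    The first 401 triggers a fresh mint and a single transparent
    retry; a second 401 bubbles up to the caller.
    """
    token = mint()
    try:
        return call(token)
    except Unauthorized:
        token = mint()          # logged as a separate mint event
        return call(token)      # a second failure propagates
```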

What you've done and what's next

At this point, your LangChain agent is getting all OAuth tokens from Alter. Every token is short-lived, scoped to the policy ceiling, attributed to the specific agent, and logged with task context. If you need to revoke all tokens for this agent — for a security incident or a decommission — one API call to DELETE /agents/{agent_id}/tokens revokes every active token across all providers simultaneously.
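That kill switch is a single authenticated DELETE. The sketch below builds it with the standard library; only the DELETE /agents/{agent_id}/tokens path comes from the endpoint named above, while the api.alterai.org base URL and the Bearer header shape are assumptions.

```python
import os
import urllib.request

def build_revoke_request(agent_id: str) -> urllib.request.Request:
    """Build the DELETE that revokes every active token for an agent.

    The base URL and Bearer auth are assumptions; only the
    /agents/{agent_id}/tokens path is taken from the endpoint above.
    """
    url = f"https://api.alterai.org/agents/{agent_id}/tokens"
    return urllib.request.Request(
        url,
        method="DELETE",
        headers={"Authorization": f"Bearer {os.environ['ALTER_AGENT_SECRET']}"},
    )

# In an incident: urllib.request.urlopen(build_revoke_request(agent_id))
# revokes every active token for that agent across all providers at once.
```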

The next step is adding more agents and more providers. The pattern is identical: register the agent, define the policy, update the token fetch calls. After three or four integrations, the policy-as-code approach becomes the natural default and the "credential setup" phase of a new agent deployment takes about 10 minutes rather than an afternoon of provider-by-provider OAuth setup.