- Full observability (costs, latency, logs)
- Access to 250+ LLMs through one interface
- Automatic fallbacks, retries, and caching
- Budget controls and guardrails
## Quick Start

### Setup

**1. Install Package**
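Install Portkey's provider package for the Vercel AI SDK into your project, e.g. `npm install @portkey-ai/vercel-provider` (the package name is assumed here; the sketches below use it).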
**2. Add Provider in Model Catalog**
- Go to Model Catalog → Add Provider
- Select your provider (OpenAI, Anthropic, Google, etc.)
- Enter your API keys
- Name your provider (e.g., `openai-prod`)

In code, you'll reference this provider as `@openai-prod` (or whatever you named it).
**Model Catalog Guide →** Set up budgets, rate limits, and manage credentials
**3. Get Portkey API Key**

Create your API key at app.portkey.ai/api-keys. Pro tip: attach a default config to your API key to enable features like fallbacks and caching without any code changes.

**4. Use in Code**
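A minimal sketch of the wiring, assuming the `@portkey-ai/vercel-provider` package with its `createPortkey` helper and `chatModel()` method, plus the `@provider-slug/model` string format from your Model Catalog:

```typescript
import { generateText } from 'ai';
import { createPortkey } from '@portkey-ai/vercel-provider';

// Only the Portkey API key lives in code; provider credentials stay in the Model Catalog.
const portkey = createPortkey({
  apiKey: process.env.PORTKEY_API_KEY!,
});

const { text } = await generateText({
  // '@openai-prod' is the provider slug from step 2; swap in your own.
  model: portkey.chatModel('@openai-prod/gpt-4o'),
  prompt: 'What does an AI gateway do?',
});

console.log(text);
```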
## Vercel Functions

Works with all Vercel AI SDK functions (`generateText`, `streamText`, `generateObject`, `streamObject`).

### Tool Calling
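A sketch of tool calling through Portkey using the AI SDK's `tool` helper (AI SDK v4-style options); the `getWeather` tool, its schema, and the model string are illustrative only:

```typescript
import { generateText, tool } from 'ai';
import { z } from 'zod';
import { createPortkey } from '@portkey-ai/vercel-provider';

const portkey = createPortkey({ apiKey: process.env.PORTKEY_API_KEY! });

const { text } = await generateText({
  model: portkey.chatModel('@openai-prod/gpt-4o'),
  tools: {
    // Hypothetical tool, purely for illustration.
    getWeather: tool({
      description: 'Get the current weather for a city',
      parameters: z.object({ city: z.string() }),
      execute: async ({ city }) => ({ city, temperatureC: 21 }),
    }),
  },
  maxSteps: 2, // let the model call the tool, then answer (AI SDK v4-style option)
  prompt: 'What is the weather in Berlin right now?',
});

console.log(text);
```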
## Switching Providers

Change the model string to switch providers.
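For instance, assuming a second provider saved in your catalog as `@anthropic-prod` (the slugs and model names here are placeholders):

```typescript
import { createPortkey } from '@portkey-ai/vercel-provider';

const portkey = createPortkey({ apiKey: process.env.PORTKEY_API_KEY! });

// Same code path; only the model string changes.
const openaiModel = portkey.chatModel('@openai-prod/gpt-4o');
const anthropicModel = portkey.chatModel('@anthropic-prod/claude-3-5-sonnet-20241022');
```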
## Advanced Features

For fallbacks, caching, and load balancing, create a Config and attach it to your API key. The config applies automatically, with no code changes required.
### Fallbacks

Auto-switch to backup models if the primary fails.
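A sketch of the config body, written as a TypeScript object (the same shape can be saved as JSON in the Portkey dashboard); the `strategy`/`targets` fields follow my reading of Portkey's config schema, and the provider slugs and models are placeholders:

```typescript
// Try the primary model first; fall back to the backup if the request fails.
const fallbackConfig = {
  strategy: { mode: 'fallback' },
  targets: [
    { override_params: { model: '@openai-prod/gpt-4o' } },                        // primary
    { override_params: { model: '@anthropic-prod/claude-3-5-sonnet-20241022' } }, // backup
  ],
};
```

Save this as a Config in the Portkey dashboard and attach it to your API key, or pass it at runtime (see Runtime Config below).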
### Load Balancing

Distribute requests across providers.
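A sketch under the same assumptions, splitting traffic by `weight`:

```typescript
// Send roughly 70% of traffic to one provider and 30% to another.
const loadBalanceConfig = {
  strategy: { mode: 'loadbalance' },
  targets: [
    { override_params: { model: '@openai-prod/gpt-4o' }, weight: 0.7 },
    { override_params: { model: '@openai-backup/gpt-4o' }, weight: 0.3 },
  ],
};
```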
### Caching

Reduce costs with response caching.
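A sketch of a caching config; `simple` and `semantic` are the cache modes Portkey supports, with `max_age` in seconds (as I understand the schema):

```typescript
// Serve repeated requests from cache for up to an hour.
const cacheConfig = {
  cache: { mode: 'simple', max_age: 3600 }, // 'semantic' mode is also available
};
```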
### Runtime Config

Pass config inline when needed.
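Assuming `createPortkey` accepts a `config` option (a saved config ID string or an inline object), a runtime config might look like this sketch:

```typescript
import { createPortkey } from '@portkey-ai/vercel-provider';

// Inline config instead of one attached to the API key.
const portkey = createPortkey({
  apiKey: process.env.PORTKEY_API_KEY!,
  config: {
    cache: { mode: 'simple' },
    retry: { attempts: 3 },
  },
});
```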
**Configs Guide →** Fallbacks, retries, caching, load balancing, and more
## Guardrails
Add input/output validation via config.
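A sketch of a config with guardrails attached; the `input_guardrails`/`output_guardrails` field names follow my reading of the config schema, and the IDs are placeholders for guardrails created in the Portkey UI:

```typescript
// Run one guardrail check on incoming prompts and another on model outputs.
const guardrailsConfig = {
  input_guardrails: ['guardrail-id-for-pii-check'],        // placeholder ID
  output_guardrails: ['guardrail-id-for-content-filter'],  // placeholder ID
};
```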
**Guardrails Guide →** PII detection, content filtering, and custom rules
## Observability

All requests are automatically logged with:

- Cost and token usage
- Latency metrics
- Full request/response payloads
- Custom metadata

**Observability Guide →** Track costs, performance, and debug issues

