# Token Analytics
Token Analytics provides detailed tracking of AI model usage, costs, and trends across your UniSync instance.
## Overview
Every AI model call in UniSync is tracked, including:
- Tokens consumed (input + output)
- Cost per call
- Model used (GPT-4, Gemini, Claude, etc.)
- Pipeline step that made the call
- Associated agent and environment
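The tracked fields above can be modeled as a simple record. A minimal sketch follows; the class and field names are illustrative, not UniSync's actual schema:

```python
from dataclasses import dataclass

@dataclass
class TokenUsageRecord:
    """One tracked AI model call (field names are illustrative)."""
    input_tokens: int    # tokens sent in the prompt
    output_tokens: int   # tokens generated by the model
    cost_usd: float      # cost of this single call
    model: str           # e.g. "gpt-4", "gemini", "claude"
    pipeline_step: str   # e.g. "content_generation"
    agent: str           # agent that made the call
    environment: str     # site/environment the call belongs to

    @property
    def total_tokens(self) -> int:
        return self.input_tokens + self.output_tokens

record = TokenUsageRecord(1200, 800, 0.06, "gpt-4",
                          "content_generation", "writer", "production")
```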
## Dashboard
The Token Analytics page shows:
### Summary Cards
- Total Tokens — Lifetime token consumption
- Total Cost — Cumulative spending across all providers
- Average Cost per Article — Cost efficiency metric
- Active Models — Currently used AI models
### Usage by Model
Breakdown of token usage per AI model:
- Input tokens vs. output tokens
- Cost per model
- Trend over time
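The per-model breakdown above amounts to a group-by over individual call records. A minimal sketch, assuming each call is a `(model, input_tokens, output_tokens, cost_usd)` tuple (sample data is made up):

```python
from collections import defaultdict

# (model, input_tokens, output_tokens, cost_usd) — illustrative sample calls
calls = [
    ("gpt-4",  1200, 800, 0.060),
    ("gemini", 1000, 500, 0.010),
    ("gpt-4",   900, 600, 0.045),
]

# Aggregate input/output tokens and cost per model
by_model = defaultdict(lambda: {"input": 0, "output": 0, "cost": 0.0})
for model, inp, out, cost in calls:
    by_model[model]["input"] += inp
    by_model[model]["output"] += out
    by_model[model]["cost"] += cost
```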
### Usage by Pipeline Step
See which phases consume the most tokens:
- Content generation (typically the largest)
- Source research
- Content strategy generation
- Keyword classification
### Usage by Provider
Aggregate view by provider (OpenAI, Google, Anthropic).
### Cost Trends
Time-series charts showing:
- Daily/weekly/monthly spending
- Cost per article trend
- Token consumption trend
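The time-series charts above come from bucketing call costs by day (or week/month). A minimal daily-bucketing sketch with made-up sample data:

```python
from collections import defaultdict
from datetime import date

# (day, cost_usd) per call — illustrative sample data
calls = [
    (date(2024, 5, 1), 0.06),
    (date(2024, 5, 1), 0.02),
    (date(2024, 5, 2), 0.09),
]

# Sum cost per calendar day, then sort into a time-ordered series
daily_cost = defaultdict(float)
for day, cost in calls:
    daily_cost[day] += cost

series = sorted(daily_cost.items())  # points for the daily spending chart
```

Weekly or monthly views follow the same pattern with a coarser bucket key (e.g. `day.isocalendar()[:2]` or `(day.year, day.month)`).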
## Model Pricing
UniSync automatically syncs model pricing data every 24 hours. The `model_pricing` table maintains current rates for:
- Input token price (per 1K tokens)
- Output token price (per 1K tokens)
- Model capabilities and limits
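Given per-1K-token rates like those stored above, the cost of a single call is straightforward arithmetic. The rates in this sketch are illustrative, not actual provider pricing:

```python
def call_cost(input_tokens: int, output_tokens: int,
              input_price_per_1k: float, output_price_per_1k: float) -> float:
    """Cost of one call from per-1K-token rates (rates are illustrative)."""
    return (input_tokens / 1000) * input_price_per_1k \
         + (output_tokens / 1000) * output_price_per_1k

# 1200 input + 800 output tokens at $0.03/$0.06 per 1K tokens
cost = call_cost(1200, 800, input_price_per_1k=0.03, output_price_per_1k=0.06)
```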
## Filtering
Filter analytics by:
- Date range — Custom time periods
- Environment — Per-site breakdown
- Agent — Per-agent costs
- Model — Specific AI model
- Provider — OpenAI, Google, Anthropic
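Conceptually, each filter is a predicate applied to the call records before aggregation. A minimal sketch; the dict keys and helper function are hypothetical, not UniSync's API:

```python
from datetime import date

# Call records as dicts — keys are illustrative, not UniSync's schema
calls = [
    {"day": date(2024, 5, 1), "environment": "blog", "agent": "writer",
     "model": "gpt-4", "provider": "OpenAI", "cost": 0.06},
    {"day": date(2024, 5, 3), "environment": "shop", "agent": "seo",
     "model": "gemini", "provider": "Google", "cost": 0.01},
]

def filter_calls(calls, start=None, end=None, **fields):
    """Keep calls inside [start, end] whose fields match all given values."""
    out = []
    for c in calls:
        if start and c["day"] < start:
            continue
        if end and c["day"] > end:
            continue
        if any(c.get(k) != v for k, v in fields.items()):
            continue
        out.append(c)
    return out

# Example: May spending on OpenAI models only
openai_may = filter_calls(calls, start=date(2024, 5, 1),
                          end=date(2024, 5, 31), provider="OpenAI")
```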
## Related
- API Management — Manage API credentials
- AutoPilot — Monitor automated costs
- Token Analytics API — API endpoints