Integrate MarginFront with Anthropic
This recipe shows how to add MarginFront usage tracking to an app that calls the Anthropic API (Claude models). Same pattern as the OpenAI recipe, but Anthropic names its fields slightly differently.

What this does for you: You call Claude. MarginFront records the model, token counts, and customer. You see costs on your dashboard. If you have a pricing plan, your customer gets billed automatically.

Prerequisites
- A MarginFront API key (`mf_sk_...`). Get one from Developer Zone > API Keys in the dashboard.
- An Anthropic API key.
- An agent and signal already created in the dashboard (or they’ll be auto-created on first event).
Complete working example
Copy-paste and run. The example receives a question from a customer, asks Claude for an answer, sends the answer back, and tells MarginFront what happened.

Where the data comes from: field mapping
| What MarginFront needs | Where to get it from Anthropic | Example value |
|---|---|---|
| `model` | `response.model` | `"claude-sonnet-4-20250514"` |
| `inputTokens` | `response.usage.input_tokens` | `1024` |
| `outputTokens` | `response.usage.output_tokens` | `512` |
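Putting that mapping to work, the complete flow might look like the sketch below. The Anthropic call uses the official `@anthropic-ai/sdk` shape; the MarginFront side (`logEvent`, and the `agentCode` / `signalCode` values) is an assumption inferred from the field names in this guide, so adjust it to your actual SDK. The clients are passed in as parameters to keep the sketch SDK-agnostic.

```javascript
// Sketch: answer a customer's question with Claude, then report usage to
// MarginFront. `anthropic` is an Anthropic SDK client; `marginfront` is
// assumed to expose logEvent(event). Adjust names to your actual SDKs.
async function answerAndTrack(anthropic, marginfront, { question, customerId }) {
  // 1. Ask Claude for an answer.
  const response = await anthropic.messages.create({
    model: "claude-sonnet-4-20250514",
    max_tokens: 1024,
    messages: [{ role: "user", content: question }],
  });

  // 2. Tell MarginFront what happened. Left side: what MarginFront expects;
  //    right side: where it lives on Anthropic's response.
  await marginfront.logEvent({
    agentCode: "support-bot",        // assumed agent code
    signalCode: "chat-completion",   // assumed signal code
    customerExternalId: customerId,
    model: response.model,
    inputTokens: response.usage.input_tokens,
    outputTokens: response.usage.output_tokens,
  });

  // 3. Send the answer back to the customer.
  return response.content[0].text;
}
```

In the default fire-and-forget mode the `logEvent` call should return immediately, so the `await` is harmless either way.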
The naming difference from OpenAI
This is the one thing that trips people up:

| Provider | Input tokens field | Output tokens field |
|---|---|---|
| OpenAI | `usage.prompt_tokens` | `usage.completion_tokens` |
| Anthropic | `usage.input_tokens` | `usage.output_tokens` |
MarginFront always expects `inputTokens` and `outputTokens` (camelCase), regardless of provider. You just need to read from the right field on the provider side.
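If your app talks to both providers, a small helper keeps the rename in one place. A sketch; the output matches the camelCase fields above:

```javascript
// Normalize a provider usage object into MarginFront's camelCase fields.
// Anthropic: usage.input_tokens / usage.output_tokens
// OpenAI:    usage.prompt_tokens / usage.completion_tokens
function toMarginFrontUsage(usage) {
  return {
    inputTokens: usage.input_tokens ?? usage.prompt_tokens,
    outputTokens: usage.output_tokens ?? usage.completion_tokens,
  };
}
```

Both `toMarginFrontUsage({ input_tokens: 1024, output_tokens: 512 })` and `toMarginFrontUsage({ prompt_tokens: 1024, completion_tokens: 512 })` yield `{ inputTokens: 1024, outputTokens: 512 }`.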
What happens if MarginFront is down?
Same as with OpenAI: nothing bad. The SDK runs in fire-and-forget mode by default:

- MarginFront unreachable? The event goes into a local retry buffer and is retried automatically.
- Validation error? Warning logged, event dropped. Your customer still got their answer.
- Your server crashes? That one event is lost. Acceptable for most use cases.
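Conceptually, the default behavior looks something like this. This is an illustration of the semantics described above, not the SDK's actual implementation:

```javascript
// Illustration only: fire-and-forget semantics for one event.
// `send` delivers an event to MarginFront and may throw.
function makeFireAndForgetLogger(send, { buffer = [], log = console.warn } = {}) {
  return async function logEvent(event) {
    try {
      await send(event); // normal case: event delivered
    } catch (err) {
      if (err.name === "ValidationError") {
        // Permanent failure (illustrative check): warn, then drop the event.
        log(`MarginFront event dropped: ${err.message}`);
      } else {
        // Transient failure (e.g. MarginFront unreachable): queue for retry.
        buffer.push(event);
      }
    }
    // Either way, nothing is thrown at the caller.
  };
}
```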
Streaming responses
Anthropic’s streaming works differently from OpenAI’s. The `message_stop` event includes the final message with usage data:
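With the official SDK's streaming helper you can forward text as it arrives and read usage once the stream finishes. A sketch: `stream` is what `anthropic.messages.stream(...)` returns, and `marginfront.logEvent` is the same assumed MarginFront call as elsewhere in this guide:

```javascript
// Sketch: stream the answer to the user, then log usage after the stream ends.
async function streamAndTrack(stream, marginfront, customerId) {
  // Forward text chunks to the user as they arrive.
  stream.on("text", (chunk) => process.stdout.write(chunk));

  // finalMessage() resolves once the stream completes (message_stop),
  // returning the full message including usage.
  const message = await stream.finalMessage();

  await marginfront.logEvent({
    customerExternalId: customerId,
    model: message.model,
    inputTokens: message.usage.input_tokens,
    outputTokens: message.usage.output_tokens,
  });
}
```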
Extended thinking (Claude models with thinking enabled)
If you use Claude’s extended thinking feature, the input and output token counts in `response.usage` already include thinking tokens. You don’t need to do anything special — just pass them as-is:
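In other words, the event is built exactly as in the non-thinking case:

```javascript
// Thinking tokens are already folded into Anthropic's usage counts,
// so the mapping is unchanged.
function usageEvent(response) {
  return {
    model: response.model,
    inputTokens: response.usage.input_tokens,   // already includes thinking
    outputTokens: response.usage.output_tokens, // already includes thinking
  };
}
```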
Using multiple Claude models
If your agent picks different models for different tasks (e.g., Haiku for fast summaries, Sonnet for detailed analysis), just pass the model that was actually used: read it from `response.model` rather than hard-coding it.

Error handling (if you want more control)
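A sketch of what that can look like. The `fireAndForget` constructor option is an assumption about the MarginFront SDK, so check your client's actual configuration surface:

```javascript
// Hypothetical client setup; the fireAndForget option name is an assumption:
//   const marginfront = new MarginFront({ apiKey: "mf_sk_...", fireAndForget: false });

// With fire-and-forget off, logEvent rejects on failure, so you can react:
async function trackWithErrors(marginfront, event) {
  try {
    await marginfront.logEvent(event);
  } catch (err) {
    // Your call now: alert, re-queue, or log with request context.
    console.error("MarginFront tracking failed:", err.message);
  }
}
```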
Turning off fire-and-forget makes tracking failures visible to your code instead of being retried or dropped silently.

Auto-provisioning note
If you log an event for a `customerExternalId` or `agentCode` that MarginFront hasn’t seen before, it will auto-create a minimal customer or agent record. Handy for prototyping, but it can mask typos. If events show up under an unexpected name, check your IDs in the dashboard.
