Integrate MarginFront with OpenAI
This recipe shows how to add MarginFront usage tracking to an Express.js app that calls OpenAI. By the end, every OpenAI call your agent makes will automatically show up in your MarginFront dashboard with cost and usage data.

What this does for you: you call OpenAI; MarginFront records which model was used, how many tokens were consumed, and which customer it was for; you see the cost on your dashboard; and if you have a pricing plan set up, your customer gets billed automatically.

Prerequisites
- A MarginFront API key (mf_sk_...). Get one from Developer Zone > API Keys in the dashboard.
- An OpenAI API key.
- An agent and signal already created in the dashboard (or they’ll be auto-created on first event — see note below).
Complete working example
This is a full Express.js endpoint you can copy-paste and run. It receives a question from a customer, asks OpenAI for an answer, sends the answer back, and then tells MarginFront what happened.

Where the data comes from: field mapping
Every value you pass to MarginFront comes directly from OpenAI’s response object. Here’s where each one lives:

| What MarginFront needs | Where to get it from OpenAI | Example value |
|---|---|---|
| model | response.model | "gpt-4o" |
| inputTokens | response.usage.prompt_tokens | 523 |
| outputTokens | response.usage.completion_tokens | 117 |
The remaining fields (customerExternalId, agentCode, signalName, modelProvider) come from your own code, not from OpenAI.
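Putting the table above together, here is a sketch of the complete endpoint. The MarginFront SDK’s actual API is assumed here (a client with a track(event) method), and agentCode "support-bot" and signalName "openai-call" are illustrative values — substitute your own. The handler takes its clients as arguments so it is easy to test and wire up:

```javascript
// Sketch of the /ask endpoint. `openai` is an OpenAI SDK client and
// `marginfront` is a hypothetical MarginFront client with a track(event)
// method — both passed in so the handler is easy to test and reuse.
function makeAskHandler(openai, marginfront) {
  return async function askHandler(req, res) {
    const { question, customerId } = req.body;

    // 1. Ask OpenAI for an answer.
    const response = await openai.chat.completions.create({
      model: 'gpt-4o',
      messages: [{ role: 'user', content: question }],
    });

    // 2. Answer the customer first — tracking never blocks the response.
    res.json({ answer: response.choices[0].message.content });

    // 3. Report usage to MarginFront (fire-and-forget by default).
    marginfront.track({
      customerExternalId: customerId,                 // your customer's own ID
      agentCode: 'support-bot',                       // illustrative agent code
      signalName: 'openai-call',                      // illustrative signal name
      modelProvider: 'openai',
      model: response.model,                          // e.g. "gpt-4o"
      inputTokens: response.usage.prompt_tokens,      // e.g. 523
      outputTokens: response.usage.completion_tokens, // e.g. 117
    });
  };
}

module.exports = { makeAskHandler };
```

In your Express app you would mount it with app.post('/ask', makeAskHandler(openai, marginfront)).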
What happens if MarginFront is down?
Nothing bad. The SDK runs in fire-and-forget mode by default:

- MarginFront unreachable? The event goes into a local retry buffer. The SDK retries automatically in the background. Your customer never notices.
- MarginFront rejects the event? (e.g., validation error) The SDK logs a warning to the console. The event is dropped — but your customer still got their answer.
- Your server crashes before the event is sent? That one event is lost. This is rare and acceptable for most use cases. If you need guaranteed delivery, set fireAndForget: false and handle errors yourself.
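As a sketch of that guaranteed-delivery path — assuming the SDK accepts a fireAndForget option at construction (e.g. new MarginFront({ apiKey, fireAndForget: false })) and that track() then returns a promise that rejects on failure; check your SDK’s actual API:

```javascript
// With fire-and-forget disabled, you own the error handling. This helper
// awaits the (assumed) promise from track() and logs failures instead of
// letting them vanish. The customer response should already have been sent.
async function trackOrWarn(marginfront, event) {
  try {
    await marginfront.track(event);   // resolves once MarginFront accepts it
    return true;
  } catch (err) {
    // Log, alert, increment a metric — whatever your ops setup needs.
    console.error('MarginFront tracking failed:', err);
    return false;
  }
}

module.exports = { trackOrWarn };
```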
Error handling (if you want more control)
The default fire-and-forget mode is fine for most apps. But if you want to know when tracking fails (for logging, alerting, etc.), turn it off by setting fireAndForget: false and handle tracking failures yourself.

Streaming responses
If you use OpenAI’s streaming mode (stream: true), you won’t get token counts in the stream chunks. You need to wait for the stream to finish and read the usage field from the final response:
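A sketch of that pattern, as a helper that consumes the stream, forwards text as it arrives, and keeps the usage from the final chunk (with include_usage enabled, the OpenAI Node SDK emits one last chunk whose usage field is populated and whose choices array is empty):

```javascript
// Consume a streamed chat completion and collect token counts at the end.
// `stream` is the async iterable returned by openai.chat.completions.create
// with stream: true; `onToken` receives each text fragment as it arrives.
async function streamAndCollectUsage(stream, onToken) {
  let usage = null;
  let model = null;
  for await (const chunk of stream) {
    model = chunk.model ?? model;
    const delta = chunk.choices?.[0]?.delta?.content;
    if (delta) onToken(delta);            // forward text to the client live
    if (chunk.usage) usage = chunk.usage; // only the final chunk carries usage
  }
  // usage.prompt_tokens / usage.completion_tokens feed inputTokens/outputTokens.
  return { model, usage };
}

module.exports = { streamAndCollectUsage };
```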
Note: you must set stream_options: { include_usage: true } or OpenAI won’t include token counts in the streamed response.
Multiple models in one agent
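For example, a sketch of per-request model routing. The routing heuristic and the agentCode/signalName values are illustrative stand-ins, not part of either SDK; the point is that the tracked model comes from response.model, whichever model actually ran:

```javascript
// Route cheap questions to a cheaper model; report whichever model answered.
// The length check is a stand-in for your own routing logic.
function chooseModel(question) {
  return question.length > 200 ? 'gpt-4o' : 'gpt-4o-mini';
}

async function answerAndTrack(openai, marginfront, question, customerId) {
  const response = await openai.chat.completions.create({
    model: chooseModel(question),
    messages: [{ role: 'user', content: question }],
  });
  marginfront.track({
    customerExternalId: customerId,
    agentCode: 'support-bot',      // illustrative agent code
    signalName: 'openai-call',     // illustrative signal name
    modelProvider: 'openai',
    model: response.model,         // whichever model actually ran
    inputTokens: response.usage.prompt_tokens,
    outputTokens: response.usage.completion_tokens,
  });
  return response.choices[0].message.content;
}

module.exports = { chooseModel, answerAndTrack };
```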
If your agent uses different models for different tasks (e.g., GPT-4o for complex questions, GPT-4o-mini for simple ones), just pass whichever model was actually used. MarginFront looks up the cost per model automatically.

Auto-provisioning note
If you log an event for a customerExternalId or agentCode that MarginFront hasn’t seen before, it will auto-create a minimal customer or agent record. This is handy for prototyping but can mask typos — if events show up under a customer name you don’t recognize, double-check your IDs in the dashboard.
