Token Pricing

Token pricing is the most natural fit for LLM-backed endpoints where the true cost scales with prompt size or model usage.

OpenAI Example

import express from 'express';
import { createTokenPricing, createX402Server } from '@dexterai/x402/server';
 
const app = express();
const server = createX402Server({
  payTo: 'YourSolanaAddress...', // merchant wallet that receives the payment
  network: 'solana:5eykt4UsFv8P8NJdTREpY1vzqKqZKvdp', // CAIP-2 identifier for Solana mainnet
  facilitatorUrl: 'https://x402.dexter.cash',
});
 
const pricing = createTokenPricing({
  model: 'gpt-4o-mini', // named models use built-in rates; see the custom example below for overrides
});
 
app.use(express.json());
 
app.post('/api/chat', async (req, res) => {
  const { prompt, systemPrompt } = req.body;
  const paymentSig = req.headers['payment-signature'];
 
  if (!paymentSig) {
    // No payment attached yet: price the request and respond with 402 Payment Required.
    const quote = pricing.calculate(prompt, systemPrompt);
    const requirements = await server.buildRequirements({
      amountAtomic: quote.amountAtomic,
      resourceUrl: req.originalUrl,
      description: `${quote.model}: ${quote.inputTokens.toLocaleString()} input tokens`,
    });
 
    // Advertise the requirements and quote hash so the client can pay and retry.
    res.setHeader('PAYMENT-REQUIRED', server.encodeRequirements(requirements));
    res.setHeader('X-Quote-Hash', quote.quoteHash);
    return res.status(402).json({
      model: quote.model,
      inputTokens: quote.inputTokens,
      usdAmount: quote.usdAmount,
      tier: quote.tier,
    });
  }
 
  // The quote hash binds the quoted price to the exact prompt; if the prompt
  // changed since the quote was issued, the client must re-quote.
  const quoteHash = req.headers['x-quote-hash'];
  if (!pricing.validateQuote(prompt, quoteHash)) {
    return res.status(400).json({ error: 'Prompt changed, re-quote required' });
  }
 
  // Verify and settle the payment through the facilitator.
  const result = await server.settlePayment(paymentSig);
  if (!result.success) {
    return res.status(402).json({ error: result.errorReason });
  }

  // Payment settled. In a real endpoint, the model call runs here and its
  // output is returned alongside the settlement details.
  res.json({ ok: true, transaction: result.transaction });
});
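
For orientation, here is a minimal sketch of the matching client flow. The headers, status codes, and response fields mirror the server above; the endpoint URL and the signPayment helper (standing in for whatever wallet integration actually produces the payment signature) are hypothetical.

// Hypothetical wallet helper: turns encoded requirements into a payment signature.
declare function signPayment(encodedRequirements: string): Promise<string>;

async function payAndChat(prompt: string): Promise<unknown> {
  const url = 'https://api.example.com/api/chat'; // hypothetical endpoint
  const init = {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ prompt }),
  };

  // First request carries no payment, so we expect a 402 plus a quote.
  const quoteRes = await fetch(url, init);
  if (quoteRes.status !== 402) return quoteRes.json();

  const requirements = quoteRes.headers.get('PAYMENT-REQUIRED');
  const quoteHash = quoteRes.headers.get('X-Quote-Hash');
  if (!requirements || !quoteHash) throw new Error('Malformed 402 response');

  // Pay, then retry with the signature attached and the quote hash echoed back.
  const paymentSig = await signPayment(requirements);
  const paidRes = await fetch(url, {
    ...init,
    headers: { ...init.headers, 'payment-signature': paymentSig, 'x-quote-hash': quoteHash },
  });
  return paidRes.json();
}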

Custom Model Example

import { createTokenPricing } from '@dexterai/x402/server';
 
const pricing = createTokenPricing({
  model: 'claude-3-sonnet',
  inputRate: 3.0,   // assumed units: USD per 1M input tokens (matches the Claude 3 Sonnet list price)
  outputRate: 15.0, // assumed units: USD per 1M output tokens
  maxTokens: 4096,  // output-token cap assumed when quoting worst-case cost
});
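
To sanity-check a quote, assuming the rates above are USD per million tokens and that the systemPrompt argument to calculate is optional, the arithmetic is straightforward:

// Back-of-envelope check (assumes rates are USD per 1M tokens):
//   2,000 input tokens  * $3  / 1M = $0.006
//   4,096 output tokens * $15 / 1M = $0.06144 (worst case at maxTokens)
const quote = pricing.calculate('Summarize this contract in three bullets.');
console.log(quote.usdAmount, quote.inputTokens, quote.quoteHash);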

When This Is Better Than Generic Dynamic Pricing

Use token pricing when:

  • the endpoint wraps an LLM
  • model cost is the dominant pricing factor
  • buyers benefit from seeing model and token context in the quote

Prefer Dynamic Pricing when the dominant cost driver is something other than token counts.
