Dynamic pricing for x402 resources

We've updated our x402 client/server stack to enable dynamic payments, perfect for AI APIs that need to charge on a per-token basis.

The new upto payment scheme

You can now pass a scheme property to verifyPayment() and settlePayment() to control how the payment amount is determined:

  • exact (default) - The client pays the exact amount specified in the payment requirements (see the baseline sketch after this list).
  • upto (new) - The client authorizes up to the specified maximum amount, and the server settles for the actual amount used.
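
For the default exact scheme, a fixed-price endpoint can verify and settle the full amount in a single call. This is only a baseline sketch: the URL, payTo address, price, and facilitator are placeholders matching the example further below.

// Minimal baseline sketch of the default "exact" scheme: the client pays
// exactly the advertised price. URL, payTo, price, and facilitator are placeholders.
const settleResult = await settlePayment({
	resourceUrl: "https://api.example.com/fixed-price-content",
	method: "GET",
	paymentData,
	payTo: "0x1234567890123456789012345678901234567890",
	network: arbitrum,
	scheme: "exact", // the default; can be omitted
	price: "$0.01", // the exact amount the client pays
	facilitator: thirdwebFacilitator,
});

if (settleResult.status !== 200) {
	// No valid payment attached: respond with the 402 payment requirements
	return Response.json(settleResult.responseBody, {
		status: settleResult.status,
		headers: settleResult.responseHeaders,
	});
}

return Response.json({ ok: true });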

Example: charging for AI inference

This new payment scheme is ideal for AI inference APIs, where costs vary with the number of tokens used in each request.

const paymentArgs = {
	resourceUrl: "https://api.example.com/premium-content",
	method: "GET",
	paymentData,
	payTo: "0x1234567890123456789012345678901234567890",
	network: arbitrum,
	scheme: "upto", // enables dynamic pricing
	price: "$0.10", // max payable amount
	facilitator: thirdwebFacilitator,
};

// First verify the payment is valid for the max amount
const verifyResult = await verifyPayment(paymentArgs);

if (verifyResult.status !== 200) {
	return Response.json(verifyResult.responseBody, {
		status: verifyResult.status,
		headers: verifyResult.responseHeaders,
	});
}

// Do the expensive work that requires payment
const { answer, tokensUsed } = await callExpensiveAIModel();

// Now settle the payment based on actual usage
const pricePerTokenUsed = 0.00001; // e.g. $0.00001 per AI model token used
const settleResult = await settlePayment({
	...paymentArgs,
	price: tokensUsed * pricePerTokenUsed, // adjust final price based on usage
});

// Return the AI model's answer to the client
return Response.json(answer);

This pattern generalizes to any API that charges for a variable number of units of work, such as RPC node APIs.
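
As an illustration, the same verify-then-settle flow could meter an RPC proxy by compute units. This sketch reuses paymentArgs from the example above (with scheme "upto"); forwardRpcRequest, the per-unit rate, and the URL are hypothetical.

// Hypothetical sketch: metering an RPC proxy with the upto scheme
const rpcPaymentArgs = {
	...paymentArgs,
	resourceUrl: "https://rpc.example.com/proxy", // placeholder URL
	method: "POST",
	price: "$0.05", // max payable amount per request
};

const verifyResult = await verifyPayment(rpcPaymentArgs);
if (verifyResult.status !== 200) {
	return Response.json(verifyResult.responseBody, {
		status: verifyResult.status,
		headers: verifyResult.responseHeaders,
	});
}

// forwardRpcRequest is a hypothetical helper that proxies the call and
// reports how many compute units it consumed
const { result, computeUnits } = await forwardRpcRequest(request);

// Settle for actual usage, never more than the verified maximum
const pricePerComputeUnit = 0.000001; // hypothetical rate
await settlePayment({
	...rpcPaymentArgs,
	price: computeUnits * pricePerComputeUnit,
});

return Response.json(result);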

On the client side, the upto scheme is supported in the thirdweb SDK v5.114.0 and above.
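
Conceptually, the client flow looks like the sketch below, where fetchWithPayment is a hypothetical stand-in for the SDK's x402-aware fetch wrapper (check the client docs for the exact API):

// Hypothetical client-side sketch. fetchWithPayment stands in for the SDK's
// x402-aware fetch wrapper: on a 402 response it signs a payment authorization
// for up to the advertised maximum and retries with the X-PAYMENT header.
// The server then settles only the amount actually used.
const response = await fetchWithPayment("https://api.example.com/premium-content");
const { answer } = await response.json();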

A fully working template

We recently open sourced a fully functional AI application that charges per token used; check out the repository on GitHub.

GitHub - thirdweb-example/x402-ai-inference: Pay for inference with x402

Learn more

To learn more about how to set this up in your own projects, check out the documentation.

x402 Server
Accept x402 payments in your APIs from any x402-compatible client.