anymodel
MIT license · open source · antidrift
LLM router with unified batch support. 11 providers, 323 models. Write once, switch providers without changing code.
Node.js · Python · Go
11 providers
323 models
50% batch cost savings
0 required config
$ npm install @probeo/anymodel
What it does
anymodel wraps 11 LLM providers behind one API. Change provider in your config and the rest of your code stays the same. It supports native batch for OpenAI, Anthropic, and Google — submitting jobs to the providers' batch endpoints directly, not faking it with sequential calls. That's where the 50% cost reduction comes from.
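The submit-and-retrieve flow behind native batch can be sketched without the library. The stub provider, job shape, and polling interval below are all hypothetical; real batch endpoints process jobs asynchronously over minutes to hours, which is what makes the discounted pricing possible.

```javascript
// Sketch of a native-batch flow: one job submitted to a provider's batch
// endpoint, then polled until done. The provider here is an in-memory stub
// that completes instantly; all names are hypothetical.
const stubProvider = {
  jobs: new Map(),
  submitBatch(requests) {
    const id = `job-${this.jobs.size + 1}`;
    // A real provider would process asynchronously; the stub resolves at once.
    this.jobs.set(id, {
      status: 'completed',
      results: requests.map((r) => `done: ${r.prompt}`),
    });
    return id;
  },
  getBatch(id) {
    return this.jobs.get(id);
  },
};

async function runBatch(provider, requests) {
  const id = provider.submitBatch(requests); // one submission, not N calls
  let job = provider.getBatch(id);
  while (job.status !== 'completed') {       // poll until the job finishes
    await new Promise((res) => setTimeout(res, 1000));
    job = provider.getBatch(id);
  }
  return job.results;
}

runBatch(stubProvider, [{ prompt: 'page 1' }, { prompt: 'page 2' }])
  .then(console.log); // → [ 'done: page 1', 'done: page 2' ]
```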
Fallback routing retries with the next provider in your priority list if one is rate-limited or down. Streaming and tool calling work across all providers.
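The fallback loop amounts to trying each provider in priority order and moving on when a call fails with a retryable error. A minimal sketch of that pattern in plain JavaScript, independent of the library (the provider objects and `retryable` flag are hypothetical):

```javascript
// Try each provider in order; advance on retryable errors (rate limit,
// server error), fail fast on anything else. All names are hypothetical.
async function completeWithFallback(providers, prompt) {
  let lastError;
  for (const p of providers) {
    try {
      return await p.complete(prompt);
    } catch (err) {
      if (!err.retryable) throw err; // non-retryable: surface immediately
      lastError = err;               // retryable: fall through to next provider
    }
  }
  throw lastError; // every provider in the list failed
}

// Demo with stub providers: the first is "rate-limited", the second answers.
const rateLimited = {
  complete: async () => {
    const e = new Error('429');
    e.retryable = true;
    throw e;
  },
};
const healthy = { complete: async (prompt) => `ok: ${prompt}` };

completeWithFallback([rateLimited, healthy], 'hello').then(console.log); // → "ok: hello"
```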
Usage
Node.js
import { anymodel } from '@probeo/anymodel';
const client = anymodel({ provider: 'anthropic', model: 'claude-sonnet-4-6' });
// Single call
const res = await client.complete({ prompt: 'Summarize this doc.' });
// Batch (50% cost savings for OpenAI/Anthropic/Google)
const batch = await client.batch([
{ prompt: 'Analyze page 1.' },
{ prompt: 'Analyze page 2.' },
{ prompt: 'Analyze page 3.' },
]);
// Streaming
for await (const chunk of client.stream({ prompt: 'Write a report.' })) {
process.stdout.write(chunk.text);
}
Fallback routing
const client = anymodel({
providers: [
{ provider: 'anthropic', model: 'claude-sonnet-4-6' },
{ provider: 'openai', model: 'gpt-4o' },
{ provider: 'google', model: 'gemini-2.5-pro' },
],
});
// Tries providers in order on rate limit or error
Providers
OpenAI Anthropic Google Mistral Groq DeepSeek xAI Together Fireworks Perplexity Ollama
Used in Probeo
anymodel was extracted from Probeo, where it runs in production across multiple pipeline stages.