LiteLLM

Open source Python library to call 100+ LLMs with one unified OpenAI-compatible interface

LiteLLM is an open source library that normalizes API calls across 100+ LLM providers into a single, OpenAI-compatible interface. Call Claude, Gemini, Mistral, Cohere, Ollama, and hundreds of other models with the same code you would write for OpenAI, without maintaining a separate client library for each provider.

Beyond the Python library, LiteLLM ships an optional self-hosted proxy server that adds production features: request load balancing across multiple deployments, automatic retries and fallbacks, rate limiting, per-model and per-API-key cost tracking, and a management dashboard. Teams use the proxy to standardize model access company-wide while keeping visibility into costs and usage.

LiteLLM has over 13,000 GitHub stars and is embedded in many popular agent frameworks, including LangChain, LlamaIndex, and CrewAI, as the default multi-model abstraction layer. For Python developers, it is the practical answer to "how do I avoid vendor lock-in when calling multiple LLMs?"

What the community says

LiteLLM is one of the most-recommended tools in the AI engineering community for multi-provider setups, with consistent praise for how quickly developers can switch between models without changing their application code. The built-in cost tracking is frequently cited as essential for teams managing AI spending. Some users report that the proxy can add latency overhead and that keeping up with provider API changes occasionally introduces bugs. The maintainer is active on GitHub and responds quickly to issues, which builds community trust.
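The proxy features mentioned above are driven by a YAML config that maps client-facing model names to provider deployments. A minimal sketch follows; the model names and deployments here are illustrative, so check the LiteLLM proxy documentation for the exact schema:

```yaml
# Hypothetical proxy config: two logical model names,
# each mapped to an underlying provider deployment.
model_list:
  - model_name: gpt-4o                 # name clients request
    litellm_params:
      model: openai/gpt-4o             # actual provider/model
      api_key: os.environ/OPENAI_API_KEY
  - model_name: claude
    litellm_params:
      model: anthropic/claude-3-5-sonnet-20240620
      api_key: os.environ/ANTHROPIC_API_KEY
```

Starting the proxy with `litellm --config config.yaml` exposes an OpenAI-compatible endpoint; existing OpenAI clients only need their base URL pointed at the proxy.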


LiteLLM Pricing Plans

Open Source
Free
  • Full library
  • Self-hosted proxy
  • MIT license
  • Community support
Enterprise
Contact sales
  • SSO integration
  • RBAC and access control
  • SLA support
  • Compliance features

