Image by Author | ideogram.ai

 

Introduction

 
With the surge of large language models (LLMs) in recent years, many LLM-powered applications are emerging. LLM implementation has introduced features that were previously non-existent.

As time goes on, many LLM models and products have become available, each with its pros and cons. Unfortunately, there is still no standard way to access all these models, as each company can develop its own framework. That's why having an open-source tool such as LiteLLM is helpful when you need standardized access for your LLM apps without any additional cost.

In this article, we will explore why LiteLLM is beneficial for building LLM applications.

Let’s get into it.

 
 

Benefit 1: Unified Access

 
LiteLLM's biggest advantage is its compatibility with different model providers. The tool supports over 100 different LLM services through standardized interfaces, allowing us to access them regardless of the model provider we use. It's especially useful if your applications utilize multiple different models that need to work interchangeably.

A few examples of the major model providers that LiteLLM supports include:

  • OpenAI and Azure OpenAI, like GPT-4.
  • Anthropic, like Claude.
  • AWS Bedrock & SageMaker, supporting models like Amazon Titan and Claude.
  • Google Vertex AI, like Gemini.
  • Hugging Face Hub and Ollama for open-source models like LLaMA and Mistral.

The standardized format follows OpenAI's framework, using its chat/completions schema. This means we can switch models easily without needing to learn each model provider's own schema.

For example, here is the Python code to use Google's Gemini model with LiteLLM.

from litellm import completion

prompt = "YOUR-PROMPT-FOR-LITELLM"
api_key = "YOUR-API-KEY-FOR-LLM"

response = completion(
    model="gemini/gemini-1.5-flash-latest",
    messages=[{"content": prompt, "role": "user"}],
    api_key=api_key)

print(response['choices'][0]['message']['content'])

 

You only need to obtain the model name and the respective API key from the model provider to access them. This flexibility makes LiteLLM ideal for applications that use multiple models or for performing model comparisons, as sketched below.
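For instance, switching the same call to another provider is, in principle, only a matter of changing the model string and API key; the Claude model identifier below is illustrative, so check the provider's current model list before using it.

from litellm import completion

prompt = "YOUR-PROMPT-FOR-LITELLM"

# Same call shape as before; only the model string and API key change.
response = completion(
    model="anthropic/claude-3-5-sonnet-20240620",  # illustrative model name
    messages=[{"content": prompt, "role": "user"}],
    api_key="YOUR-ANTHROPIC-API-KEY")

print(response['choices'][0]['message']['content'])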

 

Benefit 2: Cost Tracking and Optimization

 
When working with LLM applications, it is important to track token usage and spending for each model you implement and across all integrated providers, especially in real-time scenarios.

LiteLLM enables users to maintain a detailed log of model API call usage, providing all the necessary information to control costs effectively. For example, the `completion` call above will include information about the token usage, as shown below.
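A minimal way to inspect it is to print the response's usage attribute (LiteLLM's response object mirrors the OpenAI schema, so a `usage` field should be present):

# Inspect token usage of the previous call (assumes `response` from the earlier example).
print(response.usage)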

usage=Usage(completion_tokens=10, prompt_tokens=8, total_tokens=18, completion_tokens_details=None, prompt_tokens_details=PromptTokensDetailsWrapper(audio_tokens=None, cached_tokens=None, text_tokens=8, image_tokens=None))

 

Accessing the response's hidden parameters will also provide more detailed information, including the cost.
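A minimal sketch of doing so, assuming the `_hidden_params` attribute is available on the returned response object as described in LiteLLM's documentation:

# Print provider, call metadata, and the estimated cost of the previous call.
print(response._hidden_params)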

 

With output similar to the below:

{'custom_llm_provider': 'gemini',
 'region_name': None,
 'vertex_ai_grounding_metadata': [],
 'vertex_ai_url_context_metadata': [],
 'vertex_ai_safety_results': [],
 'vertex_ai_citation_metadata': [],
 'optional_params': {},
 'litellm_call_id': '558e4b42-95c3-46de-beb7-9086d6a954c1',
 'api_base': '
 'model_id': None,
 'response_cost': 4.8e-06,
 'additional_headers': {},
 'litellm_model_name': 'gemini/gemini-1.5-flash-latest'}

 

There is a lot of information here, but the most important piece is `response_cost`, as it estimates the actual charge you will incur for that call, although it may still be offset if the model provider offers free access. Users can also define custom pricing for models (per token or per second) to calculate costs precisely.

A more advanced cost-tracking setup can also allow users to set a spending budget and limit, while connecting the LiteLLM cost usage information to an analytics dashboard to aggregate information more easily. It is also possible to provide custom label tags to help attribute costs to certain usage or departments.
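As a small illustration of the budget idea, LiteLLM exposes a global `max_budget` setting that is meant to raise an error once the accumulated cost of calls exceeds it; the sketch below assumes `litellm.max_budget` and `BudgetExceededError` behave as described in LiteLLM's docs.

import litellm
from litellm import completion

# Cap total spend for this process at $0.01 (USD); calls beyond that should raise.
litellm.max_budget = 0.01

try:
    response = completion(
        model="gemini/gemini-1.5-flash-latest",
        messages=[{"content": "YOUR-PROMPT-FOR-LITELLM", "role": "user"}],
        api_key="YOUR-API-KEY-FOR-LLM")
except litellm.BudgetExceededError as err:
    print(f"Budget exceeded: {err}")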

By providing detailed cost usage data, LiteLLM helps users and organizations optimize their LLM application costs and budget more effectively.

 

Benefit 3: Ease of Deployment

 
LiteLLM is designed for easy deployment, whether you use it for local development or in a production environment. With the modest resources required for a Python library installation, we can run LiteLLM on a local laptop or host it in a containerized deployment with Docker without any need for complex additional configuration.

Speaking of configuration, we can set up LiteLLM more efficiently using a YAML config file to list all the necessary information, such as the model name, API keys, and any essential custom settings for your LLM Apps (a sketch follows below). You can also use a backend database such as SQLite or PostgreSQL to store its state.
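For the LiteLLM proxy, such a config file typically declares a model_list; the snippet below is a minimal sketch under that assumption, with a placeholder alias and an environment-variable reference for the API key.

model_list:
  - model_name: gemini-flash                     # alias your app will call
    litellm_params:
      model: gemini/gemini-1.5-flash-latest      # actual provider model
      api_key: os.environ/GEMINI_API_KEY         # read the key from the environment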

As for data privacy, you are responsible for your own privacy as a user deploying LiteLLM yourself, but this approach is more secure, since the data never leaves your controlled environment except when it is sent to the LLM providers. For enterprise users, LiteLLM also provides Single Sign-On (SSO), role-based access control, and audit logs in case your application needs a more secure environment.

Overall, LiteLLM provides flexible deployment options and configuration while keeping the data secure.

 

Benefit 4: Resilience Features

 
Resilience is crucial when building LLM Apps, as we want our application to remain operational even in the face of unexpected issues. To promote resilience, LiteLLM provides many features that are useful in application development.

One feature that LiteLLM has is built-in caching, where users can cache LLM prompts and responses so that identical requests don't incur repeated costs or latency. It's a helpful feature if our application frequently receives the same queries. The caching system is flexible, supporting both in-memory and remote caching, such as with a vector database.
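A minimal sketch of in-memory caching, assuming LiteLLM's `Cache` object and the `caching` flag on `completion` work as documented (and that the provider API key is set in the environment, e.g. GEMINI_API_KEY):

import litellm
from litellm import Cache, completion

# Enable the default in-memory cache; identical prompts should then be served from it.
litellm.cache = Cache()

messages = [{"content": "YOUR-PROMPT-FOR-LITELLM", "role": "user"}]

first = completion(model="gemini/gemini-1.5-flash-latest",
                   messages=messages, caching=True)
second = completion(model="gemini/gemini-1.5-flash-latest",
                    messages=messages, caching=True)  # likely a cache hit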

Another feature of LiteLLM is automatic retries, which lets users configure a mechanism so that requests failing due to errors like timeouts or rate limits are automatically retried. It's also possible to set up additional fallback mechanisms, such as using another model if the request has already hit the retry limit.
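As a rough illustration, retries can be requested per call with the `num_retries` argument; model fallbacks are typically configured on LiteLLM's Router or proxy rather than on a single call, so they are omitted here.

from litellm import completion

# Retry transient failures (timeouts, rate limits) up to 3 times before giving up.
response = completion(
    model="gemini/gemini-1.5-flash-latest",
    messages=[{"content": "YOUR-PROMPT-FOR-LITELLM", "role": "user"}],
    api_key="YOUR-API-KEY-FOR-LLM",
    num_retries=3)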

Lastly, we can set rate limits as requests per minute (RPM) or tokens per minute (TPM) to cap the usage level. It's a great way to throttle specific model integrations to prevent failures and respect application infrastructure requirements.
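Such limits are typically declared per deployment when using LiteLLM's Router (or the equivalent proxy config); the sketch below assumes the `rpm` and `tpm` keys are honored as in LiteLLM's Router examples, and the values are illustrative.

from litellm import Router

# One deployment with per-minute request and token caps.
router = Router(model_list=[{
    "model_name": "gemini-flash",
    "litellm_params": {
        "model": "gemini/gemini-1.5-flash-latest",
        "api_key": "YOUR-API-KEY-FOR-LLM",
        "rpm": 60,       # requests per minute
        "tpm": 100000,   # tokens per minute
    },
}])

response = router.completion(
    model="gemini-flash",
    messages=[{"content": "YOUR-PROMPT-FOR-LITELLM", "role": "user"}])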

 

Conclusion

 
In this era of LLM product growth, it has become much easier to build LLM applications. However, with so many model providers out there, it is hard to establish a standard for LLM implementation, especially in the case of multi-model system architectures. This is why LiteLLM can help us build LLM Apps efficiently.

I hope this has helped!
 
 

Cornellius Yudha Wijaya is a data science assistant manager and data writer. While working full-time at Allianz Indonesia, he loves to share Python and data tips via social media and writing media. Cornellius writes on a variety of AI and machine learning topics.