
📈 [BETA] Prometheus metrics

info

🚨 Prometheus metrics will move out of Beta on September 15, 2024. As part of this release, they will be available on LiteLLM Enterprise starting at $250/mo.

Enterprise Pricing

Contact us here to get a free trial

LiteLLM exposes a /metrics endpoint for Prometheus to poll.

Quick Start

If you're using the LiteLLM CLI with litellm --config proxy_config.yaml, you need to run pip install prometheus_client==0.20.0. This package comes pre-installed on the litellm Docker image.

Add this to your proxy config.yaml

model_list:
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: gpt-3.5-turbo
litellm_settings:
  success_callback: ["prometheus"]
  failure_callback: ["prometheus"]

Start the proxy

litellm --config config.yaml --debug

Test Request

curl --location 'http://0.0.0.0:4000/chat/completions' \
--header 'Content-Type: application/json' \
--data '{
    "model": "gpt-3.5-turbo",
    "messages": [
        {
            "role": "user",
            "content": "what llm are you"
        }
    ]
}'

View Metrics on /metrics

Visit http://localhost:4000/metrics

# <proxy_base_url>/metrics
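
To have Prometheus actually poll this endpoint, add a scrape job for the proxy. Below is a minimal sketch of a prometheus.yml scrape config; the job name, scrape interval, and target host:port are illustrative assumptions for a proxy running locally on port 4000.

scrape_configs:
  - job_name: "litellm-proxy"          # illustrative job name
    scrape_interval: 30s               # assumption - tune to your needs
    metrics_path: /metrics
    static_configs:
      - targets: ["localhost:4000"]    # host:port where the LiteLLM proxy is reachable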

📈 Metrics Tracked

Error Metrics

| Metric Name | Description |
|-------------|-------------|
| litellm_error_code_metric_total | Total number of errors by error code and model |

This metric provides a count of errors encountered, categorized by error code and model (for example, how many 429 rate-limit errors a given model has returned).
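
To act on this metric, you could alert when a model starts returning a burst of errors. A minimal sketch of a Prometheus alerting rule, assuming the counter exposes error_code and model labels (check your own /metrics output for the exact label names):

groups:
  - name: litellm-errors
    rules:
      - alert: LiteLLMErrorSpike
        # more than 5 errors for any (error_code, model) pair over the last 5 minutes
        expr: sum by (error_code, model) (increase(litellm_error_code_metric_total[5m])) > 5
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "LiteLLM errors on {{ $labels.model }} (code {{ $labels.error_code }})"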

Proxy Requests / Spend Metrics

| Metric Name | Description |
|-------------|-------------|
| litellm_requests_metric | Number of requests made, per "user", "key", "model", "team", "end-user" |
| litellm_spend_metric | Total spend, per "user", "key", "model", "team", "end-user" |
| litellm_total_tokens | Input + output tokens, per "user", "key", "model", "team", "end-user" |
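
These metrics can be aggregated by any of the labels above. As one example, a PromQL expression (shown as it would appear in a rule's expr field) that totals spend per team, assuming the label is named team as in the table:

# total spend grouped by the "team" label (label name is an assumption)
expr: sum by (team) (litellm_spend_metric)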

Error Monitoring Metrics

| Metric Name | Description |
|-------------|-------------|
| litellm_llm_api_failed_requests_metric | Number of failed LLM API requests per "user", "key", "model", "team", "end-user" |
| litellm_error_code_metric_total | Total number of errors by error code and model |

Request Latency Metrics

| Metric Name | Description |
|-------------|-------------|
| litellm_request_total_latency_metric | Total latency (seconds) for a request to the LiteLLM Proxy Server - tracked for labels litellm_call_id, model |
| litellm_llm_api_latency_metric | Latency (seconds) for just the LLM API call - tracked for labels litellm_call_id, model |
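
To compare providers, you could look at average API latency per model. A sketch of the PromQL, assuming the metric is exposed as a plain gauge per call (if it is exported as a histogram instead, aggregate the _sum and _count series):

# average LLM API latency per model (assumes a gauge-style metric)
expr: avg by (model) (litellm_llm_api_latency_metric)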

LLM API / Provider Metrics

| Metric Name | Description |
|-------------|-------------|
| litellm_deployment_state | The state of the deployment: 0 = healthy, 1 = partial outage, 2 = complete outage |
| litellm_remaining_requests_metric | Tracks x-ratelimit-remaining-requests returned from the LLM API deployment |
| litellm_remaining_tokens | Tracks x-ratelimit-remaining-tokens returned from the LLM API deployment |
| litellm_deployment_success_responses | Total number of successful LLM API calls for a deployment |
| litellm_deployment_failure_responses | Total number of failed LLM API calls for a deployment |
| litellm_deployment_total_requests | Total number of LLM API calls for a deployment (success + failure) |
| litellm_deployment_latency_per_output_token | Latency per output token for a deployment |
| litellm_deployment_successful_fallbacks | Number of successful fallback requests from primary model -> fallback model |
| litellm_deployment_failed_fallbacks | Number of failed fallback requests from primary model -> fallback model |
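
Since litellm_deployment_state encodes outages numerically, a simple alert can fire whenever any deployment is not healthy. A sketch of the rule body, following the same rule-file structure as the error-alert sketch above:

- alert: LiteLLMDeploymentDegraded
  # 0 = healthy, 1 = partial outage, 2 = complete outage
  expr: litellm_deployment_state > 0
  for: 5m
  labels:
    severity: critical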

Budget Metrics

| Metric Name | Description |
|-------------|-------------|
| litellm_remaining_team_budget_metric | Remaining budget for a team (a team created on LiteLLM) |
| litellm_remaining_api_key_budget_metric | Remaining budget for an API key (a key created on LiteLLM) |
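
A low-budget alert can be built on these gauges. For example (the 5-unit threshold is arbitrary, and the exact labels identifying the team are an assumption; check your /metrics output):

# fire when any team's remaining budget drops below 5 (threshold is illustrative)
expr: litellm_remaining_team_budget_metric < 5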

Monitor System Health

To monitor the health of LiteLLM-adjacent services (Redis / Postgres), add this to your config.yaml:

model_list:
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: gpt-3.5-turbo
litellm_settings:
  service_callback: ["prometheus_system"]

| Metric Name | Description |
|-------------|-------------|
| litellm_redis_latency | Histogram latency for Redis calls |
| litellm_redis_fails | Number of failed Redis calls |
| litellm_self_latency | Histogram latency for successful LiteLLM API calls |
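
These service metrics can feed the same alerting setup. For instance, to catch Redis problems early (assuming litellm_redis_fails is exported as a counter; the window and threshold are illustrative):

# any failed Redis calls in the last 5 minutes
expr: increase(litellm_redis_fails[5m]) > 0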

🔥 Community Maintained Grafana Dashboards

Link to Grafana dashboards made by the LiteLLM community:

https://github.com/BerriAI/litellm/tree/main/cookbook/litellm_proxy_server/grafana_dashboard