Parea AI works seamlessly with the LiteLLM proxy via the `wrap_openai_client` (Python) and `patchOpenAI` (TypeScript) methods.
Quickstart
Assuming you have a LiteLLM `config.yaml` file with the following content:
```yaml
model_list:
  - model_name: gpt-4o
    litellm_params:
      model: gpt-4o
      api_key: OPENAI_API_KEY
  - model_name: claude-3-haiku-20240307
    litellm_params:
      model: claude-3-haiku-20240307
      api_key: ANTHROPIC_API_KEY
  - model_name: azure_gpt-3.5-turbo
    litellm_params:
      model: azure/<azure_model_name>
      api_key: AZURE_API_KEY
      api_base: https://<url>.openai.azure.com/
  - model_name: anthropic.claude-3-haiku-20240307-v1:0
    litellm_params:
      model: bedrock/anthropic.claude-3-haiku-20240307-v1:0
      aws_access_key_id: AWS_ACCESS_KEY_ID
      aws_secret_access_key: AWS_SECRET_ACCESS_KEY
      aws_region_name: us-west-2
```
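You can then start the proxy against this file (a minimal sketch; it assumes `config.yaml` is in your current directory and that the LiteLLM CLI is installed):

```shell
litellm --config config.yaml
```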
You could wrap the OpenAI client with Parea AI as follows:
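For example, in Python (a minimal sketch; it assumes the proxy is running locally on LiteLLM's default port 4000 and that `PAREA_API_KEY` is set in your environment):

```python
import os

from openai import OpenAI
from parea import Parea

# Point the OpenAI client at the LiteLLM proxy instead of api.openai.com.
# The api_key value here is a placeholder; the proxy holds the real
# provider credentials from config.yaml.
client = OpenAI(
    api_key="anything",
    base_url="http://0.0.0.0:4000",  # assumed local proxy address
)

# Wrapping the client makes Parea log every completion as a trace.
p = Parea(api_key=os.getenv("PAREA_API_KEY"))
p.wrap_openai_client(client)

# Any model_name from config.yaml can be passed as the model argument.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Write a haiku about tracing."}],
)
print(response.choices[0].message.content)
```

Because the proxy exposes every configured model behind the same OpenAI-compatible API, swapping the `model` argument to, say, `claude-3-haiku-20240307` routes the same wrapped call through Anthropic, and Parea records it in the same trace stream.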
Visualizing your traces
In your Parea logs dashboard, you can visualize your traces and inspect the detailed steps of each LLM call across the various models served by the proxy.