Assuming you have a LiteLLM `config.yaml` file with the following content:
```yaml
model_list:
  - model_name: gpt-4o # user-facing model alias
    litellm_params: # all params accepted by litellm.completion() - https://docs.litellm.ai/docs/completion/input
      model: gpt-4o
      api_key: OPENAI_API_KEY
  - model_name: claude-3-haiku-20240307 # user-facing model alias
    litellm_params:
      model: claude-3-haiku-20240307
      api_key: ANTHROPIC_API_KEY
  - model_name: azure_gpt-3.5-turbo
    litellm_params:
      model: azure/<azure_model_name>
      api_key: AZURE_API_KEY
      api_base: https://<url>.openai.azure.com/
  - model_name: anthropic.claude-3-haiku-20240307-v1:0
    litellm_params:
      model: bedrock/anthropic.claude-3-haiku-20240307-v1:0
      aws_access_key_id: AWS_ACCESS_KEY_ID
      aws_secret_access_key: AWS_SECRET_ACCESS_KEY
      aws_region_name: us-west-2
```
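With this config saved, you would start the proxy with `litellm --config config.yaml`. Before wiring in Parea, it can help to sanity-check that the proxy is up and serving all four aliases. Here is a minimal sketch, assuming the proxy listens locally on port 26264 as in the example below; since LiteLLM's `/models` endpoint is OpenAI-compatible, the stock OpenAI client works:

```python
import openai

# point the stock OpenAI client at the local LiteLLM proxy;
# the api_key value is a placeholder unless the proxy enforces keys
client = openai.OpenAI(api_key="litellm", base_url="http://0.0.0.0:26264")

# list the user-facing model aliases defined in config.yaml;
# the output should include all four aliases from the model_list above
print([m.id for m in client.models.list().data])
```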
You can then wrap the OpenAI client with Parea AI as follows:
```python
import openai
from parea import Parea, trace

p = Parea(api_key="PAREA_API_KEY")

client = openai.OpenAI(api_key="litellm", base_url="<LiteLLM_URL, e.g. http://0.0.0.0:26264>")
p.wrap_openai_client(client)


def llm_call(model: str):
    return client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": "this is a test request, write a short poem"}],
    )


@trace
def main():
    # requests are sent to the models configured on the LiteLLM proxy,
    # started with `litellm --config config.yaml`
    response = llm_call(model="claude-3-haiku-20240307")
    response2 = llm_call(model="gpt-4o")
    response3 = llm_call(model="azure_gpt-3.5-turbo")
    response4 = llm_call(model="anthropic.claude-3-haiku-20240307-v1:0")
    return {
        "claude": response,
        "gpt": response2,
        "azure": response3,
        "bedrock": response4,
    }


if __name__ == "__main__":
    print(main())
```
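Because the client is wrapped with `p.wrap_openai_client`, each of these completions should be logged to Parea, and the `@trace` decorator on `main` groups the four calls together. Each returned value is a standard OpenAI `ChatCompletion` object, so the generated text lives in the usual place. As a small follow-up sketch (assuming you only want the poem text rather than the full response objects), you could replace the final `print` with:

```python
if __name__ == "__main__":
    # print just the generated text from each provider's response
    for alias, response in main().items():
        print(f"{alias}:\n{response.choices[0].message.content}\n")
```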