LLM Proxy
Fetch Deployed Prompt
Given a deployment_id, fetches the deployed prompt and its details. Optionally, the templated prompt can be filled in with provided inputs.
POST
Authorizations
Body
application/json
deployment_id
ID of the deployed prompt
If provided, these key-value pairs will be used to replace the corresponding keys in the templated prompt
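For illustration, a minimal request sketch in Python using the requests library. The base URL, endpoint path, auth header name, and the name of the inputs field are not shown on this page, so they are placeholders; only deployment_id is confirmed above.

```python
import os

import requests

# Placeholder values: the exact base URL, path, and auth header are not
# documented on this page; substitute your own.
BASE_URL = "https://api.example.com"       # hypothetical base URL
API_KEY = os.environ["LLM_PROXY_API_KEY"]  # hypothetical env var name

payload = {
    # ID of the deployed prompt (see Body above)
    "deployment_id": "deployment-123",
    # Optional key-value pairs substituted into the templated prompt.
    # The field name "inputs" is an assumption; check the Body schema.
    "inputs": {"customer_name": "Acme"},
}

resp = requests.post(
    f"{BASE_URL}/deployed-prompt",   # hypothetical path
    json=payload,
    headers={"x-api-key": API_KEY},  # hypothetical header name
    timeout=30,
)
resp.raise_for_status()
deployed_prompt = resp.json()
```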
Response
200 - application/json
This is the ID for a deployed prompt. You can find your deployed prompts on the Deployments tab.
Version number of the deployed prompt
If the deployed prompt has a specified function call, it will appear here
If the deployed prompt has functions, they will appear here as JSON strings
Model name of the deployed prompt
Model parameters of the deployed prompt
Name of the deployed prompt
The messages of the deployed prompt
Provider name of the deployed prompt
Available options: openai, azure, anthropic, anyscale, vertexai, aws_bedrock, openrouter, mistral, litellm, groq, fireworks, cohere
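And a hedged sketch of reading the parsed 200 response. Every key name below is an assumption inferred from the field descriptions above, not taken from this page; verify each one against the actual Response schema.

```python
from typing import Any


def describe_deployed_prompt(deployed_prompt: dict[str, Any]) -> None:
    """Print selected fields of a fetched deployed prompt.

    All key names here are assumptions inferred from the field
    descriptions above; check them against the real Response schema.
    """
    print("name:", deployed_prompt.get("name"))               # name of the deployed prompt
    print("version:", deployed_prompt.get("version_number"))  # version number
    print("model:", deployed_prompt.get("model"))             # model name
    print("provider:", deployed_prompt.get("provider"))       # one of the options listed above
    print("messages:", deployed_prompt.get("messages"))       # messages of the deployed prompt


# Usage with the request sketch above:
# describe_deployed_prompt(resp.json())
```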