LLM Proxy
Fetch Deployed Prompt
Given a deployment_id, fetches the deployed prompt and its details. It can optionally fill in the templated prompt with the provided inputs.
POST /api/parea/v1/deployed-prompt
Authorizations
x-user-id
string, header, required

Body
application/json
deployment_id
string, required
ID of the deployed prompt.
llm_inputs
object | null
If provided, these key-value pairs replace the corresponding keys in the templated prompt.
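The request is a plain HTTP POST with the x-user-id header and a JSON body. Below is a minimal TypeScript sketch, assuming a fetch-capable runtime (Node 18+ or a browser); the base URL, the x-user-id value, the deployment_id, and the llm_inputs keys are placeholders, not real values.

```typescript
// Illustrative request sketch; all values below are placeholders.
const BASE_URL = "https://<your-parea-host>"; // placeholder host
const API_KEY = "<your-api-key>";             // placeholder value for the x-user-id header

const res = await fetch(`${BASE_URL}/api/parea/v1/deployed-prompt`, {
  method: "POST",
  headers: {
    "x-user-id": API_KEY,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    deployment_id: "p-XXXXXXXX",        // ID from the Deployments tab (placeholder)
    llm_inputs: { topic: "databases" }, // optional: replaces matching keys in the templated prompt
  }),
});

if (!res.ok) {
  throw new Error(`Request failed with status ${res.status}`);
}
const deployedPrompt = await res.json();
console.log(deployedPrompt.model, deployedPrompt.prompt);
```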
Response
200 - application/json
deployment_id
string, required
The ID of the deployed prompt. You can find your deployed prompts on the Deployments tab.
version_number
number, required
Version number of the deployed prompt.
name
string | null
Name of the deployed prompt
functions
string[] | null
If the deployed prompt has functions, they appear here as JSON strings.
function_call
string | null
If the deployed prompt specifies a function call, it appears here.
prompt
object | null
The messages of the deployed prompt
model
string | null
Model name of the deployed prompt
provider
enum<string> | null
Provider name of the deployed prompt
Available options: openai, azure, anthropic, anyscale, vertexai, aws_bedrock, openrouter, mistral, litellm, groq, fireworks, cohere
model_params
object | null
Model parameters of the deployed prompt
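For reference, the response fields above can be expressed as a TypeScript shape. This is an illustrative sketch derived from the field list, not an official SDK type; the prompt and model_params payloads are typed loosely as generic objects.

```typescript
// Provider values documented for the "provider" field.
type Provider =
  | "openai" | "azure" | "anthropic" | "anyscale" | "vertexai" | "aws_bedrock"
  | "openrouter" | "mistral" | "litellm" | "groq" | "fireworks" | "cohere";

// Response shape mirroring the documented fields (illustrative only).
interface DeployedPromptResponse {
  deployment_id: string;                        // ID of the deployed prompt
  version_number: number;                       // version of the deployed prompt
  name: string | null;                          // prompt name, if set
  functions: string[] | null;                   // each function as a JSON string
  function_call: string | null;                 // specified function call, if any
  prompt: Record<string, unknown> | null;       // the messages of the deployed prompt
  model: string | null;                         // model name
  provider: Provider | null;                    // provider name
  model_params: Record<string, unknown> | null; // model parameters
}
```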