POST /api/parea/v1/deployed-prompt
from parea import Parea
from parea.schemas.models import UseDeployedPrompt, UseDeployedPromptResponse

p = Parea(api_key="PAREA_API_KEY")  # replace with your API key


def main() -> UseDeployedPromptResponse:
    return p.get_prompt(
        UseDeployedPrompt(
            deployment_id="p-qZrnFesaeCpqcXJ_yL3wi",
            llm_inputs={"x": "Python", "y": "Flask"},
        )
    )
{
  "deployment_id": "p-qZrnFesaeCpqcXJ_yL3wi",
  "functions": [],
  "model": "gpt-4o-mini",
  "model_params": {
    "frequency_penalty": 0,
    "presence_penalty": 0,
    "temp": 0.5,
    "top_p": 1
  },
  "name": "hello world",
  "prompt": {
    "inputs": {
      "x": "Python",
      "y": "Flask"
    },
    "messages": [
      {
        "content": "I want a Hello World program in Python. Using the Flask framework.",
        "role": "user"
      }
    ],
    "raw_messages": [
      {
        "content": "I want a Hello World program in {{x}}. Using the {{y}} framework.",
        "role": "user"
      }
    ]
  },
  "provider": "openai",
  "version_number": 1
}
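As the response shows, raw_messages keeps the {{x}} and {{y}} placeholders while messages contains the filled-in text from llm_inputs. A minimal sketch of that substitution (illustrative only, not the SDK's actual implementation):

```python
import re

# Template taken from raw_messages in the response above.
raw_message = "I want a Hello World program in {{x}}. Using the {{y}} framework."
llm_inputs = {"x": "Python", "y": "Flask"}

# Replace every {{key}} placeholder with its value from llm_inputs.
filled = re.sub(r"\{\{(\w+)\}\}", lambda m: llm_inputs[m.group(1)], raw_message)
print(filled)  # I want a Hello World program in Python. Using the Flask framework.
```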

Authorizations

x-user-id (string, header, required)

Body (application/json)

deployment_id (string, required)
ID of the deployed prompt.

llm_inputs (object | null)
If provided, these key-value pairs are used to replace the corresponding {{key}} placeholders in the templated prompt.
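Outside the SDK, the endpoint accepts a plain JSON POST with the body fields above. A hedged sketch of assembling that request with the standard library (the host here is a placeholder, not Parea's actual base URL; the request is built but not sent):

```python
import json
import urllib.request

# Placeholder host -- substitute the real Parea API base URL.
url = "https://example.com/api/parea/v1/deployed-prompt"

payload = {
    "deployment_id": "p-qZrnFesaeCpqcXJ_yL3wi",
    "llm_inputs": {"x": "Python", "y": "Flask"},  # optional; may be omitted
}

req = urllib.request.Request(
    url,
    data=json.dumps(payload).encode(),
    headers={
        "Content-Type": "application/json",
        "x-user-id": "PAREA_API_KEY",  # required auth header, per Authorizations above
    },
    method="POST",
)
# response = urllib.request.urlopen(req)  # not executed in this sketch
```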

Response (200, application/json)

Successful response.

deployment_id (string, required)
The ID of the deployed prompt. You can find your deployed prompts on the Deployments tab.

version_number (number, required)
Version number of the deployed prompt.

name (string | null)
Name of the deployed prompt.

functions (string[] | null)
If the deployed prompt has functions, they appear here as JSON strings.

function_call (string | null)
If the deployed prompt has a specified function call, it appears here.

prompt (object | null)
The messages of the deployed prompt.

model (string | null)
Model name of the deployed prompt.

provider (enum<string> | null)
Provider name of the deployed prompt.
Available options: openai, azure, anthropic, anyscale, vertexai, aws_bedrock, openrouter, mistral, litellm, groq, fireworks, cohere

model_params (object | null)
Model parameters of the deployed prompt.
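Note that in the example response the temperature is returned under the key "temp", while most provider SDKs (e.g. OpenAI's) expect "temperature". A small illustrative helper for forwarding the response fields to such an SDK; this function is hypothetical, not part of the Parea SDK:

```python
def to_provider_kwargs(model, model_params, messages):
    """Assemble chat-completion kwargs from a deployed-prompt response (illustrative)."""
    params = dict(model_params or {})
    # The response uses "temp"; most provider SDKs expect "temperature".
    if "temp" in params:
        params["temperature"] = params.pop("temp")
    return {"model": model, "messages": messages, **params}


# Values taken from the example response above.
kwargs = to_provider_kwargs(
    "gpt-4o-mini",
    {"frequency_penalty": 0, "presence_penalty": 0, "temp": 0.5, "top_p": 1},
    [{"role": "user", "content": "I want a Hello World program in Python. Using the Flask framework."}],
)
print(kwargs["temperature"])  # 0.5
```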