Take advantage of Parea’s version management with deployed prompts. Deploying a prompt makes it available via the SDK. All prompt parameters are bundled with a deployment, including function call schemas and model parameters.

Prerequisites

  1. You’ll need a Parea API key. See Authentication to get started.
  2. For any model you want to use with the SDK, set up your Provider API keys.
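
With both in place, you can initialize the SDK client used in the examples below (a minimal sketch, assuming your key is stored in the PAREA_API_KEY environment variable):

import os

from parea import Parea

p = Parea(api_key=os.getenv("PAREA_API_KEY"))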

How to deploy a prompt

Visit the Playground to view or create prompts. After testing your prompt, you can click the blue rocket icon to deploy it.

You must save your prompt before deploying it.

[Image: DeployPrompt]

Afterward, optionally provide a name and commit message for the deployment, then copy the deployment ID. (You can also retrieve the deployment ID later from the Deployments tab.)

[Image: DeployModal]

Prompt deployments are pinned to a specific version. In the Playground, saving a change to a prompt or its parameters automatically creates a new version. You can bump or roll back the deployed version while keeping the same deployment ID, which makes it easy to change your prompt without updating your code.

Bumping or reverting a deployment

In the Playground, if a prompt has been previously deployed and you click the rocket icon on a new version, you will be asked to confirm whether to bump the deployment (if the current version is newer than the deployed one) or revert it (if the current version is older). You can also create an entirely new deployment with a different name and ID.

[Image: BumpRevert]

How to use a deployed prompt

You can interact with your deployed prompts in two ways: use the prompt directly, or fetch the prompt’s data to use in your own code.

Use deployed prompt

Using the completion endpoint, you can set the deployment_id parameter and provide the prompt template inputs.

from parea.schemas.models import Completion, CompletionResponse

# `p` is the Parea client initialized under Prerequisites.
def deployed_critic(argument: str) -> CompletionResponse:
    return p.completion(
        Completion(
            deployment_id="p-PSOwRyIPaQRq4xQW3MbpV",
            llm_inputs={"argument": argument},
        )
    )
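
For example (a minimal usage sketch; the argument text is illustrative, and it assumes CompletionResponse exposes the model output on its content field):

response = deployed_critic("All ravens are black because every raven I have seen was black.")
print(response.content)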

Fetch deployed prompt

You can fetch the prompt data and use it in your own code via the get deployed prompt endpoint. The response includes the raw prompt, the prompt filled in with the provided inputs, and other metadata.

If you fetch a deployed prompt to use with your own LLM API but still want traced logs associated with the deployment, you can either add deployment_id to the trace decorator or call trace_insert({"deployment_id": "p-PSOwRyIPaQRq4xQW3MbspVz"}) in your code, as sketched after the example below.

from parea.schemas.models import UseDeployedPrompt, UseDeployedPromptResponse

# `p` is the Parea client initialized under Prerequisites.
def get_critic_prompt(argument: str) -> UseDeployedPromptResponse:
    return p.get_prompt(
        UseDeployedPrompt(
            deployment_id="p-PSOwRyIPaQRq4xQW3MbpV",
            llm_inputs={"argument": argument},
        )
    )
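
A minimal sketch of both tracing options, assuming the fetched prompt is passed to your own LLM call inside a hypothetical call_llm function:

from parea import trace
from parea.utils.trace_utils import trace_insert

# Option 1: pass deployment_id to the trace decorator.
@trace(deployment_id="p-PSOwRyIPaQRq4xQW3MbspVz")
def call_llm(argument: str):
    prompt_data = get_critic_prompt(argument)
    # ... call your own LLM API with prompt_data here

# Option 2: attach the deployment ID from inside an already-traced function.
@trace
def call_llm_alt(argument: str):
    trace_insert({"deployment_id": "p-PSOwRyIPaQRq4xQW3MbspVz"})
    prompt_data = get_critic_prompt(argument)
    # ... call your own LLM API with prompt_data here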

Deployments Dashboard

On the Deployments tab you can see a list of your deployments and associated information. Clicking an item opens a detailed view of your prompt and a dashboard specific to that deployment.

[Image: deployment_table]

Detailed Deployment View

The detailed deployment page has two tabs: Prompt and Dashboard.

Prompt

On the Prompt tab you can view all of the prompt’s parameters, including the messages and any attached functions. You can also access the deployment’s history and bump or revert to a different version. Clicking ‘Edit’ takes you to the Playground, where you can change the prompt and deploy a new version.

[Image: PromptView]

Dashboard

On the Dashboard tab you can see the prompt’s usage statistics and trace logs. Click any trace to view its full details.

[Image: DeploymentView]

Cookbook examples