Completion
Get a completion response either from one of your organization’s deployed prompts, or by providing the full completion details (prompt and inputs) in the request.
Body
Unique identifier for a set of logs to be associated as a chain.
Name to identify a trace. Will be visible in logs for filtering.
Unique identifier for an end-user. Will be visible in logs for filtering.
This is the ID for a specific deployed prompt. You can find your deployed prompts on the Deployments tab. If a deployment_id is provided, Parea will fetch all of the associated configuration, including model name, model parameters, and any associated functions. Any information provided in the llm_configuration field will be used instead of the deployed prompt’s fields.
A name for this completion. Will be visible in logs for filtering.
Name of project. Default is “default”.
This field should only be used when a deployment_id is provided. Key-value pairs for the prompt template. The keys should match the names of the deployed prompt template’s variables.
LLM completion request configuration
Any additional metadata to record.
The target or “gold standard” completion response.
List of string tags to associate with this completion.
Whether to use the cache for this completion. Defaults to True. The cache key is based on llm_inputs and/or llm_configuration.
Whether to omit the inputs from the log. Defaults to False. The input fields are llm_inputs and llm_configuration.
Whether to omit the output of the LLM completion from the logs. Defaults to False.
Whether to skip logging this completion. Defaults to False.
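The body fields above can be assembled into a single request payload. A minimal sketch in Python, using a deployed prompt (the deployment_id value, the metadata/tags/cache key names, and the template variable name are illustrative assumptions; only deployment_id and llm_inputs are named in this page):

```python
# Sketch of a completion request body for a deployed prompt.
# Values marked "hypothetical" are placeholders, not real identifiers.
payload = {
    "deployment_id": "p-abc123",        # hypothetical deployed prompt ID
    "llm_inputs": {                      # keys must match the deployed
        "question": "What is Parea?",    # prompt template's variables
    },
    "name": "docs-example",              # visible in logs for filtering
    "project_name": "default",           # defaults to "default"
    "metadata": {"source": "example"},   # any additional metadata to record
    "tags": ["example"],                  # string tags for this completion
    "cache": True,                        # defaults to True
}

# llm_inputs should be omitted when no deployment_id is provided;
# in that case, pass the full llm_configuration instead.
assert "deployment_id" in payload and "llm_inputs" in payload
```

Since a deployment_id is provided here, the model name, model parameters, and any functions are pulled from the deployment rather than passed inline.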
Response
Unique identifier for a specific log
LLM response
Time in seconds to complete the request.
Token count of input prompt
Token count of output completion
Token count of input prompt + output completion
Cost of this completion in USD.
The model that will complete your prompt. Ex. gpt-3.5-turbo
Supported model providers: openai, anthropic, azure
Whether this completion was served from the cache.
Whether the request was successful. Possible values: “success”, “error”.
Datetime from a POSIX timestamp for when the request started. Ex. 2023-07-23 13:48:34
Datetime from a POSIX timestamp for when the request completed. Ex. 2023-07-23 13:48:34
Error message if status is “error”.
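A sketch of handling the response fields above. The field names used here are assumptions (this page documents the fields but not their JSON keys), except that “status” takes the documented values “success” and “error”, and the total token count equals input plus output tokens:

```python
# Hypothetical response shape; key names are assumed for illustration.
resp = {
    "inference_id": "log-123",   # unique identifier for this log (name assumed)
    "content": "Hello!",         # LLM response text (name assumed)
    "latency": 0.42,             # seconds to complete the request
    "input_tokens": 12,          # token count of the input prompt
    "output_tokens": 8,          # token count of the output completion
    "total_tokens": 20,          # input + output token counts
    "cost": 0.00003,             # cost of this completion in USD
    "cache_hit": False,          # whether served from the cache (name assumed)
    "status": "success",         # documented values: "success" or "error"
}

if resp["status"] == "error":
    # The error message field is only populated when status is "error".
    raise RuntimeError(resp.get("error_message"))

# The total token count is the sum of input and output tokens.
assert resp["total_tokens"] == resp["input_tokens"] + resp["output_tokens"]
```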