POST /api/parea/v1/completion

Authorizations

x-user-id
string
header, required
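
A minimal authenticated call might look like the following sketch. The base URL and the "model" key inside llm_configuration are assumptions; the x-user-id header and the endpoint path are taken from this page.

```python
import requests

# Placeholder base URL; substitute your actual Parea API host.
BASE_URL = "https://api.parea.ai"

response = requests.post(
    f"{BASE_URL}/api/parea/v1/completion",
    headers={"x-user-id": "YOUR_PAREA_API_KEY"},  # required auth header
    json={
        "llm_configuration": {
            # "messages" is documented; the "model" key name is an assumption.
            "model": "gpt-4o",
            "messages": [{"role": "user", "content": "Say hello."}],
        }
    },
)
response.raise_for_status()
print(response.json()["content"])
```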

Body

application/json
llm_inputs
object | null

Key-value pairs used as inputs to the prompt template. Only needs to be provided if a deployment_id is given or llm_configuration.messages are templated.

llm_configuration
object

LLM configuration parameters such as messages, functions, etc.
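
For example, a templated message might be paired with llm_inputs as in this sketch; the {{language}} placeholder syntax is an assumption, and only the field names come from this page.

```python
# llm_inputs fills the template variables in llm_configuration.messages.
body = {
    "llm_configuration": {
        "messages": [
            {"role": "user", "content": "Translate 'hello' into {{language}}."}
        ]
    },
    "llm_inputs": {"language": "French"},
}
```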

project_name
string | null
default: "default"

Project name used to associate the log with a project.

project_uuid
string | null

Project UUID used to associate the log with a project. Not required if project_name is provided.

experiment_uuid
string | null

Experiment UUID used to associate the log with an experiment.

parent_trace_id
string | null

UUID of the parent log. If given, it will be used to associate the generation with a chain and to create hierarchically nested logs.

root_trace_id
string | null

UUID of the root log. If given, it will be used to associate the generation with a chain and to create hierarchically nested logs.
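
To create hierarchically nested logs, a chain can pass explicit trace IDs, as in this sketch with client-generated UUIDs:

```python
import uuid

# Placeholder configuration shared by both steps.
config = {"messages": [{"role": "user", "content": "..."}]}

root_id = str(uuid.uuid4())

# Step 1: the root of the chain, so trace_id and root_trace_id coincide.
step1_body = {
    "trace_id": root_id,
    "root_trace_id": root_id,
    "llm_configuration": config,
}

# Step 2: nested under step 1 via parent_trace_id, sharing the same root.
step2_body = {
    "trace_id": str(uuid.uuid4()),
    "parent_trace_id": root_id,
    "root_trace_id": root_id,
    "llm_configuration": config,
}
```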

end_user_identifier
string | null

Special field to track the end user which is interacting with your LLM app.

deployment_id
string | null

The ID of a specific deployed prompt; you can find your deployed prompts on the Deployments tab. If a deployment_id is provided, Parea fetches all of the associated configuration, including the model name, model parameters, and any associated functions. Any values provided in the llm_configuration field are used instead of the deployed prompt's fields.
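
A sketch of calling a deployed prompt. The deployment ID value is a placeholder, and the model_params/temperature shape used for the override is an assumption; the precedence rule itself (llm_configuration over the deployed prompt) is documented above.

```python
body = {
    "deployment_id": "YOUR_DEPLOYMENT_ID",  # placeholder; see the Deployments tab
    "llm_inputs": {"language": "German"},   # fills the deployed prompt's template
    # Values set here take precedence over the deployed prompt's configuration.
    # The model_params/temperature shape is an assumption.
    "llm_configuration": {"model_params": {"temperature": 0.0}},
}
```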

eval_metric_ids
integer[] | null

IDs of evaluation metrics deployed on Parea that should be used to evaluate the completion output.

metadata
object | null

Key-value pairs to be associated with the log.

tags
string[] | null

List of tags to be associated with the log.

target
string | null

Optional ground-truth output for the inputs. Used for evaluation and when creating a test case from the log.
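
Taken together, the logging-related fields might be combined as in this sketch (all values are illustrative):

```python
body = {
    "llm_configuration": {"messages": [{"role": "user", "content": "2 + 2 = ?"}]},
    "end_user_identifier": "user-1234",  # end user interacting with your app
    "metadata": {"feature": "calculator", "app_version": "1.2.0"},
    "tags": ["math", "smoke-test"],
    "target": "4",  # ground truth for evaluation and test-case creation
}
```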

trace_id
string | null

UUID of the generation log. If not given, will be auto-generated.

trace_name
string | null

Name of the generation log. If not given, will be auto-generated in the format llm-{provider}.

provider_api_key
string | null

Provider API key used to generate the response. If not given, API keys saved on the platform will be used.

cache
boolean
default: true

If true, the completion will be cached to avoid latency and cost for any subsequent completion that uses the same inputs.

log_omit_inputs
boolean
default: false

If true, the inputs and the llm_configuration.messages, llm_configuration.functions, and llm_configuration.model_params fields will not be logged.

log_omit_outputs
boolean
default: false

If true, the generated response will not be logged.

log_omit
boolean
default: false

Equivalent to setting both log_omit_inputs and log_omit_outputs to true.

log_sample_rate
number | null
default: 1

If specified, this log and its entire associated trace will be logged with this probability. Must be between 0 and 1 (inclusive). Defaults to 1.0 (i.e., all logs are kept).
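
A sketch combining the caching, log-omission, and sampling controls:

```python
body = {
    "llm_configuration": {"messages": [{"role": "user", "content": "..."}]},
    "cache": True,            # reuse cached completions for identical inputs
    "log_omit_inputs": True,  # keep prompts, functions, and model params out of the log
    "log_sample_rate": 0.1,   # keep roughly 10% of traces
}
```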

inference_id
string | null

Deprecated field; same as trace_id.

name
string | null

Deprecated field.

retry
boolean
default: false

Deprecated field.

fallback_strategy
string[] | null

Deprecated field.

stream
boolean
default: false

Deprecated field. Use /completion/stream instead.

Response

200 - application/json
content
string
required

Generated completion content.

latency
number
required

Latency of the completion in seconds.

input_tokens
integer
required

Number of tokens in the input.

output_tokens
integer
required

Number of tokens in the output.

total_tokens
integer
required

Total number of tokens in the input and output.

cost
number
required

Cost of the completion in USD.

model
string
required

Model name.

provider
string
required

Provider name.

cache_hit
boolean
required

If true, the completion was fetched from the cache.

status
string
required

Status of the completion. Either 'success' or 'error'.

error
string | null

Error message if the completion failed.

trace_id
string | null

UUID of the completion log. Will be the same as the trace_id in the request if one was provided.

start_timestamp
string
required

Start timestamp of the completion.

end_timestamp
string
required

End timestamp of the completion.

inference_id
string
required

UUID of the completion log. Will be the same as the trace_id in the request if one was provided.
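
Continuing the request sketch from the top of this page, a client might consume the response fields like this:

```python
result = response.json()

if result["status"] == "success":
    print(result["content"])
    # Token and cost accounting, e.g. for a monitoring dashboard.
    print(
        f"{result['total_tokens']} tokens "
        f"({result['input_tokens']} in / {result['output_tokens']} out), "
        f"${result['cost']:.5f}, {result['latency']:.2f}s, "
        f"cache_hit={result['cache_hit']}"
    )
else:
    # status == 'error': the error field carries the message.
    raise RuntimeError(result["error"])
```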