Get an LLM
GET /v2/llms/:llm_id
Get details about a specific LLM.
Request
Path Parameters
- llm_id (string, required): The name of the LLM to retrieve.
Header Parameters
- Timeout in seconds (integer, >= 1): The API will make a best effort to complete the request in the specified number of seconds, or time out.
- Timeout in milliseconds (integer, >= 1): The API will make a best effort to complete the request in the specified number of milliseconds, or time out.
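A minimal sketch of calling this endpoint from Python, using only the standard library. The base URL (`https://api.vectara.io`) and the `x-api-key` authentication header are assumptions, not part of this page; substitute the values for your account.

```python
# Sketch of GET /v2/llms/:llm_id. Base URL and x-api-key header are assumed.
import json
import urllib.request

BASE_URL = "https://api.vectara.io"  # assumed base URL


def build_get_llm_request(llm_id: str, api_key: str) -> urllib.request.Request:
    """Build the GET /v2/llms/:llm_id request without sending it."""
    return urllib.request.Request(
        f"{BASE_URL}/v2/llms/{llm_id}",
        headers={"Accept": "application/json", "x-api-key": api_key},
        method="GET",
    )


def get_llm(llm_id: str, api_key: str) -> dict:
    """Send the request and decode the JSON body of a 200 response."""
    req = build_get_llm_request(llm_id, api_key)
    # urllib raises HTTPError for non-2xx codes such as 403 and 404.
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__":
    print(get_llm("llm_example", "YOUR_API_KEY"))
```

Building the request separately from sending it keeps the URL and header construction easy to inspect before any network call is made.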
Responses
- 200
- 403
- 404

200: The LLM details. (application/json)

Schema:
- id (string): The ID of the LLM. Possible values: must match the regular expression llm_.*
- name (string): The name of the LLM.
- description (string): The description of the LLM.
- enabled (boolean): Indicates whether the LLM is enabled.
- default (boolean): If this is the default LLM, it is used in queries when the generator is not specified.
- prompts (object[], deprecated): List of prompts that the model can use. This is deprecated; see /v2/generation_presets instead. Each prompt object contains:
  - id (string): The ID of the prompt. Possible values: must match the regular expression pmt_.*
  - name (string): The name of the prompt. This is used as the prompt_name in a query.
  - description (string): The description of the prompt.
  - enabled (boolean): Indicates whether the prompt is enabled.
  - default (boolean): Indicates if this prompt is the default prompt used with the LLM.
Example (from schema):

```json
{
  "id": "string",
  "name": "string",
  "description": "string",
  "enabled": true,
  "default": true
}
```
403: Permissions do not allow retrieving this LLM. (application/json)

Schema:
- messages (string[]): The messages describing why the error occurred.
- request_id (string): The ID of the request that can be used to help Vectara support debug what went wrong.
Example (from schema):

```json
{
  "messages": [
    "Internal server error."
  ],
  "request_id": "string"
}
```
404: LLM not found. (application/json)

Schema:
- id (string): The ID that cannot be found.
- messages (string[]): The messages describing why the error occurred.
- request_id (string): The ID of the request that can be used to help Vectara support debug what went wrong.
Example (from schema):

```json
{
  "id": "string",
  "messages": [
    "string"
  ],
  "request_id": "string"
}
```
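Both error bodies above carry a `messages` list and a `request_id`, and the 404 body also carries the `id` that could not be found. A minimal sketch of turning such a body into a single log line; the function name is illustrative, not part of the API:

```python
# Format a 403/404 error body (messages, request_id, optional id) as one line.
def format_api_error(status: int, body: dict) -> str:
    line = f"HTTP {status}: " + "; ".join(body.get("messages", []))
    if "id" in body:
        line += f" (id={body['id']})"
    if "request_id" in body:
        # Quote the request_id when asking Vectara support to investigate.
        line += f" [request_id={body['request_id']}]"
    return line
```

Capturing the `request_id` in logs matters most here, since it is the handle Vectara support uses to trace what went wrong with a specific request.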