Create a new turn in the chat
POST https://api.vectara.io/v2/chats/:chat_id/turns
Create a new turn in the chat. Each conversation has a series of turn
objects, which are the sequence of message and response pairs that make up the dialog.
Request
Path Parameters
Possible values: Value must match regular expression cht_.+$
The ID of the chat.
Header Parameters
Possible values: >= 1
The API makes a best effort to complete the request within the specified number of seconds, or the request times out.
Possible values: >= 1
The API makes a best effort to complete the request within the specified number of milliseconds, or the request times out.
- application/json
Body
The chat message or question.
How can I use the Vectara platform?
search object
generation object
chat object
Indicates whether to save the chat in both the chat and query history. This overrides chat.store.
true
Indicates whether to enable intelligent query rewriting. When enabled, the platform attempts to extract a metadata filter and rewrite the query to improve search results.
false
Indicates whether the response should be streamed or not.
false
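The request body fields above can be assembled as a plain JSON object before sending. A minimal sketch in Python (field names follow the schema on this page; the corpus key and values are illustrative, and the actual HTTP call is omitted):

```python
import json

def build_turn_body(query: str, corpus_key: str, stream: bool = False) -> dict:
    """Assemble a minimal body for POST /v2/chats/:chat_id/turns.

    Only a subset of the optional search/generation settings is shown;
    the values here mirror the defaults documented above.
    """
    return {
        "query": query,
        "search": {
            "corpora": [{"corpus_key": corpus_key, "lexical_interpolation": 0.025}],
            "limit": 10,
        },
        "generation": {"response_language": "auto", "max_used_search_results": 5},
        "chat": {"store": True},
        "save_history": True,               # overrides chat.store for history
        "intelligent_query_rewriting": False,
        "stream_response": stream,          # False -> single JSON response
    }

body = build_turn_body("How can I use the Vectara platform?", "my-corpus")
print(json.dumps(body, indent=2))
```

Send this body as `application/json` with the `x-api-key` header described under Authorization.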
Responses
- 200
- 400
- 403
- 404
A response to a chat request.
- application/json
- text/event-stream
- Schema
- Example (auto)
Schema
If the chat response was stored, the ID of the chat.
If the chat response was stored, the ID of the turn.
The message from the chat model for the chat message.
Languages that the Vectara platform supports.
Possible values: [auto, eng, deu, fra, zho, kor, ara, rus, tha, nld, ita, por, spa, jpn, pol, tur, vie, ind, ces, ukr, ell, heb, fas, hin, urd, swe, ben, msa, ron]
auto
search_results object[]
Indicates the probability that the summary is factually consistent with the results. The system excludes this property if it encounters excessively large outputs or search results.
The rendered prompt sent to the LLM. Useful when creating custom prompt_template templates.
Non-fatal warnings that occurred during request processing.
Possible values: [exceeded_max_input_length_fcs, intelligent_query_rewriting_failed]
The actual query sent to the backend after the LLM rephrased the input query.
rewritten_queries object[]
{
"chat_id": "string",
"turn_id": "string",
"answer": "string",
"response_language": "auto",
"search_results": [
{
"text": "string",
"score": 0,
"part_metadata": {},
"document_metadata": {},
"document_id": "string",
"table": {
"id": "table_1",
"title": "string",
"data": {
"headers": [
[
{
"text_value": "string",
"int_value": 0,
"float_value": 0,
"bool_value": true,
"colspan": 0,
"rowspan": 0
}
]
],
"rows": [
[
{
"text_value": "string",
"int_value": 0,
"float_value": 0,
"bool_value": true,
"colspan": 0,
"rowspan": 0
}
]
]
},
"description": "string"
},
"request_corpora_index": 0
}
],
"factual_consistency_score": 0,
"rendered_prompt": "string",
"warnings": [
"exceeded_max_input_length_fcs"
],
"rephrased_query": "string",
"rewritten_queries": [
{
"corpus_key": "string",
"filter_extraction": {
"query": "string",
"metadata_filter": "string"
}
}
]
}
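Reading the non-streamed response is a matter of picking fields out of the JSON object shown above. A short sketch (the response dict here is simulated with the schema's field names; values are made up for illustration):

```python
# Simulated non-streaming turn response, shaped like the schema above.
response = {
    "chat_id": "cht_123",
    "turn_id": "trn_456",
    "answer": "You can query your corpora through the Chat API.",
    "response_language": "auto",
    "factual_consistency_score": 0.87,
    "search_results": [
        {"text": "Vectara is a RAG platform ...", "score": 0.91, "document_id": "doc-1"},
        {"text": "Chats consist of turns ...", "score": 0.74, "document_id": "doc-2"},
    ],
}

# The answer is the generated message; search_results carry the evidence.
best = max(response["search_results"], key=lambda r: r["score"])
print(response["answer"])
print("top source:", best["document_id"], "score:", best["score"])
```

Note that `factual_consistency_score` may be absent when outputs or search results are excessively large, so treat it as optional when parsing.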
- Schema
- Example (auto)
Schema
- search_results
- chat_info
- generation_chunk
- generation_end
- generation_info
- factual_consistency_score
- end
- error
An individual event when the response is streamed.
Possible values: [search_results, chat_info, generation_chunk, generation_end, generation_info, factual_consistency_score, end, error]
search_results object[]
rewritten_queries object[]
ID of the chat.
Possible values: Value must match regular expression cht_.+$
ID of the turn.
Possible values: Value must match regular expression trn_.+$
Part of the message from the generator. All summary chunks must be appended together in order to get the full summary.
The rendered prompt sent to the LLM. Useful when creating custom prompt_template templates.
The actual query sent to the backend after the LLM rephrased the input query.
The probability that the summary is factually consistent with the results.
The error messages.
{
"type": "search_results",
"search_results": [
{
"text": "string",
"score": 0,
"part_metadata": {},
"document_metadata": {},
"document_id": "string",
"table": {
"id": "table_1",
"title": "string",
"data": {
"headers": [
[
{
"text_value": "string",
"int_value": 0,
"float_value": 0,
"bool_value": true,
"colspan": 0,
"rowspan": 0
}
]
],
"rows": [
[
{
"text_value": "string",
"int_value": 0,
"float_value": 0,
"bool_value": true,
"colspan": 0,
"rowspan": 0
}
]
]
},
"description": "string"
},
"request_corpora_index": 0
}
],
"rewritten_queries": [
{
"corpus_key": "string",
"filter_extraction": {
"query": "string",
"metadata_filter": "string"
}
}
]
}
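When `stream_response` is true, the client must fold the event sequence above back into a complete answer: `generation_chunk` payloads are appended in order, while the other event types carry metadata. A sketch of that accumulation logic (the event dicts are simulated; the exact payload field names, such as `generation_chunk` carrying the text fragment, are assumptions based on the event types listed above):

```python
def assemble_stream(events):
    """Fold decoded text/event-stream payloads (dicts with a "type" field,
    per the event types listed above) into the final answer plus metadata.
    """
    answer_parts, meta = [], {}
    for ev in events:
        kind = ev.get("type")
        if kind == "generation_chunk":
            # Chunks must be appended in order to recover the full summary.
            answer_parts.append(ev.get("generation_chunk", ""))
        elif kind == "chat_info":
            meta["chat_id"] = ev.get("chat_id")
            meta["turn_id"] = ev.get("turn_id")
        elif kind == "factual_consistency_score":
            meta["factual_consistency_score"] = ev.get("factual_consistency_score")
        elif kind == "error":
            raise RuntimeError(ev.get("messages"))
        elif kind == "end":
            break
    return "".join(answer_parts), meta

# Simulated event sequence, in the order a stream might deliver it.
events = [
    {"type": "chat_info", "chat_id": "cht_123", "turn_id": "trn_456"},
    {"type": "generation_chunk", "generation_chunk": "Vectara lets you "},
    {"type": "generation_chunk", "generation_chunk": "chat with your data."},
    {"type": "factual_consistency_score", "factual_consistency_score": 0.9},
    {"type": "end"},
]
answer, meta = assemble_stream(events)
print(answer)
```

A real client would decode each `data:` line of the `text/event-stream` body into one such dict before feeding it to this loop.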
Turn creation request was malformed.
- application/json
- Schema
- Example (auto)
Schema
field_errors object
The ID of the request that can be used to help Vectara support debug what went wrong.
{
"field_errors": {},
"messages": [
"string"
],
"request_id": "string"
}
Permissions do not allow creating a turn in the chat.
- application/json
- Schema
- Example (auto)
Schema
The messages describing why the error occurred.
The ID of the request that can be used to help Vectara support debug what went wrong.
{
"messages": [
"Internal server error."
],
"request_id": "string"
}
Corpus or chat not found.
- application/json
- Schema
- Example (auto)
Schema
The ID cannot be found.
The ID of the request that can be used to help Vectara support debug what went wrong.
{
"id": "string",
"messages": [
"string"
],
"request_id": "string"
}
Authorization: x-api-key
name: x-api-key, type: apiKey, in: header
- csharp
- curl
- dart
- go
- http
- java
- javascript
- kotlin
- c
- nodejs
- objective-c
- ocaml
- php
- powershell
- python
- r
- ruby
- rust
- shell
- swift
- HTTPCLIENT
- RESTSHARP
var client = new HttpClient();
var request = new HttpRequestMessage(HttpMethod.Post, "https://api.vectara.io/v2/chats/:chat_id/turns");
request.Headers.Add("Accept", "application/json");
request.Headers.Add("x-api-key", "<x-api-key>");
var content = new StringContent("{\n \"query\": \"How can I use the Vectara platform?\",\n \"search\": {\n \"corpora\": [\n {\n \"custom_dimensions\": {},\n \"metadata_filter\": \"doc.title = 'Charlotte''s Web'\",\n \"lexical_interpolation\": 0.025,\n \"semantics\": \"default\",\n \"corpus_key\": \"my-corpus\",\n \"query\": \"string\"\n }\n ],\n \"offset\": 0,\n \"limit\": 10,\n \"context_configuration\": {\n \"characters_before\": 30,\n \"characters_after\": 30,\n \"sentences_before\": 3,\n \"sentences_after\": 3,\n \"start_tag\": \"<em>\",\n \"end_tag\": \"</em>\"\n },\n \"reranker\": {\n \"type\": \"customer_reranker\",\n \"reranker_name\": \"Rerank_Multilingual_v1\",\n \"limit\": 0,\n \"cutoff\": 0\n }\n },\n \"generation\": {\n \"generation_preset_name\": \"vectara-summary-ext-v1.2.0\",\n \"max_used_search_results\": 5,\n \"prompt_template\": \"[\\n {\\\"role\\\": \\\"system\\\", \\\"content\\\": \\\"You are a helpful search assistant.\\\"},\\n #foreach ($qResult in $vectaraQueryResults)\\n {\\\"role\\\": \\\"user\\\", \\\"content\\\": \\\"Given the $vectaraIdxWord[$foreach.index] search result.\\\"},\\n {\\\"role\\\": \\\"assistant\\\", \\\"content\\\": \\\"${qResult.getText()}\\\" },\\n #end\\n {\\\"role\\\": \\\"user\\\", \\\"content\\\": \\\"Generate a summary for the query '${vectaraQuery}' based on the above results.\\\"}\\n]\\n\",\n \"max_response_characters\": 300,\n \"response_language\": \"auto\",\n \"model_parameters\": {\n \"llm_name\": \"gpt4\",\n \"max_tokens\": 0,\n \"temperature\": 0,\n \"frequency_penalty\": 0,\n \"presence_penalty\": 0\n },\n \"citations\": {\n \"style\": \"none\",\n \"url_pattern\": \"https://vectara.com/documents/{doc.id}\",\n \"text_pattern\": \"{doc.title}\"\n },\n \"enable_factual_consistency_score\": true\n },\n \"chat\": {\n \"store\": true\n },\n \"save_history\": true,\n \"intelligent_query_rewriting\": false,\n \"stream_response\": false\n}", null, "application/json");
request.Content = content;
var response = await client.SendAsync(request);
response.EnsureSuccessStatusCode();
Console.WriteLine(await response.Content.ReadAsStringAsync());