Version: 2.0

Corrects hallucinations in generated text based on source documents

POST /v2/hallucination_correctors/correct_hallucinations

Supported API Key Types: Query Service, Personal

The Hallucination Correctors API enables users to automatically detect and correct factual inaccuracies, commonly referred to as hallucinations, in generated summaries or responses. By comparing a user-provided summary against one or more source documents, the API returns a corrected version of the summary with minimal necessary edits.

Use this API to validate and improve the factual accuracy of summaries generated by LLMs in Retrieval Augmented Generation (RAG) pipelines, ensuring that the output remains grounded in trusted source content. If VHC does not detect any hallucinations, it preserves the original summary.

The response contains the corrected summary. If the input summary is already accurate, the corrected_text in the response matches the input generated_text.
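
A minimal sketch of calling the endpoint from Python is shown below. The generated_text and corrected_text names come from this page; the documents and query fields, the x-api-key header, and the api.vectara.io base URL are assumptions to be verified against the request schema below.

```python
import requests

API_KEY = "<your-api-key>"
ENDPOINT = (
    "https://api.vectara.io"  # assumed base URL
    "/v2/hallucination_correctors/correct_hallucinations"  # path documented above
)

def correct_hallucinations(
    generated_text: str, sources: list[str], query: str | None = None
) -> dict:
    """Check a generated summary against its source documents and
    return the API's JSON response, including corrected_text."""
    payload = {
        "generated_text": generated_text,  # the summary to validate (documented)
        # Assumed field name and shape for the source documents:
        "documents": [{"text": text} for text in sources],
    }
    if query is not None:
        payload["query"] = query  # assumed optional field for the user query

    response = requests.post(
        ENDPOINT,
        headers={
            "x-api-key": API_KEY,  # assumed auth header for the API key
            "Content-Type": "application/json",
        },
        json=payload,
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

result = correct_hallucinations(
    generated_text="The Eiffel Tower was completed in 1899.",
    sources=["The Eiffel Tower ... was completed in 1889."],
)
print(result["corrected_text"])  # corrected summary, or the original if accurate
```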

Interpreting empty corrections

In some cases, the corrected_text field in the response may be an empty string. This indicates that VHC determined the entire input text was hallucinated and recommends removing it completely.

This outcome is valid and typically occurs when none of the content in the generated_text is supported by the provided source documents or query. The response still includes an explanation of why VHC removed the text.
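
A small helper for handling this case downstream might look like the following sketch; it assumes only the corrected_text field described above.

```python
def apply_correction(response: dict) -> str | None:
    """Interpret a VHC response, treating an empty corrected_text as
    a recommendation to drop the generated text entirely."""
    corrected = response.get("corrected_text", "")
    if corrected == "":
        # VHC judged none of the input to be supported by the sources:
        # remove the text rather than display an empty summary.
        return None
    return corrected
```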

Request

Responses

Successfully analyzed the text for hallucinations
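
For orientation, a successful (200) response body might look like the sketch below, shown as a Python dict. Only corrected_text and the presence of an explanation are named on this page; the corrections list and its exact field names are illustrative assumptions.

```python
# Illustrative 200 response body; all field names other than
# corrected_text are assumptions for illustration only.
example_response = {
    "corrected_text": "The Eiffel Tower was completed in 1889.",
    "corrections": [
        {
            "original_text": "completed in 1899",
            "corrected_text": "completed in 1889",
            "explanation": "The source document gives 1889 as the completion year.",
        }
    ],
}
```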