Vectara Release Notes
Here’s where we keep you up to date with all the latest features and product documentation updates to help you get even more out of the Vectara platform. Whether you're building sophisticated generative AI applications, experimenting with Retrieval Augmented Generation (RAG), or exploring our newest API endpoints, this page is your go-to place to see how we’re evolving and how these product and documentation changes can benefit your enterprise.
New Integrations Section
October 17, 2024
Vectara introduces a new documentation section highlighting our integrations with various systems in the larger generative AI community, including Airbyte, DataVolo, Flowise, LangChain, LangFlow, LlamaIndex, and Unstructured.io. This section showcases how Vectara's advanced capabilities in document indexing and neural retrieval can enhance AI applications through strategic partnerships.
Why it matters: This update provides developers with an overview of Vectara's community and partner integrations, enabling them to leverage powerful tools and frameworks in conjunction with Vectara's capabilities. These integrations can enable developers to more easily enhance their AI applications, improve search accuracy, and streamline their development process.
More information:
Community Collaborations and Partnerships
Search Cutoffs and Limits
October 9, 2024
This feature introduces cutoffs and limits for search results. Cutoffs set a minimum relevance threshold, while limits control the maximum number of returned results after reranking. These can be applied individually or combined across various reranker types.
Why it matters: These controls allow developers to customize reranker inputs, ensuring highly relevant results and optimizing resource usage. By enabling more precise result filtering, application builders have more flexibility for specific use cases, from content categorization to focused data retrieval.
Updated API endpoints:
More information:
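To illustrate how a cutoff and a limit might be combined, here is a minimal sketch of a query body. The field names (`cutoff`, `limit`) and values are illustrative assumptions, not confirmed API contracts; consult the API reference for the exact schema.

```python
import json

# Hypothetical v2 query body: rerank candidates, then apply a relevance
# cutoff and a cap on the number of returned results. Field names are
# assumptions for illustration.
query_body = {
    "query": "What are the data retention policies?",
    "search": {
        "corpora": [{"corpus_key": "my-corpus"}],
        "limit": 50,  # candidates fetched before reranking
        "reranker": {
            "type": "customer_reranker",
            "reranker_name": "Rerank_Multilingual_v1",
            "cutoff": 0.5,  # drop results scoring below 0.5 after reranking
            "limit": 10,    # return at most 10 reranked results
        },
    },
}

print(json.dumps(query_body, indent=2))
```

Applied together, the reranker first filters out low-relevance results, then truncates whatever survives to the requested maximum.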
Chain Reranker
October 8, 2024
The Vectara chain reranker lets you apply multiple reranking strategies sequentially, giving you fine-grained control over result ordering. This feature enables the application of diverse ranking criteria at each stage of the ranking process, from neural reranking and maximal marginal relevance to custom business logic, all in a customizable sequence.
Why it matters: This capability addresses complex search scenarios that combine nuanced relevance requirements with business rules, and enables enterprises to fully customize Vectara's ranking behavior. By allowing the combination of various reranking strategies, it significantly enhances the quality of Retrieval Augmented Generation (RAG) outcomes.
Updated API endpoints:
More information:
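A chain configuration might look like the following sketch, which applies a neural reranker and then a maximal-marginal-relevance pass for diversity. The field names and the `diversity_bias` parameter are illustrative assumptions rather than a confirmed schema.

```python
import json

# Hypothetical chain reranker: a neural reranking stage followed by an
# MMR stage. Each entry runs in order on the previous stage's output.
chain_reranker = {
    "type": "chain",
    "rerankers": [
        {"type": "customer_reranker", "reranker_name": "Rerank_Multilingual_v1"},
        {"type": "mmr", "diversity_bias": 0.3},  # trade some relevance for diversity
    ],
}

print(json.dumps(chain_reranker, indent=2))
```

Order matters: placing the neural reranker first ensures the diversity pass operates on an already relevance-sorted candidate set.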
Document and Document Part/Vector Count API
October 1, 2024
You can now retrieve more comprehensive metrics about a corpus, including the number of documents or document parts.
Why it matters: Administrators can now efficiently manage resource allocation and monitor data usage trends. This feature helps ensure that corpus growth stays within allocated quotas and provides insights into document segmentation patterns.
Updated API endpoint:
Retrieve metadata about a corpus
More information:
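As a sketch of how these metrics could be used, the snippet below parses a hypothetical corpus-metadata response and derives a segmentation statistic. The field names (`document_count`, `part_count`) and values are illustrative assumptions, not the documented response shape.

```python
import json

# Hypothetical shape of a corpus-metadata response; field names and
# values are assumptions for illustration.
sample_response = json.loads("""
{
  "key": "my-corpus",
  "document_count": 1250,
  "part_count": 48210
}
""")

# Parts-per-document gives a quick view of segmentation patterns,
# useful for spotting unusually fragmented corpora.
avg_parts = sample_response["part_count"] / sample_response["document_count"]
print(f"{avg_parts:.1f} parts per document on average")
```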
UI Enhancement: Custom Prompts in Console
August 20, 2024
The Vectara Prompt Engine allows users to create customized prompt templates that can reference relevant text and metadata for Retrieval Augmented Generation (RAG) applications.
Why it matters: This feature enables more advanced workflows and customizations for creating context-aware responses, such as answering questions based on previous answers in RFIs or RFPs, drafting support tickets from user feedback, and customizing result formatting. The ability to define roles and provide detailed context in prompts helps guide LLMs to generate more accurate and relevant responses.
More information:
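A custom prompt template might resemble the sketch below: a list of role-tagged messages with placeholders that the Prompt Engine fills in from the query and retrieved text. The variable names (`$vectaraQuery`, `$vectaraQueryResults`) and the overall template shape are illustrative assumptions; see the prompt documentation for the actual template syntax.

```python
import json

# Hypothetical prompt template: a system role establishes behavior, and
# placeholders (names assumed for illustration) reference the user's
# query and the retrieved passages.
prompt_template = [
    {
        "role": "system",
        "content": "You are a support engineer drafting replies to customer tickets.",
    },
    {
        "role": "user",
        "content": "Answer $vectaraQuery using only these passages: $vectaraQueryResults",
    },
]

print(json.dumps(prompt_template, indent=2))
```

Defining a role in the system message, as described above, helps guide the LLM toward the desired tone and scope.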
User Defined Function Reranking
August 15, 2024
Vectara introduces the User Defined Function Reranker, giving enterprises more granular control over search result ordering by defining custom reranking functions using document-level metadata, part-level metadata, or scores generated from request-level metadata.
Why it matters: This feature allows enterprises to modify scores based on metadata, conditions, and custom logic, in order to craft highly tailored search experiences. This advanced functionality can guide LLMs to prioritize certain information, especially when used with the chain reranker. Use cases can include recency bias for news searches, location bias for local business queries, and e-commerce bias for promotional content.
More information:
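A user-defined reranking function might be configured as in the sketch below, which boosts promoted documents. The `userfn` type name and the expression syntax shown are illustrative assumptions; refer to the reranker documentation for the supported expression language.

```python
import json

# Hypothetical UDF reranker: adds a fixed boost to documents whose
# metadata marks them as promoted. The expression syntax is an
# assumption for illustration.
udf_reranker = {
    "type": "userfn",
    "user_function": (
        "if (get('$.document_metadata.is_promoted'), "
        "get('$.score') + 0.2, get('$.score'))"
    ),
}

print(json.dumps(udf_reranker, indent=2))
```

The same pattern extends to recency bias (comparing a publication-date field to the current time) or location bias (comparing coordinates in metadata).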
Mockingbird LLM
July 16, 2024
Vectara releases Mockingbird, our Large Language Model (LLM) optimized for Retrieval Augmented Generation (RAG) scenarios. It offers enhanced accuracy and improved performance in summarizing large datasets, generating structured data, and providing multilingual support.
Why it matters: Mockingbird outperforms leading models in RAG quality, citation accuracy, and structured output precision. It's particularly valuable for enterprises requiring accurate summaries of large data volumes, structured data extraction, and multilingual capabilities. Mockingbird supports critical languages including Arabic, French, Spanish, Portuguese, Italian, German, Chinese, Dutch, Korean, Japanese, and Russian, making it ideal for global applications.
More information:
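Selecting Mockingbird for generation might look like the sketch below. The preset name, field names, and language code are illustrative assumptions; check the generation documentation for the exact model identifier and parameters.

```python
import json

# Hypothetical generation config selecting Mockingbird and requesting
# a French-language summary. The preset name and field names are
# assumptions for illustration.
generation = {
    "generation_preset_name": "mockingbird-1.0",  # assumed preset identifier
    "max_used_search_results": 5,  # summarize the top 5 retrieved results
    "response_language": "fra",    # French, one of Mockingbird's supported languages
}

print(json.dumps(generation, indent=2))
```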
Vectara REST API v2
June 6, 2024
The Vectara API v2 provides a more RESTful, intuitive structure with simpler authentication, new top-level objects, and better defaults for hybrid search and reranking, making it easier to develop applications with Vectara’s GenAI platform.
Why it matters: This update significantly improves the developer experience, making it easier to integrate Vectara into applications. The standardized error codes and improved defaults reduce development overhead and potential silent errors. While REST API v2 is a major improvement, Vectara continues to support gRPC for low-latency applications.
New API endpoint(s):
More information:
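The simpler authentication model can be sketched as follows: a single API-key header and a resource-oriented path. The header name, key placeholder, and path are illustrative assumptions about the v2 surface, not a verbatim specification.

```python
# Hypothetical v2 request shape: one API-key header plus a
# resource-oriented endpoint path. Values are placeholders.
endpoint = "https://api.vectara.io/v2/query"
headers = {
    "x-api-key": "<YOUR_API_KEY>",       # assumed header name; key is a placeholder
    "Content-Type": "application/json",
}

# A request library of your choice would POST a JSON query body to
# `endpoint` with these headers.
print(endpoint)
```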
Vectara Multilingual Reranker v1 (Slingshot)
May 28, 2024
The state-of-the-art Vectara Multilingual Reranker, also known as Slingshot, provides more accurate neural ranking than the initial Boomerang retrieval. By significantly improving the precision of retrieved search results, Slingshot enhances the performance of Retrieval Augmented Generation (RAG) pipelines. It excels in globally distributed, multilingual environments, reducing irrelevant responses and minimizing hallucinations in generative AI applications.
Why it matters: Slingshot significantly enhances the precision of retrieved results, crucial for reducing hallucinations and irrelevant responses in generative AI applications. While computationally more expensive, it offers improved text scoring across a wide range of languages (100+), making it suitable for diverse content as a powerful tool for enterprises.
Deprecated: The reranker_id parameter and its value rnk_272725719 have been deprecated. Use reranker_name with Rerank_Multilingual_v1 instead.
More information:
- Vectara Multilingual Reranker
- Unlocking the State-of-the-Art Reranker: Introducing the Vectara Multilingual Reranker_v1
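The deprecation described above amounts to a one-field swap in the reranker configuration. The `type` value and surrounding shape below are illustrative assumptions; the identifier names come from the deprecation notice itself.

```python
# Before: numeric reranker_id (deprecated, per the notice above).
old_reranker = {"type": "customer_reranker", "reranker_id": "rnk_272725719"}

# After: named reranker, the supported form going forward.
new_reranker = {"type": "customer_reranker", "reranker_name": "Rerank_Multilingual_v1"}

print(new_reranker["reranker_name"])
```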
Semantic Conversation History Search
May 15, 2024
Vectara now allows administrators to search across conversation logs for specific patterns or unresolved queries. This leverages semantic search capabilities to identify gaps in knowledge bases and pinpoint "unknown unknowns" in conversations where users may have asked unexpected or unresolved questions.
Why it matters: With this capability, enterprises can enhance their customer support by analyzing user interactions and improving response accuracy. They can identify unresolved or ambiguous user questions, even if the language is informal or the question does not fit specific patterns.
More information:
Semantic Conversation History Search
Generative Response Styling
May 15, 2024
Vectara now allows users to format citations in summaries using Markdown or HTML and to include document- and part-level metadata directly in citation links. This feature is useful for enterprises that require formatted, context-rich summaries when integrating generative responses into web-based applications, ensuring citations are clear and appropriately formatted.
Why it matters: By allowing structured citations, Vectara simplifies the integration process for developers who need to embed references directly into user-facing applications without additional parsing logic. This improvement enhances usability for various platforms, including web-based content and internal systems that support HTML or Markdown.
More information:
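A citation-styling configuration might resemble the sketch below, where link URLs and link text are built from document metadata. The field names (`style`, `url_pattern`, `text_pattern`) and the placeholder syntax are illustrative assumptions; consult the citations documentation for the supported patterns.

```python
import json

# Hypothetical citation config: Markdown-style citations whose URL and
# display text pull from document metadata. Field names and placeholder
# syntax are assumptions for illustration.
citations = {
    "style": "markdown",
    "url_pattern": "https://example.com/docs/{doc.id}",  # hypothetical placeholder syntax
    "text_pattern": "{doc.title}",
}

print(json.dumps(citations, indent=2))
```

With a configuration like this, generated summaries can carry ready-to-render links, avoiding extra parsing logic in the consuming application.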