Description
What would you like to see?
Posting from Discord:
I would like to inquire about a potential enhancement for the Workspace API, specifically regarding the /v1/workspace/{slug}/chat endpoint.
Details:
API Endpoint: /v1/workspace/{slug}/chat
Description: Execute a chat with a workspace.
Parameters:
slug: The unique identifier of the workspace.
Authorization: Authentication token.
Request body: JSON object containing the message and mode of conversation (query or chat).
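For reference, here is a minimal sketch of calling this endpoint as described above. It assumes a local AnythingLLM instance at http://localhost:3001 and response fields named textResponse and sources; the base URL, path prefix, and exact field names may differ per deployment:

```python
import requests

BASE_URL = "http://localhost:3001"  # assumed local AnythingLLM instance
API_KEY = "YOUR_API_KEY"            # API key used as the bearer token
SLUG = "my-workspace"               # example workspace slug

# POST /v1/workspace/{slug}/chat with a message and a mode (query or chat).
resp = requests.post(
    f"{BASE_URL}/v1/workspace/{SLUG}/chat",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"message": "What does the onboarding doc say about SSO?", "mode": "query"},
    timeout=60,
)
resp.raise_for_status()
data = resp.json()

# Today the response carries both the LLM-generated answer and the sources.
print(data.get("textResponse"))  # assumed field name for the LLM answer
print(data.get("sources"))       # assumed field name for the source chunks
```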
Proposed Enhancement:
Currently, the /v1/workspace/{slug}/chat endpoint returns a response that includes both the text response generated by the LLM (Large Language Model) and the associated sources. However, is there a way for users to get only the text from the sources, without relying on the LLM response, especially in query mode? That way the LLM would never be invoked, regardless of whether relevant information is found. How would I be able to do this?
Basically, use AnythingLLM as your vector database and just return the chunks, reusing all the configuration and setup already done in AnythingLLM. A sketch of what that could look like follows.
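The sketch below illustrates the requested behavior, not an existing feature: the "sourcesOnly" flag is hypothetical, invented here purely to show the shape of the enhancement. If such a mode existed, the server would run only the vector retrieval step (using the workspace's existing embedding and vector DB config) and return the matched chunks without ever calling the LLM:

```python
import requests

BASE_URL = "http://localhost:3001"  # assumed local AnythingLLM instance
API_KEY = "YOUR_API_KEY"
SLUG = "my-workspace"

resp = requests.post(
    f"{BASE_URL}/v1/workspace/{SLUG}/chat",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "message": "What does the onboarding doc say about SSO?",
        "mode": "query",
        "sourcesOnly": True,  # HYPOTHETICAL flag: skip the LLM, return chunks only
    },
    timeout=60,
)
resp.raise_for_status()

# Expected (hypothetical) shape: only the retrieved chunks, no textResponse.
for chunk in resp.json().get("sources", []):
    print(chunk)
```

With the current API there is no equivalent client-side workaround: reading only the sources field out of the response still invokes the LLM on every request, which is exactly what this enhancement would avoid.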