
[FEAT]: API vector database endpoints #881

@timothycarambat

Description

What would you like to see?

Posting from Discord:

I would like to inquire about a potential enhancement for the WorkspaceAPI, specifically regarding the /v1/workspace/{slug}/chat endpoint.

Details:

API Endpoint: /v1/workspace/{slug}/chat
Description: Execute a chat with a workspace.
Parameters:
slug: The unique identifier of the workspace.
Authorization: Authentication token.
Request body: JSON object containing the message and mode of conversation (query or chat).

Proposed Enhancement:

Currently, the /v1/workspace/{slug}/chat endpoint returns a response that includes both the text response generated by the LLM and the associated sources. However, is there a way for users to retrieve only the text of the sources without relying on the LLM response, especially in query mode? In that case the LLM would never be invoked, regardless of whether relevant information is found. How would I be able to do this?

Basically, use AnythingLLM as your vector database and just return the chunks but using all the config and set up already done on AnythingLLM.
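To make the request concrete, here is a minimal sketch of what calling such a retrieval-only endpoint could look like. The endpoint path (`/v1/workspace/{slug}/vector-search`), the payload fields (`query`, `topN`), and the base URL are all assumptions for illustration, not the actual AnythingLLM API; only the auth-header and slug conventions are taken from the existing chat endpoint described above.

```python
import json
import urllib.request

# Assumed local AnythingLLM API address; adjust to your deployment.
BASE_URL = "http://localhost:3001/api"


def build_vector_search_request(slug: str, query: str, api_key: str, top_n: int = 4):
    """Build (without sending) a POST request to a hypothetical
    chunks-only search endpoint that skips the LLM entirely."""
    url = f"{BASE_URL}/v1/workspace/{slug}/vector-search"  # hypothetical path
    body = json.dumps({"query": query, "topN": top_n}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",  # same auth scheme as the chat endpoint
            "Content-Type": "application/json",
        },
        method="POST",
    )


# Example: inspect the request that would be sent.
req = build_vector_search_request("my-workspace", "refund policy", "YOUR_API_KEY")
print(req.full_url)
print(req.get_method())
```

The response would then just be the matching chunks (text plus metadata) from the workspace's vector database, reusing the embedder and vector-store configuration already set up in AnythingLLM.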
