
[CHORE]: API disconnect on slow responses #4443

@AnalogKnight

Description

How are you running AnythingLLM?

AnythingLLM desktop app

What happened?

I have deployed Ollama on a server on my local network, running very large models on the CPU. Responses are extremely slow: just loading a model can take 10–30 minutes.

When I try to chat with the model through AnythingLLM, it always returns the following message after about 5 minutes: “Your Ollama instance could not be reached or is not responding. Please make sure it is running the API server and your connection information is correct in AnythingLLM.”
I assume that AnythingLLM has a fixed 5‑minute timeout when connecting to the Ollama server. Is there any way to change this timeout? My Ollama instance is just very slow to respond, not unresponsive.
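
For illustration, here is a minimal sketch of how a fixed client-side timeout like the one I suspect typically works in Node with `fetch` and `AbortSignal.timeout`, and what making it configurable would look like. This is not AnythingLLM's actual code; `OLLAMA_TIMEOUT_MS` and the 5-minute default are assumptions for the sketch, while `/api/chat` is Ollama's standard chat endpoint:

```ts
// Sketch only: a fetch-based call to Ollama's /api/chat with a
// configurable timeout. OLLAMA_TIMEOUT_MS is a hypothetical override,
// not an actual AnythingLLM setting; the 5-minute default mirrors the
// behavior I am seeing.
const OLLAMA_HOST = process.env.OLLAMA_HOST ?? "http://127.0.0.1:11434";
const TIMEOUT_MS = Number(process.env.OLLAMA_TIMEOUT_MS ?? 5 * 60 * 1000);

async function chat(model: string, prompt: string): Promise<string> {
  const res = await fetch(`${OLLAMA_HOST}/api/chat`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model,
      messages: [{ role: "user", content: prompt }],
      stream: false,
    }),
    // Aborts the request once TIMEOUT_MS elapses; raising this value
    // is all a configurable timeout would amount to.
    signal: AbortSignal.timeout(TIMEOUT_MS),
  });
  if (!res.ok) throw new Error(`Ollama returned HTTP ${res.status}`);
  const data = (await res.json()) as { message: { content: string } };
  return data.message.content;
}
```

With a pattern like this, letting a slow CPU-bound model finish would only mean changing `TIMEOUT_MS`.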

I tried setting the Ollama environment variable OLLAMA_REQUEST_TIMEOUT, but it doesn’t seem to work, so I believe the issue lies in the API request sent by AnythingLLM. If I am mistaken, I apologize in advance.
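
One way I can think of to separate a dead server from a client-side timeout is to probe an endpoint that answers immediately. A sketch, assuming Ollama's `/api/tags` listing endpoint responds without having to load a model (which is my understanding, not something verified here):

```ts
// Sketch only: distinguish "unreachable" from "just slow". If /api/tags
// answers within a short probe window while chat requests still fail at
// the 5-minute mark, that points at a client-side timeout rather than
// an unresponsive server.
async function isReachable(host = "http://127.0.0.1:11434"): Promise<boolean> {
  try {
    const res = await fetch(`${host}/api/tags`, {
      signal: AbortSignal.timeout(10_000), // short probe timeout
    });
    return res.ok;
  } catch {
    return false;
  }
}
```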

Any help would be appreciated.

Are there known steps to reproduce?

No response
