
[BUG]: No response is returned, no errors in logs #2464

@JackTheTripperr

Description

How are you running AnythingLLM?

AnythingLLM desktop app

What happened?

I'm using a local installation of AnythingLLM on Windows 11 with APIpie via the generic OpenAI provider option. When I submit my prompt, I don't get a response. I'm not sure where things are getting hung up because I'm not seeing any errors when running debug or checking the logs. Additionally, I can see through my API usage that the API provider charged me for the completion, so I'm not entirely sure why the response isn't showing up.

I've tried various models (admittedly, all from APIpie) and checked for firewall issues, you name it. Everything appears to be working correctly; I just don't see the response. Any help would be greatly appreciated!
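For what it's worth, a common cause of "the provider billed the completion but the UI shows nothing" with OpenAI-compatible proxies is a streamed response whose chunks don't have the exact shape the client parses (`choices[0].delta.content`). The sketch below is a hypothetical illustration of that failure mode, not AnythingLLM's actual parser: it reassembles an OpenAI-style SSE stream and shows how a differently shaped chunk silently yields an empty reply.

```python
import json

def collect_stream_text(sse_body: str) -> str:
    """Reassemble an assistant reply from an OpenAI-style SSE stream.

    A generic OpenAI-compatible client typically reads only
    choices[0].delta.content from each chunk; if a provider streams
    chunks in any other shape, the reply renders as empty even though
    the completion was generated (and billed) upstream.
    """
    text = []
    for line in sse_body.splitlines():
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip SSE comments and blank keep-alive lines
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break  # end-of-stream sentinel used by OpenAI-style APIs
        chunk = json.loads(payload)
        delta = chunk.get("choices", [{}])[0].get("delta", {})
        text.append(delta.get("content") or "")
    return "".join(text)

# A well-formed stream yields the full reply:
ok = (
    'data: {"choices":[{"delta":{"role":"assistant"}}]}\n'
    'data: {"choices":[{"delta":{"content":"Hel"}}]}\n'
    'data: {"choices":[{"delta":{"content":"lo"}}]}\n'
    'data: [DONE]\n'
)
print(collect_stream_text(ok))  # -> Hello

# A provider that puts the text elsewhere (e.g. under "message"
# instead of "delta") produces an empty reply with no error raised:
odd = 'data: {"choices":[{"message":{"content":"Hello"}}]}\ndata: [DONE]\n'
print(collect_stream_text(odd))  # -> "" (nothing to display)
```

If a direct request to the same endpoint with streaming disabled returns a normal JSON body, that would point at the stream format rather than connectivity.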

{"level":"info","message":"\u001b[36m[EncryptionManager]\u001b[0m Loaded existing key & salt for encrypting arbitrary data.","service":"backend"}
{"level":"info","message":"\u001b[32m[TELEMETRY ENABLED]\u001b[0m Anonymous Telemetry enabled. Telemetry helps Mintplex Labs Inc improve AnythingLLM.","service":"backend"}
{"level":"info","message":"prisma:info Starting a sqlite pool with 17 connections.","service":"backend"}
{"level":"info","message":"\u001b[32m[TELEMETRY SENT]\u001b[0m {\"event\":\"server_boot\",\"distinctId\":\"f679be1b-d29e-437f-8524-49101d68923d\",\"properties\":{\"runtime\":\"desktop\"}}","service":"backend"}
{"level":"info","message":"Skipping preloading of AnythingLLMOllama - LLM_PROVIDER is generic-openai.","service":"backend"}
{"level":"info","message":"\u001b[36m[CommunicationKey]\u001b[0m RSA key pair generated for signed payloads within AnythingLLM services.","service":"backend"}
{"level":"info","message":"\u001b[36m[EncryptionManager]\u001b[0m Loaded existing key & salt for encrypting arbitrary data.","service":"backend"}
{"level":"info","message":"[production] AnythingLLM Standalone Backend listening on port 3001. Network discovery is enabled.","service":"backend"}
{"level":"info","message":"\u001b[36m[BackgroundWorkerService]\u001b[0m Feature is not enabled and will not be started.","service":"backend"}
{"level":"info","message":"\u001b[32m[Event Logged]\u001b[0m - update_llm_provider","service":"backend"}
{"level":"info","message":"\u001b[36m[NativeEmbedder]\u001b[0m Initialized","service":"backend"}
{"level":"info","message":"\u001b[32m[Event Logged]\u001b[0m - update_llm_provider","service":"backend"}
{"level":"info","message":"\u001b[36m[EncryptionManager]\u001b[0m Loaded existing key & salt for encrypting arbitrary data.","service":"backend"}
{"level":"info","message":"\u001b[32m[TELEMETRY ENABLED]\u001b[0m Anonymous Telemetry enabled. Telemetry helps Mintplex Labs Inc improve AnythingLLM.","service":"backend"}
{"level":"info","message":"prisma:info Starting a sqlite pool with 17 connections.","service":"backend"}
{"level":"info","message":"\u001b[32m[TELEMETRY SENT]\u001b[0m {\"event\":\"server_boot\",\"distinctId\":\"f679be1b-d29e-437f-8524-49101d68923d\",\"properties\":{\"runtime\":\"desktop\"}}","service":"backend"}
{"level":"info","message":"Skipping preloading of AnythingLLMOllama - LLM_PROVIDER is generic-openai.","service":"backend"}
{"level":"info","message":"\u001b[36m[CommunicationKey]\u001b[0m RSA key pair generated for signed payloads within AnythingLLM services.","service":"backend"}
{"level":"info","message":"\u001b[36m[EncryptionManager]\u001b[0m Loaded existing key & salt for encrypting arbitrary data.","service":"backend"}
{"level":"info","message":"[production] AnythingLLM Standalone Backend listening on port 3001. Network discovery is enabled.","service":"backend"}
{"level":"info","message":"\u001b[36m[BackgroundWorkerService]\u001b[0m Feature is not enabled and will not be started.","service":"backend"}
{"level":"info","message":"\u001b[36m[NativeEmbedder]\u001b[0m Initialized","service":"backend"}
{"level":"info","message":"\u001b[36m[uo]\u001b[0m Inference API: https://apipie.ai/v1/ Model: hermes-3-llama-3.1-405b","service":"backend"}
{"level":"info","message":"\u001b[32m[TELEMETRY SENT]\u001b[0m {\"event\":\"sent_chat\",\"distinctId\":\"f679be1b-d29e-437f-8524-49101d68923d\",\"properties\":{\"multiUserMode\":false,\"LLMSelection\":\"generic-openai\",\"Embedder\":\"native\",\"VectorDbSelection\":\"lancedb\",\"multiModal\":false,\"TTSSelection\":\"elevenlabs\",\"runtime\":\"desktop\"}}","service":"backend"}
{"level":"info","message":"\u001b[32m[Event Logged]\u001b[0m - sent_chat","service":"backend"}

(Screenshots attached: screen01, screen02, screen03, screen04.)

Are there known steps to reproduce?

No response
