Closed
Labels
Integration Request (Request for support of a new LLM, Embedder, or Vector database), enhancement (New feature or request), feature request
Description
How are you running AnythingLLM?
AnythingLLM desktop app
What happened?
I'm using a local installation of AnythingLLM on Windows 11 with APIpie via the generic OpenAI provider option. When I submit a prompt, I get no response. I can't tell where things are getting hung up: neither debug mode nor the logs show any errors. Additionally, my API usage shows that the provider charged me for the completion, so the request is clearly reaching APIpie; the response just never appears in the chat.
I've tried various models (admittedly, all from APIpie), checked for firewall issues, you name it. Everything appears to be working correctly; I just never see the response. Any help would be greatly appreciated!
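One way to narrow this down (a sketch, not an official diagnostic): a "charged but empty" symptom often means the provider's streaming response doesn't match the OpenAI-style SSE shape a generic client expects, so the chunks are silently discarded. The snippet below is a minimal, hypothetical parser for that format — if you capture the raw response from `https://apipie.ai/v1/chat/completions` (e.g. with curl) and this function returns an empty string, the stream shape is likely the culprit. The sample lines are illustrative, not real APIpie output.

```python
import json

def parse_sse_chunks(raw: str) -> str:
    """Assemble assistant text from OpenAI-style SSE stream lines.

    Assumes each chunk arrives as a `data: {...}` line, with the stream
    terminated by `data: [DONE]`. If a provider deviates from this shape,
    a generic client can silently produce an empty response.
    """
    text = []
    for line in raw.splitlines():
        line = line.strip()
        if not line.startswith("data:"):
            continue  # ignore comments, blank keep-alives, etc.
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break
        delta = json.loads(payload)["choices"][0].get("delta", {})
        text.append(delta.get("content") or "")
    return "".join(text)

# Illustrative stream, as an OpenAI-compatible provider would emit it:
sample = (
    'data: {"choices":[{"delta":{"content":"Hello"}}]}\n'
    'data: {"choices":[{"delta":{"content":", world"}}]}\n'
    'data: [DONE]\n'
)
print(parse_sse_chunks(sample))  # Hello, world
```

Comparing the raw bytes from the provider against this expected shape would at least tell you whether the problem is on the provider side or in the client.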
{"level":"info","message":"\u001b[36m[EncryptionManager]\u001b[0m Loaded existing key & salt for encrypting arbitrary data.","service":"backend"}
{"level":"info","message":"\u001b[32m[TELEMETRY ENABLED]\u001b[0m Anonymous Telemetry enabled. Telemetry helps Mintplex Labs Inc improve AnythingLLM.","service":"backend"}
{"level":"info","message":"prisma:info Starting a sqlite pool with 17 connections.","service":"backend"}
{"level":"info","message":"\u001b[32m[TELEMETRY SENT]\u001b[0m {\"event\":\"server_boot\",\"distinctId\":\"f679be1b-d29e-437f-8524-49101d68923d\",\"properties\":{\"runtime\":\"desktop\"}}","service":"backend"}
{"level":"info","message":"Skipping preloading of AnythingLLMOllama - LLM_PROVIDER is generic-openai.","service":"backend"}
{"level":"info","message":"\u001b[36m[CommunicationKey]\u001b[0m RSA key pair generated for signed payloads within AnythingLLM services.","service":"backend"}
{"level":"info","message":"\u001b[36m[EncryptionManager]\u001b[0m Loaded existing key & salt for encrypting arbitrary data.","service":"backend"}
{"level":"info","message":"[production] AnythingLLM Standalone Backend listening on port 3001. Network discovery is enabled.","service":"backend"}
{"level":"info","message":"\u001b[36m[BackgroundWorkerService]\u001b[0m Feature is not enabled and will not be started.","service":"backend"}
{"level":"info","message":"\u001b[32m[Event Logged]\u001b[0m - update_llm_provider","service":"backend"}
{"level":"info","message":"\u001b[36m[NativeEmbedder]\u001b[0m Initialized","service":"backend"}
{"level":"info","message":"\u001b[32m[Event Logged]\u001b[0m - update_llm_provider","service":"backend"}
{"level":"info","message":"\u001b[36m[EncryptionManager]\u001b[0m Loaded existing key & salt for encrypting arbitrary data.","service":"backend"}
{"level":"info","message":"\u001b[32m[TELEMETRY ENABLED]\u001b[0m Anonymous Telemetry enabled. Telemetry helps Mintplex Labs Inc improve AnythingLLM.","service":"backend"}
{"level":"info","message":"prisma:info Starting a sqlite pool with 17 connections.","service":"backend"}
{"level":"info","message":"\u001b[32m[TELEMETRY SENT]\u001b[0m {\"event\":\"server_boot\",\"distinctId\":\"f679be1b-d29e-437f-8524-49101d68923d\",\"properties\":{\"runtime\":\"desktop\"}}","service":"backend"}
{"level":"info","message":"Skipping preloading of AnythingLLMOllama - LLM_PROVIDER is generic-openai.","service":"backend"}
{"level":"info","message":"\u001b[36m[CommunicationKey]\u001b[0m RSA key pair generated for signed payloads within AnythingLLM services.","service":"backend"}
{"level":"info","message":"\u001b[36m[EncryptionManager]\u001b[0m Loaded existing key & salt for encrypting arbitrary data.","service":"backend"}
{"level":"info","message":"[production] AnythingLLM Standalone Backend listening on port 3001. Network discovery is enabled.","service":"backend"}
{"level":"info","message":"\u001b[36m[BackgroundWorkerService]\u001b[0m Feature is not enabled and will not be started.","service":"backend"}
{"level":"info","message":"\u001b[36m[NativeEmbedder]\u001b[0m Initialized","service":"backend"}
{"level":"info","message":"\u001b[36m[uo]\u001b[0m Inference API: https://apipie.ai/v1/ Model: hermes-3-llama-3.1-405b","service":"backend"}
{"level":"info","message":"\u001b[32m[TELEMETRY SENT]\u001b[0m {\"event\":\"sent_chat\",\"distinctId\":\"f679be1b-d29e-437f-8524-49101d68923d\",\"properties\":{\"multiUserMode\":false,\"LLMSelection\":\"generic-openai\",\"Embedder\":\"native\",\"VectorDbSelection\":\"lancedb\",\"multiModal\":false,\"TTSSelection\":\"elevenlabs\",\"runtime\":\"desktop\"}}","service":"backend"}
{"level":"info","message":"\u001b[32m[Event Logged]\u001b[0m - sent_chat","service":"backend"}
Are there known steps to reproduce?
No response