How are you running AnythingLLM?
All versions
What happened?
For some users, depending on the context window length and the specific chat being sent, it appears possible for a non-string (null) value to be passed into the tiktoken calculation. This crashes the API call, which then returns a null response.
```
TypeError: text.substring is not a function
    at _Tiktoken.encode (/home/ubuntu/anything-llm/server/node_modules/js-tiktoken/dist/index.cjs:137:32)
    at TokenManager.tokensFromString (/home/ubuntu/anything-llm/server/utils/helpers/tiktoken.js:22:33)
    at TokenManager.countFromString (/home/ubuntu/anything-llm/server/utils/helpers/tiktoken.js:33:25)
    at /home/ubuntu/anything-llm/server/utils/helpers/tiktoken.js:47:28
    at Array.reduce (<anonymous>)
    at TokenManager.statsFrom (/home/ubuntu/anything-llm/server/utils/helpers/tiktoken.js:46:39)
    at messageArrayCompressor (/home/ubuntu/anything-llm/server/utils/helpers/chat/index.js:56:20)
    at GeminiLLM.compressMessages (/home/ubuntu/anything-llm/server/utils/AiProviders/gemini/index.js:223:18)
    at chatWithWorkspace (/home/ubuntu/anything-llm/server/utils/chats/index.js:206:39)
    at async /home/ubuntu/anything-llm/server/endpoints/api/workspace/index.js:594:24
```
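One possible mitigation (a hypothetical sketch, not the actual AnythingLLM code): since js-tiktoken's `encode` calls `text.substring` internally, any non-string input (null, undefined, a number) throws this exact `TypeError`. Coercing the input to a string before counting tokens would prevent the crash. The `safeTokenCount` name and the stand-in encoder below are illustrative assumptions:

```javascript
// Hypothetical guard: coerce non-string input to a string before
// handing it to the tokenizer, so null/undefined never reach
// encode(), which assumes a real string.
function safeTokenCount(encoder, text) {
  // Treat null/undefined as empty; stringify anything else.
  const safeText = typeof text === "string" ? text : String(text ?? "");
  return encoder.encode(safeText).length;
}

// Minimal stand-in encoder so this sketch is self-contained; the real
// code would use a js-tiktoken Tiktoken instance. Like the real
// encoder, it calls .substring on its input, so a null would throw.
const fakeEncoder = {
  encode: (t) => t.substring(0).split(/\s+/).filter(Boolean),
};

console.log(safeTokenCount(fakeEncoder, "hello world")); // 2
console.log(safeTokenCount(fakeEncoder, null));          // 0 (no TypeError)
```

With this guard, a pruned or empty message contributes zero tokens instead of crashing the whole `statsFrom` reduce.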
Are there known steps to reproduce?
Send API workspace chats enough times that the model's context window overflows, so the chat history needs to be pruned and compressed.