
[FEAT]: Configurable Delay Between Embedding Requests #3570

@RuiU

Description


What would you like to see?

I'm currently using the experimental Gemini Embeddings model through the generic OpenAI-compatible API in AnythingLLM Desktop. The model has a rate limit of 5/10 RPM (requests per minute). Even after setting the max concurrent chunks to 1, I sometimes still get error 429 when uploading certain PDF files (sent one by one, a few minutes apart, each under 1 MB). The same files embed smoothly via the Gemini API's text-embedding-004.

[429 RESOURCE_EXHAUSTED You've exceeded the rate limit. You are sending too many requests per minute with the free tier Gemini API. Ensure you're within the model's rate limit. Request a quota increase if needed]

Would it be possible to add a configuration option that allows users to set a delay between embedding requests?
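For illustration only, a minimal sketch of what I have in mind is below. The setting name `EMBEDDING_REQUEST_DELAY_MS` and the `embedChunksWithDelay` helper are hypothetical, not existing AnythingLLM options; the idea is just to pause between embedding calls so the provider's RPM limit isn't exceeded.

```ts
// Hypothetical sketch: a configurable pause between embedding requests.
// EMBEDDING_REQUEST_DELAY_MS is an assumed setting, not a real AnythingLLM env var.
const delayMs = Number(process.env.EMBEDDING_REQUEST_DELAY_MS ?? 0);

const sleep = (ms: number) =>
  new Promise<void>((resolve) => setTimeout(resolve, ms));

// embedBatch stands in for whatever function sends one batch of chunks
// to the embedding provider and returns their vectors.
async function embedChunksWithDelay(
  chunkBatches: string[][],
  embedBatch: (batch: string[]) => Promise<number[][]>
): Promise<number[][]> {
  const vectors: number[][] = [];
  for (const batch of chunkBatches) {
    vectors.push(...(await embedBatch(batch)));
    // Wait before the next request, e.g. 12000 ms to stay under a 5 RPM cap.
    if (delayMs > 0) await sleep(delayMs);
  }
  return vectors;
}
```

With something like this, users on heavily rate-limited free tiers could trade slower ingestion for reliable uploads instead of hitting 429s.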
