
Conversation

@timothycarambat
Member

Pull Request Type

  • ✨ feat
  • πŸ› fix
  • ♻️ refactor
  • πŸ’„ style
  • πŸ”¨ chore
  • πŸ“ docs

Relevant Issues

resolves #1173

What is in this change?

Caches the OpenRouter /models API response in storage so that we can get the proper context window lengths when initializing the LLM.
Marks the cached response as stale after 1 week via a .cached_at file in storage.

Additional Information

Anyone currently using OpenRouter will populate this cache on their first request for models or on their first chat.

Developer Validations

  • I ran yarn lint from the root of the repo & committed changes
  • Relevant documentation has been updated
  • I have tested my code functionality
  • Docker build succeeds locally

@timothycarambat timothycarambat merged commit ac6ca13 into master Apr 23, 2024
@timothycarambat timothycarambat deleted the 1173-dynamic-cache-openrouter branch April 23, 2024 18:10
cabwds pushed a commit to cabwds/anything-llm that referenced this pull request Jul 3, 2025
* patch agent invocation rule

* Add dynamic model cache from OpenRouter API for context length and available models


Development

Successfully merging this pull request may close these issues.

[FEAT]: OpenRouter - Fetch from API
