
Conversation

@shatfield4
Collaborator

Pull Request Type

  • ✨ feat
  • πŸ› fix
  • ♻️ refactor
  • πŸ’„ style
  • πŸ”¨ chore
  • πŸ“ docs

Relevant Issues

resolves #1840

What is in this change?

  • Some LLM providers require the user to manually type in the model preference name. When that is done from the agent settings, the agentModel column in the workspace table is never updated, so we now grab the correct model from the environment and inject it into the agent during provider setup (see the sketch after this list)
  • This fixes a bug where a user on the Generic OpenAI LLM provider never had the agent pick up their configured model and it would always default back to gpt-3.5-turbo
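
For reference, a minimal sketch of that fallback in TypeScript. The function resolveAgentModel and the env variable GENERIC_OPEN_AI_MODEL_PREF are illustrative assumptions, not the exact identifiers used in this PR:

```ts
// Minimal sketch (assumed names, not the actual anything-llm code) of the
// fallback described above: providers whose model preference is typed in
// manually never update the workspace's agentModel column, so the agent
// should fall back to the provider's environment setting before defaulting
// to gpt-3.5-turbo.
function resolveAgentModel(
  workspaceAgentModel: string | null,
  providerEnvModel: string | undefined,
  fallbackModel: string = "gpt-3.5-turbo"
): string {
  // Prefer the workspace value, then the provider's env setting, then the default.
  return workspaceAgentModel ?? providerEnvModel ?? fallbackModel;
}

// Example: for a manual-input provider the env variable holds whatever the
// user typed into the settings form (the env var name here is an assumption).
const agentModel = resolveAgentModel(
  null, // agentModel column was never written for manual-input providers
  process.env.GENERIC_OPEN_AI_MODEL_PREF
);
console.log(`Agent will use model: ${agentModel}`);
```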

Additional Information

Developer Validations

  • I ran yarn lint from the root of the repo & committed changes
  • Relevant documentation has been updated
  • I have tested my code functionality
  • Docker build succeeds locally

@timothycarambat added the needs info / can't replicate (Issues that require additional information and/or cannot currently be replicated, but possible bug) and PR:needs review (Needs review by core team) labels, and removed the needs info / can't replicate label on Jul 11, 2024
update UI to show disabled providers to stop questions about provider limitations
@timothycarambat merged commit 8f0af88 into master on Jul 11, 2024
@timothycarambat deleted the 1840-why-am-i-using-agent-to-access-gpt35-instead-of-my-local-modelbug branch on July 11, 2024 at 21:03
CrackerCat pushed a commit to CrackerCat/anything-llm that referenced this pull request Jul 31, 2024
* patch llm providers that have manual inputs for model pref

* refactor agent model fallback
update UI to show disabled providers to stop questions about provider limitations

* patch log on startup

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
cabwds pushed a commit to cabwds/anything-llm that referenced this pull request Jul 3, 2025

Labels

PR:needs review Needs review by core team

Development

Successfully merging this pull request may close these issues.

[BUG]: Why am I using Agent to access GPT3.5 instead of my local model

3 participants