
Conversation

@timothycarambat
Member

Pull Request Type

  • ✨ feat
  • πŸ› fix
  • ♻️ refactor
  • πŸ’„ style
  • πŸ”¨ chore
  • πŸ“ docs

Relevant Issues

resolves #952

What is in this change?

The latest release of LMStudio, 0.2.17, introduced multi-model chatting. This version contains a bug that breaks all integrations that rely on its inference server.

The current workaround is to ping `/models`, read the values returned there, and allow them to be set. When running the single-model inference server, the model is always named `Loaded from Chat UI`; when running the inference server in multi-model mode, a more reasonable name is returned.

For now, all we can do is try to get the correct value from `/models` and allow the user to set whatever is returned from there. This does not impact LMStudio integrations running <0.2.17 and will work for any patches thereafter.
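For reference, a minimal sketch of what that `/models` ping looks like. This assumes LMStudio's default OpenAI-compatible base path of `http://localhost:1234/v1` (the endpoint path and response shape follow the OpenAI models API); the function and type names here are illustrative, not the actual code in this PR:

```ts
// Query LMStudio's OpenAI-compatible /models endpoint and surface whatever
// model IDs it reports, rather than hardcoding a model name.
type LMStudioModel = { id: string };
type LMStudioModelsResponse = { data: LMStudioModel[] };

async function listLMStudioModels(
  basePath = "http://localhost:1234/v1" // assumption: LMStudio's default base path
): Promise<string[]> {
  const res = await fetch(`${basePath}/models`);
  if (!res.ok) throw new Error(`LMStudio /models request failed: ${res.status}`);
  const body = (await res.json()) as LMStudioModelsResponse;
  // The single-model server reports "Loaded from Chat UI"; the multi-model
  // server returns real model identifiers the user can choose between.
  return body.data.map((model) => model.id);
}

// Usage: present the returned IDs so the user can pick their chat model.
listLMStudioModels().then((ids) => console.log("Available models:", ids));
```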

Developer Validations

  • I ran yarn lint from the root of the repo & committed changes
  • Relevant documentation has been updated
  • I have tested my code functionality
  • Docker build succeeds locally

@timothycarambat merged commit 1135853 into master on Mar 22, 2024
@timothycarambat deleted the 952-lmstudio-patch branch on Mar 22, 2024
cabwds pushed a commit to cabwds/anything-llm that referenced this pull request Jul 3, 2025


Development

Successfully merging this pull request may close these issues.

[BUG]: [LMStudio 0.2.17] Model with key 'model-placeholder' not loaded.
