Patch LMStudio Inference server bug integration #957
Merged
Pull Request Type
Relevant Issues
resolves #952
What is in this change?
The latest release of LMStudio, 0.2.17, introduced multi-model chatting. This release contains a bug that breaks all integrations relying on its inference server.
The current workaround is to ping `/models`, read the values returned there, and allow them to be set. When running the single-model inference server, the model is always called `Loaded from Chat UI`; the multi-model inference server returns a more reasonable name. For now, all we can do is try to get the correct value from `/models` and allow the user to set whatever is returned from there. This does not impact LMStudio integrations running <0.2.17 and will work for any patches thereafter.
Developer Validations
`yarn lint` from the root of the repo & committed changes