forked from huggingface/chat-ui
Test New LLMs (Llama2, CodeLlama, etc.) on Chat-UI? #1
Open: krrishdholakia wants to merge 21 commits into varunvummadi:main from krrishdholakia:patch-3
Conversation
* add optional timestamp field to messages
* Add a `hashConv` function that only uses a subset of the message for hashing
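A minimal sketch of what a `hashConv` along these lines could look like, assuming messages carry `from` and `content` fields plus the new optional timestamp; the field names and the Node `crypto` usage are assumptions, not the PR's actual code:

```ts
import { createHash } from "crypto";

// Assumed message shape; the optional timestamp added by this PR is
// deliberately excluded from the hashed subset.
interface Message {
  from: "user" | "assistant";
  content: string;
  createdAt?: Date;
}

// Hash only the fields that define the conversation's content, so
// metadata like timestamps cannot change an existing conversation's hash.
function hashConv(messages: Message[]): string {
  const subset = messages.map(({ from, content }) => ({ from, content }));
  return createHash("sha256").update(JSON.stringify(subset)).digest("hex");
}
```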
* Add ability to define custom model/dataset URLs
* lint

---------

Co-authored-by: Nathan Sarrazin <sarrazin.nathan@gmail.com>
* Update README.md
* Update README.md
Co-authored-by: Julien Chaumond <julien@huggingface.co>
* Align with header
* lint
* fixed markdown table of content

---------

Co-authored-by: Julien Chaumond <julien@huggingface.co>
Co-authored-by: Nathan Sarrazin <sarrazin.nathan@gmail.com>
* disable login on first message
* update banner here too
* modal wording tweaks
* prevent NaN

---------

Co-authored-by: Victor Mustar <victor.mustar@gmail.com>
This reverts commit 6183fe7.
* disable login on first message
* update banner here too
* modal wording tweaks
* prevent NaN
* fix login wall
* fix flicker
* lint
* put modal text behind login check
* fix bug with sending messages without login
* fix misalignment between ui and api
* fix data update on disable login

---------

Co-authored-by: Nathan Sarrazin <sarrazin.nathan@gmail.com>
This reverts commit 7767757.
* Update README.md
* Update README.md
Co-authored-by: Julien Chaumond <julien@huggingface.co>
* Update README.md

---------

Co-authored-by: Julien Chaumond <julien@huggingface.co>
The `userMessageToken`, `assistantMessageToken`, `messageEndToken`, and `parameters.stop` settings in `MODELS` do not have to be tokens; they can be any string.
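As an illustration, one entry of the parsed `MODELS` config might look like this (the model name and all string values here are made up, not taken from the PR):

```ts
// Illustrative MODELS entry: none of these values need to be real
// tokenizer tokens; they are treated as plain strings.
const exampleModel = {
  name: "example/instruct-model", // hypothetical model id
  userMessageToken: "User: ",
  assistantMessageToken: "Assistant: ",
  messageEndToken: "\n",
  parameters: {
    stop: ["User: "], // stop sequences can be arbitrary strings too
    max_new_tokens: 512,
  },
};
```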
* rm open assistant branding
* Update SettingsModal.svelte
* make settings work with a dynamic list of models
* fixed types

---------

Co-authored-by: Nathan Sarrazin <sarrazin.nathan@gmail.com>
The chat generation removes `parameters.stop` and `<|endoftext|>` from the generated text and additionally trims trailing whitespace. This PR copies that behavior to the summarize functionality when the summary is produced by the chat model.
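A minimal sketch of the behavior being copied, assuming the stop sequences come in as a string array; the helper name `trimSuffixes` is invented for illustration:

```ts
// Remove any trailing stop sequence or <|endoftext|> marker, then
// trim trailing whitespace, mirroring the chat-generation cleanup.
function trimSuffixes(generated: string, stop: string[] = []): string {
  let text = generated;
  for (const suffix of [...stop, "<|endoftext|>"]) {
    if (suffix && text.endsWith(suffix)) {
      text = text.slice(0, -suffix.length);
    }
  }
  return text.trimEnd();
}
```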
* allow different user and assistant end-token

For models like Llama2, the end token is not the same for a userMessage and an assistantMessage. This implements `userMessageEndToken` and `assistantMessageEndToken`, which override the `messageEndToken` behavior. This PR also allows empty strings as `userMessageToken` and `assistantMessageToken` and makes this the default. This adds additional flexibility, which is required in the case of Llama2, where the first userMessage is effectively different because of the system message. Note that because `userMessageEndToken` and `assistantMessageToken` are nearly always concatenated, it is almost redundant to have both. The exception is `generateQuery` for websearch, which has several consecutive user messages.

* Make model branding customizable based on env var (huggingface#345)

* rm open assistant branding
* Update SettingsModal.svelte
* make settings work with a dynamic list of models
* fixed types

---------

Co-authored-by: Nathan Sarrazin <sarrazin.nathan@gmail.com>

* trim and remove stop-suffixes from summary (huggingface#369)

The chat generation removes `parameters.stop` and `<|endoftext|>` from the generated text and additionally trims trailing whitespace. This PR copies that behavior to the summarize functionality when the summary is produced by the chat model.

* add a login button when users are logged out (huggingface#381)

* add fallback to message end token if there are no specified tokens for user & assistant

---------

Co-authored-by: Florian Zimmermeister <flozi00.fz@gmail.com>
Co-authored-by: Nathan Sarrazin <sarrazin.nathan@gmail.com>
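To make the asymmetry concrete, a hypothetical Llama2-style entry using the new fields; the token strings follow the Llama2 chat format, but the exact values here are assumptions, not copied from the PR:

```ts
// Hypothetical Llama2 chat entry: the user turn ends with [/INST]
// while the assistant turn ends with </s>; a single messageEndToken
// cannot express this asymmetry.
const llama2Model = {
  name: "meta-llama/Llama-2-7b-chat-hf",
  userMessageToken: "",              // empty by default per this PR
  userMessageEndToken: " [/INST] ",
  assistantMessageToken: "",
  assistantMessageEndToken: " </s><s>[INST] ",
  parameters: {
    stop: ["</s>"],
  },
};
```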
* Use modelUrl instead of building it from model name
* Preserve compatibility with optional modelUrl config

Use modelUrl if provided, else fall back to the previous behavior.
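The compatibility logic amounts to a simple fallback; a sketch, assuming the previous behavior built a Hugging Face hub URL from the model name:

```ts
// Prefer an explicit modelUrl from the config; otherwise fall back
// to building the Hugging Face hub URL from the model name.
function getModelUrl(model: { name: string; modelUrl?: string }): string {
  return model.modelUrl ?? `https://huggingface.co/${model.name}`;
}
```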
Hi @varunvummadi,
Noticed you forked chat-ui. If you're trying to test other LLMs besides GigaML's finetuned ones (CodeLlama, WizardCoder, etc.) with it, I just wrote a 1-click proxy that translates OpenAI calls into Hugging Face, Anthropic, TogetherAI, etc. API calls.
code
Here's the PR on adding OpenAI support to chat-ui: huggingface#452
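If it helps, here's a rough sketch of how such a proxy would be called, assuming it exposes the standard OpenAI chat-completions shape on a local port (the URL, port, and model id below are hypothetical):

```ts
// Hypothetical: an OpenAI-style request sent to a locally running
// proxy, which routes it to the named backing provider/model.
const res = await fetch("http://localhost:8000/chat/completions", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "codellama/CodeLlama-7b-Instruct-hf", // example model id
    messages: [{ role: "user", content: "Write hello world in Python" }],
  }),
});
const data = await res.json();
console.log(data.choices[0].message.content);
```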
I'd love to know if this solves a problem for you.