3 changes: 2 additions & 1 deletion server/storage/models/.gitignore

@@ -1,2 +1,3 @@
 Xenova
-downloaded/*
+downloaded/*
+!downloaded/.placeholder
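
Git cannot track empty directories, so the negation pattern is what keeps `downloaded/` present in fresh clones: everything inside the folder is ignored except the committed `.placeholder`. A quick way to sanity-check the rules with `git check-ignore` (run from the repository root; the model file name is hypothetical):

```sh
# Prints the matching rule, e.g. "server/storage/models/.gitignore:2:downloaded/*"
git check-ignore -v server/storage/models/downloaded/example-model.gguf

# Not ignored, thanks to the negation pattern; check-ignore exits non-zero here
git check-ignore -v server/storage/models/downloaded/.placeholder
```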
6 changes: 5 additions & 1 deletion server/storage/models/README.md

@@ -30,4 +30,8 @@ If you would like to use a local Llama compatible LLM model for chatting you can
 > If running in Docker you should be running the container to a mounted storage location on the host machine so you
 > can update the storage files directly without having to re-download or re-build your docker container. [See suggested Docker config](../../../README.md#recommended-usage-with-docker-easy)

-All local models you want to have available for LLM selection should be placed in the `storage/models/downloaded` folder. Only `.gguf` files will be allowed to be selected from the UI.
+> [!NOTE]
+> `/server/storage/models/downloaded` is the default location that your model files should be at.
+> Your storage directory may differ if you changed the STORAGE_DIR environment variable.
+
+All local models you want to have available for LLM selection should be placed in the `server/storage/models/downloaded` folder. Only `.gguf` files will be allowed to be selected from the UI.
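
As a usage sketch, dropping a model into the default location would look like this (the URL and file name are placeholders, and the `$STORAGE_DIR/models/downloaded` path is an assumption based on the note above):

```sh
# Default location; presumably $STORAGE_DIR/models/downloaded if STORAGE_DIR is set.
cd server/storage/models/downloaded

# Fetch any Llama-compatible GGUF model. Only .gguf files appear in the UI picker.
curl -L -o my-model.Q4_K_M.gguf https://example.com/path/to/my-model.Q4_K_M.gguf
```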
1 change: 1 addition & 0 deletions server/storage/models/downloaded/.placeholder

@@ -0,0 +1 @@
+All your .GGUF model file downloads you want to use for chatting should go into this folder.