How are you running AnythingLLM?
AnythingLLM desktop app
What happened?

Cannot use a Hugging Face Inference Endpoint (dedicated, paid) as the model, because the "Workspace Chat model" field can neither be chosen nor left empty when the provider is HuggingFace.
Are there known steps to reproduce?
- Set up HuggingFace in the system config: LLM Preference => Inference Endpoint, then enter the HuggingFace Inference Endpoint, HuggingFace Access Token, and Model Token Limit.
- Workspace settings => Chat Settings => choose HuggingFace as the provider => the workspace chat model list is empty => clicking the "Update workspace" button shows "please select an item from the list".
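For context, the dedicated endpoint itself appears to be fine; it can be queried directly outside AnythingLLM. A minimal sanity check, assuming a standard dedicated Inference Endpoint (the endpoint URL and `$HF_TOKEN` below are placeholders, not real values):

```shell
# Query the dedicated Inference Endpoint directly.
# Replace <your-endpoint> with the actual endpoint hostname and set HF_TOKEN
# to the HuggingFace Access Token used in the AnythingLLM system config.
curl "https://<your-endpoint>.endpoints.huggingface.cloud" \
  -X POST \
  -H "Authorization: Bearer $HF_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"inputs": "Hello"}'
```

So the problem seems limited to the workspace chat settings UI, which requires a model selection even though a dedicated endpoint serves a single fixed model.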