
[FEAT]: Easily switch LLM models for each request #2095

@ajmurmann

Description

What would you like to see?

Prices and capabilities vary greatly between different providers and models. It would be fabulous to have a dropdown in the "Send a message" control that allows me to choose a provider. For simple requests I might use my local LLM or one of the cheaper commercial models. For harder questions I'd want to use one of the more powerful but pricier models.

Ideally I'd be able to select "regenerate with model X" on an already generated response if the model I used didn't provide a satisfying answer.
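To make the idea a bit more concrete, here's a minimal sketch of what a per-message model override could look like on the request side. The endpoint, field names, provider/model values, and response shape are all hypothetical and not taken from the project's actual API; when no override is given, the workspace default would apply.

```ts
// Hypothetical per-message override; all names here are illustrative,
// not the project's real API.
interface ChatRequest {
  message: string;
  // Optional override selected from the dropdown; falls back to the
  // workspace's default provider/model when omitted.
  provider?: "openai" | "anthropic" | "ollama";
  model?: string;
}

async function sendMessage(req: ChatRequest): Promise<string> {
  const res = await fetch("/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(req),
  });
  // Assumes the endpoint returns { text: string }.
  const data = (await res.json()) as { text: string };
  return data.text;
}

// Cheap local model for a simple request...
await sendMessage({ message: "Summarize this note", provider: "ollama", model: "llama3" });
// ...and a pricier model to regenerate an unsatisfying answer.
await sendMessage({ message: "Explain the tradeoffs in depth", provider: "openai", model: "gpt-4o" });
```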
