
Conversation

@timothycarambat
Member

Cleanup LLM selection code
resolves #311

@review-agent-prime

The PR looks good overall, but there are a few areas where improvements can be made:

  1. In the AnthropicAiOptions component, the AnthropicApiKey input field is of type password. This is good practice for sensitive data, but it may be worth giving users a way to toggle the field's visibility, which helps prevent mistyped API keys. (A fuller sketch follows this list.)
+import { useState } from 'react';
...
+const [showApiKey, setShowApiKey] = useState(false);
...
+<input
+  type={showApiKey ? "text" : "password"}
+  ...
+/>
+<button onClick={() => setShowApiKey(!showApiKey)}>
+  {showApiKey ? "Hide API Key" : "Show API Key"}
+</button>
  2. In the OpenAiOptions component, you're using lodash.debounce to delay the state update of openAIKey. This is a good practice to limit the number of updates, but it might be better to use a local state for the input field and only update the openAIKey state when the input field is blurred. This can prevent unnecessary API calls if you're using the openAIKey state to fetch data.
+const [inputKey, setInputKey] = useState(settings?.OpenAiKey);
...
+<input
+  ...
+  defaultValue={inputKey ? "*".repeat(20) : ""}
+  onChange={(e) => setInputKey(e.target.value)}
+  onBlur={onUpdateOpenAiKey}
+/>
  3. In the LLMSelection component, you're using an alert to display errors. This might not provide the best user experience. Consider using a more user-friendly way to display errors, such as a toast notification or an error message near the form.
+import showToast from "../../../../../utils/toast";
...
+if (error) {
+  showToast(`Failed to save LLM settings: ${error}`, "error");
+  return;
+}
  4. In the AzureAiOptions and AnthropicAiOptions components, you're using the defaultValue prop to set the initial value of the input fields. This is fine for uncontrolled components, but if you want to make these components controlled, consider using the value prop instead and updating the state when the input field changes.
+// default to "" so the input stays controlled from the first render
+const [azureEndpoint, setAzureEndpoint] = useState(settings?.AzureOpenAiEndpoint ?? "");
...
+<input
+  ...
+  value={azureEndpoint}
+  onChange={(e) => setAzureEndpoint(e.target.value)}
+/>
  5. In the OpenAIModelSelection component, you're disabling the select field when the models are loading. This is a good practice, but it might be better to also display a loading spinner to give users visual feedback that the data is being fetched.
+import Spinner from "../../../../../components/Spinner";
...
+if (loading) {
+  return (
+    <div className="flex flex-col w-60">
+      ...
+      <Spinner />
+    </div>
+  );
+}
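
Putting point 1 together, a self-contained sketch of the visibility-toggle pattern (the component name, prop names, and styling here are illustrative, not the PR's actual code):

import { useState } from "react";

// Illustrative only: a password input with a show/hide toggle.
export default function ApiKeyInput({ defaultValue, onChange }) {
  const [showApiKey, setShowApiKey] = useState(false);

  return (
    <div className="flex items-center gap-x-2">
      <input
        type={showApiKey ? "text" : "password"}
        defaultValue={defaultValue}
        onChange={onChange}
        autoComplete="off"
      />
      {/* type="button" so the toggle never submits an enclosing form */}
      <button type="button" onClick={() => setShowApiKey(!showApiKey)}>
        {showApiKey ? "Hide API Key" : "Show API Key"}
      </button>
    </div>
  );
}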

Please consider these suggestions and apply them if you find them useful.

The changes in this PR are well-structured and improve the modularity of the code. Extracting the options for each AI provider into separate components (OpenAiOptions, AzureAiOptions, AnthropicAiOptions) is good practice, as it makes the code more readable and maintainable.

However, I noticed that the settings prop is passed to each of these new components, but it's not clear if all the settings are used in each component. If not all settings are used, consider passing only the necessary ones to each component. This would make the components more reusable and easier to understand.

For example, if OpenAiOptions only uses the OpenAiKey and OpenAiModelPref settings, you could pass only these:

<OpenAiOptions apiKey={settings?.OpenAiKey} modelPref={settings?.OpenAiModelPref} />

And then in the OpenAiOptions component, you would use props.apiKey and props.modelPref instead of props.settings.OpenAiKey and props.settings.OpenAiModelPref, as sketched below.
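
A minimal sketch of what that refactor might look like (illustrative, and assuming OpenAiOptions only renders these two fields):

// Illustrative sketch: OpenAiOptions receives only the settings it actually uses.
export default function OpenAiOptions({ apiKey, modelPref }) {
  return (
    <div className="flex gap-x-4">
      <input
        type="password"
        name="OpenAiKey"
        defaultValue={apiKey ? "*".repeat(20) : ""}
        placeholder="OpenAI API Key"
      />
      <input
        type="text"
        name="OpenAiModelPref"
        defaultValue={modelPref}
        placeholder="Preferred chat model"
      />
    </div>
  );
}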

This is just a suggestion for further improvement; overall, the changes in this PR are well done.

The PR looks good overall, but there are a few areas where improvements can be made:

  1. In frontend/src/models/system.js, the customModels function is using a POST request to fetch data. This is not a common practice and can lead to confusion. It's better to use a GET request for fetching data and reserve POST for creating new resources. If you need to send data to the server to fetch the custom models, consider using query parameters or request headers.
customModels: async function (provider, apiKey) {
  // encode the key so special characters survive the query string
  return fetch(`${API_BASE}/system/custom-models?provider=${provider}&apiKey=${encodeURIComponent(apiKey)}`, {
    method: "GET",
    headers: baseHeaders(),
  })
  // rest of the code
},
  2. In server/endpoints/system.js, the customModels endpoint is also using a POST request. It should be changed to a GET request to align with the changes in the frontend code.
app.get(
  "/system/custom-models",
  [validatedRequest],
  async (request, response) => {
    try {
      const { provider, apiKey } = request.query;
      const { models, error } = await getCustomModels(provider, apiKey);
      return response.status(200).json({
        models,
        error,
      });
    } catch (error) {
      console.error(error);
      response.status(500).end();
    }
  }
);
  3. In server/utils/AiProviders/openAi/index.js, the isValidChatCompletionModel function is making an API call to validate the model name. This could lead to unnecessary API calls if the function is called frequently. Consider caching the result of the API call to reduce the number of requests (a minimal sketch follows this list).

  4. In server/utils/helpers/updateENV.js, the validOpenAIModel function has been removed. If you're removing this function because the model validation is now done in the isValidChatCompletionModel function, make sure to update the checks array for OpenAiModelPref to include a call to isValidChatCompletionModel instead of validOpenAIModel.

OpenAiModelPref: {
  envKey: "OPEN_MODEL_PREF",
  checks: [isNotEmpty, isValidChatCompletionModel],
},
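
One note on point 4: for the checks array above to work, isValidChatCompletionModel would presumably need to be exported from server/utils/AiProviders/openAi/index.js and imported into updateENV.js.

For the caching suggestion in point 3, a minimal in-memory memoization sketch (fetchModelValidation is a hypothetical stand-in for the existing OpenAI API call that validates the model name):

// Illustrative only: cache validation results so repeated calls for the
// same model name do not trigger repeated API requests.
const modelValidationCache = new Map();

async function isValidChatCompletionModel(modelName = "") {
  if (modelValidationCache.has(modelName)) {
    return modelValidationCache.get(modelName);
  }
  const isValid = await fetchModelValidation(modelName);
  modelValidationCache.set(modelName, isValid);
  return isValid;
}

Finally, on points 1 and 2: API keys passed as query parameters can end up in server access logs and browser history, so of the two alternatives mentioned in point 1, a request header is generally the safer place for credentials.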

Please consider these suggestions and apply the necessary changes.

@timothycarambat timothycarambat merged commit 67c85f1 into master Oct 31, 2023
@timothycarambat timothycarambat deleted the 311-open-ai-fine-tune-selection branch October 31, 2023 18:38
franzbischoff referenced this pull request in franzbischoff/anything-llm Nov 4, 2023
* Implement retrieval and use of fine-tune models
Cleanup LLM selection code
resolves #311

* Cleanup from PR bot
cabwds pushed a commit to cabwds/anything-llm that referenced this pull request Jul 3, 2025
* Implement retrieval and use of fine-tune models
Cleanup LLM selection code
resolves Mintplex-Labs#311

* Cleanup from PR bot

Development

Successfully merging this pull request may close these issues.

OpenAI Custom model support
