feat: Add AI/ML API (aimlapi) as new LLM provider #4156
Conversation
Introduces support for the AI/ML API (aimlapi) as a new LLM provider across the frontend and backend. Adds configuration options, provider selection UI, privacy info, and model selection for aimlapi. Implements backend integration for chat completions, agent support, and custom model listing. Updates environment examples and helper utilities to support the new provider.
All the code here LGTM, though I am getting 404s on some models during testing. I do see that the Anthropic API is having downtime at the moment, so this may be the cause.
I patched a bug here where the model and API key were not saving correctly. Other than that, all looks good!
I have not pulled the code to test functionality, but there are a few issues/nit-picks I can see just from reviewing the code.
Env naming convention
When `LLM_PROVIDER=aimlapi`:
`AIML_API_KEY` => `AIML_LLM_API_KEY`

When `EMBEDDING_ENGINE=aimlapi`:
`AIML_API_KEY` => `AIML_EMBEDDER_API_KEY`

This avoids a collision where someone may want to use two different API keys for the LLM and the embedder, and it keeps side effects from changing one key from impacting the other. We made this mistake a long time ago with OpenAI/Azure OpenAI, where the two share a key.
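A minimal sketch of the separated-key idea, assuming the env variable names suggested above; the helper function and fallback behavior here are illustrative, not the PR's actual code:

```javascript
// Hypothetical helper: resolve the API key per subsystem so changing the
// LLM key can never silently affect the embedder, and vice versa.
// Falls back to the legacy shared AIML_API_KEY only if no dedicated
// key is set (fallback is an assumption, not in the PR).
function aimlApiKeyFor(subsystem) {
  const keys = {
    llm: process.env.AIML_LLM_API_KEY,
    embedder: process.env.AIML_EMBEDDER_API_KEY,
  };
  return keys[subsystem] ?? process.env.AIML_API_KEY ?? null;
}
```

With both variables set, each subsystem reads only its own key, which is exactly the isolation the reviewer is asking for.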
Missing Privacy Entry for Embedder
The data handling page will crash/show an invalid state if the user has AIML set as the embedder, since there is no data handling entry for it as an embedder. Only the LLM preference has an entry from what I can see.
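To illustrate the failure mode (the actual shape of the data handling page's lookup table is an assumption here), a missing embedder entry means the page dereferences `undefined` for the new provider:

```javascript
// Illustrative lookup table like the data handling page might use.
// If "aimlapi" exists only under the LLM entries, then reading
// EMBEDDER_PRIVACY["aimlapi"].description would throw on undefined.
const EMBEDDER_PRIVACY = {
  openai: {
    name: "OpenAI",
    description: ["Your text chunks are sent to OpenAI"],
  },
  // The fix: add a matching entry for the new provider.
  aimlapi: {
    name: "AI/ML API",
    description: ["Your text chunks are sent to AI/ML API"],
  },
};

// Defensive accessor so an unknown provider degrades instead of crashing.
function embedderPrivacy(provider) {
  return (
    EMBEDDER_PRIVACY[provider] ?? {
      name: provider,
      description: ["Unknown embedding provider"],
    }
  );
}
```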
Logo formatting
- The image has a transparent background; it should be white so it is visible in any theme.
- The logo is also stretched vertically and compressed horizontally. The provider's logo should retain its dimensions/aspect ratio and fit in a 330x330 image. Whitespace around the icon is fine.
Comments
Good idea to cache, querying this many models could be a huge mess
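The caching idea being endorsed can be sketched as a simple TTL cache; the names and the 5-minute TTL below are assumptions for illustration, not the PR's code:

```javascript
// Minimal TTL cache sketch: remember the fetched model list so we don't
// re-query a very large model catalog on every settings page load.
const modelCache = { models: null, fetchedAt: 0 };
const CACHE_TTL_MS = 5 * 60 * 1000; // illustrative 5-minute TTL

// `fetchModels` is a caller-supplied async function that hits the API.
// `now` is injectable for testing; defaults to the current time.
async function cachedModels(fetchModels, now = Date.now()) {
  if (modelCache.models && now - modelCache.fetchedAt < CACHE_TTL_MS)
    return modelCache.models; // cache hit: no network call
  modelCache.models = await fetchModels();
  modelCache.fetchedAt = now;
  return modelCache.models;
}
```

Within the TTL window, repeated calls return the cached list and the fetcher runs only once.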
While they come from the same provider, the two imported consts `AIMLAPI_HEADERS` and `AIMLAPI_BASE_URL` would probably be better moved to two class properties.
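The suggested refactor might look like the following sketch; the class name matches the PR's `AimlApiProvider`, but the URL and header values shown are assumptions:

```javascript
// Sketch: instead of importing two module-level consts, hang them off
// the provider class as static properties so everything the provider
// needs lives in one place.
class AimlApiProvider {
  static BASE_URL = "https://api.aimlapi.com/v1"; // illustrative URL
  static HEADERS = { "Content-Type": "application/json" };

  modelsEndpoint() {
    return `${AimlApiProvider.BASE_URL}/models`;
  }
}
```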
Use of Untooled for AI/ML Agent provider:
While I have not tested it, shouldn't this API work with tools
via the official tool implementation? Untooled is our internal hack around non-tool-calling models for now and has its own drawbacks. If this provider supports tool calls when the request sets them, we should stick with the OpenAI tool call implementation instead of Untooled, since we want to kill that feature in the future anyway.
If we need to use Untooled then no big deal; it's not a problem, just a suggestion.
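To make the distinction concrete: "the official tool implementation" means sending an OpenAI-compatible `tools` array in the chat request rather than prompt-engineering tool use the way Untooled does. The payload shape below is the standard OpenAI one; the model id and example tool are illustrative:

```javascript
// Sketch of an OpenAI-compatible tool-calling request body. A provider
// that honors `tools` natively can use this path instead of Untooled.
function buildToolRequest(messages) {
  return {
    model: "gpt-4o", // illustrative model id
    messages,
    tools: [
      {
        type: "function",
        function: {
          name: "get_weather",
          description: "Look up current weather for a city",
          parameters: {
            type: "object",
            properties: { city: { type: "string" } },
            required: ["city"],
          },
        },
      },
    ],
  };
}
```

If the provider returns structured `tool_calls` in its responses for such a request, the OpenAI path works and Untooled is unnecessary.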
Clarification
Please don't take any of this feedback critically. Some people get upset when we suggest changes, but our main goal is to make merged code easier for us to maintain, since we ultimately become the maintainers after the initial merge. Normalization and consistency across providers is therefore very important to us, which is what most of my comments above are about.
Hi Timothy, thank you for the detailed feedback; it's incredibly helpful. I appreciate the time you took to go through the code so thoroughly. Your points all make perfect sense and will be addressed. I completely understand and value the importance of consistency and maintainability, especially when the code becomes part of a larger shared base. Again, thanks a lot for your time. I'll fix the issues ASAP.
Standardizes environment variable names for AI/ML API keys, separating LLM and embedding keys as AIML_LLM_API_KEY and AIML_EMBEDDER_API_KEY. Updates frontend, backend, and documentation to use the new variable names, improves provider integration, and adds privacy info for AI/ML API embedding. Also updates the AimlApiProvider implementation and related helper logic.
Pull Request Type
Relevant Issues
resolves #3945
What is in this change?
Environment configuration:
- Updates docker/.env.example and server/.env.example with AIML_API_KEY, AIML_MODEL_PREF, and embedding engine entries.

Frontend UI:
- Adds AimlApiOptions components under components/LLMSelection and components/EmbeddingSelection.
- Adds the provider logo (aimlapi.png) and privacy description.

Backend integration:
- Adds server/utils/AiProviders/aimlapi and server/utils/EmbeddingEngines/aimlapi modules for chat and embedding APIs.
- Updates systemSettings.js, customModels.js, and helpers/*.js for provider selection and model caching.
- Adds agent support under server/utils/agents for aimlapi.

Helper updates:
- Extends getCustomModels, getLLMProvider, and getEmbeddingEngineSelection to include aimlapi.
- Updates updateENV.js validation to accept aimlapi as a valid provider.

Additional Information
This change ensures seamless integration of AI/ML API models alongside existing providers and maintains consistent UX patterns for model selection and configuration.
Developer Validations
Ran yarn lint from the root of the repo & committed changes