
Conversation

@tlandenberger
Contributor

Provides an option to use LocalAI as a local embedding engine. LocalAI must be running with an embedding model installed, and the model selected for embedding needs to be set in the .env file.
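
A minimal sketch of the relevant .env entries, assuming the variable names read by the embedder below; the base URL and model name are illustrative placeholders:

    EMBEDDING_ENGINE='localai'
    EMBEDDING_BASE_PATH='http://localhost:8080/v1'
    EMBEDDING_MODEL_PREF='text-embedding-ada-002'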

@review-agent-prime

server/utils/EmbeddingEngines/localAi/index.js

Consider using the axios library for making HTTP requests in the LocalAiEmbedder class. This would provide more flexibility and control over request configuration, error handling, and response processing.

    const axios = require('axios');
    // Assumed import path for the batching helper used in embedChunks below;
    // toChunks splits an array into batches of at most `size` items.
    const { toChunks } = require('../../helpers');

    class LocalAiEmbedder {
      constructor() {
        if (!process.env.EMBEDDING_BASE_PATH)
          throw new Error("No embedding base path was set.");
        if (!process.env.EMBEDDING_MODEL_PREF)
          throw new Error("No embedding model was set.");

        this.client = axios.create({
          baseURL: process.env.EMBEDDING_BASE_PATH,
        });

        // Arbitrary limit to ensure we stay within reasonable POST request size.
        this.embeddingChunkLimit = 1_000;
      }

      async embedTextInput(textInput) {
        const result = await this.embedChunks(textInput);
        return result?.[0] || [];
      }

      async embedChunks(textChunks = []) {
        // Because there is a hard limit on how large a single POST body can be (~8 MB)
        // we concurrently execute each max batch of text chunks possible.
        // Refer to embeddingChunkLimit in the constructor for more info.
        const embeddingRequests = [];
        for (const chunk of toChunks(textChunks, this.embeddingChunkLimit)) {
          embeddingRequests.push(
            // LocalAI exposes an OpenAI-compatible embeddings endpoint under the base path.
            this.client.post('/embeddings', {
              model: process.env.EMBEDDING_MODEL_PREF,
              input: chunk,
            })
            .then((res) => {
              return { data: res.data?.data, error: null };
            })
            .catch((e) => {
              return { data: [], error: e?.response?.data?.error };
            })
          );
        }

        const { data = [], error = null } = await Promise.all(
          embeddingRequests
        ).then((results) => {
          // If any errors were returned from LocalAI abort the entire sequence because the embeddings
          // will be incomplete.
          const errors = results
            .filter((res) => !!res.error)
            .map((res) => res.error)
            .flat();
          if (errors.length > 0) {
            return {
              data: [],
              error: `(${errors.length}) Embedding Errors! ${errors
                .map((error) => `[${error.type}]: ${error.message}`)
                .join(", ")}`,
            };
          }
          return {
            data: results.map((res) => res?.data || []).flat(),
            error: null,
          };
        });

        if (!!error) throw new Error(`LocalAI Failed to embed: ${error}`);
        return data.length > 0 &&
          data.every((embd) => embd.hasOwnProperty("embedding"))
          ? data.map((embd) => embd.embedding)
          : null;
      }
    }

    module.exports = {
      LocalAiEmbedder,
    };
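
A minimal usage sketch, assuming the .env variables above are set and a LocalAI instance is reachable at the configured base path; the chunk strings are hypothetical:

    const { LocalAiEmbedder } = require("./server/utils/EmbeddingEngines/localAi");

    (async () => {
      const embedder = new LocalAiEmbedder();
      // Embed a single piece of text; resolves to one embedding vector.
      const vector = await embedder.embedTextInput("What is AnythingLLM?");
      // Embed a batch of document chunks; resolves to one vector per chunk,
      // or null if the responses did not contain embeddings.
      const vectors = await embedder.embedChunks(["first chunk", "second chunk"]);
      console.log(vector.length, vectors?.length);
    })();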

frontend/src/pages/GeneralSettings/EmbeddingPreference/index.jsx

Consider using destructuring assignment to make the code more readable and concise.

    async function fetchKeys() {
      const _settings = await System.keys();
      // Destructure with a default so a missing key falls back to "openai".
      const { EmbeddingEngine = "openai" } = _settings;
      setSettings(_settings);
      setEmbeddingChoice(EmbeddingEngine);
      setLoading(false);
    }
    fetchKeys();

frontend/src/pages/OnboardingFlow/OnboardingModal/Steps/EmbeddingSelection/index.jsx

Consider using destructuring assignment to make the code more readable and concise.

    async function fetchKeys() {
      const _settings = await System.keys();
      // Destructure with a default so a missing key falls back to "openai".
      const { EmbeddingEngine = "openai" } = _settings;
      setSettings(_settings);
      setEmbeddingChoice(EmbeddingEngine);
      setLoading(false);
    }
    fetchKeys();

@tlandenberger tlandenberger marked this pull request as ready for review November 14, 2023 07:05
@timothycarambat timothycarambat marked this pull request as draft November 14, 2023 17:04
pull models from localai API
Dont show cost estimation on UI
@timothycarambat timothycarambat marked this pull request as ready for review November 14, 2023 21:37
@timothycarambat timothycarambat merged commit a96a9d4 into Mintplex-Labs:master November 14, 2023
@Amejonah1200

Amejonah1200 commented Nov 14, 2023

👀👀👀👀👀👀👀🎉🎉

cabwds pushed a commit to cabwds/anything-llm that referenced this pull request Jul 3, 2025
* feature: add localAi as embedding provider

* chore: add LocalAI image

* chore: add localai embedding examples to docker .env.example

* update setting env
pull models from localai API

* update comments on embedder
Dont show cost estimation on UI

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>