
Conversation

@timothycarambat
Member

Add support for gpt-4-turbo 128K model
resolves #336

@review-agent-prime

frontend/src/components/LLMSelection/OpenAiOptions/index.jsx

It's great that you're adding new models to the options. However, to improve readability and maintainability, consider storing the model names in a constant array at the top of the file. That way, adding or removing a model in the future only requires changing the array, and the change propagates to everywhere the list is rendered.

    const MODELS = ["gpt-3.5-turbo", "gpt-4", "gpt-4-1106-preview", "gpt-4-32k"];

    // Then use it in your code like this:
    {MODELS.map((model) => {
      return (
        <option
          key={model}
          value={model}
          selected={settings.OpenAiModelPref === model}
        >
          {model}
        </option>
      );
    })}

frontend/src/components/LLMSelection/AzureAiOptions/index.jsx

Similar to the previous suggestion, consider storing the model names and their corresponding token limits in a constant object at the top of the file. This will improve readability and maintainability of your code.

    const MODELS = {
      "gpt-3.5-turbo": 4096,
      "gpt-4": 8192,
      "gpt-4-1106-preview": 128000,
      "gpt-4-32k": 32000,
    };

    // Then use it in your code like this:
    {Object.entries(MODELS).map(([model, limit]) => (
      <option key={model} value={limit}>
        {`${limit} (${model})`}
      </option>
    ))}

server/utils/AiProviders/openAi/index.js

Similar to the previous suggestions, consider storing the model names and their corresponding token limits in a constant object at the top of the file. This will improve readability and maintainability of your code.

    const MODELS = {
      "gpt-3.5-turbo": 4096,
      "gpt-4": 8192,
      "gpt-4-1106-preview": 128000,
      "gpt-4-32k": 32000,
    };

    // Then use it in your code like this:
    promptWindowLimit() {
      return MODELS[this.model] || 4096; // assume a fine-tune 3.5
    }
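For reference, the lookup-with-fallback pattern in the suggestion above can be sketched as a standalone function (model names and limits are taken from the suggested `MODELS` object; the free-function form is illustrative only, not the actual class method in the PR):

```javascript
// Sketch of the suggested context-window lookup with a default fallback.
// Model names and token limits come from the review suggestion above.
const MODELS = {
  "gpt-3.5-turbo": 4096,
  "gpt-4": 8192,
  "gpt-4-1106-preview": 128000,
  "gpt-4-32k": 32000,
};

function promptWindowLimit(model) {
  // Unknown model names (e.g. fine-tuned 3.5 variants) fall back to 4096.
  return MODELS[model] || 4096;
}
```

Known models resolve to their configured limit (`"gpt-4-1106-preview"` yields 128000), while any unlisted model name falls through to the 4096-token default.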

@timothycarambat timothycarambat merged commit d34ec68 into master Nov 6, 2023
@timothycarambat timothycarambat deleted the 336-new-open-ai-models branch November 6, 2023 22:22
franzbischoff referenced this pull request in franzbischoff/anything-llm Nov 7, 2023
resolves #336
Add support for gpt-4-turbo 128K model
timothycarambat added a commit that referenced this pull request Nov 9, 2023
* Using OpenAI API locally

* Infinite prompt input and compression implementation (#332)

* WIP on continuous prompt window summary

* wip

* Move chat out of VDB
simplify chat interface
normalize LLM model interface
have compression abstraction
Cleanup compressor
TODO: Anthropic stuff

* Implement compression for Anthropic
Fix lancedb sources

* cleanup vectorDBs and check that lance, chroma, and pinecone are returning valid metadata sources

* Resolve Weaviate citation sources not working with schema

* comment cleanup

* disable import on hosted instances (#339)

* disable import on hosted instances

* Update UI on disabled import/export

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>

* Add support for gpt-4-turbo 128K model (#340)

resolves #336
Add support for gpt-4-turbo 128K model

* 315 show citations based on relevancy score (#316)

* settings for similarity score threshold and prisma schema updated

* prisma schema migration for adding similarityScore setting

* WIP

* Min score default change

* added similarityThreshold checking for all vectordb providers

* linting

---------

Co-authored-by: shatfield4 <seanhatfield5@gmail.com>

* rename localai to lmstudio

* forgot files that were renamed

* normalize model interface

* add model and context window limits

* update LMStudio tagline

* Fully working LMStudio integration

---------
Co-authored-by: Francisco Bischoff <984592+franzbischoff@users.noreply.github.com>
Co-authored-by: Timothy Carambat <rambat1010@gmail.com>
Co-authored-by: Sean Hatfield <seanhatfield5@gmail.com>
cabwds pushed a commit to cabwds/anything-llm that referenced this pull request Jul 3, 2025


Development

Successfully merging this pull request may close these issues.

Add new OpenAi model for chat completions

2 participants