[BUG]: MCP servers not used by any Ollama Model #4483

@danielbjornadal

Description

How are you running AnythingLLM?

Docker (remote machine)

What happened?

I have installed the mcp-server-time and mcp-atlassian MCP tools, and I am using Ollama as the LLM provider.

anythingllm_mcp_servers.json (I removed the defaults that were there; the ... is of course not present in the real file):

    {
      "mcpServers": {
       ...
        "mcp-atlassian": {
          "command": "uvx",
          "args": [
            "mcp-atlassian",
            "--jira-url=<removed>",
            "--jira-personal-token=<removed>",
            "--confluence-<removed>",
            "--confluence-personal-token=<removed>"
          ]
        },
        "mcp-server-time": {
          "command": "uvx",
          "args": [
            "mcp-server-time",
            "--local-timezone=Europe/Oslo"
          ]
        }
      }
    }
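
To rule out the servers themselves, they can be exercised over stdio outside of AnythingLLM. Below is a minimal sketch (not part of the original report) using the official mcp Python SDK; the get_current_time tool name and its arguments are assumed from the upstream mcp-server-time project and may differ between versions.

    # Minimal sketch: start the configured time server over stdio and list its
    # tools, exactly as AnythingLLM would, but without AnythingLLM in the loop.
    import asyncio

    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    params = StdioServerParameters(
        command="uvx",
        args=["mcp-server-time", "--local-timezone=Europe/Oslo"],
    )

    async def main():
        async with stdio_client(params) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                tools = await session.list_tools()
                print([t.name for t in tools.tools])  # expected to include get_current_time
                result = await session.call_tool(
                    "get_current_time", arguments={"timezone": "Europe/Oslo"}
                )
                print(result.content)

    asyncio.run(main())

If this works (as the startup logs below suggest it does), the servers themselves are healthy and the problem is on the invocation side.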

Models I have tested: gpt-oss:120b, qwen3:32b, llama4:latest.

The MCP servers show as ON and list all of their functions OK.
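
One variable worth isolating is whether the chosen model actually emits tool calls when it is offered a tool schema, independent of AnythingLLM. A minimal sketch (not part of the original report), assuming the ollama Python client 0.3 or newer (which supports the tools parameter) and the Ollama endpoint that appears in the logs below:

    # Minimal sketch: ask Ollama directly whether the model returns a tool call
    # for a trivial tool schema. The tool definition here is illustrative only.
    import ollama

    client = ollama.Client(host="http://ollama.ollama:11434")

    response = client.chat(
        model="gpt-oss:120b",
        messages=[{"role": "user", "content": "What time is it in Oslo right now?"}],
        tools=[
            {
                "type": "function",
                "function": {
                    "name": "get_current_time",
                    "description": "Get the current time in a given IANA timezone",
                    "parameters": {
                        "type": "object",
                        "properties": {
                            "timezone": {"type": "string", "description": "e.g. Europe/Oslo"}
                        },
                        "required": ["timezone"],
                    },
                },
            }
        ],
    )

    # If this prints None or an empty list, the model never requested the tool,
    # regardless of what the MCP servers expose.
    print(response.message.tool_calls)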

(Three screenshots attached.)

Server logs:
[collector] info: [TikTokenTokenizer] Initialized new TikTokenTokenizer instance.
[collector] info: Collector hot directory and tmp storage wiped!
[collector] info: Document processor app listening on port 8888
Environment variables loaded from .env
Prisma schema loaded from prisma/schema.prisma
✔ Generated Prisma Client (v5.3.1) to ./node_modules/@prisma/client in 610ms
Start using Prisma Client in Node.js (See: https://pris.ly/d/client)
import { PrismaClient } from '@prisma/client'
const prisma = new PrismaClient()
or start using Prisma Client at the edge (See: https://pris.ly/d/accelerate)
import { PrismaClient } from '@prisma/client/edge'
const prisma = new PrismaClient()
See other ways of importing Prisma Client: http://pris.ly/d/importing-client
Environment variables loaded from .env
Prisma schema loaded from prisma/schema.prisma
Datasource "db": SQLite database "anythingllm.db" at "file:../storage/anythingllm.db"
33 migrations found in prisma/migrations
No pending migrations to apply.
┌─────────────────────────────────────────────────────────┐
│  Update available 5.3.1 -> 6.16.3                       │
│                                                         │
│  This is a major update - please follow the guide at    │
│  https://pris.ly/d/major-version-upgrade                │
│                                                         │
│  Run the following to update                            │
│    npm i --save-dev prisma@latest                       │
│    npm i @prisma/client@latest                          │
└─────────────────────────────────────────────────────────┘
[backend] info: [EncryptionManager] Self-assigning key & salt for encrypting arbitrary data.
[backend] info: [TokenManager] Initialized new TokenManager instance for model: gpt-3.5-turbo
[backend] info: [TokenManager] Returning existing instance for model: gpt-3.5-turbo
[backend] info: [TELEMETRY ENABLED] Anonymous Telemetry enabled. Telemetry helps Mintplex Labs Inc improve AnythingLLM.
[backend] info: prisma:info Starting a sqlite pool with 65 connections.
[backend] info: [TELEMETRY SENT] {"event":"server_boot","distinctId":"b1dd2af0-ec1f-4828-9249-1dd398421c94","properties":{"commit":"--","runtime":"docker"}}
[backend] info: [CommunicationKey] RSA key pair generated for signed payloads within AnythingLLM services.
[backend] info: [EncryptionManager] Loaded existing key & salt for encrypting arbitrary data.
[backend] info: [BackgroundWorkerService] Starting...
[backend] info: [BackgroundWorkerService] Service started with 1 jobs ["cleanup-orphan-documents"]
[backend] info: [Ollama] Context windows cached for all models!
[backend] info: ⚡Pre-cached context windows for Ollama
[backend] info: Primary server in HTTP mode listening on port 3001
[backend] info: [MetaGenerator] fetching custom meta tag settings...
[backend] info: [OllamaEmbedder] initialized with model gpt-oss:120b at http://ollama.ollama:11434. num_ctx: 8192
[backend] info: [fillSourceWindow] Need to backfill 4 chunks to fill in the source window for RAG!
[backend] info: [TokenManager] Initialized new TokenManager instance for model: gpt-oss:120b
[backend] info: [Ollama] Context windows cached for all models!
[backend] info: [Ollama] initialized with
model: gpt-oss:120b
perf: base
n_ctx: 8192
[backend] info: [TELEMETRY SENT] {"event":"sent_chat","distinctId":"b1dd2af0-ec1f-4828-9249-1dd398421c94","properties":{"multiUserMode":false,"LLMSelection":"ollama","Embedder":"ollama","VectorDbSelection":"lancedb","multiModal":false,"TTSSelection":"native","LLMModel":"gpt-oss:120b","runtime":"docker"}}
[backend] info: [Event Logged] - sent_chat
[backend] info: [MCPHypervisor] Initializing MCP Hypervisor - subsequent calls will boot faster
[backend] info: [MCPHypervisor] MCP Config File: /app/server/storage/plugins/anythingllm_mcp_servers.json
[backend] info: [MCPHypervisor] Attempting to start MCP server: face-generator
npm warn exec The following package was not found and will be installed: @dasheck0/face-generator@1.0.1
Face Generator MCP server running on stdio
[backend] info: [MCPHypervisor] Attempting to start MCP server: mcp-atlassian
Downloading rapidfuzz (3.0MiB)
Downloading pygments (1.2MiB)
Downloading cryptography (4.3MiB)
Downloading pydantic-core (1.9MiB)
Downloading lxml (5.0MiB)
 Downloaded pydantic-core
 Downloaded rapidfuzz
 Downloaded cryptography
 Downloaded lxml
 Downloaded pygments
Installed 84 packages in 128ms
[10/03/25 12:51:43] INFO     Starting MCP server 'Atlassian MCP'   server.py:734
                             with transport 'stdio'                             
INFO - FastMCP.fastmcp.server.server - Starting MCP server 'Atlassian MCP' with transport 'stdio'
[backend] info: [MCPHypervisor] Attempting to start MCP server: mcp-server-time
Installed 30 packages in 52ms
[backend] info: [MCPHypervisor] Attempting to start MCP server: postgres-http
[backend] info: [MCPHypervisor] Failed to start MCP server: postgres-http {"error":"fetch failed","stack":"TypeError: fetch failed\n    at node:internal/deps/undici/undici:12637:11\n    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n    at async StreamableHTTPClientTransport.send (/app/server/node_modules/@modelcontextprotocol/sdk/dist/cjs/client/streamableHttp.js:251:30)"}
[backend] info: [MCPHypervisor] Successfully started 3 MCP servers: ["face-generator","mcp-atlassian","mcp-server-time"]
prisma:info Starting a sqlite pool with 65 connections.
[backend] info: [101:244]: No direct uploads path found - exiting.
[bg-worker][cleanup-orphan-documents] info: [101:244]: No direct uploads path found - exiting.
[backend] warn: Child process exited with code 0 and signal null
[backend] info: Worker for job "cleanup-orphan-documents" exited with code 0
[backend] info: [OllamaEmbedder] initialized with model gpt-oss:120b at http://ollama.ollama:11434. num_ctx: 8192
[backend] info: [fillSourceWindow] Need to backfill 4 chunks to fill in the source window for RAG!
[backend] info: [TokenManager] Returning existing instance for model: gpt-oss:120b
[backend] info: [Ollama] Context windows cached for all models!
[backend] info: [Ollama] initialized with
model: gpt-oss:120b
perf: base
n_ctx: 8192
[backend] info: [Event Logged] - sent_chat
[backend] info: [MCPHypervisor] MCP Servers already running, skipping boot.
[backend] info: [MCPHypervisor] Pruning MCP server: face-generator
[backend] info: [MCPHypervisor] Pruning MCP server: mcp-server-time
[backend] info: [TELEMETRY SENT] {"event":"workspace_thread_created","distinctId":"b1dd2af0-ec1f-4828-9249-1dd398421c94","properties":{"multiUserMode":false,"LLMSelection":"ollama","Embedder":"ollama","VectorDbSelection":"lancedb","TTSSelection":"native","LLMModel":"gpt-oss:120b","runtime":"docker"}}
[backend] info: [Event Logged] - workspace_thread_created
[backend] info: [OllamaEmbedder] initialized with model gpt-oss:120b at http://ollama.ollama:11434. num_ctx: 8192
[backend] info: [fillSourceWindow] Need to backfill 4 chunks to fill in the source window for RAG!
[backend] info: [TokenManager] Returning existing instance for model: gpt-oss:120b
[backend] info: [Ollama] Context windows cached for all models!
[backend] info: [Ollama] initialized with
model: gpt-oss:120b
perf: base
n_ctx: 8192
[backend] info: [Event Logged] - sent_chat
[backend] info: [TELEMETRY SENT] {"event":"workspace_thread_created","distinctId":"b1dd2af0-ec1f-4828-9249-1dd398421c94","properties":{"multiUserMode":false,"LLMSelection":"ollama","Embedder":"ollama","VectorDbSelection":"lancedb","TTSSelection":"native","LLMModel":"gpt-oss:120b","runtime":"docker"}}
[backend] info: [Event Logged] - workspace_thread_created
[backend] info: [OllamaEmbedder] initialized with model gpt-oss:120b at http://ollama.ollama:11434. num_ctx: 8192
[backend] info: [TokenManager] Returning existing instance for model: gpt-oss:120b
[backend] info: [Ollama] Context windows cached for all models!
[backend] info: [Ollama] initialized with
model: gpt-oss:120b
perf: base
n_ctx: 8192
[backend] info: [Event Logged] - sent_chat
[backend] info: [OllamaEmbedder] initialized with model gpt-oss:120b at http://ollama.ollama:11434. num_ctx: 8192
[backend] info: [fillSourceWindow] Need to backfill 4 chunks to fill in the source window for RAG!
[backend] info: [TokenManager] Returning existing instance for model: gpt-oss:120b
[backend] info: [Ollama] Context windows cached for all models!
[backend] info: [Ollama] initialized with
model: gpt-oss:120b
perf: base
n_ctx: 8192
[backend] info: [Event Logged] - sent_chat
[backend] info: [OllamaEmbedder] initialized with model gpt-oss:120b at http://ollama.ollama:11434. num_ctx: 8192
[backend] info: [fillSourceWindow] Need to backfill 4 chunks to fill in the source window for RAG!
[backend] info: [TokenManager] Returning existing instance for model: gpt-oss:120b
[backend] info: [Ollama] Context windows cached for all models!
[backend] info: [Ollama] initialized with
model: gpt-oss:120b
perf: base
n_ctx: 8192
[backend] info: [Event Logged] - sent_chat

Are there known steps to reproduce?

Clean install.
Point AnythingLLM towards Ollama.
Add the MCP servers via anythingllm_mcp_servers.json (a quick syntax check of that file is sketched below).
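
For the last step, one thing that is easy to get wrong when hand-editing the config is strict JSON syntax (for example trailing commas). A minimal check, assuming the container path from the MCPHypervisor log line above:

    # Minimal sketch: confirm the MCP config parses as strict JSON before restarting.
    # Path taken from the log output above; adjust for your volume mount.
    import json

    path = "/app/server/storage/plugins/anythingllm_mcp_servers.json"
    with open(path) as f:
        config = json.load(f)  # raises json.JSONDecodeError on e.g. trailing commas

    print(sorted(config["mcpServers"].keys()))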

Metadata

Assignees

No one assigned

Labels

possible bug: Bug was reported but is not confirmed or is unable to be replicated.