An AI-powered Language Server that provides intelligent semantic code analysis beyond what traditional LSPs can detect. Built with modern Python tools and designed to integrate seamlessly with any LSP-compatible editor.
AI LSP focuses exclusively on semantic issues that require deep understanding:
- Logic errors in algorithms and business logic
- Subtle race conditions and concurrency issues
- Architectural problems and design pattern violations
- Security vulnerabilities requiring context understanding
- Performance anti-patterns needing semantic analysis
- Complex data flow issues
- Domain-specific best practice violations
- Accessibility issues requiring UX understanding
What it doesn't do: Traditional LSP tasks like syntax errors, type errors, undefined variables, or basic formatting - your regular LSP handles those perfectly.
- 🧠 AI-Powered Analysis: Uses advanced language models for semantic code understanding
- ⚡ Debounced Analysis: Smart delays prevent AI spam during rapid typing
- 🔧 Code Actions: Provides automated fixes for detected issues
- 🎯 Multi-Language: Supports Python, JavaScript, TypeScript, Rust, Go, Java, C/C++, and Lua
- ⚖️ Async Architecture: Non-blocking analysis with race condition protection
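The debounce-plus-cancellation combination above can be sketched in a few lines of asyncio. This is an illustrative model of the technique, not ai-lsp's actual internals; all class and method names here are invented for the example:

```python
import asyncio


class DebouncedAnalyzer:
    """Sketch of debounced, cancellation-safe analysis scheduling.

    Each new edit cancels the still-pending analysis task, so only the
    last edit in a burst of typing triggers (expensive) AI work.
    """

    def __init__(self, delay_ms: int = 1000):
        self.delay = delay_ms / 1000
        self.runs = 0
        self._task: asyncio.Task | None = None

    def on_change(self, text: str) -> None:
        # Cancel any analysis still waiting out its debounce window.
        if self._task is not None and not self._task.done():
            self._task.cancel()
        self._task = asyncio.ensure_future(self._analyze_later(text))

    async def _analyze_later(self, text: str) -> None:
        await asyncio.sleep(self.delay)  # debounce window
        self.runs += 1                   # a real server would call the model here


async def simulate_typing() -> int:
    analyzer = DebouncedAnalyzer(delay_ms=200)
    for i in range(5):                   # five rapid keystrokes
        analyzer.on_change(f"draft {i}")
        await asyncio.sleep(0.02)
    await asyncio.sleep(0.5)             # let the final analysis fire
    return analyzer.runs


print(asyncio.run(simulate_typing()))    # → 1: only the last change was analyzed
```

Cancelling the pending task rather than queueing every change is also what prevents stale results from racing ahead of newer ones.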
```bash
uv add ai-lsp
# or
pip install ai-lsp
```
Or install from source:

```bash
git clone https://github.com/benomahony/ai-lsp
cd ai-lsp
uv sync
```
Set your AI provider API key:

```bash
# For Google AI (default)
export GOOGLE_API_KEY="your-api-key"

# Or for OpenAI
export OPENAI_API_KEY="your-openai-key"
```
Configure via environment variables or a `.env` file:

```bash
AI_LSP_MODEL=google-gla:gemini-2.5-pro  # Default model
DEBOUNCE_MS=1000                        # Analysis delay (milliseconds)
```
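Since these are plain environment variables, reading them from your own tooling is straightforward. A minimal sketch with the documented defaults (the variable names come from this README; the lookup code itself is illustrative, not ai-lsp's source):

```python
import os

# AI_LSP_MODEL and DEBOUNCE_MS are the documented setting names;
# the fallbacks below are the documented defaults.
model = os.environ.get("AI_LSP_MODEL", "google-gla:gemini-2.5-pro")
debounce_ms = int(os.environ.get("DEBOUNCE_MS", "1000"))

print(model, debounce_ms)
```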
Supported models:

- `google-gla:gemini-2.5-pro` (default)
- `gpt-4o`
- Any model supported by `pydantic-ai`
Create `~/.config/nvim/lua/plugins/ai-lsp.lua`:
```lua
return {
  {
    "neovim/nvim-lspconfig",
    opts = {
      servers = {
        ["ai-lsp"] = {
          mason = false,
          cmd = { "ai-lsp" },
          filetypes = { "python", "javascript", "typescript", "rust", "go", "lua", "java", "cpp", "c" },
        },
      },
    },
  },
}
```
Add to your `settings.json`:
```json
{
  "ai-lsp.command": ["uvx", "ai-lsp"],
  "ai-lsp.filetypes": ["python", "javascript", "typescript", "rust", "go", "lua", "java", "cpp", "c"]
}
```
Or if you have it installed in your active environment:
```json
{
  "ai-lsp.command": ["ai-lsp"],
  "ai-lsp.filetypes": ["python", "javascript", "typescript", "rust", "go", "lua", "java", "cpp", "c"]
}
```
Any LSP client can use AI LSP with these settings:
{
"command": ["uvx", "ai-lsp"],
"filetypes": ["python", "javascript", "typescript", "rust", "go", "lua", "java", "cpp", "c"],
"rootPatterns": [".git", "pyproject.toml", "package.json", "Cargo.toml", "go.mod"]
}
Given this Python code:
```python
def authenticate(username, password):
    if password == 'admin123':  # AI LSP will flag this
        print(f'User {username} authenticated')  # And suggest logging
        return True
    else:
        print('Authentication failed')
        return False
```
AI LSP might detect:

- Security Issue: Hardcoded password `'admin123'`
  - Severity: Error
  - Suggestion: Use environment variables or secure credential storage
- Best Practice: Using `print` statements
  - Severity: Warning
  - Code Action: Replace with `logging.info()`
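A revision that addresses both findings might look like this. It follows the suggestions above (environment variable for the credential, `logging` instead of `print`); the variable name `ADMIN_PASSWORD` and the constant-time comparison are our additions, not output from AI LSP:

```python
import hmac
import logging
import os

logger = logging.getLogger(__name__)


def authenticate(username: str, password: str) -> bool:
    # Credential comes from the environment instead of source code.
    expected = os.environ.get("ADMIN_PASSWORD", "")
    # hmac.compare_digest avoids timing side channels in the comparison.
    if expected and hmac.compare_digest(password, expected):
        logger.info("User %s authenticated", username)
        return True
    logger.warning("Authentication failed for user %s", username)
    return False


os.environ["ADMIN_PASSWORD"] = "s3cret"  # demo only; set this outside the process
print(authenticate("alice", "s3cret"), authenticate("alice", "wrong"))  # → True False
```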
LSP not starting:

- Verify the `ai-lsp` command is in PATH: `which ai-lsp`
- Check the log file for errors (default: `ai-lsp.log`)
- Ensure API keys are set correctly
No diagnostics appearing:
- Check file type is supported
- Verify AI model has API access
- Look for errors in log file
Performance issues:

- Increase `DEBOUNCE_MS` to reduce API calls
- Use a faster model like `gpt-4o-mini`
- Check network connectivity to the AI provider
Enable verbose logging:
```bash
ai-lsp --log-level DEBUG --log-file debug.log
```
MIT License - see LICENSE for details.