LibreCrawl

A web-based multi-tenant crawler for SEO analysis and website auditing.

🌐 Website: librecrawl.com · Try the live demo: crawl.librecrawl.com

What it does

LibreCrawl crawls websites and gives you detailed information about pages, links, SEO elements, and performance. It is built as a Python Flask web application with a modern web interface that supports multiple concurrent users.

Features

  • 🚀 Multi-tenancy - Multiple users can crawl simultaneously with isolated sessions
  • 🎨 Custom CSS styling - Personalize the UI with your own CSS themes
  • 💾 Browser localStorage persistence - Settings saved per browser
  • 🔄 JavaScript rendering for dynamic content (React, Vue, Angular, etc.)
  • 📊 SEO analysis - Extract titles, meta descriptions, headings, etc.
  • 🔗 Link analysis - Track internal and external links with detailed relationship mapping
  • 📈 PageSpeed Insights integration - Analyze Core Web Vitals
  • 💾 Multiple export formats - CSV, JSON, or XML
  • 🔍 Issue detection - Automated SEO issue identification
  • ⏱️ Real-time progress - Live crawl statistics while a crawl is running

Getting started

Quick Start (Automatic Installation)

The easiest way to run LibreCrawl is the startup script - it handles everything for you:

Windows:

start-librecrawl.bat

Linux/Mac:

chmod +x start-librecrawl.sh
./start-librecrawl.sh

What it does automatically:

  1. Checks for Docker - if found, runs LibreCrawl in a container (recommended)
  2. If Docker is not found, checks for Python - if Python is missing, it downloads and installs it (the automatic Python install is temporarily disabled on Windows because it causes issues with the .bat script)
  3. Installs all dependencies automatically (pip install -r requirements.txt)
  4. Installs Playwright browsers for JavaScript rendering
  5. Starts LibreCrawl in local mode (no authentication)
  6. Opens your browser to http://localhost:5000

Manual Installation

If you prefer to install manually or want more control:

Option 1: Docker (Recommended)

Requirements:

  • Docker and Docker Compose

Steps:

# Clone the repository
git clone https://github.com/PhialsBasement/LibreCrawl.git
cd LibreCrawl

# Copy environment file
cp .env.example .env

# Start LibreCrawl
docker-compose up -d

# Open browser to http://localhost:5000

By default, LibreCrawl runs in local mode for easy personal use. The .env file controls this:

# .env file
LOCAL_MODE=true
HOST_BINDING=127.0.0.1

For production deployment with user authentication, edit your .env file:

# .env file
LOCAL_MODE=false
HOST_BINDING=0.0.0.0

Option 2: Python

Requirements:

  • Python 3.8 or later
  • Modern web browser (Chrome, Firefox, Safari, Edge)

Installation

  1. Clone or download this repository

  2. Install dependencies:

pip install -r requirements.txt

  3. For JavaScript rendering support (optional):

playwright install chromium

  4. Run the application:

# Standard mode (with authentication and tier system)
python main.py

# Local mode (all users get admin tier, no rate limits)
python main.py --local
# or
python main.py -l

  5. Open your browser and navigate to:
    • Local: http://localhost:5000
    • Network: http://<your-ip>:5000
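
JavaScript rendering relies on the Playwright browsers installed in step 3. As a rough illustration of what that rendering step involves (a minimal sketch, not the actual code in src/crawler.py), fetching a fully rendered page with Chromium looks roughly like this:

# Minimal sketch of JavaScript rendering with Playwright (illustrative only,
# not LibreCrawl's actual crawler implementation).
from playwright.sync_api import sync_playwright

def render_page(url, timeout_ms=30000):
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto(url, timeout=timeout_ms, wait_until="networkidle")
        html = page.content()  # fully rendered DOM, including JS-injected markup
        browser.close()
    return html

print(len(render_page("https://example.com")))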

Running Modes

Standard Mode (default):

  • Full authentication system with login/register
  • Tier-based access control (Guest, User, Extra, Admin)
  • Guest users limited to 3 crawls per 24 hours (IP-based)
  • Ideal for public-facing demos or shared hosting

Local Mode (--local or -l):

  • All users automatically get admin tier access
  • No rate limits or tier restrictions
  • Perfect for personal use or single-user self-hosting
  • Recommended for local development and testing
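
The mode is selected purely by the command-line flag. Below is a minimal sketch of how such a switch could be parsed; the flag names match the documented CLI, but the surrounding code is an assumption, not LibreCrawl's actual main.py:

# Illustrative only: parsing a --local / -l switch with argparse.
import argparse

parser = argparse.ArgumentParser(description="LibreCrawl server")
parser.add_argument("-l", "--local", action="store_true",
                    help="run in local mode (admin tier for everyone, no rate limits)")
args = parser.parse_args()

if args.local:
    print("Local mode: authentication and rate limits disabled")
else:
    print("Standard mode: login/register and tier-based access control enabled")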

Configuration

Click "Settings" to configure:

  • Crawler settings: crawl depth and URL limit (up to 5M URLs), delays, external link handling
  • Request settings: user agent, timeouts, proxy, robots.txt
  • JavaScript rendering: browser engine, wait times, viewport size
  • Filters: file types and URL patterns to include/exclude
  • Export options: formats and fields to export
  • Custom CSS: personalize the UI appearance with custom styles
  • Issue exclusion: patterns to exclude from SEO issue detection
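
To give a concrete feel for what those categories cover, here is a hypothetical settings payload. Every key name below is an illustration invented for this example, not LibreCrawl's actual settings schema:

# Hypothetical example of the kinds of values the Settings panel controls.
# Key names are illustrative only, not LibreCrawl's real schema.
example_settings = {
    "crawler": {"max_urls": 5_000_000, "max_depth": 10, "delay_ms": 200, "follow_external": False},
    "requests": {"user_agent": "LibreCrawl/1.0", "timeout_s": 30, "respect_robots_txt": True, "proxy": None},
    "rendering": {"enabled": True, "engine": "chromium", "wait_ms": 2000, "viewport": [1366, 768]},
    "filters": {"include_patterns": ["^https://example\\.com/"], "exclude_patterns": ["\\.pdf$"]},
    "export": {"format": "json", "fields": ["url", "status", "title", "meta_description"]},
}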

For PageSpeed analysis, add a Google API key in Settings > Requests to get the higher authenticated quota (25,000 requests per day, versus a much lower limit without a key).
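
If you are curious what the integration queries behind the scenes, the PageSpeed Insights v5 endpoint can also be called directly. A minimal sketch, independent of LibreCrawl's own code:

# Minimal PageSpeed Insights v5 request (illustrative, not LibreCrawl's code).
import requests

API_URL = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"
params = {
    "url": "https://example.com",
    "key": "YOUR_GOOGLE_API_KEY",  # optional, but an API key raises the daily quota
    "strategy": "mobile",
    "category": "performance",
}
data = requests.get(API_URL, params=params, timeout=60).json()
score = data["lighthouseResult"]["categories"]["performance"]["score"]
print(f"Performance score: {score * 100:.0f}")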

Export formats

  • CSV: Spreadsheet-friendly format
  • JSON: Structured data with all details
  • XML: Markup format for other tools
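
As an illustration of what an exported record might contain (field names are hypothetical, not the exact export schema), a single crawled page could be written to JSON and CSV like this:

# Illustrative export of one crawled-page record to JSON and CSV.
# Field names are hypothetical, not LibreCrawl's exact export schema.
import csv, json

page = {
    "url": "https://example.com/",
    "status_code": 200,
    "title": "Example Domain",
    "meta_description": "",
    "h1_count": 1,
    "internal_links": 0,
    "external_links": 1,
}

with open("export.json", "w", encoding="utf-8") as f:
    json.dump([page], f, indent=2)

with open("export.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=page.keys())
    writer.writeheader()
    writer.writerow(page)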

Multi-tenancy

LibreCrawl supports multiple concurrent users with isolated sessions:

  • Each browser session gets its own crawler instance and data
  • Settings are stored in browser localStorage (persistent across restarts)
  • Custom CSS themes are per-browser
  • Sessions expire after 1 hour of inactivity
  • Crawl data is isolated between users
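
Conceptually, this isolation amounts to keeping one crawler instance per Flask session and discarding it after an hour of inactivity. The sketch below is an assumption about how that pattern can be structured, not LibreCrawl's actual implementation:

# Conceptual sketch of per-session crawler isolation (not the actual code).
import time
import uuid
from flask import Flask, session

app = Flask(__name__)
app.secret_key = "change-me"  # required for Flask sessions

SESSION_TTL = 3600  # seconds of inactivity before a session's crawler is dropped
crawlers = {}       # session id -> {"crawler": ..., "last_seen": timestamp}

def get_crawler():
    sid = session.setdefault("sid", uuid.uuid4().hex)
    now = time.time()
    # Drop instances that have been idle longer than the TTL.
    for key in [k for k, v in crawlers.items() if now - v["last_seen"] > SESSION_TTL]:
        del crawlers[key]
    entry = crawlers.setdefault(sid, {"crawler": object(), "last_seen": now})  # object() stands in for a real crawler
    entry["last_seen"] = now
    return entry["crawler"]

@app.route("/crawl")
def crawl():
    get_crawler()  # each browser session gets its own isolated instance
    return "crawling with this session's crawler"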

Known limitations

  • The PageSpeed Insights API is rate-limited (an API key raises the quota)
  • Large sites may take time to crawl completely
  • JavaScript rendering is slower than HTTP-only crawling
  • Settings stored in localStorage (cleared if browser data is cleared)

Files

  • main.py - Main application and Flask server
  • src/crawler.py - Core crawling engine
  • src/settings_manager.py - Configuration management
  • web/ - Frontend interface files

License

MIT License - see LICENSE file for details.

About

Free desktop SEO crawler - open source alternative to Screaming Frog and similar tools. Crawl websites, analyze links, extract SEO data, and export results without subscription fees.
