COPRA: An in-COntext PRoof Agent which uses LLMs like GPTs to prove theorems in formal languages.
COPRA now supports vLLM for running open-source LLMs locally! This enables you to use models like Llama, Mistral, DeepSeek, and more for theorem proving.
Features:
- ✅ OpenAI-Compatible API: Works with existing COPRA code via OpenAI-compatible interface
- ✅ GPU Acceleration: Full CUDA support with multi-GPU tensor parallelism
- ✅ Automatic Server Management: Start/stop vLLM servers programmatically
- ✅ Model Flexibility: Support for any Hugging Face model compatible with vLLM
⚠️ Python Requirement: vLLM requires Python ≤ 3.12 (not compatible with 3.14t)
COPRA now supports executing proof search for multiple theorems in parallel! This significantly speeds up evaluation on multi-core systems.
Features:
- ✅ Automatic Python Version Detection: Uses threading for Python 3.14t+ and multiprocessing for older versions
- ✅ Configurable Worker Count: Control parallelism with environment variables
- ✅ Backward Compatible: Disabled by default - opt-in via environment variable
- ✅ Better Resource Utilization: Leverage all CPU cores for faster evaluations
Jump to Parallel Execution Setup →
COPRA now fully supports Python 3.14+ with free-threaded (GIL-free) execution for improved performance!
Automatic Execution Mode Selection:
- Python 3.14t+ with GIL disabled: Native free-threading for true parallel execution
- Python 3.14t+ with GIL enabled: Threading (still benefits from improved performance)
- Python < 3.14: Multiprocessing (traditional approach)
- Install itp-interface using the following command: (Our package is available on PyPI: https://pypi.org/project/copra-theorem-prover/)
pip install copra-theorem-prover
- Run the following command to prepare the REPL for Lean 4. (The default version is 4.7.0-rc2; you can change it by setting the `LEAN_VERSION` environment variable.)
NOTE: The Lean 4 version must match the version of the Lean 4 project you are working with.
export LEAN_VERSION="4.15.0"
install-lean-repl
- Run the following command to build the REPL for Lean 4. Make sure that `lean --version` returns the correct version before running the command below. If it does not, check whether `$HOME/.elan/bin` is in your PATH. It is recommended to run `source $HOME/.elan/env` before running the command below.
install-itp-interface
NOTE: These steps are only tested on Linux. For Windows, you can use WSL. These steps will not set up the Coq interface.
🆕 NEW: COPRA now supports Python 3.14+ with free-threaded (GIL-free) execution for improved performance!
- Create a Conda environment with Python 3.14t (free-threaded):
# Create environment with free-threaded Python 3.14
conda create -n py314-ft python=3.14 python-freethreading -c conda-forge
# Activate the environment
conda activate py314-ft
# Verify Python version and free-threading support
python --version # Should show Python 3.14.x
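# Optionally verify free-threading as well; sys._is_gil_enabled() is available on Python 3.13+
# and prints False when the GIL is disabled
python -c "import sys; print('GIL enabled:', sys._is_gil_enabled())"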
- Install COPRA theorem prover:
# Install from PyPI
pip install copra-theorem-prover
# OR for development, install from source
pip install -e .
- Run experiments with Python 3.14t:
# Use run.py which automatically detects Python 3.14+ and uses Hydra-free mode
python -m copra.main.run --config-name lean4_simple_experiment
# Or if installed from source
python src/copra/main/run.py --config-name lean4_simple_experiment
Note: Python 3.14t is experimental. Some packages may show compatibility warnings (especially Pydantic and OpenAI), but COPRA has been refactored to work with Python 3.14t.
🚀 Automatic Parallel Execution Selection: COPRA automatically detects your Python version and chooses the best parallel execution strategy:
- Python 3.14t+ with GIL disabled: Uses native free-threading for true parallel execution
- Python 3.14t+ with GIL enabled: Uses threading (still benefits from improved performance)
- Python < 3.14: Uses multiprocessing (traditional approach)
Check your execution mode:
python demo_parallel_execution.py
- Install OCaml first. Use the instructions here: https://opam.ocaml.org/doc/Install.html. Note that OCaml officially only supports Linux installations; on Windows, you can use WSL.
- Run the following to install Coq on Linux:
sudo apt install build-essential unzip bubblewrap
sh <(curl -sL https://raw.githubusercontent.com/ocaml/opam/master/shell/install.sh)
- Add the following to your `.bashrc` file (sometimes the path `~/.opam/default` might not exist, so use the versioned directory present in `~/.opam`):
export PATH="/home/$USER/.opam/default/bin:$PATH"
- Create a Miniconda environment and activate it.
- Run the commands for installing the Lean 4 interface as mentioned in Quick Setup for Lean 4.
- Add the following to your `.bashrc` file for Lean:
export PATH="/home/$USER/.elan/bin:$PATH"
🆕 NEW: Run open-source LLMs locally with GPU acceleration via vLLM!
Quick Setup:
# Install with vLLM support
pip install copra-theorem-prover[os_models]
# Or for development
pip install -e .[os_models]
Usage:
Create a config file with a `vllm:` prefix:
# config/eval_settings/my_vllm_config.yaml
gpt_model_name: vllm:codellama/CodeLlama-7b-Instruct-hf
temperature: 0.0
max_tokens_per_action: 100
# ... other settings
Run your experiment:
python -m copra.main.eval_benchmark eval_settings=my_vllm_config benchmark=miniF2F
The vLLM server starts automatically on port 48000. Override it with the `VLLM_PORT` environment variable if needed.
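Because the vLLM server exposes an OpenAI-compatible API, you can sanity-check it directly with the OpenAI Python client before launching a full run. This is a minimal sketch: it assumes the server is already running on the default port 48000 and that the model name matches the one in your config.

from openai import OpenAI

# Point the client at the local vLLM server (OpenAI-compatible endpoint)
client = OpenAI(base_url="http://localhost:48000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="codellama/CodeLlama-7b-Instruct-hf",  # must match the model served by vLLM
    messages=[{"role": "user", "content": "State the Pythagorean theorem."}],
    max_tokens=100,
    temperature=0.0,
)
print(response.choices[0].message.content)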
Supported Models:
- `vllm:codellama/CodeLlama-7b-Instruct-hf` (open-source code model)
- `vllm:meta-llama/Llama-2-7b-chat-hf` (general LLM)
- `vllm:EleutherAI/llemma_7b` (math-focused LLM)
- `vllm:deepseek-ai/deepseek-math-7b-instruct` (math reasoning)
- Any Hugging Face model compatible with vLLM
Note: vLLM requires Python ≤ 3.12 and a CUDA-capable GPU.
- You need to create a file `.secrets/openai_key.json` in the root directory of the project with the OpenAI API key. The file should contain the following:
{
"organization": "<your-organization-id>",
"api_key": "<your-api-key>"
}
- The experiments are not necessarily thread safe, so it is recommended to run them sequentially. The commands to run the desired experiments can be found in the file `./src/copra/main/config/experiments.yaml`.
- Run the following command to run the experiments:
For Python 3.14+ (with free-threaded support):
python src/copra/main/run.py --config-name lean4_simple_experiment
# This uses a Hydra-free implementation compatible with Python 3.14+
# You can change the config name to run different experiments
For Python < 3.14:
python src/copra/main/eval_benchmark.py
# This will run the experiments mentioned in the file `./src/copra/main/config/experiments.yaml`.
# Change the file path in the command above to run other experiments.
Universal command (auto-detects Python version):
python src/copra/main/run.py --config-name lean4_simple_experiment
# This automatically uses Hydra-free mode for Python 3.14+ and Hydra mode for older versions
Note: `run.py` is the recommended entry point for all Python versions. It automatically detects your Python version and uses the appropriate implementation (Hydra-free for 3.14+, standard Hydra for older versions).
Before running COPRA, you need to ensure the required services are running based on your proof language:
No additional services are required. Lean 4 uses a built-in REPL that COPRA manages automatically.
You need to start the PISA (Portal for Isabelle) service:
- Set the PISA port (optional, defaults to 17000):
export PISA_PORT=17000
- Start the PISA service:
# COPRA will automatically start the PISA service when needed
# You can also manually start it if required by your setup
- Check if PISA is running:
# COPRA will log whether PISA service is up or needs to be restarted
Note: COPRA automatically manages the PISA service lifecycle, including health checks and restarts. You'll see log messages about PISA service status during execution.
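If you want to verify the port yourself before a long run, a quick connectivity check like the following sketch works; it assumes the default port 17000 (or whatever `PISA_PORT` is set to) and does not replace COPRA's own health checks.

import os
import socket

# Try to open a TCP connection to the PISA port; success means something is listening.
port = int(os.environ.get("PISA_PORT", "17000"))
try:
    with socket.create_connection(("localhost", port), timeout=5):
        print(f"PISA appears to be listening on port {port}")
except OSError:
    print(f"Nothing is listening on port {port} yet; COPRA will start/restart the service")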
Ensure the correct Coq version is active:
- Check your Coq version:
coqc --version
- Switch Coq version if needed (using opam):
opam switch <your-coq-version>
eval $(opam env)
- Build your Coq project before running COPRA:
cd /path/to/your/coq/project
make # or your project's build command
If using non-OpenAI models (like Llama):
- COPRA will automatically initialize the Llama service when needed
- Check the logs in `.log/evals/benchmark/<benchmark-name>/<timestamp>/llama.log`
- If the service goes down, COPRA will automatically restart it
Important: The ITP projects must be built before running COPRA. Make sure the correct version/switch is active for Coq projects, as different projects may use different Coq versions.
COPRA now supports executing proof search for multiple theorems in parallel, significantly speeding up evaluations on multi-core systems.
Enable parallel execution:
export ENABLE_PARALLEL_THEOREMS=True
python src/copra/main/run.py --config-name lean4_simple_experiment
Control number of workers:
export ENABLE_PARALLEL_THEOREMS=True
export MAX_PARALLEL_WORKERS=4 # Use 4 parallel workers
python src/copra/main/run.py --config-name lean4_simple_experiment
| Environment Variable | Description | Default | Example |
|---|---|---|---|
| `ENABLE_PARALLEL_THEOREMS` | Enable/disable parallel theorem execution | `False` | `True` or `False` |
| `MAX_PARALLEL_WORKERS` | Maximum number of parallel workers | Auto (CPU cores / 2) | `4`, `8`, `16` |
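The "Auto (CPU cores / 2)" default in the table can be reproduced with a few lines of Python; this is an illustrative sketch of the fallback logic, not COPRA's internal code.

import os

# MAX_PARALLEL_WORKERS takes precedence; otherwise fall back to half the CPU cores (at least 1).
env_value = os.environ.get("MAX_PARALLEL_WORKERS")
workers = int(env_value) if env_value else max(1, (os.cpu_count() or 2) // 2)
print("Parallel workers:", workers)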
Sequential Execution (Default):
- Theorems are processed one at a time
- Each theorem proof runs in isolation with timeout
- Original behavior is preserved
Parallel Execution (Enabled):
- Multiple theorems are processed concurrently
- Each theorem proof still runs in its own process/thread with timeout
- Results are collected as they complete
- Automatic Python version detection:
  - Python < 3.14: Uses `ProcessPoolExecutor` (multiprocessing)
  - Python 3.14t+: Uses `ThreadPoolExecutor` (threading with free-threading support)
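The selection logic described above can be illustrated with a short sketch (this mirrors the documented behavior; it is not COPRA's actual implementation):

import sys
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

def make_executor(max_workers):
    # Python 3.14+: prefer threads (true parallelism when the GIL is disabled);
    # older versions: fall back to separate processes.
    if sys.version_info >= (3, 14):
        return ThreadPoolExecutor(max_workers=max_workers)
    return ProcessPoolExecutor(max_workers=max_workers)

if __name__ == "__main__":  # required for the multiprocessing path on spawn-based platforms
    with make_executor(max_workers=4) as pool:
        print(list(pool.map(len, ["theorem_a", "theorem_b", "theorem_c"])))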
Example 1: Basic parallel execution
# Enable parallel execution with auto-detected worker count
export ENABLE_PARALLEL_THEOREMS=True
python -m copra.main.run --config-name lean4_simple_experiment
Example 2: Custom worker count
# Use 8 parallel workers for large theorem sets
export ENABLE_PARALLEL_THEOREMS=True
export MAX_PARALLEL_WORKERS=8
python -m copra.main.run --config-name lean4_simple_experiment
Example 3: Python 3.14t with free-threading
# On Python 3.14t, this will automatically use threading
conda activate py314-ft
export ENABLE_PARALLEL_THEOREMS=True
python -m copra.main.run --config-name lean4_simple_experiment
Example 4: Conservative parallel execution
# Use only 2 workers for resource-constrained systems
export ENABLE_PARALLEL_THEOREMS=True
export MAX_PARALLEL_WORKERS=2
python -m copra.main.run --config-name lean4_simple_experiment
- Match workers to CPU cores: Set `MAX_PARALLEL_WORKERS` to your number of physical CPU cores
- Consider theorem complexity: Complex theorems benefit more from parallelism
- Monitor resources: Watch CPU, memory, and I/O usage during execution
- Service capacity: Ensure LLM services can handle concurrent requests
- Sequential for small sets: Use sequential execution for fewer than 4 theorems (less overhead)
- Memory usage: Parallel execution uses more memory (multiple processes/threads running simultaneously)
- Shared resources: Services (Llama, Isabelle) must handle concurrent requests
- Time budget tracking: Time budgets are tracked per-theorem, not globally across parallel executions
Issue: Out of memory errors
- Solution: Reduce `MAX_PARALLEL_WORKERS` to `2` or `4`
Issue: Service connection errors
- Solution: Reduce parallel workers or use sequential execution for service-heavy workloads
Issue: Slower than sequential execution
- Cause: Small theorem sets, I/O-bound workloads, or limited CPU cores
- Solution: Disable parallel execution or reduce worker count
- The ITP projects must be built before running COPRA
- For Coq projects, ensure the correct switch/version is active
- Services (PISA, Llama) are automatically managed by COPRA
- Check logs in `.log/evals/benchmark/<name>/<timestamp>/` for debugging
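To quickly locate the newest run directory for inspection, a small sketch like the following works (it assumes the log layout shown above):

from pathlib import Path

# List run directories (.log/evals/benchmark/<name>/<timestamp>/), newest first.
runs = sorted(Path(".log/evals/benchmark").glob("*/*"),
              key=lambda p: p.stat().st_mtime, reverse=True)
print("Latest run logs:", runs[0] if runs else "no runs found")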
You can cite our paper:
@inproceedings{thakur2024context,
title={An in-context learning agent for formal theorem-proving},
author={Thakur, Amitayush and Tsoukalas, George and Wen, Yeming and Xin, Jimmy and Chaudhuri, Swarat},
booktitle={First Conference on Language Modeling},
year={2024}
}
Our paper can be found here: OpenReview and ArXiv