This project implements and compares three cache replacement algorithms:
- ARC (Adaptive Replacement Cache)
- LRU (Least Recently Used)
- LFU (Least Frequently Used)
The project contains the following key components:
- cache.hpp: Base interface for all cache implementations
- arc_cache.hpp: Implementation of the ARC algorithm
- lru_cache.hpp: Implementation of the LRU algorithm
- lfu_cache.hpp: Implementation of the LFU algorithm
- test_cache.cpp: Test program that compares the performance of all three algorithms
The C++ implementation is header-only and can be easily integrated into your project. The main implementation is in arc_cache.hpp.
Key features:
- Template-based implementation for flexibility
- Header-only for easy integration
- Efficient memory management
- Thread-safe operations
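The base interface in cache.hpp defines what each policy implements. As a rough sketch of what such a template-based interface can look like (the names and signatures below are assumptions for illustration, not the project's actual header):

```cpp
// Hypothetical sketch only -- the real cache.hpp may differ.
#include <cstddef>
#include <optional>

template <typename Key, typename Value>
class Cache {
public:
    virtual ~Cache() = default;

    // Insert or update an entry, evicting per the replacement policy when full.
    virtual void put(const Key& key, const Value& value) = 0;

    // Look up an entry; returns an empty optional on a miss.
    virtual std::optional<Value> get(const Key& key) = 0;

    // Maximum number of entries the cache can hold.
    virtual std::size_t capacity() const = 0;
};
```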
A Python implementation is also available in the python_version directory, which includes a comparison script to evaluate ARC against the other caching algorithms (LRU, LFU).
Key features:
- Comparative analysis with LRU and LFU
- Visual performance metrics
- Multiple access pattern tests
- Automated performance reporting
To compile the test program:
```bash
g++ -std=c++17 test_cache.cpp -o test_cache
```
To run the tests:
```bash
./test_cache
```
Basic usage of the C++ cache:

```cpp
#include "arc_cache.hpp"

// Create an ARC cache with size 1000
ARCache<int, int> cache(1000);

// Add or update items
cache.put(key, value);

// Get items
auto value = cache.get(key);
```
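Because the implementation is described as thread-safe, a single cache instance should be shareable across threads. A minimal sketch, assuming only the put/get calls shown above:

```cpp
#include <thread>
#include <vector>
#include "arc_cache.hpp"

int main() {
    ARCache<int, int> cache(1000);

    // Four workers inserting and reading concurrently through one shared cache.
    std::vector<std::thread> workers;
    for (int t = 0; t < 4; ++t) {
        workers.emplace_back([&cache, t] {
            for (int i = 0; i < 10000; ++i) {
                cache.put(i, i + t);  // add or update
                cache.get(i);         // concurrent lookup
            }
        });
    }
    for (auto& w : workers) {
        w.join();
    }
    return 0;
}
```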
To set up and run the Python comparison:

```bash
# Navigate to python_version directory
cd python_version

# Create and activate virtual environment (Windows)
python -m venv venv
venv\Scripts\activate
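# On Linux/macOS, the activation step is instead:
#   source venv/bin/activate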
# Install dependencies
pip install -r requirements.txt

# Run the comparison script
python compare_caches.py
```
The Python script will:
- Compare ARC with LRU and LFU caches
- Test different access patterns:
  - Random access
  - Locality-based access
  - Periodic access
  - Zipfian distribution
- Generate performance metrics showing:
  - Hit rates for each algorithm
  - Performance comparison with ARC
  - Summary statistics
- Plot the results in cache_comparison.png
Example output:
```text
=== Cache Performance Comparison ===

Pattern Type    Cache Type    Hit Rate    Time (s)    vs ARC      Performance
---------------------------------------------------------------------------
Random          LRU           9.89%       0.027       -0.08%      ↑ ARC Better
                LFU           9.92%       0.048       -0.05%      ↑ ARC Better
                ARC           9.97%       0.082
...

=== Summary of ARC Performance ===
Total scenarios where ARC performs better: 6/8 (75.0%)

Breakdown by pattern:
  Random   : ARC better in 2/2 cases (100.0%)
  Locality : ARC better in 2/2 cases (100.0%)
  Periodic : Equal performance in all cases
  Zipfian  : ARC better in 2/2 cases (100.0%)
```
The test program generates a mixed workload that includes:
- Frequently accessed items
- Recently accessed items
- Random access patterns
This mixed workload demonstrates ARC's ability to adapt and perform better than both LRU and LFU in real-world scenarios.
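The actual generator lives in test_cache.cpp; the sketch below only illustrates one way such a mixed workload could be built (the function name, proportions, and seed are assumptions, not the project's code):

```cpp
#include <cstddef>
#include <random>
#include <vector>

// Hypothetical workload generator: mixes a small hot set (frequency),
// re-touches of recent keys (recency), and uniformly random keys.
std::vector<int> make_mixed_workload(std::size_t length, int data_range) {
    std::mt19937 rng(42);
    std::uniform_int_distribution<int> any_key(0, data_range - 1);
    std::uniform_int_distribution<int> hot_key(0, data_range / 20);   // small hot set
    std::uniform_real_distribution<double> coin(0.0, 1.0);

    std::vector<int> recent;       // sliding window of recently accessed keys
    std::vector<int> accesses;
    accesses.reserve(length);

    for (std::size_t i = 0; i < length; ++i) {
        int key;
        double p = coin(rng);
        if (p < 0.4) {
            key = hot_key(rng);                           // frequently accessed items
        } else if (p < 0.7 && !recent.empty()) {
            std::uniform_int_distribution<std::size_t> pick(0, recent.size() - 1);
            key = recent[pick(rng)];                      // recently accessed items
        } else {
            key = any_key(rng);                           // random access
        }
        accesses.push_back(key);
        recent.push_back(key);
        if (recent.size() > 16) recent.erase(recent.begin());
    }
    return accesses;
}
```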
Example output:
```text
Testing cache performance with:
  Cache size: 100
  Data range: 1000
  Access pattern length: 10000

Hit rates:
  ARC: 13.26%
  LRU: 5.77%
  LFU: 12.90%
```
ARC (Adaptive Replacement Cache) generally outperforms traditional algorithms such as LRU and LFU because:
- It dynamically balances between recency and frequency
- It maintains ghost lists to track recently evicted items
- It automatically adapts its policy based on the workload pattern
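For reference, the standard ARC adaptation rule (from the original algorithm, not necessarily this project's arc_cache.hpp) nudges a target size p for the recency list whenever one of the ghost lists is hit:

```cpp
#include <algorithm>
#include <cstddef>

// Sketch of ARC's adaptation step. T1/T2 hold keys seen once / more than once;
// B1/B2 are ghost lists remembering keys recently evicted from T1/T2.
// p is the target size of T1 within the total capacity c.
void adapt_on_ghost_hit(std::size_t& p, std::size_t c,
                        bool hit_in_b1,
                        std::size_t b1_size, std::size_t b2_size) {
    if (hit_in_b1) {
        // A miss that B1 remembers: recency is under-served, so grow T1.
        std::size_t delta = (b1_size >= b2_size) ? 1 : b2_size / b1_size;
        p = std::min(c, p + delta);
    } else {
        // A miss that B2 remembers: frequency is under-served, so shrink T1.
        std::size_t delta = (b2_size >= b1_size) ? 1 : b1_size / b2_size;
        p = (p > delta) ? p - delta : 0;
    }
}
```

In other words, a hit in B1 means an item evicted for recency reasons was wanted again, so capacity shifts toward recency; a hit in B2 shifts it toward frequency.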
Requirements:
- C++17 or later
- A C++ compiler (e.g., g++, clang++)
- Python 3.7 or later (for the Python version)