GestureBind transforms how you interact with your computer by mapping hand gestures to system actions. Wave goodbye to keyboard shortcuts and mouse clicks—literally!
Using your webcam and advanced computer vision, GestureBind recognizes hand gestures and instantly triggers corresponding actions, from launching apps to executing keyboard shortcuts.
| Feature | Description |
|---------|-------------|
| Real-Time Detection | Continuous webcam monitoring using MediaPipe and OpenCV for instant response to your gestures (see the sketch below) |
| Flexible Actions | Map gestures to keyboard shortcuts, mouse movements, app launches, or system commands |
| Profiles | Create and switch between gesture maps for different applications and contexts |
| Privacy-First | All processing happens locally—your camera feed never leaves your device |
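To make that pipeline concrete, here is a minimal detection loop built on the same libraries GestureBind uses (OpenCV for capture, MediaPipe Hands for landmarks). It is an illustrative sketch, not GestureBind's actual code:

```python
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

cap = cv2.VideoCapture(0)  # default webcam
with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.7) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB; OpenCV captures BGR
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            # 21 normalized (x, y, z) landmarks per detected hand
            tip = results.multi_hand_landmarks[0].landmark[
                mp_hands.HandLandmark.INDEX_FINGER_TIP
            ]
            print(f"Index fingertip at ({tip.x:.2f}, {tip.y:.2f})")
        cv2.imshow("preview", frame)
        if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
            break
cap.release()
cv2.destroyAllWindows()
```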
- Predefined Gestures: Out-of-the-box support for common hand positions
- Custom Gesture Training: (Coming soon) Create your own personalized gestures
- Visual Feedback: On-screen confirmation when gestures are detected
- Adjustable Settings: Fine-tune sensitivity, cooldown periods, and detection thresholds
- Background Operation: Runs quietly in your system tray
- Cross-Platform: Works on Windows, Linux, and macOS
- Python 3.8 or later
- Webcam/camera device
- Required Python packages (installed automatically):
  - OpenCV
  - MediaPipe
  - PyQt5
  - PyAutoGUI
  - TensorFlow
  - PyYAML
  - NumPy
  - pynput
```bash
# Clone the repository
git clone https://github.com/yourusername/guess.git
cd guess/gesturebind

# Create a virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Run the application
python gesturebind/main.py
```
| Platform | Install script |
|----------|----------------|
| Windows | `scripts\install.bat` |
| Linux/macOS | `chmod +x scripts/install.sh` |
- **Launch GestureBind**: Run `python gesturebind/main.py` and allow camera access
- **Start Detection**: Click the "Start Detection" button in the main window
- **Control with Gestures**: Perform gestures in front of your camera to trigger actions
- Navigate to the Settings panel
- Select or create a profile
- Configure gesture mappings:
  - Choose from predefined gestures
  - Select an action type (hotkey, mouse action, app launch)
  - Configure the specific parameters
- Save your mapping
GestureBind uses YAML configuration files:
- Default configuration: `gesturebind/config/default_config.yaml`
- User configuration: `gesturebind/data/config/user_config.yaml`
```yaml
# Core detection settings
detection:
  engine: mediapipe  # Options: mediapipe, yolov8 (placeholder)
  confidence_threshold: 0.7
  min_detection_duration_ms: 200

# UI preferences
ui:
  theme: system  # Options: system, dark, light
  show_preview: true
  camera_preview_fps: 15
  overlay_feedback: true
  minimize_to_tray: true
  start_minimized: false

# Gesture mappings by profile
profiles:
  default:
    gesture_mappings:
      peace:
        type: hotkey
        data: Ctrl+V
        description: 'Paste clipboard content'
      thumb:
        type: shell_command
        data:
          command: flatpak run com.spotify.Client
          terminal: false
        description: 'Launch Spotify'
```
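As a rough illustration of how a mapping like the one above can be turned into an action, the following hypothetical dispatcher reads the user config with PyYAML and fires the mapped action through PyAutoGUI or a subprocess. GestureBind's real action layer may be structured differently:

```python
import subprocess

import pyautogui
import yaml

def run_action(mapping: dict) -> None:
    """Dispatch one gesture mapping using the schema shown above."""
    if mapping["type"] == "hotkey":
        # "Ctrl+V" -> pyautogui.hotkey("ctrl", "v")
        keys = [key.strip().lower() for key in mapping["data"].split("+")]
        pyautogui.hotkey(*keys)
    elif mapping["type"] == "shell_command":
        subprocess.Popen(mapping["data"]["command"], shell=True)

with open("gesturebind/data/config/user_config.yaml") as f:
    config = yaml.safe_load(f)

mappings = config["profiles"]["default"]["gesture_mappings"]
run_action(mappings["peace"])  # presses Ctrl+V
```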
- MediaPipe hands integration with landmark detection
- Landmark smoothing for more stable gesture recognition
- Rule-based gesture classification for basic gestures (see the sketch after this list)
- Action mapping system (keyboard, apps, shell commands)
- Configuration management with YAML
- PyQt5 UI framework with system tray integration
- Cross-platform support foundation
- Visual feedback overlay for gestures
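As referenced above, rule-based classification labels basic gestures directly from landmark geometry. This hypothetical example distinguishes the `peace` and `thumb` gestures used in the configuration sample; GestureBind's actual rules may differ:

```python
from typing import List, Optional, Tuple

def classify(landmarks: List[Tuple[float, float]]) -> Optional[str]:
    """Classify a hand pose from 21 normalized (x, y) landmarks.

    Uses MediaPipe's landmark indexing, where y grows downward, so a
    fingertip above its PIP joint has a *smaller* y value.
    """
    def extended(tip: int, pip: int) -> bool:
        return landmarks[tip][1] < landmarks[pip][1]

    index, middle = extended(8, 6), extended(12, 10)
    ring, pinky = extended(16, 14), extended(20, 18)

    # Peace sign: index and middle raised, ring and pinky curled
    if index and middle and not (ring or pinky):
        return "peace"
    # Thumbs up: thumb tip above its IP joint, all other fingers curled
    if extended(4, 3) and not (index or middle or ring or pinky):
        return "thumb"
    return None
```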
- UI organization and navigation improvements
- YOLOv8 integration (placeholder implementation)
- Profile management interface enhancements
- Custom gesture training interface
- Machine learning-based gesture classification
- Gesture embedding storage and similarity matching (sketched after this list)
- Built-in application presets
- Import/Export for gesture profiles
- Drag-and-drop action mapping interface
- Enhanced action confirmation feedback
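One plausible shape for the planned embedding storage and similarity matching: keep one unit-norm landmark vector per trained gesture and match new frames by cosine similarity. All names here are illustrative, not GestureBind APIs:

```python
from typing import Dict, Optional

import numpy as np

class GestureStore:
    """Store one reference embedding per trained gesture."""

    def __init__(self, threshold: float = 0.9):
        self.threshold = threshold
        self.embeddings: Dict[str, np.ndarray] = {}

    def _embed(self, landmarks: np.ndarray) -> np.ndarray:
        # Flatten a (21, 3) landmark array into a unit-norm vector
        vec = landmarks.ravel().astype(np.float64)
        return vec / np.linalg.norm(vec)

    def add(self, name: str, landmarks: np.ndarray) -> None:
        self.embeddings[name] = self._embed(landmarks)

    def match(self, landmarks: np.ndarray) -> Optional[str]:
        # Cosine similarity of unit vectors reduces to a dot product
        vec = self._embed(landmarks)
        best, best_score = None, self.threshold
        for name, ref in self.embeddings.items():
            score = float(vec @ ref)
            if score > best_score:
                best, best_score = name, score
        return best
```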
- Implement Gesture Training UI
- Integrate ML-based gesture classification
- Complete action configuration interface
- Enhance visual feedback system
- Complete YOLOv8 integration
- Add built-in application presets
- Implement import/export functionality
- Improve test coverage
Contributions are welcome! Here's how to help:
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Make your changes
- Run the tests (`pytest gesturebind/tests/`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
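For reference, a contributed test might look like the sketch below. The imported module path and the `classify` helper are hypothetical; the point is simply the shape of a test that `pytest gesturebind/tests/` would pick up:

```python
# gesturebind/tests/test_classifier.py (hypothetical example)
from gesturebind.classifier import classify  # assumed module path

def _peace_landmarks():
    """Build 21 dummy (x, y) points shaped like a peace sign."""
    pts = [(0.5, 0.5)] * 21
    pts[8], pts[6] = (0.50, 0.2), (0.50, 0.4)    # index tip above its PIP
    pts[12], pts[10] = (0.55, 0.2), (0.55, 0.4)  # middle tip above its PIP
    pts[16], pts[14] = (0.60, 0.6), (0.60, 0.5)  # ring curled
    pts[20], pts[18] = (0.65, 0.6), (0.65, 0.5)  # pinky curled
    return pts

def test_peace_gesture_detected():
    assert classify(_peace_landmarks()) == "peace"
```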
Check our implementation status and priorities for areas that need attention.
GestureBind is available under the MIT License. See the LICENSE file for details.
We take your privacy seriously. GestureBind processes all camera feeds locally on your device. No video or gesture data is uploaded externally, ensuring your privacy is maintained at all times.
Have questions or feedback? Open an issue on GitHub or reach out to the maintainers.