Engineered a low-latency, computer vision-based desktop utility that replaces traditional mouse and keyboard input with real-time hand tracking. Built with Python, Google MediaPipe, and OpenCV, the system uses a split-hand architecture to map complex hand gestures to precise OS-level commands (cursor control, volume adjustment, macros). Wrapped in a custom frameless PyQt6 UI that renders real-time skeletal feedback.
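The cursor-control path described above can be sketched as follows. MediaPipe emits hand landmarks as normalized (0..1) frame coordinates; a minimal mapper rescales them to screen pixels and applies exponential smoothing to suppress jitter while keeping latency low. The `CursorMapper` class, its parameters, and the margin/dead-border behavior are illustrative assumptions, not the project's actual API.

```python
class CursorMapper:
    """Maps normalized hand-landmark coordinates to smoothed screen pixels.

    Hypothetical sketch: MediaPipe landmark x/y are in [0, 1] relative to
    the camera frame; we clamp to an inner region so the hand need not
    reach the frame edges, rescale to screen resolution, and smooth with
    an exponential moving average (EMA) to reduce cursor jitter.
    """

    def __init__(self, screen_w, screen_h, alpha=0.35, margin=0.1):
        self.screen_w = screen_w
        self.screen_h = screen_h
        self.alpha = alpha    # EMA factor: higher = snappier, lower = smoother
        self.margin = margin  # dead border around the camera frame
        self._last = None     # previous smoothed position

    def map(self, norm_x, norm_y):
        # Rescale the usable region [margin, 1 - margin] to the full screen.
        span = 1.0 - 2 * self.margin
        x = (min(max(norm_x, self.margin), 1 - self.margin) - self.margin) / span
        y = (min(max(norm_y, self.margin), 1 - self.margin) - self.margin) / span
        px, py = x * self.screen_w, y * self.screen_h
        if self._last is None:
            self._last = (px, py)  # first frame: no history to smooth against
        else:
            lx, ly = self._last
            self._last = (lx + self.alpha * (px - lx),
                          ly + self.alpha * (py - ly))
        return int(self._last[0]), int(self._last[1])


mapper = CursorMapper(1920, 1080)
print(mapper.map(0.5, 0.5))  # center of the camera frame -> screen center
```

In the real pipeline, the smoothed coordinates would be fed to an OS-level input library (e.g. pyautogui) each frame; the EMA is a common choice here because it needs no buffering and adds only one frame of effective lag.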