I engineered a high-performance, browser-based 3D experience that allows users to manipulate a system of 15,000+ particles using real-time hand gestures. This project explores the intersection of Web Graphics and Computer Vision, turning a standard webcam into a spatial controller.
The Technical Challenge
The main hurdle was achieving a "haptic" feel: the particles had to react instantly and smoothly to physical movements, without perceptible lag. This required mapping 2D webcam coordinates into a 3D coordinate system and computing volumetric transformations on the fly.
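The 2D-to-3D mapping can be sketched as a small pure function. This is a minimal illustration, not the project's actual code: the function name, mirroring choice, and world-space bounds are assumptions, and the depth hint stands in for a relative-depth estimate like MediaPipe's landmark z.

```javascript
// Sketch: project a normalized 2D webcam landmark (x, y in [0, 1])
// into a 3D world position. Bounds and mirroring are illustrative
// assumptions, not the project's actual values.
function webcamToWorld(nx, ny, depthHint, bounds = { x: 10, y: 6, z: 4 }) {
  return {
    x: -(nx - 0.5) * 2 * bounds.x, // mirror horizontally so motion feels natural
    y: (0.5 - ny) * 2 * bounds.y,  // flip: webcam y grows downward
    z: depthHint * bounds.z,       // relative depth scaled into the scene
  };
}

// The center of the frame maps to the world origin:
const p = webcamToWorld(0.5, 0.5, 0);
```

Running this mapping per frame, per landmark, keeps all coordinate work on the client with no server round trip.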
The Solution & Workflow
High-Performance Rendering: Used Three.js and WebGL with additive blending and custom color mapping to render 15,000+ particles at a consistent 60 FPS.
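The custom color mapping can be sketched as a per-particle ramp over distance from the center; with additive blending, overlapping bright particles sum toward white at the core. The palette and falloff below are illustrative assumptions, and in practice the RGB triples would fill a Three.js BufferGeometry color attribute.

```javascript
// Sketch of a distance-based color ramp for a particle cloud.
// The specific palette (warm core, cool rim) is an assumption.
function particleColor(radius, maxRadius) {
  const t = Math.min(radius / maxRadius, 1); // 0 at center, 1 at edge
  // Blend from a hot core (1, 0.8, 0.4) toward a cool rim (0.2, 0.4, 1).
  return [1 - 0.8 * t, 0.8 - 0.4 * t, 0.4 + 0.6 * t];
}
```

Computing colors once at initialization (rather than per frame) keeps the render loop free for position updates, which matters at this particle count.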
Real-time Inference: Integrated Google MediaPipe to track 21 3D landmarks per hand directly in the client-side browser, eliminating the need for a backend.
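From the 21 landmarks, a single scalar can summarize the gesture. The sketch below estimates hand "openness" as average fingertip distance from the wrist, normalized by palm size; the landmark indices follow MediaPipe's hand model (0 = wrist, 9 = middle-finger base, 4/8/12/16/20 = fingertips), but the normalization constant is an illustrative assumption.

```javascript
// Sketch: estimate how open the hand is from MediaPipe-style landmarks
// (an array of 21 points, each { x, y, z }).
const TIPS = [4, 8, 12, 16, 20]; // fingertip indices in MediaPipe's hand model
function handOpenness(landmarks) {
  const wrist = landmarks[0];
  const dist = (a, b) => Math.hypot(a.x - b.x, a.y - b.y, a.z - b.z);
  // Palm size (wrist to middle-finger base) serves as a scale reference,
  // so the measure is invariant to how far the hand is from the camera.
  const scale = dist(wrist, landmarks[9]);
  const avgTip =
    TIPS.reduce((sum, i) => sum + dist(wrist, landmarks[i]), 0) / TIPS.length;
  return Math.min(avgTip / (2 * scale), 1); // ~0 for a fist, ~1 for an open palm
}
```

Normalizing by palm size is the key design choice here: without it, moving the hand closer to the webcam would read as "opening" the hand.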
Dynamic Math: Implemented custom linear interpolation (lerping) and gesture thresholds to create fluid movements, such as a "fist-clench" collapsing the particles into a sphere and an "open-palm" expanding them into a galaxy.
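The lerp-plus-threshold combination can be sketched as a small state machine: per-frame interpolation eases the particle spread toward a gesture-driven target, and a pair of thresholds (hysteresis) keeps a borderline hand pose from flickering between states. All constants below are illustrative assumptions.

```javascript
// Per-frame easing toward a target value.
const lerp = (a, b, t) => a + (b - a) * t;

// Sketch of gesture-driven smoothing with hysteresis thresholds.
// closeBelow/openAbove and the smoothing factor are assumed values.
function makeGestureState(closeBelow = 0.35, openAbove = 0.55) {
  let clenched = false;
  let spread = 1; // 1 = expanded galaxy, 0 = collapsed sphere
  return {
    update(openness, smoothing = 0.15) {
      // Hysteresis: only flip state once openness crosses the far threshold,
      // so noisy values near one threshold don't cause flicker.
      if (clenched && openness > openAbove) clenched = false;
      else if (!clenched && openness < closeBelow) clenched = true;
      const target = clenched ? 0 : 1;
      spread = lerp(spread, target, smoothing); // fluid motion, no snapping
      return spread;
    },
  };
}
```

Calling `update` once per animation frame yields an exponential ease toward each target, which is what gives the collapse and expansion their smooth, weighty feel.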
The Results
Created a fully immersive, low-latency interface that demonstrates the power of the modern web for interactive AI applications.
Posted Apr 23, 2026