I engineered a high-performance, browser-based 3D experience that allows users to manipulate a system of 15,000+ particles using real-time hand gestures. This project explores the intersection of Web Graphics and Computer Vision, turning a standard webcam into a spatial controller.
The Technical Challenge
The main hurdle was achieving a "haptic" feel: the particles had to react instantly and smoothly to physical movement, without perceptible lag. This required mapping 2D webcam coordinates into the scene's 3D coordinate system and calculating volumetric transformations on the fly.
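As a rough illustration of that mapping (a Python sketch, not the project's actual browser-side JavaScript; the scene dimensions, mirroring, and depth scale are assumptions):

```python
def map_to_scene(x_norm, y_norm, z_rel,
                 scene_width=10.0, scene_height=6.0, depth_scale=5.0):
    """Map a normalized webcam landmark (x, y in [0, 1], relative z) into scene space."""
    scene_x = (0.5 - x_norm) * scene_width   # flip x so the view behaves like a mirror
    scene_y = (0.5 - y_norm) * scene_height  # image y grows downward, scene y grows upward
    scene_z = -z_rel * depth_scale           # treat the tracker's relative depth as an offset
    return scene_x, scene_y, scene_z
```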
The Solution & Workflow
High-Performance Rendering: Used Three.js and WebGL with additive blending and custom color mapping to render the full 15,000+ particle system at a consistent 60 FPS.
Real-time Inference: Integrated Google MediaPipe to track 21 3D landmarks per hand directly in the client-side browser, eliminating the need for a backend.
Dynamic Math: Implemented custom linear interpolation (lerp) and gesture thresholds to create fluid movement, such as a "fist-clench" collapsing the particles into a sphere and an "open-palm" expanding them into a galaxy (a small sketch of this math follows the list).
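The interpolation and gesture-threshold logic runs in the browser in JavaScript; the Python sketch below only illustrates the math, with the smoothing factor, distance threshold, and landmark indexing as plausible assumptions:

```python
def lerp(a, b, t):
    """Move a fraction t of the way from a toward b (0 < t <= 1)."""
    return a + (b - a) * t

def smooth(current, target, t=0.15):
    """Ease a value toward its target each frame; smaller t = smoother but laggier."""
    return lerp(current, target, t)

def is_fist(landmarks, threshold=0.25):
    """Rough fist check: all fingertips close to the wrist (normalized coordinates)."""
    wrist = landmarks[0]                              # MediaPipe landmark 0 = wrist
    tips = [landmarks[i] for i in (8, 12, 16, 20)]    # index, middle, ring, pinky tips
    dists = [((p[0] - wrist[0]) ** 2 + (p[1] - wrist[1]) ** 2) ** 0.5 for p in tips]
    return sum(dists) / len(dists) < threshold        # below threshold -> collapse to sphere
```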
The Results
Created a fully immersive interface with imperceptible input latency, demonstrating what the modern web can do for interactive AI applications.
I built a custom AI Voice Agent for a service-based business to automate its appointment booking process. The system handles inbound and outbound calls, qualifies customers, and manages scheduling without any human intervention.
The Technical Challenge
The goal was to create a "robust" agent that could handle natural human speech patterns, including stammers and mid-sentence corrections, while maintaining conversational flow. I also needed to bridge the gap between high-fidelity voice synthesis and real-time database management.
The Solution & Workflow
Voice & Logic: Integrated ElevenLabs for realistic vocal performance and Retell AI (or similar) for low-latency conversational handling.
Automation Engine: Built complex workflows in n8n to process the data gathered during the call.
Real-time Integration: The system automatically cross-references and updates Google Calendar, sending instant confirmations to both the business and the customer.
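The scheduling step itself runs through n8n's built-in Google Calendar integration; purely as an illustration of what that step does, the equivalent call with google-api-python-client looks roughly like this (credentials, calendar ID, and field choices are placeholders):

```python
from googleapiclient.discovery import build

def book_appointment(creds, start_iso, end_iso, customer_email,
                     summary="Service appointment"):
    service = build("calendar", "v3", credentials=creds)
    # Cross-reference: is the requested window already taken?
    existing = service.events().list(
        calendarId="primary", timeMin=start_iso, timeMax=end_iso, singleEvents=True
    ).execute().get("items", [])
    if existing:
        return None  # slot taken; the agent offers another time
    event = {
        "summary": summary,
        "start": {"dateTime": start_iso},
        "end": {"dateTime": end_iso},
        "attendees": [{"email": customer_email}],
    }
    # sendUpdates="all" emails a confirmation to the attendee automatically.
    return service.events().insert(
        calendarId="primary", body=event, sendUpdates="all"
    ).execute()
```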
The Results
100% Automation: Successfully moved the client from manual phone bookings to a fully autonomous system.
Error Handling: Programmed the agent to stay on track even when users provide vague or self-correcting input.
I developed Chezify, a high-performance mobile application tailored for a local fast-food restaurant. The goal was to bridge the gap between a traditional physical storefront and a modern digital presence by providing a seamless, branded ordering interface.
The Technical Challenge
The focus was on creating a Modular Architecture that allows the business to scale its menu effortlessly. I designed a library of reusable Flutter widgets (FoodCards, DealCards) that ensure UI consistency across the entire app while maintaining a strict 60 FPS performance target for a premium user experience.
The Results
Optimized Performance: Achieved silky-smooth scrolling and transitions, even with high-resolution imagery.
Scalable Design: Built a custom UI/UX from scratch using Dart, avoiding generic templates to ensure brand uniqueness.
Business Integration: Developed an "Exclusive Deals" logic to drive higher order values through targeted combo offers (a rough sketch of the pricing idea follows).
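Purely as an illustration of the combo-pricing idea (the app's real logic is written in Dart; the item names, prices, and discount rates below are hypothetical placeholders):

```python
# Hypothetical combo rules: sets of items -> discount rate on those items.
COMBOS = {
    ("burger", "fries", "drink"): 0.15,
    ("pizza", "drink"): 0.10,
}

def apply_deals(cart):
    """cart: dict of item name -> unit price. Returns (total_due, discount_applied)."""
    total = sum(cart.values())
    discount = 0.0
    for items, rate in COMBOS.items():
        if all(item in cart for item in items):
            # Apply the single best matching deal to the items it covers.
            discount = max(discount, rate * sum(cart[item] for item in items))
    return total - discount, discount

# Example: the classic combo gets 15% off those three items.
print(apply_deals({"burger": 8.0, "fries": 3.0, "drink": 2.0}))  # (11.05, 1.95)
```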
I developed a real-time Air Drawing Application that uses Computer Vision to transform hand gestures into digital input. By utilizing a standard webcam, the app allows users to draw, erase, and select colors in mid-air without any physical contact with the screen.
Key Technical Features:
Hand Tracking: Integrated MediaPipe to track 21 specific hand landmarks with high precision and near-zero latency.
Gesture Logic: Programmed custom state-switching: raising a single index finger activates "Drawing Mode", while a two-finger gesture switches to "Selection Mode" for navigating the UI.
Real-time Rendering: Leveraged OpenCV for video processing and to composite a dynamic, responsive canvas overlay (a condensed sketch of the loop follows this list).
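A condensed sketch of the core loop, assuming the legacy MediaPipe Solutions API and OpenCV; the pen colour, thresholds, and "finger up" heuristic are simplified stand-ins for the full app:

```python
import cv2
import mediapipe as mp
import numpy as np

hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.7)
cap = cv2.VideoCapture(0)
canvas = None
prev = None  # previous fingertip position while drawing

while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.flip(frame, 1)  # mirror the feed for natural interaction
    if canvas is None:
        canvas = np.zeros_like(frame)
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        lm = results.multi_hand_landmarks[0].landmark
        h, w = frame.shape[:2]
        index_tip = (int(lm[8].x * w), int(lm[8].y * h))
        # Simplified mode check: index extended, middle folded -> Drawing Mode.
        index_up = lm[8].y < lm[6].y
        middle_up = lm[12].y < lm[10].y
        if index_up and not middle_up:
            if prev is not None:
                cv2.line(canvas, prev, index_tip, (0, 0, 255), 4)
            prev = index_tip
        else:
            prev = None  # two fingers (or none) -> Selection Mode / pen up
    else:
        prev = None
    out = cv2.addWeighted(frame, 1.0, canvas, 1.0, 0)  # overlay the drawing on the feed
    cv2.imshow("Air Draw", out)
    if (cv2.waitKey(1) & 0xFF) == 27:  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```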
This project demonstrates my ability to build Human-Computer Interaction (HCI) tools and implement real-time AI solutions using Python.