Computer Vision for Manufacturing - Defect Detection & QA

Sanket Sabharwal, PhD

The Setup

A global FMCG manufacturer ships thousands of packaged products per hour across multiple conveyor lines. At that volume, quality assurance becomes a math problem with brutal consequences. If even half a percent of defective units slip through inspection, that translates to hundreds of damaged or dimensionally incorrect packages reaching distribution centers every single day. Those packages generate customer returns, retailer chargebacks, and brand damage that compounds over time like unpaid interest on a credit card.
The client's existing QA setup relied on legacy vision systems and manual spot checks by line operators. That combination worked well enough when the operation ran a handful of SKUs at moderate speed. But as their product catalog expanded and line speeds increased, the cracks started showing. Subtle surface damage like small tears, scuffs, corner dents, and label misprints passed through undetected. Dimensional errors where a box was two millimeters too narrow or a lid sat slightly crooked cleared inspection because the legacy thresholds were calibrated for a single SKU format and couldn't keep pace with constant packaging changes.
Every time the line switched to a new box configuration, an operator had to manually recalibrate the inspection parameters. That process ate time, introduced human error into the reference data, and created a window during every changeover where defective units could pass through unchecked.
The client needed an automated visual inspection system that could detect surface defects and dimensional deviations in real time, handle rapid SKU changeovers without manual intervention, and run entirely on edge hardware at the conveyor station without depending on cloud connectivity or remote servers.

What We Built

We designed and deployed an edge-first computer vision quality assurance platform that combines 3D depth sensing for dimensional verification with high-resolution RGB imaging for surface defect detection, all running on industrial-grade edge compute hardware mounted directly at each conveyor station.
The sensor layer at each inspection station includes a 3D Time-of-Flight scanner mounted overhead that captures dense point cloud data of every passing package at high frequency, paired with multiple RGB cameras positioned at different angles to photograph the top and sides of each unit under controlled strobe lighting. An industrial microcontroller at each station handles raw data intake and routes it to the onboard edge processor.
The edge compute node at each station runs two parallel quality assurance modules simultaneously. The first is a local patch analysis module for surface damage detection, which splits each incoming image into localized patches and runs each patch through a trained anomaly detection model that compares it against learned reference images of acceptable packaging. Every patch receives a defect probability score for damage types including tears, dents, scuffs, label misprints, and adhesive failures. The second is a global dimension analysis module that processes the 3D point cloud data, filters out noise and outlier points using statistical rejection algorithms, computes estimated package dimensions for length, width, and height, and compares those measurements against allowable tolerances for the expected SKU specification.
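The patch-scoring idea can be sketched in a few lines. This is a minimal illustration, not the production model: it uses a simple per-patch pixel distance as a stand-in for the trained anomaly detector, and the patch size, threshold, and function names are all assumptions.

```python
import numpy as np

PATCH = 64  # patch size in pixels (illustrative)

def split_into_patches(image: np.ndarray, size: int = PATCH):
    """Tile an H x W x C image into non-overlapping patches."""
    h, w = image.shape[:2]
    for y in range(0, h - size + 1, size):
        for x in range(0, w - size + 1, size):
            yield (y, x), image[y:y + size, x:x + size]

def defect_scores(image, reference_patches, threshold=0.5):
    """Score each patch against its learned reference for that location.

    reference_patches maps (y, x) -> mean reference patch. Here the
    'model' is a normalized per-patch distance; the deployed system
    would run a trained anomaly detector on each patch instead.
    """
    flagged = []
    for loc, patch in split_into_patches(image):
        ref = reference_patches[loc]
        score = float(np.mean(np.abs(patch.astype(float) - ref)) / 255.0)
        if score > threshold:
            flagged.append((loc, score))
    return flagged
```

Scoring patches independently is what makes the approach tolerant of global lighting and material drift: each location is only compared against its own local reference.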
When either module flags a unit as out of specification, the system triggers an automated mechanical diverter that removes the defective package from the main conveyor flow and routes it to a manual review station before it can proceed downstream toward shipping.
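The dimensional side of that decision can be sketched as follows. The standard-deviation filter is a simple stand-in for the statistical rejection algorithm described above, and the 2 mm tolerance and function names are illustrative assumptions.

```python
import numpy as np

def reject_outliers(points: np.ndarray, k: float = 2.5) -> np.ndarray:
    """Drop points more than k standard deviations from the centroid on
    any axis -- a simple stand-in for the point-cloud noise filter."""
    mean = points.mean(axis=0)
    std = points.std(axis=0) + 1e-9
    keep = np.all(np.abs(points - mean) <= k * std, axis=1)
    return points[keep]

def estimate_dimensions(points: np.ndarray) -> np.ndarray:
    """Axis-aligned length/width/height from the cleaned cloud (mm)."""
    clean = reject_outliers(points)
    return clean.max(axis=0) - clean.min(axis=0)

def dimensions_ok(points, sku_spec, tolerance_mm=2.0) -> bool:
    """Pass/fail against the expected SKU dimensions; a fail here would
    trigger the diverter."""
    dims = estimate_dimensions(points)
    return bool(np.all(np.abs(dims - sku_spec) <= tolerance_mm))
```

Without the rejection step, a single stray ToF return far from the package would inflate the measured extent and cause a spurious fail.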
The entire detection and diversion cycle happens locally on the edge hardware with zero cloud round-trips, which means inspection latency stays below the threshold required to keep the conveyor running at full production speed regardless of network conditions at the facility.

How It Handles SKU Changeovers

One of the most expensive problems in manufacturing QA is the recalibration tax that comes with every product format change. In a facility running dozens of SKU configurations across the same conveyor lines, static inspection thresholds become obsolete the moment a new box size hits the belt.
We built the system with automated baseline learning, where the dimensional verification module captures reference measurements from a small sample of conforming units whenever a new SKU is introduced and automatically sets the acceptable tolerance window around those measurements. Operators confirm the baseline once, and from that point forward the system continuously adjusts to minor variations in packaging material, print alignment, and box construction without requiring any manual recalibration.
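The baseline-learning step reduces to a small calculation. This sketch assumes a mean-plus-or-minus-k-standard-deviations tolerance policy, which is one plausible choice, not necessarily the one deployed:

```python
import numpy as np

def learn_baseline(sample_dims: np.ndarray, k: float = 3.0):
    """Derive a tolerance window from a small sample of conforming units.

    sample_dims: (n, 3) array of measured [length, width, height] in mm.
    Returns per-axis (lower, upper) bounds. The mean +/- k*std policy is
    an assumption; a real deployment might also anchor to the SKU spec.
    """
    mean = sample_dims.mean(axis=0)
    std = sample_dims.std(axis=0)
    return mean - k * std, mean + k * std

def within_baseline(dims, lower, upper) -> bool:
    return bool(np.all((dims >= lower) & (dims <= upper)))
```

Once the operator confirms the sample, every subsequent unit is checked against the learned window rather than a hand-entered threshold.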
This automated calibration capability is what allows the platform to maintain consistent detection accuracy across a constantly changing product mix without generating false positives every time a new format appears on the line.

Integration with Manufacturing Execution Systems

The computer vision QA platform connects directly to the client's existing Manufacturing Execution System through a REST API interface that posts inspection results for every single unit in real time. Each record includes the full dimensional measurement data, the surface damage detection scores, the pass/fail classification, and a timestamp that ties the inspection event to the specific production run and shift.
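A per-unit record in that feed might look like the sketch below. The endpoint URL and every field name are illustrative assumptions, not the client's actual MES schema:

```python
import json
from datetime import datetime, timezone
from urllib import request

MES_ENDPOINT = "http://mes.local/api/inspections"  # illustrative URL

def build_record(unit_id, dims_mm, patch_scores, passed, run_id, shift):
    """Assemble one inspection result for the MES feed."""
    return {
        "unit_id": unit_id,
        "dimensions_mm": {
            "length": dims_mm[0], "width": dims_mm[1], "height": dims_mm[2],
        },
        "surface_scores": patch_scores,
        "result": "pass" if passed else "fail",
        "production_run": run_id,
        "shift": shift,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

def post_record(record, endpoint=MES_ENDPOINT, timeout=2.0):
    """POST the record as JSON; a short timeout keeps a slow MES from
    ever stalling the inspection line."""
    req = request.Request(
        endpoint,
        data=json.dumps(record).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req, timeout=timeout) as resp:
        return resp.status
```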
This live data feed replaced a manual manifest entry process where operators logged package dimensions and QA results by hand at the end of each shift. That manual process was slow, error-prone, and created a lag between when a defect occurred on the line and when it appeared in the production record. With the automated feed, the MES dashboard shows defect percentages, damage type distributions, dimensional compliance rates, and line performance metrics updating continuously throughout each shift.

The Results

The system processes over 50K units per day across the client's production lines and maintains a 96% defect detection accuracy rate across all damage types and dimensional deviation categories.
To put that detection rate in physical terms, imagine a warehouse floor covered with 50K tennis balls, where 500 of those balls have a small crack or are slightly the wrong size. The system finds 480 of them while they roll past on a conveyor belt at full production speed. The 20 it misses are the ones with defects so minor that even a trained human inspector holding the package in their hands under good lighting would have a coin-flip chance of catching them.
The operational impact breaks down across three measurable outcomes.
Manual QA labor dropped by 60 percent. The line operators who previously spent the majority of their shifts performing visual spot checks and manual measurement verification were redeployed to higher-value tasks. The system handles the repetitive, high-volume inspection work that human eyes and hands simply cannot sustain at production speed without fatigue-driven accuracy loss.
Product returns decreased by nearly one third. Catching defective and dimensionally incorrect packages before they leave the facility instead of after they reach retailers eliminated a category of returns that had been a persistent and growing cost center for the client.
Rework time dropped by over 40 percent compared to the legacy system. Instant detection and automated diversion of non-conforming units prevented entire batches of defective packaging from traveling further down the production line, which under the old system would require manual sorting, re-inspection, and repackaging at the end of the shift.

Why Manufacturing Defect Detection Is a Hard Computer Vision Problem

Running computer vision defect detection in a production manufacturing environment is a fundamentally different engineering challenge than running image classification in a controlled laboratory setting.
The first difficulty is speed. At 50K units per day, the system has roughly one to two seconds per package to capture sensor data, process the point cloud, run the RGB image through the damage detection model, compare dimensions against tolerances, generate a pass/fail decision, and trigger the mechanical diverter if needed. Every millisecond of processing latency either slows the line or creates a window where a defective unit passes through uncaught. Building a detection pipeline that fits inside that time budget on edge-grade hardware rather than data center GPUs requires aggressive model optimization and careful architecture choices at every layer of the stack.
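The back-of-envelope arithmetic behind that budget is worth making explicit. Assuming the worst case of all 50K daily units flowing through a single station running around the clock, and an illustrative (assumed) split of the cycle across stages:

```python
UNITS_PER_DAY = 50_000
SECONDS_PER_DAY = 24 * 60 * 60

# Worst case: every unit passes through one inspection station.
budget_s = SECONDS_PER_DAY / UNITS_PER_DAY
print(f"{budget_s:.2f} s per unit")  # ~1.73 s, matching the 1-2 s figure

# Illustrative stage allocations in milliseconds (assumed, not measured):
stages = {"capture": 300, "point_cloud": 400, "patch_model": 600,
          "decision_and_divert": 200}
assert sum(stages.values()) / 1000 <= budget_s
```

Spreading units over multiple lines loosens the per-station budget, but any stage that blows its allocation still either slows the belt or lets units through unchecked.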
The second difficulty is variability in what "normal" looks like. Packaging materials vary in color, texture, reflectivity, and print quality from one production batch to the next, even within the same SKU. Lighting conditions on the factory floor shift throughout the day. Conveyor vibration and package orientation introduce noise into both the 3D point cloud data and the RGB imagery. A defect detection model that was trained on clean, well-lit reference images in a lab will generate false positives constantly when it encounters the messy, variable conditions of an actual production line. The model needs to learn the boundary between acceptable variation and actual damage, and that boundary is narrow and shifts with every environmental change on the floor.
The third difficulty is maintaining accuracy across a wide and constantly changing SKU catalog. Every new product format introduces a new definition of what "correct" dimensions and "acceptable" surface appearance look like. A system that requires manual recalibration for every new SKU creates operational overhead that scales linearly with product catalog size, and at some point that overhead exceeds the cost of the defects it was built to catch.

How We Solved It

We addressed the speed constraint by designing lightweight, purpose-built neural network architectures optimized specifically for edge inference on industrial-grade processors rather than porting large general-purpose vision models onto hardware that cannot run them at the required throughput. The damage detection model and the dimensional verification module run in parallel on the same edge node, which means neither one waits for the other to finish before the system generates its pass/fail decision.
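The parallel arrangement can be sketched with a thread pool, so the pass/fail decision waits on whichever module finishes last rather than on their sum. The function names are placeholders for the two modules described above, and threads are a reasonable fit here because real inference calls typically release the interpreter while the accelerator works:

```python
from concurrent.futures import ThreadPoolExecutor

def inspect_unit(image, point_cloud, surface_check, dimension_check):
    """Run the surface and dimensional modules concurrently and combine
    their verdicts into a single pass/fail decision."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        surface_future = pool.submit(surface_check, image)
        dims_future = pool.submit(dimension_check, point_cloud)
        surface_ok = surface_future.result()
        dims_ok = dims_future.result()
    return surface_ok and dims_ok
```

A unit passes only if both modules pass it; either failure alone is enough to trigger the diverter.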
We addressed the variability problem through the local patch analysis approach, where the damage detection model evaluates small regions of each image independently rather than classifying the entire package as a single unit. This makes the model robust to global variation in lighting, material color, and print quality because it only needs to detect whether a specific local patch deviates from the learned reference for that patch location, not whether the entire package image matches a single template.
We addressed the SKU changeover problem through the automated baseline learning system described above, which removes the manual recalibration bottleneck entirely and allows the platform to onboard new product formats with minimal operator involvement and zero downtime on the inspection line.

The Takeaway

This edge-deployed computer vision QA system processes 50K units per day at 96% defect detection accuracy, cut manual quality assurance labor by 60 percent, reduced product returns by nearly a third, and dropped rework time by over 40 percent. It runs entirely on local edge hardware at each conveyor station, connects directly to the client's manufacturing execution system, and handles SKU changeovers automatically without manual recalibration. The client now operates it as a permanent part of their production infrastructure across their packaging lines.

Building something that must work?

Algorithmic is a senior-led software engineering studio that specializes in Full Product Builds, Applied AI & Machine Learning Systems, and Data Science & Analytics. Our team includes PhDs and Master's-level engineers with patents and peer-reviewed publications, bringing senior expertise in data, software, and visual design. We support businesses at every stage of growth.
If you’d like to follow our research, perspectives, and case insights, connect with us on LinkedIn, Instagram, Facebook, or X, or simply write to us at info@algorithmic.co.

Posted Feb 5, 2026

Automated inspection system processing 50K units per day at 96% defect detection accuracy. Cut manual QA labor by 60% and reduced returns by nearly a third.

Timeline

Jan 6, 2026 - Feb 2, 2026

Clients

FMCG