AI Bias & Fairness Audit – Retail Object Detection Model
My role: AI Bias & Fairness Auditor | ML Risk & Compliance Analyst
Project description:
Conducted a full AI Bias & Fairness audit on a retail object detection model using demographic evaluation (gender-based analysis). Measured Disparate Impact, Statistical Parity, Equal Opportunity Difference, and performed statistical significance testing (Chi-Square). Identified category-specific bias and business risk impact. Implemented mitigation via threshold calibration and reweighting. Delivered a professional compliance-ready report with risk matrix and mitigation roadmap.
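The fairness metrics named above can be sketched in a few lines; the toy data, group labels, and variable names below are invented for illustration and are not the audit's actual dataset (SciPy is assumed for the Chi-Square test):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Toy illustration of the audit's core metrics; all data here is invented.
group  = np.array(["F"] * 50 + ["M"] * 50)          # gender of the subject
y_true = np.array([1] * 40 + [0] * 10 +             # ground-truth positives
                  [1] * 45 + [0] * 5)
y_pred = np.array([1] * 30 + [0] * 20 +             # model's positive flags
                  [1] * 40 + [0] * 10)

rate = lambda g: y_pred[group == g].mean()          # per-group selection rate
disparate_impact   = rate("F") / rate("M")          # 4/5ths rule: flag if < 0.8
statistical_parity = rate("F") - rate("M")          # difference form

tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
equal_opportunity_diff = tpr("F") - tpr("M")        # TPR gap between groups

# Chi-Square test of independence: is the outcome independent of gender?
table = [[30, 20], [40, 10]]                        # rows F/M, cols pos/neg
chi2, p, _, _ = chi2_contingency(table)
print(disparate_impact, statistical_parity, equal_opportunity_diff, p)
```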
AI-Powered QA Automation Suite
My role: Senior QA Automation Engineer (Lead)
Project description:
AI-Powered QA Test Case Generator & Execution Dashboard is a comprehensive, enterprise-level quality assurance solution designed to bridge the gap between AI-driven test generation and robust execution reporting. The suite provides a full-cycle automation workflow: generating test cases with GPT-4 from natural-language user stories, executing them across the API and UI layers, and finishing with high-impact business-metric analysis.
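The generation-to-execution handoff can be sketched roughly as follows; the JSON schema, field names, and `plan` helper are assumptions for illustration, not the suite's actual code:

```python
import json

# Hypothetical shape of a GPT-4-generated test case (the model would be
# prompted with a user story and asked to return this structure), plus a
# tiny planner that orders its API-layer steps for execution.
GENERATED = json.dumps({
    "title": "User can log in with valid credentials",
    "steps": [
        {"layer": "api", "method": "POST", "path": "/login",
         "body": {"user": "alice", "pass": "s3cret"}, "expect_status": 200},
        {"layer": "api", "method": "GET", "path": "/profile",
         "expect_status": 200},
    ],
})

def plan(test_case_json: str):
    """Turn a generated test case into an ordered API execution plan."""
    case = json.loads(test_case_json)
    return [f'{s["method"]} {s["path"]} -> {s["expect_status"]}'
            for s in case["steps"] if s["layer"] == "api"]

print(plan(GENERATED))
```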
AI Bias Audit - Object Detection Model (Retail Analytics)
My role: AI Bias & Fairness Specialist / ML Model Auditor
Project description:
Conducted a comprehensive fairness audit of an object detection model (YOLOv8-Large) for retail analytics, identifying gender, skin tone, and age biases. Delivered actionable mitigation recommendations including data rebalancing, adversarial debiasing, and post-processing calibration. Provided Python-based metrics, visualizations, and a detailed report to ensure ethical, reliable, and regulation-compliant AI.
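For an object detection model, the per-group comparison typically matches predictions to ground truth at an IoU threshold and then compares recall across demographic groups. A minimal sketch, with toy boxes and invented group names:

```python
def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

# (demographic group, ground-truth box, best-matching predicted box)
samples = [
    ("group_a", [0, 0, 10, 10], [1, 0, 11, 10]),   # IoU ~0.82 -> matched
    ("group_a", [0, 0, 10, 10], [0, 0, 10, 10]),   # IoU 1.0  -> matched
    ("group_b", [0, 0, 10, 10], [8, 0, 18, 10]),   # IoU ~0.11 -> missed
    ("group_b", [0, 0, 10, 10], [0, 0, 10, 10]),   # matched
]

def recall(group):
    """Fraction of a group's ground-truth objects matched at IoU >= 0.5."""
    hits = [iou(gt, pr) >= 0.5 for g, gt, pr in samples if g == group]
    return sum(hits) / len(hits)

recall_gap = recall("group_a") - recall("group_b")  # equal-opportunity-style gap
print(recall("group_a"), recall("group_b"), recall_gap)
```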
AI Bias Detection & Mitigation Tool for Hiring Models
My role: AI/ML Engineer & Fairness Auditor
Project description:
AI Bias Detection and Mitigation Framework for Recruitment Systems.
Built a complete, production-ready tool that:
- Detects gender, race, and age bias in hiring ML models
- Uses industry-standard fairness metrics (Disparate Impact, Demographic Parity, Equal Opportunity)
- Applies Reweighing mitigation (AIF360), improving fairness while keeping accuracy loss under 2%
- Includes an interactive Gradio demo for live testing
- Provides a full business-impact section and compliance notes (EEOC 4/5ths rule, GDPR, EU AI Act)
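The project uses AIF360's Reweighing implementation; the manual sketch below reproduces the underlying weight formula w(group, label) = P(group) · P(label) / P(group, label) on invented hiring data, to show how it equalizes weighted selection rates:

```python
import numpy as np

# Manual sketch of the Reweighing pre-processing step (toy data; the
# project itself calls AIF360 rather than hand-rolling this).
def reweighing_weights(group, y):
    """Per-sample weights making group and label statistically independent."""
    group, y = np.asarray(group), np.asarray(y)
    w = np.empty(len(y), dtype=float)
    for g in np.unique(group):
        for lab in np.unique(y):
            mask = (group == g) & (y == lab)
            # w = P(group) * P(label) / P(group, label)
            w[mask] = (group == g).mean() * (y == lab).mean() / mask.mean()
    return w

# Toy hiring data: protected group "A" is under-selected (25% vs 50%)
group = np.array(["A"] * 40 + ["B"] * 60)
y     = np.array([1] * 10 + [0] * 30 + [1] * 30 + [0] * 30)
w = reweighing_weights(group, y)

# After reweighing, weighted selection rates match the overall base rate
sel = lambda g: np.average(y[group == g], weights=w[group == g])
print(round(sel("A"), 3), round(sel("B"), 3))
```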