Freelancers using Keras
Power BI Data Analyst + ML AI Automation Expert | Rating: 5.0 | 94 Followers
End-to-End Machine Learning Pipeline for Telecom Customer Churn

1. The Business Problem
Customer churn is a major challenge for telecommunications companies, driven by competition, service issues, and changing consumer preferences. This project was designed to move the company from reactive support to proactive retention using data-driven strategies such as customer segmentation, personalized offers, and loyalty programs.

2. Data Exploration & Insights (EDA)
I performed a comprehensive descriptive analysis on a database of 7,043 customers with 21 distinct variables. Key findings included:
Contractual Risk: Customers on month-to-month contracts showed significantly higher churn than those on one- or two-year commitments.
Service Preference: While Fiber Optic plans were the most popular, they also represented a critical segment to monitor due to their higher price points.
Financial Indicators: Churned customers had a higher average monthly charge ($74.44) than retained customers ($61.27).
Payment Behavior: The "Electronic Check" payment method was most strongly associated with service cancellation.

3. Engineering & Preprocessing Pipeline
To prepare the data for high-performance modeling, I implemented a rigorous preprocessing workflow:
Data Cleaning: Removed irrelevant identifiers such as customerID and addressed potential data quality issues; the dataset was verified to have zero missing or NaN values.
Feature Engineering: Applied Label Encoding to transform categorical text variables into a numerical format suitable for machine learning algorithms.
Data Splitting: Adopted a standard 80/20 train-test split so the model could be evaluated on unseen data.

4. Model Development & Benchmarking
I developed and benchmarked eight distinct machine learning algorithms to identify the most effective solution for this application:
Linear & Probabilistic: Logistic Regression, Naive Bayes.
Tree-Based: Decision Tree, Random Forest.
Boosting Frameworks: AdaBoost, Gradient Boosting, XGBoost, and LightGBM.
A minimal sketch of the preprocessing and benchmarking workflow follows this post.

5. Performance Evaluation & Results
Models were evaluated using ROC curves, confusion matrices, and detailed classification reports.
Winner: Logistic Regression achieved the highest accuracy at 81.83%.
Secondary Performers: Gradient Boosting (81.05%) and AdaBoost (80.98%) also showed strong predictive power.

6. Technical Conclusion
This data-driven approach shows that proactive churn prediction is essential for business sustainability. By identifying that customers prioritize high-speed fiber optic services but are sensitive to pricing and contract terms, the company can now optimize its pricing and retention strategies to maximize user satisfaction and revenue.
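Here is a minimal sketch of how the preprocessing and benchmarking steps described above fit together in scikit-learn. The file name, the random seed, and the two-model subset are illustrative assumptions, not the project's actual code; the column names follow the public Telco churn schema.

```python
# Sketch: label-encode categoricals, 80/20 split, benchmark two of the
# eight models mentioned above. Assumes the public Telco churn schema.
import pandas as pd
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score, classification_report

df = pd.read_csv("telco_churn.csv")           # hypothetical file name
df = df.drop(columns=["customerID"])          # drop the irrelevant identifier

# Label-encode every categorical column, including the target.
for col in df.select_dtypes(include="object").columns:
    df[col] = LabelEncoder().fit_transform(df[col])

X, y = df.drop(columns=["Churn"]), df["Churn"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)     # standard 80/20 split

for name, model in [
    ("Logistic Regression", LogisticRegression(max_iter=1000)),
    ("Gradient Boosting", GradientBoostingClassifier()),
]:
    model.fit(X_train, y_train)
    preds = model.predict(X_test)
    print(name, accuracy_score(y_test, preds))
    print(classification_report(y_test, preds))
```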
Data Analyst turning data insights into business impact. | Rating: 5.0 | 2 Followers
Senior Software Engineer | 10+ Yrs Across Industries | 7 Followers
I’m an AI & Machine Learning engineer with expertise in deve
Everyone's talking about quantum computing. Nobody's using it to feed farmers.

India loses 20–30% of its crop yield every year to diseases and pests. Not because farmers don't care, but because early detection is hard, expensive, and inaccessible to the people who need it most.

The existing solutions? Either a basic image classifier trained on lab-perfect photos that fails in real field conditions, or an agronomist visit that costs time and money most small farmers don't have.

So I built QuantumEdge AgriGuard: a hybrid Quantum Neural Network app where a farmer can photograph a diseased leaf on their phone and get an instant diagnosis in under 5 seconds.

Here's what makes it different from just another plant disease detector. Instead of a pure classical CNN, I built a hybrid architecture: a ResNet/EfficientNet backbone extracts visual features, then passes them into a Variational Quantum Circuit (VQC) for the final classification. The quantum layer uses angle embedding + StronglyEntanglingLayers, which gives it a measurable edge on small, noisy datasets, exactly the kind of data you get from Indian field conditions (a minimal sketch of this head follows this post).

The app doesn't just tell you what disease it is. It gives you:
→ Confidence score
→ Organic + chemical remedies (India-specific)
→ Yield impact estimate
→ A live classical vs. quantum accuracy comparison so you can see the difference yourself

I tested the quantum advantage claim honestly: I ran both models on the same downsampled PlantVillage dataset and tracked accuracy, F1-score, and inference time side by side. The results are on the dashboard. No hand-waving.

Built with PennyLane + PyTorch + Plotly Dash. Designed to run on simulators today and on QpiAI-Indus 25-qubit hardware tomorrow.
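For the curious, here is a minimal sketch of how a hybrid head like this can be wired up in PennyLane + PyTorch: angle embedding plus StronglyEntanglingLayers on a simulator, fed by features from a CNN backbone. The qubit count, circuit depth, feature width, and class count are illustrative assumptions, not the app's actual configuration.

```python
import torch
import torch.nn as nn
import pennylane as qml

n_qubits = 4                    # assumed; real width depends on hardware
num_classes = 38                # PlantVillage-style class count (assumed)
dev = qml.device("default.qubit", wires=n_qubits)   # simulator today

@qml.qnode(dev, interface="torch")
def circuit(inputs, weights):
    # Encode classical features as rotation angles, then entangle.
    qml.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

weight_shapes = {"weights": qml.StronglyEntanglingLayers.shape(
    n_layers=3, n_wires=n_qubits)}
quantum_head = qml.qnn.TorchLayer(circuit, weight_shapes)

model = nn.Sequential(
    nn.Linear(512, n_qubits),   # project CNN features down to qubit count
    nn.Tanh(),                  # bound values for angle embedding
    quantum_head,               # variational quantum circuit
    nn.Linear(n_qubits, num_classes),
)

logits = model(torch.randn(2, 512))   # stand-in for backbone features
print(logits.shape)                   # torch.Size([2, 38])
```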
Building scalable Next.js, Flutter & AI applications | New to Contra
RAG is only as good as the data you feed it. 📄➡️🤖

I am excited to share that I've completed the Build an AI-Powered Document Retrieval System with IBM Granite and Docling lab from IBM SkillsBuild!

While my previous work focused on the RAG pipeline, this lab went deeper into the most critical step: document parsing. We often forget that real-world data isn't clean text; it's locked in complex PDFs and formatted documents.

Technical breakdown of what I built in this hands-on lab (a minimal sketch follows this post):
🔹 Advanced Parsing with Docling: used Docling not just to "read" text, but to understand the structure of documents, preserving context for the AI.
🔹 Embeddings: leveraged IBM Granite models (granite-embedding-30m-english) to convert text into high-quality vector representations.
🔹 Orchestration: used LangChain to manage the flow between the user, the database, and the model.
🔹 Data Processing: implemented document loading and chunking strategies to optimize context windows.
🔹 Synthesis: created a system that retrieves relevant data and generates accurate, fact-based summaries.

This skill allows me to build AI agents that don't just "guess" answers but can accurately retrieve information from complex business documents. The experience has given me the practical skills to build AI applications that are not just "smart," but also accurate and domain-specific.
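Here is a minimal sketch of the parsing-to-retrieval flow, assuming the docling, langchain, and faiss packages. The PDF name, chunk sizes, and query are illustrative assumptions; this is not the lab's exact code.

```python
# Sketch: parse a PDF with Docling, embed chunks with a Granite embedding
# model, and index them for retrieval.
from docling.document_converter import DocumentConverter
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_huggingface import HuggingFaceEmbeddings
from langchain_community.vectorstores import FAISS

# 1. Structure-aware parsing: Docling converts the PDF into a structured
#    document, exported as Markdown to preserve headings and tables.
result = DocumentConverter().convert("business_report.pdf")  # hypothetical file
text = result.document.export_to_markdown()

# 2. Chunking to fit the model's context window.
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_text(text)

# 3. Granite embeddings via the Hugging Face wrapper.
embeddings = HuggingFaceEmbeddings(
    model_name="ibm-granite/granite-embedding-30m-english")

# 4. Index the chunks and retrieve the most relevant ones for a query.
store = FAISS.from_texts(chunks, embeddings)
for doc in store.similarity_search("What were the Q3 revenue drivers?", k=3):
    print(doc.page_content[:200])
```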
VitaCare🚀

1. Immutable Health Records (Blockchain & AES-256 Encryption)
I moved beyond standard database storage to build a Tamper-Proof Medical Ledger. I learned how to implement a hybrid storage strategy where sensitive patient data is encrypted via AES-256 at the application layer before being anchored to a blockchain (a minimal sketch of this encrypt-then-anchor pattern follows this post). This taught me how to ensure absolute data integrity, making medical histories immutable while providing a verifiable audit trail for every access request.

2. Privacy-First Consent Logic (Granular Data Sharing)
Architecting the "Time-Limited Access" protocol taught me how to handle high-stakes privacy. I engineered a system where patients can issue temporary, scoped decryption keys to doctors via smart contracts. This taught me how to implement a Zero-Trust architecture, ensuring that healthcare providers see only what they need, exactly when they need it, with access automatically revoked after a set TTL (Time-To-Live).

3. Edge-Optimized Backend & Secure Validation
By leveraging Supabase Edge Functions, I learned how to move critical business logic closer to the user while maintaining a "Thick-Client, Secure-Server" model. I architected isolated server-side environments for data validation and healthcare-specific compliance checks, which taught me how to drastically reduce latency in high-volume environments without compromising server-side security.

4. Proactive Health Intelligence (Predictive Monitoring)
I leveled up my AI integration skills by building an Advanced Command Center for Disease Surveillance. I learned how to aggregate anonymized, real-time data from disparate sources, including IoT wearable integrations, to generate heatmaps of disease outbreaks. This taught me the complexity of Geospatial Data Engineering and how to turn passive monitoring into proactive healthcare interventions.

5. Multi-Platform Synchronization (Unified Digital Ecosystem)
Building a system that bridges Citizens, Doctors, and Government officials taught me the challenges of Cross-Stakeholder State Management. I learned how to maintain a "Single Source of Truth" across a multilingual Next.js web ecosystem and mobile interfaces, ensuring that a life-saving update on a doctor's portal is reflected on a patient's mobile dashboard in near real time.

6. Inclusive Design & Localized Accessibility
To tackle the diversity of the Indian healthcare landscape, I implemented a Multilingual UI Framework. I learned how to architect a scalable localization layer that supports regional languages, ensuring the platform is accessible to rural citizens. This taught me the importance of Inclusive UX Engineering, where technical complexity is hidden behind a simple, high-impact interface for non-technical users.
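A minimal sketch of the encrypt-then-anchor pattern from section 1, using Python's cryptography package for AES-256-GCM. VitaCare's real stack (Supabase, smart contracts) is not shown; the on-chain anchor step is simulated here as a hash you would write to a ledger, and the record contents are placeholders.

```python
import os, json, hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_record(record: dict, key: bytes) -> dict:
    """AES-256-GCM encrypt a patient record at the application layer."""
    nonce = os.urandom(12)                      # unique nonce per message
    plaintext = json.dumps(record).encode()
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    return {"nonce": nonce.hex(), "ciphertext": ciphertext.hex()}

key = AESGCM.generate_key(bit_length=256)       # 32-byte AES-256 key
blob = encrypt_record({"patient": "anon-123", "dx": "..."}, key)

# Only a digest of the encrypted blob would be anchored on-chain;
# the ciphertext itself stays in off-chain storage.
anchor = hashlib.sha256(bytes.fromhex(blob["ciphertext"])).hexdigest()
print("ledger anchor:", anchor)
```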
MIT Connect🎉

1. Hierarchical Access Control & Multi-Tenant Architecture
I moved beyond basic authentication to implement a granular Role-Based Access Control (RBAC) system. By architecting a "Portal-Switch" logic, I learned how to serve distinct frontend environments (Admin vs. Student) from a unified backend, ensuring that administrative actions like fee management and academic overrides are cryptographically isolated from student-level access.

2. Predictive Academic Logic & Real-time Analytics
Instead of static data display, I engineered a Proactive Attendance Engine. I learned how to write complex backend aggregation pipelines that don't just calculate percentages, but run "Safe-Miss" simulations (see the sketch after this post). This taught me how to transform raw timestamped logs into actionable insights, helping users predict eligibility before it becomes a critical failure point.

3. Optimized Grid Scheduling & Sparse Data Handling
Building the Dynamic Timetable Matrix taught me how to manage high-density relational data with significant "empty states." I learned how to optimize frontend rendering for a 2D coordinate-based schedule (Time vs. Day), ensuring the UI remains performant and responsive even when mapping hundreds of unique course-section combinations across a decentralized database.

4. Financial Integrity & Transactional Consistency
Handling the Invoices and Fee Administration module taught me the importance of ACID compliance. I learned how to architect transactional workflows in the database to ensure that financial records, from generation to payment status, remain immutable and consistent, preventing data drift in multi-step billing cycles.

5. Component-Driven Design & Scalable UI Systems
To maintain consistency across the Analytics and Academic modules, I developed a proprietary library of reusable UI components. I learned how to build "Data-Agnostic" widgets, such as the Stat Cards and the Weekly Trend Bar Charts, that can be hot-swapped across different dashboards, drastically reducing technical debt and ensuring a uniform brand identity.

6. High-Throughput State Management
Building the Intelligence & Analytics suite taught me how to manage global state across a complex dashboard ecosystem. I learned how to implement optimized fetching strategies (like SWR or React Query) to ensure that when an Admin updates an event or a student marks an attendance hour, the change propagates across the entire system without manual refreshes or redundant API overhead.
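To give a flavor of what a "Safe-Miss" simulation computes, here is a minimal, illustrative sketch (in Python for brevity; MIT Connect's actual engine runs in its backend aggregation pipeline, and the 75% eligibility cutoff is an assumed threshold):

```python
import math

def safe_miss(attended: int, held: int, remaining: int,
              threshold: float = 0.75) -> int:
    """How many of the remaining classes can a student miss and still
    finish at or above the attendance threshold?"""
    total = held + remaining
    # Largest m such that (attended + remaining - m) / total >= threshold.
    m = attended + remaining - math.ceil(threshold * total)
    return max(0, min(remaining, m))

# 30 of 38 classes attended so far, 12 left: 4 more misses are safe,
# since attending 8 of the remaining 12 gives 38/50 = 76% >= 75%.
print(safe_miss(attended=30, held=38, remaining=12))  # -> 4
```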
Cognitive Guardian: Building Cognitive Guardian was a massive undertaking that pushed me to bridge the gap between physical hardware and cloud infrastructure. Transitioning it from a conceptual idea to a fully integrated digital tether taught me invaluable lessons in full-stack architecture, IoT, and edge computing.

Architecting Resilient Systems (Failover Logic): I learned how to design a system that doesn't just fail gracefully, but adapts. Building the "Offline Handshake" protocol taught me how to seamlessly hand over session logic from a smartphone (Cellular/BLE) to a microcontroller (LoRaWAN) when entering network dead zones.

Edge AI & Hardware-Software Integration: Instead of relying on cloud-based machine learning (which introduces latency), I learned how to program Edge AI natively on a microcontroller. Writing C++ state machines to calculate 3D acceleration vector magnitudes (via an MPU6050 sensor) taught me how to achieve low-latency, on-device anomaly detection while operating under strict hardware constraints (a sketch of the magnitude check follows this post).

Polyglot Database Strategy: I leveled up my data engineering skills by realizing one database doesn't fit all. I learned how to route high-throughput, real-time GPS telemetry into MongoDB (leveraging 2dsphere indexes for geospatial queries), while using PostgreSQL for strict relational state tracking and Hyperledger Fabric for immutable audit logs.

Privacy by Design (Self-Sovereign Identity): Handling sensitive medical data taught me modern compliance and security. I learned how to implement Decentralized Identifiers (DIDs) on a permissioned blockchain, ensuring that user data remains encrypted and is only temporarily accessible to authorities via smart contracts during an active SOS.

Mobile Battery Optimization & Background Tasks: On the Flutter side, I learned how to handle intensive background processes without draining the user's device. Implementing dynamic location polling tied to the phone's internal accelerometer taught me deep, native-level power optimization for both Android and iOS.

Managing High-Velocity Data Streams: Building the Node.js/Express backend taught me how to handle asynchronous data spikes. I learned to implement rate limiting and use Socket.IO to bypass standard HTTP request cycles, successfully pushing critical hardware SOS alerts to a React web dashboard in under two seconds.
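Here is an illustrative Python port of the acceleration-magnitude check described above; the actual implementation is a C++ state machine on the microcontroller, and the 2.5 g spike threshold and sample values are assumptions, not the device's tuned parameters.

```python
import math

FALL_THRESHOLD_G = 2.5   # assumed spike threshold, in g

def magnitude_g(ax: float, ay: float, az: float) -> float:
    """Magnitude of the 3D acceleration vector, in g."""
    return math.sqrt(ax * ax + ay * ay + az * az)

def is_anomaly(sample: tuple) -> bool:
    # At rest the magnitude sits near 1.0 g (gravity alone); a hard spike
    # well above that suggests a fall or impact and triggers the SOS path.
    return magnitude_g(*sample) > FALL_THRESHOLD_G

print(is_anomaly((0.1, 0.2, 1.0)))   # False: normal movement (~1.02 g)
print(is_anomaly((2.0, 1.8, 1.5)))   # True: impact-like spike (~3.08 g)
```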
Tech Renaissance Leader: AI, FinTech, Web