Quant_Model_Testbench is a lightweight experimentation framework for systematically evaluating machine learning models across feature subsets and hyperparameter combinations.
Instead of manually trying different model configurations, the testbench automates experiment generation, execution, and logging. Results are stored incrementally so experiments can be analyzed later and promising configurations can be refined through deeper searches.
The repository currently demonstrates the framework using the Titanic survival prediction dataset, but the testbench itself is dataset-agnostic and can be applied to any structured dataset.
Motivation
Machine learning experimentation often becomes disorganized:
repeated manual testing
inconsistent experiment tracking
hyperparameter tuning done ad-hoc
results scattered across notebooks
Quant_Model_Testbench addresses this by providing a simple system that:
enumerates feature combinations
tests hyperparameter grids
logs structured experiment results
supports iterative model refinement
The goal is to make model experimentation systematic, reproducible, and analyzable.
Core Idea
The testbench explores model performance along two primary axes.
Feature Subsets
Different combinations of dataset features are tested to determine which subsets contain the strongest predictive signal.
Example feature combinations:
[Pclass, Sex]
[Pclass, Sex, Fare]
[Sex, Age, Fare, Parch]
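Subset enumeration of this kind can be sketched with the standard library. The feature names below come from the Titanic example; the choice of subset sizes (2 through 4) is an assumption for illustration:

```python
from itertools import combinations

# Feature pool from the Titanic example.
features = ["Pclass", "Sex", "Age", "Fare", "Parch"]

# Enumerate every subset of size 2 to 4 for the sweep.
subsets = [
    list(combo)
    for size in range(2, 5)
    for combo in combinations(features, size)
]

print(len(subsets))  # C(5,2) + C(5,3) + C(5,4) = 10 + 10 + 5 = 25
```

Because `combinations` is deterministic, the same feature pool always yields the same sweep order, which keeps runs comparable.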
Hyperparameter Combinations
Each model is evaluated across different hyperparameter settings.
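A grid of hyperparameter settings can be expanded into individual configurations with a Cartesian product. The parameter names below follow scikit-learn's random forest conventions, but the specific grid is a hypothetical example:

```python
from itertools import product

# Hypothetical grid; parameter names follow scikit-learn's RandomForestClassifier.
grid = {
    "n_estimators": [100, 300],
    "max_depth": [3, 5, None],
    "min_samples_leaf": [1, 5],
}

# Expand the grid into one dict per configuration.
keys = list(grid)
configs = [dict(zip(keys, values)) for values in product(*grid.values())]

print(len(configs))  # 2 * 3 * 2 = 12
```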
Typical Workflow
1. Load dataset
2. Run quick feature sweep
3. Identify top feature sets
4. Select promising model
5. Lock feature subset
6. Run full hyperparameter grid
7. Evaluate best configuration
8. Iterate or deploy
This workflow helps prevent:
ad-hoc tuning
lost experiment configurations
unreproducible results
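The sweep-then-refine loop can be sketched end to end. The `evaluate` function below is a placeholder (a real run would train a model and return a cross-validated score), and the feature pool and grids are assumptions:

```python
from itertools import combinations, product

def evaluate(features, params):
    # Placeholder scorer for illustration only; a real implementation
    # would fit a model on `features` with `params` and return a
    # cross-validated metric such as accuracy.
    return len(features) * 0.1 + params["max_depth"] * 0.01

feature_pool = ["Pclass", "Sex", "Age", "Fare"]
coarse_grid = {"max_depth": [3]}          # minimal grid for the broad sweep
fine_grid = {"max_depth": [3, 5, 7]}      # fuller grid for refinement

def run(grid, feature_sets):
    # Score every (feature subset, hyperparameter) combination,
    # best score first.
    keys = list(grid)
    results = []
    for feats in feature_sets:
        for values in product(*grid.values()):
            params = dict(zip(keys, values))
            results.append({"features": feats, "params": params,
                            "score": evaluate(feats, params)})
    return sorted(results, key=lambda r: r["score"], reverse=True)

# Steps 1-3: broad sweep over feature subsets with a minimal grid.
coarse = run(coarse_grid, [list(c) for c in combinations(feature_pool, 2)])
best_feats = coarse[0]["features"]

# Steps 5-7: lock the top subset and run the full grid.
fine = run(fine_grid, [best_feats])
best = fine[0]
print(best["params"])
```

The split into a cheap coarse pass and a focused fine pass is what keeps the total number of trained models manageable.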
Design Goals
Quant_Model_Testbench focuses on:
Reproducibility
Every experiment is logged and recoverable.
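Incremental, recoverable logging can be as simple as appending one JSON object per experiment to a file. The JSONL format and field names below are an assumed sketch, not the framework's actual schema:

```python
import json
import os
import tempfile
from datetime import datetime, timezone

def log_result(path, features, params, score):
    # Append one JSON record per finished experiment, so a partial
    # run still leaves every completed result on disk.
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "features": features,
        "params": params,
        "score": score,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

log_path = os.path.join(tempfile.mkdtemp(), "experiments.jsonl")
log_result(log_path, ["Pclass", "Sex"], {"max_depth": 3}, 0.79)
log_result(log_path, ["Pclass", "Sex", "Fare"], {"max_depth": 5}, 0.81)

# Recover every logged experiment for later analysis.
with open(log_path) as f:
    records = [json.loads(line) for line in f]
print(len(records))  # 2
```

Append-only JSONL avoids rewriting the log on every experiment and survives interrupted runs.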
Structured Exploration
Feature and hyperparameter combinations are generated systematically.
Incremental Research
Broad exploration first, followed by focused optimization.