deep learning notebook, lots of fixes

This commit is contained in:
Pawel Sarkowicz
2026-03-29 16:56:30 -04:00
parent 3ab9b77bb7
commit e7b5db56d2
22 changed files with 1444 additions and 10206 deletions


@@ -0,0 +1,33 @@
### Neural Network Model Summary
**Architecture:**
- Input: 48 features
- Hidden layers: [256, 128, 64]
- Dropout rate: 0.2
- Total parameters: 54,657
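
The stated total of 54,657 parameters is consistent with an MLP that applies batch normalization after each hidden layer and ends in a single regression output; since the notebook code is not shown here, both the batch-norm placement and the output size are assumptions in this sketch:

```python
import torch
import torch.nn as nn

# Hedged sketch of the summarized network: 48 -> 256 -> 128 -> 64 -> 1.
# BatchNorm after each hidden layer is an assumption inferred from the
# parameter count; dropout (p=0.2) adds no parameters of its own.
def build_model(n_features=48, hidden=(256, 128, 64), dropout=0.2):
    layers, width = [], n_features
    for h in hidden:
        layers += [
            nn.Linear(width, h),
            nn.BatchNorm1d(h),
            nn.ReLU(),
            nn.Dropout(dropout),
        ]
        width = h
    layers.append(nn.Linear(width, 1))  # single regression output (assumed)
    return nn.Sequential(*layers)

model = build_model()
n_params = sum(p.numel() for p in model.parameters())
print(n_params)  # -> 54657, matching the total above
```

With this layout the count works out exactly: the three linear layers contribute 12,544 + 32,896 + 8,256 parameters, batch norm adds 2 per hidden unit (896 total), and the output head adds 65.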

**Training:**
- Optimizer: Adam (lr=0.001)
- Early stopping: 25 epochs patience
- Best epoch: 109
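
The early-stopping rule above (25 epochs of patience) can be sketched as a small helper; the notebook's implementation is not shown, so the strict "<" improvement test is an assumption:

```python
# Minimal sketch of patience-based early stopping: stop once `patience`
# epochs pass without a new best validation loss.
class EarlyStopping:
    def __init__(self, patience=25):
        self.patience = patience
        self.best = float("inf")
        self.best_epoch = 0
        self.epoch = 0

    def step(self, val_loss):
        """Record one epoch's validation loss; return True when training should stop."""
        self.epoch += 1
        if val_loss < self.best:  # strict improvement (assumed)
            self.best, self.best_epoch = val_loss, self.epoch
        return self.epoch - self.best_epoch >= self.patience
```

In the notebook's setup this would wrap a loop of Adam (lr=0.001) updates, halting once 25 epochs pass without improvement and keeping the weights from the best epoch (109 above).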

**Test Set Performance:**
- MAE: 1.885
- RMSE: 2.401
- R²: 0.644
- Accuracy within ±1 grade: 33.1%
- Accuracy within ±2 grades: 60.8%
- Exact grouped V-grade accuracy: 27.1%
- Accuracy within ±1 V-grade: 69.4%
- Accuracy within ±2 V-grades: 89.5%
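
The metrics in this table can be reproduced with a small helper. The "within ±k" accuracies are assumed here to round continuous predictions to the nearest grade before comparing; the notebook's exact grouping rule may differ:

```python
import numpy as np

def regression_report(y_true, y_pred, tolerances=(0, 1, 2)):
    """MAE, RMSE, R^2, and accuracy within +/-k grades (rounded predictions)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_pred - y_true
    report = {
        "MAE": float(np.mean(np.abs(err))),
        "RMSE": float(np.sqrt(np.mean(err ** 2))),
        # R^2 = 1 - SS_res / SS_tot
        "R2": float(1.0 - np.sum(err ** 2) / np.sum((y_true - y_true.mean()) ** 2)),
    }
    rounded = np.rint(y_pred)  # snap to the nearest integer grade (assumed)
    for k in tolerances:
        report[f"within_{k}"] = float(np.mean(np.abs(rounded - y_true) <= k))
    return report
```

Here `within_0` plays the role of exact accuracy, and `within_1` / `within_2` correspond to the ±1 and ±2 tolerances reported above.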

**Key Findings:**
1. The neural network is competitive with, but not clearly stronger than, the best tree-based baseline.
2. Fine-grained score prediction remains harder than grouped grade prediction.
3. The grouped V-grade metrics show that the model captures broader difficulty bands more reliably than exact score labels.
4. This makes the neural network useful as a comparison model and potentially valuable as a component of an ensemble.

**Portfolio Interpretation:**
This deep learning notebook extends the classical modelling pipeline by testing whether a neural architecture can improve prediction quality on engineered climbing features.
The main result is not that deep learning wins outright, but that it provides a meaningful benchmark and helps clarify where model complexity does and does not add value.