03 — Modeling

Model families, training objectives, inductive biases, and model selection strategies.

Guiding question: Which model captures the structure of this problem, and how do we train it to generalise?

This layer does NOT cover: mathematical derivations (→ 01_foundations), production pipelines or serving (→ 05_ml_engineering), foundation-model system design (→ 06_ai_engineering), or data preparation (→ 02_data_science).

Sublayers

01 — Supervised Learning

Linear and GLM models, tree ensembles, kernel methods, and instance-based methods.
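As a minimal illustration of an instance-based method from this sublayer, a k-nearest-neighbours classifier can be sketched in plain Python; the toy data and choice of k are made up for the example:

```python
from collections import Counter
import math

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among its k nearest training points."""
    dists = sorted((math.dist(xi, x), yi) for xi, yi in zip(X_train, y_train))
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Two well-separated clusters labelled 0 and 1.
X = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1), (5.0, 5.0), (5.1, 4.9), (4.9, 5.2)]
y = [0, 0, 0, 1, 1, 1]
print(knn_predict(X, y, (0.3, 0.0)))  # point near the first cluster -> 0
```

The only inductive bias here is the distance metric: points close in Euclidean space are assumed to share a label.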

02 — Unsupervised Learning

Clustering, dimensionality reduction, density estimation, and representation learning.
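A representative clustering algorithm is k-means; a minimal sketch of Lloyd's algorithm (with a naive first-k-points initialisation chosen for determinism, where k-means++ would be preferred in practice):

```python
import math

def kmeans(points, k, iters=20):
    """Lloyd's algorithm: alternate nearest-centroid assignment and mean update."""
    centroids = points[:k]  # naive init for the example; k-means++ is better
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda i: math.dist(p, centroids[i]))
            clusters[j].append(p)
        centroids = [
            tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids

pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centers = sorted(kmeans(pts, 2))  # one centroid per blob
```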

03 — Probabilistic Models

Graphical models, latent variable models, and Bayesian methods.
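The simplest Bayesian method in this sublayer is a conjugate update; for a coin's bias, a Beta prior plus a Binomial likelihood yields a Beta posterior in closed form (the prior and counts below are illustrative):

```python
def beta_binomial_update(a, b, heads, tails):
    """Conjugate update: Beta(a, b) prior + Binomial data -> Beta(a+h, b+t) posterior."""
    return a + heads, b + tails

# Uniform prior Beta(1, 1), then observe 7 heads and 3 tails.
a, b = beta_binomial_update(1, 1, heads=7, tails=3)
posterior_mean = a / (a + b)  # Beta(8, 4) has mean 8/12
```

Conjugacy is what makes this a one-line update; non-conjugate models need approximate inference instead.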

04 — Deep Learning

MLPs, CNNs, sequence models, transformers, and multimodal architectures.
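The forward pass of the smallest model here, an MLP, fits in a few lines; the hand-set weights below are a made-up example that happens to implement XOR, a function no single linear layer can represent:

```python
import math

def mlp_forward(x, W1, b1, W2, b2):
    """Two-layer MLP: linear -> tanh hidden layer -> linear -> sigmoid output."""
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    z = sum(w * hi for w, hi in zip(W2, h)) + b2
    return 1 / (1 + math.exp(-z))

# Hand-set weights: h1 saturates to OR(x), h2 to AND(x), output to h1 AND NOT h2.
W1, b1 = [[10, 10], [10, 10]], [-5, -15]
W2, b2 = [10, -10], -5
```

In practice these weights are learned by backpropagation rather than set by hand.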

05 — Time Series

Classical forecasting (ARIMA, ETS), state-space models, and ML/DL approaches to temporal prediction.
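The simplest member of the ETS family is simple exponential smoothing; a sketch of the recursion (the smoothing constant alpha is a tuning choice, shown here with an arbitrary default):

```python
def ses_forecast(series, alpha=0.5):
    """Simple exponential smoothing: level_t = alpha*y_t + (1-alpha)*level_{t-1}.
    The one-step-ahead forecast is the final smoothed level."""
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level
```

Higher alpha weights recent observations more heavily; ARIMA and state-space models generalise this idea with trend, seasonality, and noise structure.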

06 — Training and Regularization

Loss functions, optimisation algorithms, regularisation strategies, hyperparameter tuning, and early stopping.
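Of these, early stopping is the easiest to show in code; a minimal sketch where `val_loss(step)` is a stand-in for one epoch of training plus a validation pass:

```python
def train_with_early_stopping(steps, val_loss, patience=3):
    """Stop when validation loss has not improved for `patience` evaluations."""
    best, best_step, bad = float("inf"), -1, 0
    for step in range(steps):
        loss = val_loss(step)  # placeholder for train-then-validate
        if loss < best:
            best, best_step, bad = loss, step, 0
        else:
            bad += 1
            if bad >= patience:
                break
    return best_step, best
```

Early stopping acts as implicit regularisation: training halts before the model starts fitting noise in the training set.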

07 — Evaluation and Model Selection

Cross-validation, classification and regression metrics, calibration, interpretability methods (SHAP, PDP/ICE), and model risk.
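The index bookkeeping behind k-fold cross-validation can be sketched in a few lines (contiguous, unshuffled folds for simplicity; real use typically shuffles or stratifies):

```python
def kfold_indices(n, k):
    """Split range(n) into k contiguous folds; yield (train, val) index lists."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        val = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n))
        yield train, val
        start += size

folds = list(kfold_indices(10, 3))  # fold sizes 4, 3, 3
```

Each sample appears in exactly one validation fold, so the k validation scores average into a single estimate of generalisation error.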

Relationship to Other Layers

  • 01 Foundations: mathematical prerequisites — linear algebra, calculus, probability, optimisation theory.
  • 02 Data Science: data preparation, feature engineering, and problem framing feed into model training.
  • 05 ML Engineering: taking trained models to production; serving, monitoring, and retraining.
  • 06 AI Engineering: foundation models and LLM-based systems.

7 sublayers under this folder.