End-to-End Examples
Complete ML/AI system reference architectures.
What does a full working system look like for a given use case?
Each note is a self-contained system walkthrough combining at least three components drawn from at least two source layers. Every note includes a component diagram, an implementation sequence, integration code, and links to all constituent patterns and concepts.
Notes
ML Pipelines
- Tabular Classification Pipeline — feature engineering + tree ensembles + SHAP + FastAPI + MLflow
- Batch ML Prediction Pipeline — scheduled batch scoring with DVC, MLflow, and Airflow
- Continuous Training Pipeline — trigger-based retraining with drift detection and registry promotion
- Demand Forecasting Pipeline — hierarchical time series forecasting with SARIMAX/LightGBM + monitoring
- Anomaly Detection Pipeline — unsupervised + rule-based anomaly detection with alerting
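As a flavour of what the ML pipeline walkthroughs cover, here is a minimal sketch of the tabular classification pattern: feature preprocessing and a tree ensemble bundled into one scoring artifact. scikit-learn stands in for the fuller SHAP/FastAPI/MLflow stack named above, and the synthetic dataset is purely illustrative.

```python
# Minimal tabular classification sketch: preprocessing + tree ensemble
# wrapped in a single Pipeline, so the identical transform runs at
# training time and at scoring time (a core theme of these notes).
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for a real tabular dataset.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pipeline = Pipeline([
    ("scale", StandardScaler()),          # feature engineering step
    ("model", GradientBoostingClassifier(random_state=0)),  # tree ensemble
])
pipeline.fit(X_train, y_train)
accuracy = pipeline.score(X_test, y_test)
```

In the full walkthrough this `pipeline` object is what gets logged to the registry and served behind the API, which is why the preprocessing lives inside it rather than in ad hoc scripts.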
Deep Learning & LLM Systems
- Deep Learning Training Workflow — PyTorch + Accelerate + MLflow + distributed training
- NLP Text Classification — transformer fine-tuning + PEFT + evaluation + vLLM serving
- LLM Fine-tuning Pipeline — data curation + SFT + DPO + vLLM with safety guardrails
- RAG Q&A System — document ingestion + vector store + LLM generation + observability
- LLM Coding Assistant — code-specialised RAG + agent + structured output
- Production LLM Serving with Safety — vLLM + LlamaGuard + LangSmith + rate limiting
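The retrieval step at the heart of the RAG systems above can be sketched with a toy example. The bag-of-words "embedding" and cosine ranking below are deliberate stand-ins for a real embedding model and vector store, just to make the ingest-embed-retrieve loop concrete.

```python
# Toy retrieval sketch: embed documents, embed the query, return the
# most similar chunk. Counter-based bag-of-words is a hypothetical
# stand-in for a real embedding model.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in embedding: lowercase bag-of-words term counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list[str]) -> str:
    # Return the single document closest to the query in embedding space.
    return max(docs, key=lambda d: cosine(embed(query), embed(d)))

docs = [
    "MLflow tracks experiments and model versions",
    "vLLM serves large language models with paged attention",
    "Airflow schedules batch prediction jobs",
]
best = retrieve("how do I serve a large language model", docs)
```

A production system swaps in a learned embedding model and an approximate-nearest-neighbour index, then feeds the retrieved chunk into the LLM prompt; the control flow stays the same.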
APIs & Infrastructure
- Containerised API Service — FastAPI + Docker + Kubernetes + CI/CD
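For the containerised API pattern, the core artifact is the Dockerfile. A minimal sketch follows; the `app/main.py` layout exposing an `app` object and the uvicorn entrypoint are assumed conventions, not prescribed by the note.

```dockerfile
# Minimal sketch: containerise a FastAPI service behind uvicorn.
# Assumes an app/ package with main.py exposing `app`.
FROM python:3.12-slim
WORKDIR /srv
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY app/ app/
# Bind to 0.0.0.0 so the container port is reachable from the host.
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]
```

Installing dependencies before copying the application code lets Docker cache the pip layer, so routine code changes rebuild quickly.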