Scaling models while controlling resource consumption is the central challenge of modern AI. This section surveys techniques that decouple strong model performance from prohibitive computational cost, classifying each by its implementation phase—Applied During Training or Applied Post-Training / Inference—and by its primary goal: reducing Training Time, Inference Time, or Both. Together, these methods and their trade-offs outline how to build powerful yet practical solutions for large-scale AI deployment.


Efficient AI Methods: Implementation and Benefit Phase

| Category (Implementation) | Method | Primary Benefit | Rationale / Key Trade-off |
| --- | --- | --- | --- |
| Applied During Training | Efficient Optimizers | Training Time | Optimizers such as Sophia or AdamW converge faster or use less memory, directly cutting training resources. |
| Applied During Training | Distributed Training | Training Time | Techniques such as FSDP and ZeRO distribute memory and compute across devices, enabling faster and larger training runs. |
| Applied During Training | PEFT (LoRA, etc.) | Training Time | Drastically reduces the number of parameters that are updated during fine-tuning. |
| Applied During Training | Efficient Architectures (MoE, SSMs) | Both | Architectures such as MoE and Mamba are inherently more efficient, improving throughput in both training and inference. |
| Applied During Training | Knowledge Distillation (KD) | Inference Time | Produces a smaller, faster student model that is cheap to run for prediction. |
| Applied During Training | Quantization-Aware Training (QAT) | Inference Time | Training is modified solely to ensure the resulting low-precision model performs well at inference. |
| Applied During Training | Gradient-based Neural Architecture Search (NAS) | Inference Time (net) | Trade-off: NAS significantly increases total training time (the search cost) to find an architecture that maximizes inference speed. |
| Applied During Training | Mixed Precision (MP) | Training Time | Uses lower precision (e.g., FP16/BF16) for non-critical calculations in the training loop to reduce memory use and accelerate throughput. |
| Applied Post-Training / Inference | Post-Training Quantization (PTQ) | Inference Time | Weights are reduced in precision after training, immediately shrinking model size and speeding up prediction. |
| Applied Post-Training / Inference | Pruning | Inference Time | Removes redundant structure after the full model is trained, yielding a smaller, faster deployment model. |
| Applied Post-Training / Inference | Low-Rank Factorization (LRF) | Inference Time | Decomposes weight matrices post-training to reduce parameters and FLOPs for deployment. |
| Applied Post-Training / Inference | Model Compilers (TVM, XLA) | Inference Time | Software-level optimization of the computational graph, tailored to specific deployment hardware. |
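The low-rank idea behind PEFT methods like LoRA can be sketched in a few lines of numpy. The layer sizes, rank, and `alpha` scaling below are illustrative assumptions, not values from any particular library:

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r = 64, 64, 4  # hypothetical layer sizes and LoRA rank

W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable low-rank factor
B = np.zeros((d_out, r))                   # trainable, initialized to zero

def lora_forward(x, alpha=8.0):
    # The base path uses the frozen weight; the low-rank update B @ A is
    # the only part whose parameters receive gradients during fine-tuning.
    return W @ x + (alpha / r) * (B @ (A @ x))

full_params = W.size           # 64 * 64 = 4096
lora_params = A.size + B.size  # 4 * 64 + 64 * 4 = 512
print(full_params, lora_params)
```

Because `B` starts at zero, the adapted layer initially reproduces the frozen layer exactly, while only 512 of the 4096 parameters ever need gradients—this is the source of the training-time savings.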
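The KD objective can be sketched as a KL divergence between temperature-softened teacher and student distributions. The logits and temperature below are made-up illustrative values:

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(teacher_logits, student_logits, T=4.0):
    # KL(teacher || student) on temperature-softened distributions; a higher
    # T exposes the teacher's relative preferences among non-top classes.
    # The T*T factor keeps gradient magnitudes comparable across temperatures.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q)))) * T * T

teacher = np.array([5.0, 2.0, -1.0])
student = np.array([4.0, 2.5, -0.5])
print(distillation_loss(teacher, student))
```

The training cost of running the teacher is paid once; the payoff is that only the small student is deployed, which is why the table lists KD's primary benefit as inference time.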
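A rough numerical illustration of the mixed-precision trade-off, assuming FP16 compute with results kept in FP32 (the shapes are arbitrary; real MP training also keeps FP32 master weights and applies loss scaling):

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.standard_normal((64, 256)).astype(np.float32)
W = rng.standard_normal((256, 256)).astype(np.float32)

# Reference matmul in full precision.
y_fp32 = x @ W

# Same matmul with inputs cast to half precision, then stored in float32,
# mirroring the FP16-compute / FP32-storage recipe.
y_mp = (x.astype(np.float16) @ W.astype(np.float16)).astype(np.float32)

# The cost of halving memory traffic is a small numerical error.
rel_err = float(np.abs(y_mp - y_fp32).max() / np.abs(y_fp32).max())
print(rel_err)
```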
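A minimal sketch of symmetric per-tensor PTQ to int8 (the weight matrix here is random; real PTQ pipelines typically also calibrate activation ranges on sample data):

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((128, 128)).astype(np.float32)

# Symmetric per-tensor quantization: one scale maps the float range to int8.
scale = np.abs(W).max() / 127.0
W_q = np.clip(np.round(W / scale), -127, 127).astype(np.int8)  # stored model
W_dq = W_q.astype(np.float32) * scale                          # used at inference

# int8 vs float32: 4x smaller storage; rounding error is bounded by
# half a quantization step (scale / 2).
print(W.nbytes // W_q.nbytes)
print(float(np.abs(W - W_dq).max()))
```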
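Unstructured magnitude pruning can be sketched as follows; the 90% sparsity target is an illustrative choice (deployed models usually need structured sparsity or sparse kernels to realize actual speedups):

```python
import numpy as np

rng = np.random.default_rng(2)
W = rng.standard_normal((256, 256))

def magnitude_prune(w, sparsity=0.9):
    # Zero the smallest-magnitude weights, keeping the top (1 - sparsity)
    # fraction; magnitude is a cheap proxy for a weight's importance.
    k = int(w.size * sparsity)
    threshold = np.partition(np.abs(w).ravel(), k)[k]
    return np.where(np.abs(w) >= threshold, w, 0.0)

W_p = magnitude_prune(W, 0.9)
print(float(np.mean(W_p == 0)))  # fraction of zeroed weights, ~0.9
```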
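A sketch of LRF via truncated SVD, assuming a square layer and an illustrative retained rank; how much rank can be cut without hurting accuracy depends on the layer's actual spectrum:

```python
import numpy as np

rng = np.random.default_rng(3)
d, r = 512, 32  # hypothetical hidden size and retained rank
W = rng.standard_normal((d, d))

# Truncated SVD replaces one dense d x d matmul with two thin ones,
# (d x r) followed by (r x d).
U, S, Vt = np.linalg.svd(W, full_matrices=False)
L = U[:, :r] * S[:r]   # d x r, singular values folded into the left factor
R = Vt[:r, :]          # r x d

orig_params = d * d          # dense layer parameter count (FLOPs scale likewise)
lrf_params = d * r + r * d   # parameter count after factorization
print(orig_params, lrf_params)  # 262144 vs 32768: an 8x reduction
```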