Learning Roadmap
- Transformer architecture (attention, positional encoding); a minimal attention sketch appears below
- Pretraining (next-token prediction, scaling laws); a toy loss example appears below
- Supervised fine-tuning (SFT)
- Preference tuning (RLHF with PPO; DPO as an RL-free alternative); a DPO loss sketch appears below
- Constitutional AI and alignment
- Inference optimization (quantization, speculative decoding); a quantization sketch appears below
- Multimodal models (vision-language)
- Emerging research (mixture-of-experts, long context, reasoning, agents)
Documentation and learning materials will be added here as progress is made; a few minimal starter sketches follow.
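
To ground the attention item above, here is a minimal NumPy sketch of scaled dot-product attention with an optional causal mask. The single-head setup, shapes, and toy inputs are illustrative assumptions, not a full multi-head implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the max for numerical stability before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(q, k, v, mask=None):
    """Scaled dot-product attention ("Attention Is All You Need").

    q, k: (seq_len, d_k) queries and keys; v: (seq_len, d_v) values.
    mask: optional boolean (seq_len, seq_len); True marks positions
          a query is allowed to attend to.
    """
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)            # (seq_len, seq_len) similarities
    if mask is not None:
        scores = np.where(mask, scores, -1e9)  # block disallowed positions
    weights = softmax(scores, axis=-1)         # each row sums to 1
    return weights @ v                         # weighted sum of values

# Tiny causal example: 4 tokens, 8-dimensional head (sizes are arbitrary).
rng = np.random.default_rng(0)
q, k, v = (rng.normal(size=(4, 8)) for _ in range(3))
causal = np.tril(np.ones((4, 4), dtype=bool))  # each token sees only the past
out = scaled_dot_product_attention(q, k, v, mask=causal)
print(out.shape)  # (4, 8)
```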
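For the pretraining item, a toy version of the next-token prediction objective: cross-entropy between each position's logits and the token that follows it. The random logits and tiny vocabulary are placeholder assumptions standing in for real model outputs.

```python
import numpy as np

def next_token_loss(logits, tokens):
    """Average cross-entropy of predicting token t+1 from position t.

    logits: (seq_len, vocab_size) model outputs, one row per input position.
    tokens: (seq_len,) input token ids; targets are the same ids shifted left.
    """
    targets = tokens[1:]   # token t+1 is the label for position t
    preds = logits[:-1]    # the last position has no label, so drop it
    # Numerically stable log-softmax.
    preds = preds - preds.max(axis=-1, keepdims=True)
    log_probs = preds - np.log(np.exp(preds).sum(axis=-1, keepdims=True))
    return -log_probs[np.arange(len(targets)), targets].mean()

# Toy check: 5 tokens from a 10-word vocabulary, random logits.
rng = np.random.default_rng(0)
tokens = rng.integers(0, 10, size=5)
logits = rng.normal(size=(5, 10))
print(next_token_loss(logits, tokens))  # roughly ln(10) ≈ 2.3 for random logits
```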
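For the preference-tuning item, a sketch of the DPO loss (Rafailov et al., 2023) on a single comparison pair. It assumes the per-response log-probabilities have already been computed elsewhere; the numbers in the usage example are made up.

```python
import numpy as np

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO loss for one preference pair.

    Each argument is the total log-probability a model assigns to a response:
    the policy being trained vs. a frozen reference model, on the preferred
    ("chosen") and dispreferred ("rejected") response.
    """
    # Implicit reward of each response: beta * (policy logp - reference logp).
    chosen_reward = beta * (policy_chosen_logp - ref_chosen_logp)
    rejected_reward = beta * (policy_rejected_logp - ref_rejected_logp)
    # Logistic loss pushing the chosen reward above the rejected one:
    # -log sigmoid(margin), written stably as log(1 + exp(-margin)).
    margin = chosen_reward - rejected_reward
    return np.logaddexp(0.0, -margin)

# One toy pair where the policy already slightly prefers the chosen response.
print(dpo_loss(policy_chosen_logp=-20.0, policy_rejected_logp=-24.0,
               ref_chosen_logp=-21.0, ref_rejected_logp=-23.0))  # ~0.60
```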
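For the inference-optimization item, a sketch of symmetric per-tensor int8 weight quantization, one of the simplest schemes. The toy weight matrix is an assumption; production systems typically use per-channel or per-group scales and more careful calibration.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization: w ≈ scale * q."""
    scale = np.abs(w).max() / 127.0  # map the largest |weight| onto 127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(scale=0.02, size=(256, 256)).astype(np.float32)  # toy weights
q, scale = quantize_int8(w)
err = np.abs(w - dequantize(q, scale)).max()
print(f"max abs rounding error: {err:.2e} (scale {scale:.2e})")
```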