Beyond tutorials — exploring the mathematics, theory, and trade-offs behind AI/ML techniques. Written for practitioners who want to understand not just what works, but why.
From Gaussian priors to disentangled representations — a visual guide
Latent space is the compressed, continuous manifold learned by generative models. This article derives the Evidence Lower Bound (ELBO) from first principles, unpacks the role of KL divergence as a regulariser, and explores how the geometry of the latent manifold directly governs sample quality, interpolation smoothness, and disentanglement.
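For reference, the bound in question is the standard one below, written in the usual notation: q_φ(z|x) is the encoder (approximate posterior), p_θ(x|z) the decoder, and p(z) the prior, typically a standard Gaussian. This is the textbook form, not a new result from the article.

```latex
% ELBO: the log-evidence is lower-bounded by reconstruction minus KL regularisation.
\log p_\theta(x) \;\ge\;
\underbrace{\mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big]}_{\text{reconstruction}}
\;-\;
\underbrace{D_{\mathrm{KL}}\big(q_\phi(z \mid x)\,\big\|\,p(z)\big)}_{\text{regularisation toward the prior}}
\;\equiv\; \mathcal{L}_{\mathrm{ELBO}}(\theta, \phi; x)
```

The second term is the KL regulariser discussed above: it pulls the encoder's distribution toward the prior, which is what shapes the geometry of the latent manifold.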
A deep dive into the intrinsic dimensionality hypothesis and rank decomposition
Full fine-tuning updates billions of parameters to adapt a pre-trained LLM — expensive in compute, memory, and storage. LoRA proposes that the weight updates learned during adaptation have low intrinsic rank, enabling effective fine-tuning with <1% of the original parameters. This article explains why this works through the lens of intrinsic dimensionality theory and gradient geometry.
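To make the parameter-count claim concrete, here is a minimal sketch of the low-rank reparameterisation: the pre-trained weight is frozen and only two thin matrices are trained, so the update ΔW = BA has rank at most r. The class name, layer size, rank, and scaling constant below are illustrative choices, not the article's code.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update: W x + (B @ A) x * (alpha / r)."""

    def __init__(self, d_in: int, d_out: int, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = nn.Linear(d_in, d_out, bias=False)
        self.base.weight.requires_grad_(False)               # pre-trained weight stays frozen
        self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)   # down-projection, trained
        self.B = nn.Parameter(torch.zeros(d_out, r))         # up-projection, starts at zero
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen full-rank path plus rank-r correction; only A and B receive gradients.
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

# Illustrative count: a 4096x4096 projection (~16.8M weights) adapted with r=8
# trains only 2 * 8 * 4096 = 65,536 parameters (~0.4% of the frozen layer).
layer = LoRALinear(4096, 4096, r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # 65536
```

Because B is initialised to zero, the adapted layer is exactly the pre-trained one at the start of training; optimisation then moves only the rank-r correction.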
A systematic comparison of orchestration, tool use, and multi-agent coordination
Agentic AI frameworks have proliferated rapidly in 2024-2025. This article systematically benchmarks LangChain, CrewAI, and AutoGen across four dimensions: task completion rate, token efficiency, latency, and ease of multi-agent coordination. The goal is to help practitioners choose the right framework for their use case — not to declare a winner.
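As a rough illustration of what recording those four dimensions can look like, here is a generic, framework-agnostic harness skeleton. The names (TrialResult, run_trial, execute) are hypothetical and not taken from the article; each framework under test would supply its own execute wrapper around its run loop.

```python
from dataclasses import dataclass
import time

@dataclass
class TrialResult:
    """One framework run on one task, recording the four compared dimensions."""
    framework: str
    completed: bool          # did the agent produce an accepted answer?
    tokens_used: int         # prompt + completion tokens across all LLM calls
    latency_s: float         # wall-clock time for the whole task
    coordination_steps: int  # agent-to-agent hand-offs (proxy for coordination overhead)

def run_trial(framework: str, task: str, execute) -> TrialResult:
    """Time one task; `execute` is a hypothetical callable wrapping the framework's
    own run loop and returning (completed, tokens_used, coordination_steps)."""
    start = time.perf_counter()
    completed, tokens, steps = execute(task)
    return TrialResult(framework, completed, tokens, time.perf_counter() - start, steps)

def summarise(results: list[TrialResult]) -> dict:
    """Aggregate per framework: completion rate, token efficiency, mean latency."""
    done = [r for r in results if r.completed]
    return {
        "completion_rate": len(done) / len(results) if results else 0.0,
        "tokens_per_completion": sum(r.tokens_used for r in done) / len(done) if done else float("inf"),
        "mean_latency_s": sum(r.latency_s for r in results) / len(results) if results else 0.0,
    }
```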
More articles coming soon
New deep-dive every 2-3 weeks