Deep-Dive Technical Blog

Beyond tutorials — exploring the mathematics, theory, and trade-offs behind AI/ML techniques. Written for practitioners who want to understand not just what works, but why.

Featured · Mathematics

The Mathematics of Latent Space in Generative Models

From Gaussian priors to disentangled representations — a visual guide

Latent space is the compressed, continuous manifold that a generative model learns over its data. This article derives the Evidence Lower Bound (ELBO) from first principles, unpacks the role of KL divergence as a regulariser, and explores how the geometry of the latent manifold directly governs sample quality, interpolation smoothness, and disentanglement. The bound itself is sketched after this card.

#VAE · #Latent Space · #Information Theory · #Generative Models
Feb 5, 2026 · 20 min read
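
For a flavour of where this article starts: with encoder $q_\phi(z \mid x)$, decoder $p_\theta(x \mid z)$, and prior $p(z)$, Jensen's inequality yields the standard ELBO. This is the textbook bound, not an excerpt from the article:

```latex
\log p_\theta(x)
  = \log \mathbb{E}_{q_\phi(z \mid x)}\!\left[ \frac{p_\theta(x \mid z)\, p(z)}{q_\phi(z \mid x)} \right]
  \ge \underbrace{\mathbb{E}_{q_\phi(z \mid x)}\!\left[ \log p_\theta(x \mid z) \right]}_{\text{reconstruction}}
    - \underbrace{D_{\mathrm{KL}}\!\left( q_\phi(z \mid x) \,\|\, p(z) \right)}_{\text{KL regulariser}}
```

Maximising the first term improves reconstructions, while the KL term pulls the approximate posterior towards the prior. That pull is exactly the geometric pressure the article connects to interpolation smoothness and disentanglement.
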
Efficiency

Why LoRA Is More Efficient Than Full Fine-Tuning

A deep dive into the intrinsic dimensionality hypothesis and rank decomposition

Full fine-tuning updates billions of parameters to adapt a pre-trained LLM — expensive in compute, memory, and storage. LoRA hypothesises that the weight updates learned during adaptation have low intrinsic rank, so a small low-rank decomposition can match full fine-tuning while training under 1% of the original parameters. This article explains why this works through the lens of intrinsic dimensionality theory and gradient geometry; a minimal sketch of the decomposition follows this card.

#LoRA · #Fine-tuning · #PEFT
Feb 12, 2026 · 16 min read
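
For intuition, here is the core decomposition the article analyses: freeze the pre-trained weight $W$ and learn only a low-rank update $\Delta W = BA$ with rank $r \ll \min(d_{\text{in}}, d_{\text{out}})$. The PyTorch class below is an illustrative sketch, not a reference implementation from any PEFT library:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Hypothetical LoRA adapter: y = W x + (alpha / r) * B A x, with W frozen."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                      # freeze pre-trained W (and bias)
        d_out, d_in = base.weight.shape
        self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)  # low-rank factor, r x d_in
        self.B = nn.Parameter(torch.zeros(d_out, r))         # zero-init so delta-W = 0 at start
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # base output plus the scaled low-rank update (x A^T) B^T
        return self.base(x) + self.scale * ((x @ self.A.T) @ self.B.T)
```

With d_in = d_out = 4096 and r = 8, the adapter trains 2 · 8 · 4096 ≈ 65k parameters against roughly 16.8M in the dense layer — about 0.4%, which is where the under-1% figure comes from.
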
Agentic AI

Benchmarking Agentic Frameworks: LangChain vs. CrewAI vs. AutoGen

A systematic comparison of orchestration, tool use, and multi-agent coordination

Agentic AI frameworks have proliferated rapidly in 2024–2025. This article systematically benchmarks LangChain, CrewAI, and AutoGen across four dimensions: task completion rate, token efficiency, latency, and ease of multi-agent coordination. The goal is to help practitioners choose the right framework for their use case — not to declare a winner. A sketch of the measurement harness follows this card.

#LangChain · #CrewAI · #AutoGen
Mar 1, 2026 · 22 min read
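
To make the four dimensions concrete, a framework-agnostic harness along the following lines can collect them per task. This is a hypothetical sketch: the `run_task` adapter interface and the task dict shape are assumptions, and token counts would have to come from each framework's own usage reporting.

```python
import time
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class RunResult:
    framework: str
    task_id: str
    completed: bool      # did the agent satisfy the task's success check?
    tokens_used: int     # prompt + completion tokens, per the framework's usage stats
    latency_s: float     # wall-clock time for the full agent run

def benchmark(framework: str,
              run_task: Callable[[dict], tuple[bool, int]],
              tasks: Iterable[dict]) -> list[RunResult]:
    """Run every task through one framework adapter and record the metrics."""
    results = []
    for task in tasks:
        start = time.perf_counter()
        completed, tokens = run_task(task)   # adapter wraps LangChain/CrewAI/AutoGen
        results.append(RunResult(framework, task["id"], completed, tokens,
                                 time.perf_counter() - start))
    return results
```

Each framework gets its own `run_task` adapter, so completion rate, mean token usage, and latency distributions can all be computed from the same `RunResult` records.
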

More articles coming soon

New deep-dive every 2-3 weeks