AI Logbook
Live Learning Feed


Understanding intelligent systems from first principles.

The Derivative & Gradient Descent

  • Gradient Descent (Mathematical Gravity)
  • The Derivative
  • The Gradient Descent Loop

🧠The Theory

AI/ML Concept: Gradient Descent

This concept, Gradient Descent, is the core learning mechanism of every neural network in existence, from the simplest regressor to Large Language Models.

Imagine dropping a marble into a smooth, curved bowl (your Loss Landscape). Gravity automatically pulls the marble down the steepest slope until it rests perfectly at the bottom center (where Loss = 0).

Gradient Descent is how we program mathematical gravity.

  1. Calculate the Gradient (Slope): Where are we in the bowl?
  2. Take a Step: Update our weight by moving against the slope.
  3. The Learning Rate (α): We don't want to jump too far and fly out the other side of the bowl! We multiply our step by a tiny fraction (like 0.1) to slowly roll down to the bottom.

The weight update formula is the single most important equation in AI training:
w_new = w_old - (learning_rate × gradient)
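To make the update rule concrete, here is one hand-worked step. The numbers (a weight of 4.0, a gradient of 8.0) are illustrative placeholders, not taken from a real model:

```python
# One update step of gradient descent, with made-up illustrative numbers.
w_old = 4.0          # current weight
gradient = 8.0       # slope of the loss at w_old (assumed for this example)
learning_rate = 0.1  # the small fraction alpha that scales the step

# The core formula: step AGAINST the slope, scaled by the learning rate.
w_new = w_old - learning_rate * gradient
print(w_new)  # 4.0 - (0.1 * 8.0) = 3.2
```

One multiplication and one subtraction per weight: that is the entire update.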

📐The Math

Math: The Derivative

To stop guessing random numbers, we need to know exactly how changing a weight will affect our error. In calculus, we measure this using a derivative.

A derivative is simply the exact slope of a curve at one specific point.

Let's imagine the simplest possible error function. Your Loss (L) is just your Weight (w) squared:
L = w²

If w = 4, your loss is 16. If we want to reduce the loss, should we increase w or decrease w?
By taking the derivative of w² (using the basic power rule of calculus, where you multiply the coefficient by the exponent and subtract one from the exponent), we get the slope equation:
dL/dw = 2w

If we plug our weight (w = 4) into our derivative equation (2 · 4), the slope is 8. Because the slope is positive, it tells us the curve is currently going up. To get to the bottom of the curve (0), we must go in the opposite direction. We need to decrease w.

💡Insights and Mistakes

Developer's Insight: The Geometry of the Minus Sign

While implementing the gradient descent loop, I wanted to build a visual intuition for why the mathematical formula works without needing conditional logic.

I reached this intuition by approximating a curve using nearby points and shrinking the gap to one point. The tangent line gives the exact direction at that point.

Moving left→right on a tangent line for a parabola (x²):

  • At x = -1, the slope is negative (downhill).
  • At x = 1, the slope is positive (uphill).

If the tangent line goes down left→right, the change is negative. To find the bottom of the curve (the minimum loss), we must move in the opposite direction.
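The "nearby points, shrinking gap" intuition can be verified directly. This is a small sketch of a secant-slope approximation (the helper approx_slope is my own name, not from the logbook's code): as the gap h shrinks, the approximate slope approaches the true derivative 2x:

```python
def loss(x: float) -> float:
    return x ** 2

def approx_slope(x: float, h: float) -> float:
    """Slope of the secant line through (x, L(x)) and (x + h, L(x + h))."""
    return (loss(x + h) - loss(x)) / h

# At x = 1 the true derivative is 2; the secant slope converges to it:
for h in (1.0, 0.1, 0.001):
    print(approx_slope(1.0, h))   # 3.0, then 2.1, then ~2.001

# At x = -1 the secant slope is negative: downhill moving left-to-right.
print(approx_slope(-1.0, 0.001)) # ~ -2.0
```

For L = x² the algebra confirms it: ((x + h)² - x²) / h = 2x + h, which tends to 2x as h → 0.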

This beautifully explains the minus sign in the core Gradient Descent formula:
w = w - (learning_rate × gradient)

If the gradient is negative, subtracting a negative becomes an addition, pushing the weight to the right (towards 0). If the gradient is positive, subtracting a positive pushes the weight to the left (towards 0). The minus sign acts as a natural, self-correcting directional switch, allowing the loop to find the minimum without any if/else checks.
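A two-line sketch makes the directional switch visible. Starting from either side of the minimum of L = w², the same branch-free formula steps toward 0 (the helper name step is mine):

```python
def step(w: float, lr: float = 0.1) -> float:
    gradient = 2 * w          # derivative of w^2
    return w - lr * gradient  # the minus sign picks the downhill direction

# Same formula, opposite starting sides, both move toward the minimum at 0:
print(step(4.0))   # positive gradient (8)  -> steps LEFT, toward 0
print(step(-4.0))  # negative gradient (-8) -> steps RIGHT, toward 0
```

No conditional logic anywhere: the sign of the gradient itself flips the direction of the step.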

I ran a few experiments to see how the learning_rate (α) parameter actually controls the mathematical gravity. I found three distinct behaviors:

1. Slow Convergence: With a standard learning rate (lr = 0.1), the model takes small, safe steps. It smoothly rides the curve down but requires more epochs to finally reach 0.0000 loss.
2. The Oscillation Trap: I pushed the learning rate up to 1.0 with a starting weight of 10.0. The model instantly broke. The weight updated to -10.0, then back to 10.0, bouncing back and forth endlessly while the loss remained frozen at 100.0. The steps were so massive that it stepped completely over the bottom of the bowl and landed on the opposite wall.
3. Proportional Stepping: I tested w = 10.0 and w = 15.0 using the same lr = 0.2. Both reached zero loss at the exact same time (Epoch 15). This perfectly illustrates how the gradient works: higher up the curve, the slope is steeper, so the formula automatically takes a much larger initial step to cover the distance. As it nears the bottom, the slope flattens, and the steps naturally shrink to avoid overshooting.

This shows that tuning the learning rate is a delicate balancing act: too small, and the model takes forever to train; too large, and it oscillates violently and never learns anything.
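All three behaviors can be reproduced with a tiny helper that runs the update loop for L = w² (the function name run is my own shorthand, not from the code section below):

```python
def run(w: float, lr: float, epochs: int) -> float:
    """Run gradient descent on L = w^2 and return the final weight."""
    for _ in range(epochs):
        w = w - lr * (2 * w)  # gradient of w^2 is 2w
    return w

# 1. Slow convergence: lr=0.1 multiplies w by 0.8 each epoch, creeping to 0.
print(run(10.0, 0.1, 50))

# 2. Oscillation trap: lr=1.0 maps w to -w, so the weight flips forever.
print(run(10.0, 1.0, 1))  # -10.0
print(run(10.0, 1.0, 2))  #  10.0 again; loss stuck at 100.0

# 3. Proportional stepping: lr=0.2 shrinks ANY weight by the same factor 0.6,
#    so w=10 and w=15 take proportionally sized steps and converge in lockstep.
print(run(10.0, 0.2, 1))  # 6.0
print(run(15.0, 0.2, 1))  # 9.0
```

The third case also explains the "Epoch 15" coincidence: with a fixed learning rate on this loss, every starting weight decays by the same per-epoch factor, so different starts cross any loss threshold after the same number of epochs.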

⚙️The Code

def get_gradient(w: float) -> float:
    """Returns the derivative of the loss function L = w^2"""
    return 2 * w

# Example weights
w = 10.0 
learning_rate = 0.1
epochs = 50

print(f"Starting weight: {w}, Starting Loss: {w**2}")

for epoch in range(epochs):
    # Gradient for the current weight
    gradient = get_gradient(w)
    # Gradient Descent formula
    w = w - learning_rate * gradient
    # Current epoch, weight, and loss (w^2)
    print(f"Epoch {epoch + 1}: Weight = {w:.4f}, Loss = {w**2:.4f}")

Code Breakdown

  • def get_gradient(w: float) -> float: This function represents our exact mathematical compass, returning the derivative of w^2.
  • w = w - learning_rate * gradient: The Gradient Descent update rule. Notice there is no if/else statement checking if the error improved. The math guarantees we are stepping in the correct direction.