Gradient Descent: Building Optimization Algorithms from Scratch

Delve into the intricacies of optimization with this immersive course on implementing algorithms from scratch. Bypassing high-level libraries, you will build Stochastic Gradient Descent, Mini-Batch Gradient Descent, and advanced optimization methods such as Momentum, RMSProp, and Adam.

Observing Stochastic Gradient Descent in Action

Tuning the Learning Rate in SGD

Stochastic Sidesteps: Updating Model Parameters

Updating the Linear Regression Model Parameters with SGD
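
The SGD update covered in these lessons can be sketched as follows. This is a minimal illustration of one stochastic step on a linear regression model; the function names, data, and learning rate are illustrative, not the course's own code:

```python
import random

random.seed(0)

def sgd_step(w, b, x, y, lr):
    """One stochastic update of the linear model y_hat = w*x + b,
    using the gradient of 0.5 * (y_hat - y)**2 on a single sample."""
    error = (w * x + b) - y
    grad_w = error * x   # d(loss)/dw
    grad_b = error       # d(loss)/db
    return w - lr * grad_w, b - lr * grad_b

# Recover y = 2x + 1 from noiseless samples, one sample at a time
data = [(x, 2 * x + 1) for x in [0.0, 0.5, 1.0, 1.5, 2.0]]
w, b = 0.0, 0.0
for _ in range(2000):
    x, y = random.choice(data)
    w, b = sgd_step(w, b, x, y, lr=0.05)
```

Because each update uses only one sample, the parameters "sidestep" noisily toward the minimum rather than descending smoothly.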

Mini-Batch Gradient Descent in Action

Calculating Gradients and Errors in MBGD

Calculating Gradients for Mini-Batch Gradient Descent

Adjust the Batch Size in Mini-Batch Gradient Descent
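
The gradient calculation these lessons build averages per-sample gradients over a small batch. A minimal sketch, with illustrative names, data, and hyperparameters (not the course's code):

```python
def minibatch_gradients(w, b, batch):
    """Average the squared-error gradients over a mini-batch
    for the linear model y_hat = w*x + b."""
    grad_w = grad_b = 0.0
    for x, y in batch:
        error = (w * x + b) - y
        grad_w += error * x
        grad_b += error
    n = len(batch)
    return grad_w / n, grad_b / n

def mbgd(data, lr=0.1, batch_size=2, epochs=2000):
    """Sweep the data in consecutive slices of batch_size per epoch."""
    w = b = 0.0
    for _ in range(epochs):
        for i in range(0, len(data), batch_size):
            gw, gb = minibatch_gradients(w, b, data[i:i + batch_size])
            w, b = w - lr * gw, b - lr * gb
    return w, b

# Recover y = 3x - 2 from exact samples
data = [(x, 3 * x - 2) for x in [0.0, 1.0, 2.0, 3.0]]
w, b = mbgd(data)
```

Averaging over a batch reduces the variance of each step compared with pure SGD, while still being cheaper per update than full-batch gradient descent.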

Visualizing Momentum in Gradient Descent

Adjusting Momentum in Gradient Descent

Adding Momentum to Gradient Descent

Optimizing the Roll: Momentum in Gradient Descent
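
The momentum mechanism these lessons add can be sketched as a velocity term that accumulates a decaying sum of past gradients. The objective, coefficients, and names below are illustrative, not the course's code:

```python
def momentum_step(theta, v, grad, lr=0.1, beta=0.9):
    """Momentum update: the velocity v smooths the descent direction
    by blending the current gradient with past steps."""
    v = beta * v + lr * grad
    return theta - v, v

# Minimize f(theta) = theta**2, whose gradient is 2 * theta
theta, v = 5.0, 0.0
for _ in range(150):
    theta, v = momentum_step(theta, v, grad=2 * theta)
```

Like a ball rolling downhill, the iterate gathers speed along consistent gradient directions and damps oscillation across inconsistent ones.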

RMSProp Assisted Space Navigation

Scaling the Optimizer: Adjusting RMSProp with Gamma

Adjust the Decay Rate in the RMSProp Algorithm

Implement RMSProp Update

Implement RMSProp's Squared Gradient Update
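
The two RMSProp pieces named above, the squared-gradient running average and the scaled parameter update, can be sketched together as below. Names, the toy objective, and hyperparameter values are illustrative, not the course's code:

```python
def rmsprop_step(theta, s, grad, lr=0.01, gamma=0.9, eps=1e-8):
    """RMSProp: keep a decaying average s of squared gradients,
    then divide the step by its square root so coordinates with
    large gradients take proportionally smaller steps."""
    s = gamma * s + (1 - gamma) * grad ** 2          # squared-gradient update
    theta = theta - lr * grad / (s ** 0.5 + eps)      # scaled parameter update
    return theta, s

# Minimize f(theta) = theta**2, whose gradient is 2 * theta
theta, s = 5.0, 0.0
for _ in range(1000):
    theta, s = rmsprop_step(theta, s, grad=2 * theta)
```

The decay rate gamma controls how quickly old gradient magnitudes are forgotten, and eps guards against division by zero when s is tiny.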

Optimizing Robot Movements with the Adam Algorithm

Adjusting the Learning Rate in Adam Optimization

Optimize the Orbit: Tuning the Adam Optimizer's Epsilon Parameter

Adam Optimizer: Implement the Coordinate Update
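
The Adam coordinate update combines the two earlier ideas, momentum and RMSProp-style scaling, plus a bias correction for the zero-initialized averages. A minimal sketch with illustrative names, objective, and hyperparameters (not the course's code):

```python
def adam_step(theta, m, v, grad, t, lr=0.1,
              beta1=0.9, beta2=0.999, eps=1e-8):
    """Adam: m is a momentum-style average of gradients, v an
    RMSProp-style average of squared gradients; both are
    bias-corrected because they start at zero."""
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)   # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)   # bias-corrected second moment
    theta = theta - lr * m_hat / (v_hat ** 0.5 + eps)
    return theta, m, v

# Minimize f(theta) = theta**2, whose gradient is 2 * theta
theta, m, v = 5.0, 0.0, 0.0
for t in range(1, 501):           # t starts at 1 for bias correction
    theta, m, v = adam_step(theta, m, v, grad=2 * theta, t=t)
```

The epsilon parameter tuned in the lesson above plays the same role as in RMSProp: it keeps the denominator away from zero when the squared-gradient average v_hat is very small.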