Link to Andrej Karpathy's video: The spelled-out intro to neural networks and backpropagation: building micrograd
And for those wondering why I reimplemented micrograd: to refine my understanding of backpropagation, to check whether I truly understood the content of the video (which is absolutely amazing, by the way), and maybe later to build simple MLPs from scratch using this as the autograd engine.
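For context, the core idea is a minimal sketch of a micrograd-style scalar autograd engine, following the Value-object design Karpathy builds in the video (the names and exact structure here are my own illustrative version, not a copy of this repo's code):

```python
class Value:
    """A scalar value that records the ops that produced it for backprop."""

    def __init__(self, data, _children=()):
        self.data = data
        self.grad = 0.0
        self._backward = lambda: None  # set by the op that creates this node
        self._prev = set(_children)

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def _backward():
            # d(out)/d(self) = 1, d(out)/d(other) = 1
            self.grad += out.grad
            other.grad += out.grad
        out._backward = _backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def _backward():
            # product rule: gradients swap in the other operand
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = _backward
        return out

    def backward(self):
        # build a topological order of the graph, then apply the
        # chain rule node by node from the output back to the leaves
        topo, visited = [], set()
        def build(v):
            if v not in visited:
                visited.add(v)
                for child in v._prev:
                    build(child)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            v._backward()

a = Value(2.0)
b = Value(3.0)
c = a * b + a        # c = 2*3 + 2 = 8
c.backward()
print(c.data, a.grad, b.grad)  # 8.0 4.0 2.0  (dc/da = b + 1, dc/db = a)
```

Extending this with a few more ops (`tanh`, `__pow__`, etc.) is enough to train a small MLP, which is the direction the video takes.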