
Friday, May 5, 2023

Derivation of Backpropagation



  1. Forward Pass: The input example is fed through the network one layer at a time. Each layer computes a weighted sum of its inputs plus a bias, applies an activation function to that sum, and passes the result on to the next layer. This continues until the output layer is reached, at which point the network's predicted output is obtained (this step is written out in symbols after the list).

  2. Cost Function: The cost function measures how far the network's predicted output is from the true output for the given input example. Many cost functions can be used, but one of the most common is the mean squared error (MSE): the average of the squared differences between the predicted and true outputs (its formula also appears after the list).

  3. Backward Pass: The gradient of the cost function with respect to each weight is computed using the chain rule of calculus. The derivative of the cost with respect to each neuron's output is computed first at the output layer, and this error is then propagated backwards through the network, layer by layer. The result is one gradient per weight, which is exactly what is needed to adjust the weights (the derivation after the list works this out for a single output neuron).

  4. Update Weights: The weights are adjusted in the direction of the negative gradient of the cost function, using an optimization algorithm such as stochastic gradient descent. Each weight is changed by a small amount proportional to the gradient of the cost with respect to that weight; the constant of proportionality is the learning rate. This process is repeated for each input example in the training set, with the weights updated after each iteration.
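To make the steps concrete, here is a minimal sketch of the derivation for a single output neuron with a sigmoid activation, trained on one example. The symbols are assumptions introduced for this sketch only: input x, weights w, bias b, pre-activation z, activation a, true output y, and learning rate η.

```latex
% Sketch: one sigmoid output neuron, one training example.
\begin{align}
  z &= w^{\top} x + b, \qquad a = \sigma(z) = \frac{1}{1 + e^{-z}}
      && \text{(1. forward pass)} \\
  C &= \tfrac{1}{2}\,(a - y)^2
      && \text{(2. cost; the } \tfrac{1}{2} \text{ just simplifies the derivative)} \\
  \frac{\partial C}{\partial w_i}
    &= \frac{\partial C}{\partial a}\,
       \frac{\partial a}{\partial z}\,
       \frac{\partial z}{\partial w_i}
     = (a - y)\, a\,(1 - a)\, x_i
      && \text{(3. backward pass, chain rule)} \\
  \frac{\partial C}{\partial b}
    &= (a - y)\, a\,(1 - a)
      && \text{(since } \partial z / \partial b = 1\text{)} \\
  w_i &\leftarrow w_i - \eta\, \frac{\partial C}{\partial w_i},
  \qquad
  b \leftarrow b - \eta\, \frac{\partial C}{\partial b}
      && \text{(4. update weights)}
\end{align}
```

For a deeper network the same chain rule is simply applied again at each earlier layer: the output error term (a − y)·a·(1 − a) is multiplied by that layer's weights and by the derivative of its activation as it is propagated backwards, which is exactly what step 3 describes.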

By repeating these four steps over many iterations, backpropagation combined with gradient descent gradually adjusts the weights of the network so as to minimize the cost function and produce accurate predictions for new input examples. The short sketch below shows the same loop in code.
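As a concrete illustration, here is a minimal NumPy sketch of the four steps for a tiny network with one sigmoid hidden layer and a sigmoid output, trained on the XOR toy problem. The layer sizes, learning rate, and iteration count are arbitrary choices for this example, not part of the algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: XOR (4 examples, 2 inputs, 1 output).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Arbitrary sizes for this sketch: 2 inputs, 4 hidden units, 1 output.
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)
eta = 0.5  # learning rate (assumed value)

for step in range(20000):
    # 1. Forward pass: weighted sum + activation, layer by layer.
    z1 = X @ W1 + b1; a1 = sigmoid(z1)
    z2 = a1 @ W2 + b2; a2 = sigmoid(z2)

    # 2. Cost: mean squared error over the batch.
    cost = np.mean((a2 - Y) ** 2)

    # 3. Backward pass: chain rule from the output layer back to the input.
    d_a2 = 2 * (a2 - Y) / len(X)      # dC/da2
    d_z2 = d_a2 * a2 * (1 - a2)       # dC/dz2  (sigmoid derivative a(1-a))
    d_W2 = a1.T @ d_z2                # dC/dW2
    d_b2 = d_z2.sum(axis=0)           # dC/db2
    d_a1 = d_z2 @ W2.T                # error propagated to the hidden layer
    d_z1 = d_a1 * a1 * (1 - a1)
    d_W1 = X.T @ d_z1
    d_b1 = d_z1.sum(axis=0)

    # 4. Update weights: a small step against the gradient.
    W2 -= eta * d_W2; b2 -= eta * d_b2
    W1 -= eta * d_W1; b1 -= eta * d_b1

# The cost should have dropped well below its starting value and the
# predictions should be close to the targets [0, 1, 1, 0].
print(cost, a2.round(2).ravel())
```

This version performs full-batch gradient descent (all four examples at once); stochastic gradient descent, mentioned in step 4, would run the same four steps on one example or one mini-batch at a time.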
