An overview of gradient descent optimization algorithms.
Gradient descent variants
Challenges
Choosing a proper learning rate can be difficult.
Additionally, the same learning rate applies to all parameter updates. If our data is sparse and our features have very different frequencies, we might not want to update all of them to the same extent, but perform a larger update for rarely occurring features.
Another key challenge of minimizing highly non-convex error functions common for neural networks is avoiding getting trapped in their numerous suboptimal local minima. Dauphin et al. [3] argue that the difficulty arises in fact not from local minima but from saddle points, i.e. points where one dimension slopes up and another slopes down. These saddle points are usually surrounded by a plateau of the same error, which makes it notoriously hard for SGD to escape, as the gradient is close to zero in all dimensions.
Gradient descent optimization algorithms
Momentum
SGD has trouble navigating ravines, i.e. areas where the surface curves much more steeply in one dimension than in another [4], which are common around local optima. In these scenarios, SGD oscillates across the slopes of the ravine while only making hesitant progress along the bottom towards the local optimum as in Image 2.
Image 2: SGD without momentum | Image 3: SGD with momentum
Momentum [5] is a method that helps accelerate SGD in the relevant direction and dampens oscillations as can be seen in Image 3. It does this by adding a fraction $\gamma$ of the update vector of the past time step to the current update vector:
$$v_t = \gamma v_{t-1} + \eta \nabla_\theta J(\theta)$$
$$\theta = \theta - v_t$$
Note: Some implementations exchange the signs in the equations. The momentum term $\gamma$ is usually set to 0.9 or a similar value.
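As a concrete illustration, here is a minimal NumPy sketch of the momentum update. The quadratic objective `grad_J`, its curvature, and the iteration count are illustrative assumptions, not part of the method itself.

```python
import numpy as np

def grad_J(theta):
    # Gradient of an illustrative quadratic objective J(theta) = 0.5 * theta^T A theta,
    # chosen only to make the example runnable; a "ravine" that is much steeper in one dimension.
    A = np.diag([10.0, 1.0])
    return A @ theta

theta = np.array([1.0, 1.0])
v = np.zeros_like(theta)   # accumulated update vector v_t
gamma, eta = 0.9, 0.01     # momentum term and learning rate

for t in range(100):
    v = gamma * v + eta * grad_J(theta)   # v_t = gamma * v_{t-1} + eta * grad
    theta = theta - v                     # theta = theta - v_t

print(theta)  # moves towards the minimum at the origin
```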
Nesterov accelerated gradient (NAG: an improved momentum)
However, a ball that rolls down a hill, blindly following the slope, is highly unsatisfactory. We'd like to have a smarter ball, a ball that has a notion of where it is going so that it knows to slow down before the hill slopes up again.
Nesterov accelerated gradient (NAG) [6] is a way to give our momentum term this kind of prescience. We know that we will use our momentum term $\gamma v_{t-1}$ to move the parameters $\theta$. Computing $\theta - \gamma v_{t-1}$ thus gives us an approximation of the next position of the parameters (the gradient is missing for the full update), a rough idea where our parameters are going to be. We can now effectively look ahead by calculating the gradient not w.r.t. our current parameters $\theta$ but w.r.t. the approximate future position of our parameters:
$$v_t = \gamma v_{t-1} + \eta \nabla_\theta J(\theta - \gamma v_{t-1})$$
$$\theta = \theta - v_t$$
Again, we set the momentum term $\gamma$ to a value of around 0.9.
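A sketch of the NAG update under the same assumed toy objective as above; the only change from the momentum sketch is that the gradient is evaluated at the look-ahead point `theta - gamma * v`.

```python
import numpy as np

def grad_J(theta):
    # Illustrative quadratic objective (an assumption for the example).
    A = np.diag([10.0, 1.0])
    return A @ theta

theta = np.array([1.0, 1.0])
v = np.zeros_like(theta)
gamma, eta = 0.9, 0.01

for t in range(100):
    lookahead = theta - gamma * v            # approximate next position of the parameters
    v = gamma * v + eta * grad_J(lookahead)  # gradient w.r.t. the approximate future position
    theta = theta - v

print(theta)
```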
Adagrad (adaptive learning rates: different parameters use different learning rates)
Adagrad [9] is an algorithm for gradient-based optimization that does just this: It adapts the learning rate to the parameters, performing smaller updates (i.e. low learning rates) for parameters associated with frequently occurring features, and larger updates (i.e. high learning rates) for parameters associated with infrequent features. For this reason, it is well-suited for dealing with sparse data. Dean et al. [10] have found that Adagrad greatly improved the robustness of SGD and used it for training large-scale neural nets at Google, which -- among other things -- learned to recognize cats in Youtube videos. Moreover, Pennington et al. [11] used Adagrad to train GloVe word embeddings, as infrequent words require much larger updates than frequent ones.
Previously, we performed an update for all parameters $\theta$ at once as every parameter $\theta_i$ used the same learning rate $\eta$. As Adagrad uses a different learning rate for every parameter $\theta_i$ at every time step $t$, we first show Adagrad's per-parameter update, which we then vectorize. For brevity, we use $g_t$ to denote the gradient at time step $t$. $g_{t, i}$ is then the partial derivative of the objective function w.r.t. the parameter $\theta_i$ at time step $t$:
$$g_{t, i} = \nabla_\theta J(\theta_{t, i})$$
The SGD update for every parameter $\theta_i$ at each time step $t$ then becomes:
$$\theta_{t+1, i} = \theta_{t, i} - \eta \cdot g_{t, i}$$
In its update rule, Adagrad modifies the general learning rate $\eta$ at each time step $t$ for every parameter $\theta_i$ based on the past gradients that have been computed for $\theta_i$:
$$\theta_{t+1, i} = \theta_{t, i} - \frac{\eta}{\sqrt{G_{t, ii} + \epsilon}} \cdot g_{t, i}$$
$G_t \in \mathbb{R}^{d \times d}$ here is a diagonal matrix where each diagonal element $i, i$ is the sum of the squares of the gradients w.r.t. $\theta_i$ up to time step $t$ [12], while $\epsilon$ is a smoothing term that avoids division by zero (usually on the order of $1e{-8}$). Interestingly, without the square root operation, the algorithm performs much worse.
As $G_t$ contains the sum of the squares of the past gradients w.r.t. all parameters $\theta$ along its diagonal, we can now vectorize our implementation by performing an element-wise matrix-vector product $\odot$ between $G_t$ and $g_t$:
$$\theta_{t+1} = \theta_t - \frac{\eta}{\sqrt{G_t + \epsilon}} \odot g_t$$
One of Adagrad's main benefits is that it eliminates the need to manually tune the learning rate. Most implementations use a default value of 0.01 and leave it at that.
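A minimal sketch of the vectorized Adagrad update, keeping only the diagonal of $G_t$ as a vector of accumulated squared gradients; the toy gradient function and iteration count are assumptions for illustration.

```python
import numpy as np

def grad_J(theta):
    # Illustrative gradient of a toy quadratic objective (an assumption for the example).
    return np.array([10.0, 1.0]) * theta

theta = np.array([1.0, 1.0])
G = np.zeros_like(theta)   # running sum of squared gradients (diagonal of G_t)
eta, eps = 0.01, 1e-8      # common default learning rate and smoothing term

for t in range(1000):
    g = grad_J(theta)
    G += g ** 2                           # G only grows, so the effective learning rate keeps shrinking
    theta -= eta / np.sqrt(G + eps) * g   # per-parameter learning rate

print(theta)
```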
Adagrad's main weakness is its accumulation of the squared gradients in the denominator: Since every added term is positive, the accumulated sum keeps growing during training. This in turn causes the learning rate to shrink and eventually become infinitesimally small, at which point the algorithm is no longer able to acquire additional knowledge. The following algorithms aim to resolve this flaw.
Adadelta (an improvement on Adagrad)
Adadelta [13] is an extension of Adagrad that seeks to reduce its aggressive, monotonically decreasing learning rate. Instead of accumulating all past squared gradients, Adadelta restricts the window of accumulated past gradients to some fixed size $w$.
Instead of inefficiently storing $w$ previous squared gradients, the sum of gradients is recursively defined as a decaying average of all past squared gradients. The running average $E[g^2]_t$ at time step $t$ then depends (as a fraction $\gamma$ similarly to the Momentum term) only on the previous average and the current gradient:
$$E[g^2]_t = \gamma E[g^2]_{t-1} + (1 - \gamma) g^2_t$$
We set $\gamma$ to a similar value as the momentum term, around 0.9. For clarity, we now rewrite our vanilla SGD update in terms of the parameter update vector $\Delta\theta_t$:
$$\Delta\theta_t = -\eta \cdot g_{t, i}$$
$$\theta_{t+1} = \theta_t + \Delta\theta_t$$
The parameter update vector of Adagrad that we derived previously thus takes the form:
$$\Delta\theta_t = -\frac{\eta}{\sqrt{G_t + \epsilon}} \odot g_t$$
We now simply replace the diagonal matrix $G_t$ with the decaying average over past squared gradients $E[g^2]_t$:
$$\Delta\theta_t = -\frac{\eta}{\sqrt{E[g^2]_t + \epsilon}} g_t$$
As the denominator is just the root mean squared (RMS) error criterion of the gradient, we can replace it with the criterion short-hand:
$$\Delta\theta_t = -\frac{\eta}{RMS[g]_t} g_t$$
The authors note that the units in this update (as well as in SGD, Momentum, or Adagrad) do not match, i.e. the update should have the same hypothetical units as the parameter. To realize this, they first define another exponentially decaying average, this time not of squared gradients but of squared parameter updates:
$$E[\Delta\theta^2]_t = \gamma E[\Delta\theta^2]_{t-1} + (1 - \gamma) \Delta\theta^2_t$$
The root mean squared error of parameter updates is thus:
$$RMS[\Delta\theta]_t = \sqrt{E[\Delta\theta^2]_t + \epsilon}$$
Since $RMS[\Delta\theta]_t$ is unknown, we approximate it with the RMS of parameter updates until the previous time step. Replacing the learning rate $\eta$ in the previous update rule with $RMS[\Delta\theta]_{t-1}$ finally yields the Adadelta update rule:
$$\Delta\theta_t = -\frac{RMS[\Delta\theta]_{t-1}}{RMS[g]_t} g_t$$
$$\theta_{t+1} = \theta_t + \Delta\theta_t$$
With Adadelta, we do not even need to set a default learning rate, as it has been eliminated from the update rule.
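A minimal Adadelta sketch following the two decaying averages above; the toy gradient, the value of $\epsilon$, and the iteration count are assumptions for illustration.

```python
import numpy as np

def grad_J(theta):
    # Illustrative gradient of a toy quadratic objective (an assumption for the example).
    return np.array([10.0, 1.0]) * theta

theta = np.array([1.0, 1.0])
E_g2 = np.zeros_like(theta)    # decaying average of squared gradients E[g^2]
E_dx2 = np.zeros_like(theta)   # decaying average of squared updates  E[dtheta^2]
gamma, eps = 0.9, 1e-6

for t in range(1000):
    g = grad_J(theta)
    E_g2 = gamma * E_g2 + (1 - gamma) * g ** 2
    # RMS of previous parameter updates replaces the learning rate
    delta = -np.sqrt(E_dx2 + eps) / np.sqrt(E_g2 + eps) * g
    E_dx2 = gamma * E_dx2 + (1 - gamma) * delta ** 2
    theta += delta

print(theta)
```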
RMSprop (a simplified Adadelta; its first updates are similar to Adadelta's)
RMSprop is an unpublished, adaptive learning rate method proposed by Geoff Hinton in Lecture 6e of his Coursera Class.
RMSprop and Adadelta have both been developed independently around the same time, stemming from the need to resolve Adagrad's radically diminishing learning rates. RMSprop in fact is identical to the first update vector of Adadelta that we derived above:
$$E[g^2]_t = 0.9 E[g^2]_{t-1} + 0.1 g^2_t$$
$$\theta_{t+1} = \theta_t - \frac{\eta}{\sqrt{E[g^2]_t + \epsilon}} g_t$$
RMSprop as well divides the learning rate by an exponentially decaying average of squared gradients. Hinton suggests $\gamma$ to be set to 0.9, while a good default value for the learning rate $\eta$ is 0.001.
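For completeness, a sketch of the RMSprop update with Hinton's suggested defaults; the toy gradient and iteration count are assumptions for illustration.

```python
import numpy as np

def grad_J(theta):
    # Illustrative gradient of a toy quadratic objective (an assumption for the example).
    return np.array([10.0, 1.0]) * theta

theta = np.array([1.0, 1.0])
E_g2 = np.zeros_like(theta)          # decaying average of squared gradients
gamma, eta, eps = 0.9, 0.001, 1e-8   # suggested defaults

for t in range(2000):
    g = grad_J(theta)
    E_g2 = gamma * E_g2 + (1 - gamma) * g ** 2
    theta -= eta / np.sqrt(E_g2 + eps) * g

print(theta)
```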
Adam (a combination of momentum and Adadelta)
Adaptive Moment Estimation (Adam) [14] is another method that computes adaptive learning rates for each parameter. In addition to storing an exponentially decaying average of past squared gradients $v_t$ like Adadelta and RMSprop, Adam also keeps an exponentially decaying average of past gradients $m_t$, similar to momentum. Whereas momentum can be seen as a ball running down a slope, Adam behaves like a heavy ball with friction, which thus prefers flat minima in the error surface [15]. We compute the decaying averages of past and past squared gradients $m_t$ and $v_t$ respectively as follows:
$$m_t = \beta_1 m_{t-1} + (1 - \beta_1) g_t$$
$$v_t = \beta_2 v_{t-1} + (1 - \beta_2) g^2_t$$
$m_t$ and $v_t$ are estimates of the first moment (the mean) and the second moment (the uncentered variance) of the gradients respectively, hence the name of the method. As $m_t$ and $v_t$ are initialized as vectors of 0's, the authors of Adam observe that they are biased towards zero, especially during the initial time steps, and especially when the decay rates are small (i.e. $\beta_1$ and $\beta_2$ are close to 1).
They counteract these biases by computing bias-corrected first and second moment estimates:
$$\hat{m}_t = \frac{m_t}{1 - \beta^t_1}$$
$$\hat{v}_t = \frac{v_t}{1 - \beta^t_2}$$
They then use these to update the parameters just as we have seen in Adadelta and RMSprop, which yields the Adam update rule:
$$\theta_{t+1} = \theta_t - \frac{\eta}{\sqrt{\hat{v}_t} + \epsilon} \hat{m}_t$$
The authors propose default values of 0.9 for $\beta_1$, 0.999 for $\beta_2$, and $10^{-8}$ for $\epsilon$. They show empirically that Adam works well in practice and compares favorably to other adaptive learning-rate algorithms.
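A minimal sketch of the Adam update with the proposed defaults; the toy gradient, learning rate, and iteration count are assumptions for illustration.

```python
import numpy as np

def grad_J(theta):
    # Illustrative gradient of a toy quadratic objective (an assumption for the example).
    return np.array([10.0, 1.0]) * theta

theta = np.array([1.0, 1.0])
m = np.zeros_like(theta)   # first moment estimate
v = np.zeros_like(theta)   # second moment estimate
eta, beta1, beta2, eps = 0.001, 0.9, 0.999, 1e-8

for t in range(1, 2001):
    g = grad_J(theta)
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g ** 2
    m_hat = m / (1 - beta1 ** t)   # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)   # bias-corrected second moment
    theta -= eta / (np.sqrt(v_hat) + eps) * m_hat

print(theta)
```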
AdaMax
The $v_t$ factor in the Adam update rule scales the gradient inversely proportionally to the $\ell_2$ norm of the past gradients (via the $v_{t-1}$ term) and current gradient $|g_t|^2$:
$$v_t = \beta_2 v_{t-1} + (1 - \beta_2) |g_t|^2$$
We can generalize this update to the $\ell_p$ norm. Note that Kingma and Ba also parameterize $\beta_2$ as $\beta_2^p$:
$$v_t = \beta_2^p v_{t-1} + (1 - \beta_2^p) |g_t|^p$$
Norms for large $p$ values generally become numerically unstable, which is why $\ell_1$ and $\ell_2$ norms are most common in practice. However, $\ell_\infty$ also generally exhibits stable behavior. For this reason, the authors propose AdaMax (Kingma and Ba, 2015) and show that $v_t$ with $\ell_\infty$ converges to the following more stable value. To avoid confusion with Adam, we use $u_t$ to denote the infinity norm-constrained $v_t$:
$$u_t = \beta_2^\infty v_{t-1} + (1 - \beta_2^\infty) |g_t|^\infty = \max(\beta_2 \cdot v_{t-1}, |g_t|)$$
We can now plug this into the Adam update equation by replacing $\sqrt{\hat{v}_t} + \epsilon$ with $u_t$ to obtain the AdaMax update rule:
$$\theta_{t+1} = \theta_t - \frac{\eta}{u_t} \hat{m}_t$$
Note that as $u_t$ relies on the $\max$ operation, it is not as suggestible to bias towards zero as $m_t$ and $v_t$ in Adam, which is why we do not need to compute a bias correction for $u_t$. Good default values are again $\eta = 0.002$, $\beta_1 = 0.9$, and $\beta_2 = 0.999$.
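A sketch of the AdaMax update; note that only the first moment needs bias correction. The toy gradient and iteration count are assumptions for illustration.

```python
import numpy as np

def grad_J(theta):
    # Illustrative gradient of a toy quadratic objective (an assumption for the example).
    return np.array([10.0, 1.0]) * theta

theta = np.array([1.0, 1.0])
m = np.zeros_like(theta)   # first moment estimate
u = np.zeros_like(theta)   # infinity-norm-constrained second moment
eta, beta1, beta2 = 0.002, 0.9, 0.999

for t in range(1, 2001):
    g = grad_J(theta)
    m = beta1 * m + (1 - beta1) * g
    u = np.maximum(beta2 * u, np.abs(g))   # u_t = max(beta2 * u_{t-1}, |g_t|)
    m_hat = m / (1 - beta1 ** t)           # bias correction only for m
    theta -= eta / u * m_hat

print(theta)
```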
Nadam
As we have seen before, Adam can be viewed as a combination of RMSprop and momentum: RMSprop contributes the exponentially decaying average of past squared gradients $v_t$, while momentum accounts for the exponentially decaying average of past gradients $m_t$. We have also seen that Nesterov accelerated gradient (NAG) is superior to vanilla momentum.
Nadam (Nesterov-accelerated Adaptive Moment Estimation) [16] thus combines Adam and NAG. In order to incorporate NAG into Adam, we need to modify its momentum term $m_t$.
First, let us recall the momentum update rule using our current notation:
$$g_t = \nabla_{\theta_t} J(\theta_t)$$
$$m_t = \gamma m_{t-1} + \eta g_t$$
$$\theta_{t+1} = \theta_t - m_t$$
where $J$ is our objective function, $\gamma$ is the momentum decay term, and $\eta$ is our step size. Expanding the third equation above yields:
$$\theta_{t+1} = \theta_t - (\gamma m_{t-1} + \eta g_t)$$
This demonstrates again that momentum involves taking a step in the direction of the previous momentum vector and a step in the direction of the current gradient.
NAG then allows us to perform a more accurate step in the gradient direction by updating the parameters with the momentum step before computing the gradient. We thus only need to modify the gradient $g_t$ to arrive at NAG:
$$g_t = \nabla_{\theta_t} J(\theta_t - \gamma m_{t-1})$$
$$m_t = \gamma m_{t-1} + \eta g_t$$
$$\theta_{t+1} = \theta_t - m_t$$
Dozat proposes to modify NAG the following way: Rather than applying the momentum step twice -- one time for updating the gradient $g_t$ and a second time for updating the parameters $\theta_{t+1}$ -- we now apply the look-ahead momentum vector directly to update the current parameters:
$$g_t = \nabla_{\theta_t} J(\theta_t)$$
$$m_t = \gamma m_{t-1} + \eta g_t$$
$$\theta_{t+1} = \theta_t - (\gamma m_t + \eta g_t)$$
Notice that rather than utilizing the previous momentum vector $m_{t-1}$ as in the equation of the expanded momentum update rule above, we now use the current momentum vector $m_t$ to look ahead. In order to add Nesterov momentum to Adam, we can thus similarly replace the previous momentum vector with the current momentum vector. First, recall that the Adam update rule is the following (note that we do not need to modify $\hat{v}_t$):
$$m_t = \beta_1 m_{t-1} + (1 - \beta_1) g_t$$
$$\hat{m}_t = \frac{m_t}{1 - \beta^t_1}$$
$$\theta_{t+1} = \theta_t - \frac{\eta}{\sqrt{\hat{v}_t} + \epsilon} \hat{m}_t$$
Expanding the second equation with the definitions of $\hat{m}_t$ and $m_t$ in turn gives us:
$$\theta_{t+1} = \theta_t - \frac{\eta}{\sqrt{\hat{v}_t} + \epsilon} \left(\frac{\beta_1 m_{t-1}}{1 - \beta^t_1} + \frac{(1 - \beta_1) g_t}{1 - \beta^t_1}\right)$$
Note that $\frac{\beta_1 m_{t-1}}{1 - \beta^t_1}$ is just the bias-corrected estimate of the momentum vector of the previous time step. We can thus replace it with $\hat{m}_{t-1}$:
$$\theta_{t+1} = \theta_t - \frac{\eta}{\sqrt{\hat{v}_t} + \epsilon} \left(\beta_1 \hat{m}_{t-1} + \frac{(1 - \beta_1) g_t}{1 - \beta^t_1}\right)$$
Note that for simplicity, we ignore that the denominator is $1 - \beta^t_1$ and not $1 - \beta^{t-1}_1$ as we will replace the denominator in the next step anyway. This equation again looks very similar to our expanded momentum update rule above. We can now add Nesterov momentum just as we did previously by simply replacing this bias-corrected estimate of the momentum vector of the previous time step $\hat{m}_{t-1}$ with the bias-corrected estimate of the current momentum vector $\hat{m}_t$, which gives us the Nadam update rule:
$$\theta_{t+1} = \theta_t - \frac{\eta}{\sqrt{\hat{v}_t} + \epsilon} \left(\beta_1 \hat{m}_t + \frac{(1 - \beta_1) g_t}{1 - \beta^t_1}\right)$$
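A sketch of the Nadam update rule as derived above; the toy gradient, hyperparameters, and iteration count are assumptions for illustration.

```python
import numpy as np

def grad_J(theta):
    # Illustrative gradient of a toy quadratic objective (an assumption for the example).
    return np.array([10.0, 1.0]) * theta

theta = np.array([1.0, 1.0])
m = np.zeros_like(theta)
v = np.zeros_like(theta)
eta, beta1, beta2, eps = 0.001, 0.9, 0.999, 1e-8

for t in range(1, 2001):
    g = grad_J(theta)
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g ** 2
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    # Nesterov-style look-ahead: mix the bias-corrected current momentum
    # with the bias-corrected current gradient term.
    nesterov_m = beta1 * m_hat + (1 - beta1) * g / (1 - beta1 ** t)
    theta -= eta / (np.sqrt(v_hat) + eps) * nesterov_m

print(theta)
```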