Saddle-free Hessian-free Optimization

Martin Arjovsky
Courant Institute of Mathematical Sciences
New York University
[email protected]

Abstract

Nonconvex optimization problems, such as those arising from training deep neural networks, suffer from a phenomenon called saddle point proliferation: the loss function contains a vast number of high error saddle points. Second order methods have been tremendously successful and widely adopted in the convex optimization community, while their usefulness in deep learning remains limited. This is due to two problems: computational complexity, and the methods being driven towards high error saddle points. We introduce a novel algorithm specifically designed to solve these two issues, providing a crucial first step towards bringing the widely known advantages of Newton's method to the nonconvex optimization community, especially in high dimensional settings.

1 Introduction

The loss functions arising from training deep neural networks are highly nonconvex, so the fact that they can be successfully optimized on many problems remains partly a mystery. However, some recent work has started to shed light on this issue [2, 3, 4], leading to three likely conclusions:

• There appears to be an exponential number of local minima.
• However, all local minima lie within a small range of error with overwhelming probability. Almost all local minima will therefore have error similar to that of the global minimum.
• There are exponentially more saddle points than minima, a phenomenon called saddle point proliferation.

These conclusions point to the fact that the low dimensional picture of getting "stuck" in a high error local minimum is mistaken, and that finding a local minimum is actually a good thing. However, Newton's method (the core component of all second order methods) is biased towards finding a critical point, any critical point. In the presence of an overwhelming number of saddle points, it is likely to get stuck in one of them instead of converging to a minimum.

Let f be our loss function, let ∇f and H be its gradient and Hessian respectively, and let α be our learning rate. The step taken by an algorithm at iteration k is denoted by ∆θ_k. The fact that Newton's method is driven towards a nearby critical point can easily be seen by noting that its update equation

∆θ_k = −α H(θ_k)^{−1} ∇f(θ_k)    (1)

comes from taking a second order approximation of our loss function and solving for the closest critical point of this approximation (i.e., setting its gradient to 0).

To overcome this problem of Newton's method, [4] proposes a different algorithm, called saddle-free Newton, or SFN. The update equation for SFN is defined as

∆θ_k = −α |H(θ_k)|^{−1} ∇f(θ_k)    (2)
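As a concrete toy illustration (our own example with made-up values, not taken from [4]), the following JAX snippet contrasts the two updates near the saddle of f(x, y) = x^2 − y^2: with α = 1, the Newton step jumps straight onto the saddle, while the SFN step decreases the loss.

```python
import jax
import jax.numpy as jnp

def f(theta):
    x, y = theta
    return x ** 2 - y ** 2              # toy loss with a saddle at the origin

theta = jnp.array([1.0, 0.1])           # a point near the saddle
g = jax.grad(f)(theta)
H = jax.hessian(f)(theta)

# Newton step (equation 1, alpha = 1): solves for the nearest critical point.
newton_step = -jnp.linalg.solve(H, g)

# SFN step (equation 2): eigenvalues of H replaced by their absolute values.
lam, U = jnp.linalg.eigh(H)
H_abs = U @ jnp.diag(jnp.abs(lam)) @ U.T
sfn_step = -jnp.linalg.solve(H_abs, g)

print(theta + newton_step)              # lands on the saddle at (0, 0)
print(theta + sfn_step)                 # x shrinks, |y| grows, the loss decreases
```

Forming |H| by explicit eigendecomposition is only possible here because the example is two dimensional; avoiding exactly this cost is the subject of Section 2.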

The absolute value notation in equation (2) means that |A| is obtained by replacing the eigenvalues of A with their absolute values. In the convex case, this changes nothing with respect to Newton's method. In the nonconvex case, however, it allows one to keep the very smart rescaling of Newton's method while still moving in a descent direction when eigenvalues are negative.

While saddle-free Newton showed great promise, its main problem is its computational complexity. Let m be the number of parameters or, more generally, the dimension of f's domain. The cost of calculating the update in equation (2) is the cost of diagonalizing (and then inverting) the matrix |H|, namely O(m^3). Furthermore, it has a memory cost of O(m^2), because it needs to store the full Hessian. Since in neural network problems m is typically larger than 10^6, both costs are prohibitive, which is the reason [4] employs a low-rank approximation. Using a rank k approximation, the algorithm has O(k^2 m) time cost and O(km) memory cost. While this is clearly cheaper than the full method, it is still intractable for current problems, since the k required to obtain a useful approximation becomes prohibitively large, especially for the memory cost.

Another line of work is Hessian-free optimization [7], popularly known as HF. This method centers on three core ideas:

• The Gauss-Newton method, which consists in replacing the Hessian in Newton's update (1) with the Gauss-Newton matrix G. This matrix is a positive definite approximation of the Hessian, and it has achieved a good level of applicability in convex problems. However, its behaviour when the loss is nonconvex is not well understood. Furthermore, [9] argues against using the Gauss-Newton matrix on neural networks, showing that it suffers from poor conditioning and drops the negative curvature information, which is argued to be crucial. Note that this is a major difference with SFN, which leverages the negative curvature information, keeping the scaling in these directions.

• Using conjugate gradients (CG) to compute G(θ_k)^{−1} ∇f(θ_k), i.e., to solve the corresponding linear system. One key advantage of CG is that it does not require storing G(θ_k), only calculating matrix-vector products of the form G(θ_k)v for any vector v. The other advantage is that the method is iterative, allowing for early stopping when the solution to the system is good enough.

• When using neural networks, the R-operator [10, 11] is an algorithm to calculate matrix-vector products of the form Hv and Gv in O(m) time without storing any matrix (a sketch is given below). This is very efficient, since multiplying a generic m-by-m matrix with a vector has O(m^2) time and memory cost.

While Hessian-free optimization is computationally efficient, the use of the Gauss-Newton matrix on nonconvex objectives is thought to be ineffective. The update equation of saddle-free Newton is specifically designed for this kind of problem, but current implementations lack computational efficiency. In the following section, we propose a new algorithm that combines the advantages of both approaches. This yields a novel second order method that is computationally efficient and specifically designed for nonconvex optimization problems.
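As a concrete, framework-specific illustration of the matrix-vector product idea, the sketch below computes Hessian-vector products with JAX's forward-over-reverse differentiation, which plays the role of the R-operator; the toy loss and all names are our own, not part of [7, 10, 11].

```python
import jax
import jax.numpy as jnp

def hvp(f, theta, v):
    # Hessian-vector product H(theta) @ v without ever forming H:
    # a forward-mode derivative of the gradient in direction v.
    return jax.jvp(jax.grad(f), (theta,), (v,))[1]

# Toy check against the explicit Hessian (only feasible in low dimension).
f = lambda w: jnp.sum(w ** 2) + jnp.sum(w ** 3)
w = jnp.array([0.5, -1.0, 2.0])
v = jnp.array([1.0, 0.0, 1.0])
print(hvp(f, w, v))                      # same as jax.hessian(f)(w) @ v
```

The cost of one such product is a small constant multiple of one gradient evaluation, which is what makes the O(m) cost above practical.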

2 Saddle-free Hessian-free Optimization

A natural first idea is to use conjugate gradients to solve the system |H|^{−1} ∇f appearing in equation (2). This would give us an iterative method, and possibly allow early stopping when the solution to the system is good enough. However, in order to do that we would need to calculate |H|v for any vector v. While this was easy for Hv and Gv via the R-operator, the technique does not extend to calculating |H|v, so we arrive at an impasse.

The first step towards our new method comes from the following simple but important observation, which we state as a lemma.

Lemma 1. Let H be a real symmetric m-by-m matrix. Then |H|^2 = H^2.

Proof. First we prove this for a real diagonal matrix D. We denote D_{i,i} = λ_i. By definition, we have that |D|_{i,i} = |λ_i| and |D| vanishes on the off-diagonal entries. Therefore it is trivially verified that (|D|^2)_{i,i} = |λ_i|^2 = λ_i^2 = (D^2)_{i,i}, and both matrices are diagonal, which makes them coincide.

Now let H be a real symmetric m-by-m matrix. By the spectral theorem, there is a real diagonal matrix D and an orthogonal matrix U such that H = U D U^{−1}. Since |H| = U|D|U^{−1}, we have

|H|^2 = (U|D|U^{−1})^2 = U|D|U^{−1} U|D|U^{−1} = U|D|^2 U^{−1} = U D^2 U^{−1} = U D U^{−1} U D U^{−1} = (U D U^{−1})^2 = H^2.
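As a quick numerical sanity check of the lemma (illustration only, not part of the argument), one can compare |H|^2 and H^2 on a small random symmetric matrix:

```python
import jax.numpy as jnp
from jax import random

key = random.PRNGKey(0)
B = random.normal(key, (5, 5))
H = (B + B.T) / 2.0                        # random real symmetric matrix

lam, U = jnp.linalg.eigh(H)
H_abs = U @ jnp.diag(jnp.abs(lam)) @ U.T   # |H|: eigenvalues replaced by |λ|

print(jnp.allclose(H_abs @ H_abs, H @ H, atol=1e-5))   # |H|^2 == H^2
```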

Let A be a (semi-)positive definite matrix. Recalling that the square root of A (denoted A^{1/2}) is defined as the only (semi-)positive definite matrix B such that B^2 = A, we have the following corollary.

Corollary 2.1. Let H be a real symmetric matrix. Then |H| is the square root of H^2. Namely, |H| = (H^2)^{1/2}.

Note that our main impasse is not knowing how to calculate |H|v for any vector v. However, we do know how to calculate H^2 v = H(Hv) by applying the R-operator twice. Therefore, the problem can be reformulated as: given a positive definite matrix A, of which we know how to calculate Au for any vector u, can we calculate A^{1/2} v for a given vector v?

The answer to this question is yes. As illustrated by [1], we can define the following initial value problem:

x'(t) = −(1/2) (tA + (1 − t)I)^{−1} (I − A) x(t),    x(0) = v    (3)

When the norm of A is small enough (which can be achieved by a trivial rescaling), one can show that the ordinary differential equation (3) has the unique solution

x(t) = (tA + (1 − t)I)^{1/2} v

This solution has the crucial property that x(1) = A^{1/2} v. Therefore, to calculate A^{1/2} v we can initialize x(0) = v and feed equation (3) to an ODE solver such as one of the Runge-Kutta methods. The second core property of this formulation is that the derivative evaluations required to solve the ODE only involve multiplying by (I − A) and solving linear systems in (tA + (1 − t)I), both of which can be done using only products of the form Au, without storing any matrix, by employing conjugate gradients for the linear systems.

In order to approximate SFN in a Hessian-free way, we could calculate |H|v by using Au := H^2 u in the previous method and run conjugate gradients to solve the system in (2). However, this would require solving an ODE for every iteration of conjugate gradients, which would be quite expensive. Therefore, we propose to calculate update (2) in a two-step manner. First we multiply by |H|, and then we divide by H^2 = |H|^2:

y ← |H(θ_k)| ∇f(θ_k)
∆θ_k ← −α (H(θ_k)^2)^{−1} y

Combining this approach with our approximation schemes, we derive our final algorithm, which we call saddle-free Hessian-free optimization:

y ← ODE-solve(Equation (3), Au := H(θ_k)^2 u, v = ∇f(θ_k))
∆θ_k ← CG-solve(H(θ_k)^2, −α y)

If l is the number of Runge-Kutta steps taken to solve the ODE (3), and k is the number of CG iterations used to solve the linear systems, then the overall cost of the algorithm is O(mlk). Since l is close to 20 in the successful experiments done by [1] on random matrices (independently of m), and k is no larger than 250 in typical Hessian-free implementations, this is substantially lower than the O(m^3) cost of saddle-free Newton. Furthermore, one critical advantage is that the memory cost of the algorithm is O(m), since at no point does it require storing more than a small constant number of vectors of size m.
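The sketch below assembles one full step of the method in JAX. It is a minimal illustration under stated assumptions rather than a reference implementation: a fixed-step forward Euler integrator stands in for the Runge-Kutta schemes of [1], the matrix A = H(θ_k)^2 is assumed to be rescaled so that the ODE is well behaved, and all function names, step counts and iteration limits are our own choices.

```python
import jax
import jax.numpy as jnp
from jax.scipy.sparse.linalg import cg

def hvp(f, theta, v):
    # H(theta) @ v via forward-over-reverse differentiation, no matrix stored.
    return jax.jvp(jax.grad(f), (theta,), (v,))[1]

def h2vp(f, theta, v):
    # H(theta)^2 @ v = H @ (H @ v): two Hessian-vector products.
    return hvp(f, theta, hvp(f, theta, v))

def sqrt_matvec(Av, v, n_steps=20, cg_iters=50):
    # Approximates A^{1/2} v by integrating the ODE (3) from t = 0 to t = 1.
    # `Av` computes A @ u; A is assumed rescaled as discussed in the text.
    h = 1.0 / n_steps
    x = v
    for i in range(n_steps):
        t = i * h
        rhs = x - Av(x)                               # (I - A) x
        M = lambda u: t * Av(u) + (1.0 - t) * u       # t A + (1 - t) I
        y, _ = cg(M, rhs, maxiter=cg_iters)           # solve M y = (I - A) x
        x = x - 0.5 * h * y                           # Euler step on equation (3)
    return x

def sfhf_step(f, theta, alpha=1.0, cg_iters=100):
    g = jax.grad(f)(theta)
    A = lambda u: h2vp(f, theta, u)                   # A u := H(theta)^2 u
    y = sqrt_matvec(A, g)                             # y ≈ |H(theta)| ∇f(theta)
    delta, _ = cg(A, -alpha * y, maxiter=cg_iters)    # solve H^2 Δθ = -α y
    return theta + delta
```

Every operation above reduces to Hessian-vector products, so the memory footprint stays at a handful of vectors of size m; in practice one would add damping, preconditioning (see the conclusion) and a higher-order integrator before expecting competitive performance.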

3 Conclusion and Future Work

We presented a new algorithm called saddle-free Hessian-free optimization. This algorithm provides a first step towards merging the benefits of computationally efficient Hessian-free approaches with methods like saddle-free Newton, which are specifically designed for nonconvex objectives. Future work will focus on taking these ideas to real world applications, and on adding further speed and stability improvements to the core algorithm, such as the preconditioners of [7, 8] and damping with Levenberg-Marquardt style heuristics [5, 6].

References

[1] E. Allen, J. Baglama, and S. Boyd. Numerical approximation of the product of the square root of a matrix with a vector. Linear Algebra and its Applications, 2000.
[2] A. Auffinger, G. Arous, and J. Cerny. Random matrices and complexity of spin glasses. Communications on Pure and Applied Mathematics, 66:165–201, 2013.
[3] A. Choromanska, M. Henaff, M. Mathieu, G. Arous, and Y. LeCun. The loss surfaces of multilayer networks. Journal of Machine Learning Research, 38:192–204, 2015.
[4] Y. N. Dauphin, R. Pascanu, C. Gulcehre, K. Cho, S. Ganguli, and Y. Bengio. Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. In Advances in Neural Information Processing Systems 27, pages 2933–2941. Curran Associates, Inc., 2014.
[5] K. Levenberg. A method for the solution of certain non-linear problems in least squares. The Quarterly of Applied Mathematics, 2:164–168, 1944.
[6] D. W. Marquardt. An algorithm for least-squares estimation of nonlinear parameters. SIAM Journal on Applied Mathematics, 11(2):431–441, 1963.
[7] J. Martens. Deep learning via Hessian-free optimization. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), Haifa, Israel, pages 735–742, 2010.
[8] J. Martens, I. Sutskever, and K. Swersky. Estimating the Hessian by back-propagating curvature. In Proceedings of the 29th International Conference on Machine Learning (ICML 2012), Edinburgh, Scotland, UK, 2012.
[9] E. Mizutani and S. E. Dreyfus. Second-order stagewise backpropagation for Hessian-matrix analyses and investigation of negative curvature. Neural Networks, 21(2-3):193–203, 2008.
[10] B. A. Pearlmutter. Fast exact multiplication by the Hessian. Neural Computation, 6:147–160, 1994.
[11] N. N. Schraudolph. Fast curvature matrix-vector products for second-order gradient descent. Neural Computation, 14(7):1723–1738, 2002.

