A stochastic quasi-Newton method for online convex optimization


Authors

Schraudolph, Nicol
Yu, Jin
Guenter, Simon

Publisher

OmniPress

Abstract

We develop stochastic variants of the well-known BFGS quasi-Newton optimization method, in both full and memory-limited (LBFGS) forms, for online optimization of convex functions. The resulting algorithm performs comparably to a well-tuned natural gradient descent but is scalable to very high-dimensional problems. On standard benchmarks in natural language processing, it asymptotically outperforms previous stochastic gradient methods for parameter estimation in conditional random fields. We are working on analyzing the convergence of online (L)BFGS, and on extending it to nonconvex optimization problems.
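The core idea in the abstract, maintaining limited-memory BFGS curvature pairs from noisy online gradients, can be sketched roughly as follows. This is a hedged illustration of the general technique, not the authors' algorithm: the damping term `lam`, the decaying step schedule, and the trick of evaluating both gradients on the same sample are plausible ingredients for stabilizing quasi-Newton updates under sampling noise, and all names and parameters here are illustrative.

```python
import numpy as np

def stochastic_lbfgs(grad, x0, steps=500, m=10, eta=0.1, lam=1e-4, seed=0):
    """Minimal stochastic L-BFGS sketch for online convex problems.

    grad(x, rng) returns a noisy gradient sample; lam damps the
    curvature pairs so noise cannot make them indefinite.
    (Illustrative only; not the paper's oLBFGS algorithm.)
    """
    rng = np.random.default_rng(seed)
    x = x0.astype(float).copy()
    S, Y = [], []                      # last m curvature pairs (s_t, y_t)
    for t in range(steps):
        state = rng.bit_generator.state
        g = grad(x, rng)               # noisy gradient at current iterate
        # Two-loop recursion: multiply g by the implicit inverse Hessian.
        q = g.copy()
        alphas = []
        for s, y in zip(reversed(S), reversed(Y)):
            a = np.dot(s, q) / np.dot(y, s)
            alphas.append(a)
            q -= a * y
        if S:                          # scale by the most recent curvature
            s, y = S[-1], Y[-1]
            q *= np.dot(s, y) / np.dot(y, y)
        for (s, y), a in zip(zip(S, Y), reversed(alphas)):
            b = np.dot(y, q) / np.dot(y, s)
            q += (a - b) * s
        step = eta / (1.0 + 0.01 * t)  # decaying gain schedule
        x_new = x - step * q
        # Re-evaluate the gradient at x_new on the SAME sample, so the
        # difference y_t measures curvature rather than sampling noise.
        rng.bit_generator.state = state
        g_new = grad(x_new, rng)
        s_t = x_new - x
        y_t = g_new - g + lam * s_t    # damped gradient difference
        if np.dot(s_t, y_t) > 1e-10:   # keep only positive-curvature pairs
            S.append(s_t)
            Y.append(y_t)
            if len(S) > m:             # memory-limited: drop the oldest pair
                S.pop(0)
                Y.pop(0)
        x = x_new
    return x

# Usage: a noisy 2-D quadratic, f(x) = 0.5 * x^T A x, minimum at the origin.
A = np.diag([1.0, 10.0])
noisy_grad = lambda x, rng: A @ x + 0.01 * rng.standard_normal(x.shape)
x_opt = stochastic_lbfgs(noisy_grad, np.array([1.0, 1.0]))
```

The limited memory of `m` pairs is what keeps the method scalable to very high-dimensional problems: the two-loop recursion costs O(mn) per step rather than the O(n^2) of storing a dense inverse-Hessian estimate.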

Source

Proceedings of The 11th International Conference on Artificial Intelligence and Statistics (AISTATS 2007)

Restricted until

2037-12-31