Logarithmic regret algorithms for online convex optimization

Elad E. Hazan, Amit Agarwal, Satyen Kale

Research output: Contribution to journal › Article

282 Citations (Scopus)

Abstract

In an online convex optimization problem a decision-maker makes a sequence of decisions, i.e., chooses a sequence of points in Euclidean space, from a fixed feasible set. After each point is chosen, it encounters a sequence of (possibly unrelated) convex cost functions. Zinkevich (ICML 2003) introduced this framework, which models many natural repeated decision-making problems and generalizes many existing problems such as Prediction from Expert Advice and Cover's Universal Portfolios. Zinkevich showed that a simple online gradient descent algorithm achieves additive regret O(√T), for an arbitrary sequence of T convex cost functions (of bounded gradients), with respect to the best single decision in hindsight. In this paper, we give algorithms that achieve regret O(log(T)) for an arbitrary sequence of strictly convex functions (with bounded first and second derivatives). This mirrors what has been done for the special cases of prediction from expert advice by Kivinen and Warmuth (EuroCOLT 1999), and Universal Portfolios by Cover (Math. Finance 1:1-19, 1991). We propose several algorithms achieving logarithmic regret, which besides being more general are also much more efficient to implement. The main new ideas give rise to an efficient algorithm based on the Newton method for optimization, a new tool in the field. Our analysis shows a surprising connection between the natural follow-the-leader approach and the Newton method. We also analyze other algorithms, which tie together several different previous approaches including follow-the-leader, exponential weighting, Cover's algorithm and gradient descent.
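
The abstract refers to Zinkevich's online gradient descent and to a logarithmic-regret guarantee for strictly convex losses. The following Python sketch (not taken from the paper) illustrates the general setup: projected online gradient descent with the decreasing step size eta_t = 1/(H*t) that is commonly used for H-strongly convex losses, with regret measured against the best fixed decision in hindsight. The feasible set (a unit Euclidean ball), the quadratic losses, and the constant H are illustrative assumptions, not details from the paper.

import numpy as np


def project_to_ball(x, radius=1.0):
    """Euclidean projection onto the ball {x : ||x|| <= radius}."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else (radius / norm) * x


def ogd_strongly_convex(grads, dim, H, radius=1.0):
    """Projected online gradient descent with step size eta_t = 1/(H*t).

    grads: list of callables, grads[t](x) returning the gradient of the
           round-t loss at x (revealed only after x is played).
    Returns the sequence of points played.
    """
    x = np.zeros(dim)
    plays = []
    for t, grad in enumerate(grads, start=1):
        plays.append(x.copy())                # commit to x_t, then observe f_t
        eta = 1.0 / (H * t)                   # decreasing step size
        x = project_to_ball(x - eta * grad(x), radius)
    return plays


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dim, T, H = 5, 1000, 1.0

    # Illustrative strongly convex losses f_t(x) = (H/2) * ||x - z_t||^2
    # with targets z_t inside the unit ball (an assumption for this demo).
    targets = [project_to_ball(rng.normal(size=dim)) for _ in range(T)]
    losses = [lambda x, z=z: 0.5 * H * np.sum((x - z) ** 2) for z in targets]
    grads = [lambda x, z=z: H * (x - z) for z in targets]

    plays = ogd_strongly_convex(grads, dim, H)

    # Regret against the best fixed point in hindsight (here, the mean target).
    best = project_to_ball(np.mean(targets, axis=0))
    regret = sum(f(x) for f, x in zip(losses, plays)) - sum(f(best) for f in losses)
    print(f"regret after T={T} rounds: {regret:.2f}")

The paper's more general Newton-style algorithm replaces the scalar step size above with a step preconditioned by an accumulated second-order matrix of past gradients; the sketch only illustrates the gradient-descent variant referenced in the abstract.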

Original language: English (US)
Pages (from-to): 169-192
Number of pages: 24
Journal: Machine Learning
Volume: 69
Issue number: 2-3
DOIs: https://doi.org/10.1007/s10994-007-5016-8
State: Published - Dec 1 2007

Fingerprint

  • Convex optimization
  • Newton-Raphson method
  • Cost functions
  • Finance
  • Decision making
  • Derivatives

All Science Journal Classification (ASJC) codes

  • Software
  • Artificial Intelligence

Cite this

Hazan, Elad E.; Agarwal, Amit; Kale, Satyen. Logarithmic regret algorithms for online convex optimization. In: Machine Learning. 2007; Vol. 69, No. 2-3, pp. 169-192.
@article{54737df2f3bb48fca16a165699b15f99,
title = "Logarithmic regret algorithms for online convex optimization",
author = "Hazan, {Elad E.} and Amit Agarwal and Satyen Kale",
year = "2007",
month = dec,
day = "1",
doi = "10.1007/s10994-007-5016-8",
language = "English (US)",
volume = "69",
pages = "169--192",
journal = "Machine Learning",
issn = "0885-6125",
publisher = "Springer Netherlands",
number = "2-3",
}
