Introduction

[math] \newcommand{\LS}{\mathtt{line\_search}} \newcommand{\mI}{\mathrm{I}} \newcommand{\mB}{\mathrm{B}} \newcommand{\conv}{\mathrm{conv}} \newcommand{\inte}{\mathrm{int}} \newcommand{\tg}{\tilde{g}} \newcommand{\tx}{\tilde{x}} \newcommand{\ty}{\tilde{y}} \newcommand{\tz}{\tilde{z}} \newcommand{\id}{\mathrm{id}} \newcommand{\K}{\mathrm{KL}} \newcommand{\kl}{\mathrm{kl}} \newcommand{\bp}{\boldsymbol{p}} \newcommand{\tr}{\mathrm{Tr}} \newcommand{\E}{\mathbb{E}} \newcommand{\N}{\mathbb{N}} \newcommand{\R}{\mathbb{R}} \newcommand{\Q}{\mathbb{Q}} \newcommand{\KL}{\mathrm{KL}} \newcommand{\LG}{\overline{\log}(K)} \newcommand{\ocP}{\overline{\mathcal{P}}} \newcommand{\cZ}{\mathcal{Z}} \newcommand{\cA}{\mathcal{A}} \newcommand{\cR}{\mathcal{R}} \newcommand{\cB}{\mathcal{B}} \newcommand{\cN}{\mathcal{N}} \newcommand{\cM}{\mathcal{M}} \newcommand{\cF}{\mathcal{F}} \newcommand{\cL}{\mathcal{L}} \newcommand{\cX}{\mathcal{X}} \newcommand{\cI}{\mathcal{I}} \newcommand{\cJ}{\mathcal{J}} \newcommand{\cY}{\mathcal{Y}} \newcommand{\cH}{\mathcal{H}} \newcommand{\cP}{\mathcal{P}} \newcommand{\cT}{\mathcal{T}} \newcommand{\cC}{\mathcal{C}} \newcommand{\cS}{\mathcal{S}} \newcommand{\cE}{\mathcal{E}} \newcommand{\cK}{\mathcal{K}} \newcommand{\cD}{\mathcal{D}} \newcommand{\oD}{\overline{\mathcal{D}}} \newcommand{\oR}{\overline{R}} \newcommand{\wh}{\widehat} \newcommand{\argmax}{\mathop{\mathrm{argmax}}} \newcommand{\argmin}{\mathop{\mathrm{argmin}}} \newcommand{\todo}{{\bf TO DO } } \newcommand{\Ber}{\mathop{\mathrm{Ber}}} \newcommand{\beq}{\begin{equation}} \newcommand{\eeq}{\end{equation}} \newcommand{\beqa}{\begin{eqnarray}} \newcommand{\eeqa}{\end{eqnarray}} \newcommand{\beqan}{\begin{eqnarray*}} \newcommand{\eeqan}{\end{eqnarray*}} \newcommand{\qed}{\hfill\BlackBox} \newcommand{\charfct}{\mathds{1}} \newcommand{\Fcal}{\mathcal{F}} \newcommand{\Xcal}{\mathcal{X}} \newcommand{\Hcal}{\mathcal{H}} \newcommand{\Gcal}{\mathcal{G}} \newcommand{\Nat}{\mathbb{N}} \newcommand{\mathds}{\mathbb}[/math]

The central objects of our study are convex functions and convex sets in [math]\R^n[/math].

Definition (Convex sets and convex functions)

A set [math]\cX \subset \R^n[/math] is said to be convex if it contains all of its segments, that is

[[math]] \forall (x,y,\gamma) \in \cX \times \cX \times [0,1], \; (1-\gamma) x + \gamma y \in \mathcal{X}. [[/math]]
A function [math]f : \mathcal{X} \rightarrow \R[/math] is said to be convex if it always lies below its chords, that is

[[math]] \forall (x,y,\gamma) \in \cX \times \cX \times [0,1], \; f((1-\gamma) x + \gamma y) \leq (1-\gamma)f(x) + \gamma f(y) . [[/math]]
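As a quick numeric sanity check of the chord inequality, one can sample random triples [math](x,y,\gamma)[/math] and verify that the gap [math](1-\gamma)f(x) + \gamma f(y) - f((1-\gamma)x + \gamma y)[/math] is nonnegative for a convex function and can be negative otherwise. The following Python sketch (ours, purely illustrative) does this with [math]f(x) = x^2[/math] as a convex example and [math]\sin[/math] as a non-example:

```python
import numpy as np

rng = np.random.default_rng(0)

def chord_gap(f, x, y, g):
    # (1 - g) f(x) + g f(y) - f((1 - g) x + g y): nonnegative for every
    # triple (x, y, g) if and only if f is convex.
    return (1 - g) * f(x) + g * f(y) - f((1 - g) * x + g * y)

convex_f = lambda x: x ** 2   # convex on R
nonconvex_f = np.sin          # not convex on R

xs, ys, gs = rng.normal(size=1000), rng.normal(size=1000), rng.random(1000)
print(min(chord_gap(convex_f, x, y, g) for x, y, g in zip(xs, ys, gs)))     # >= 0
print(min(chord_gap(nonconvex_f, x, y, g) for x, y, g in zip(xs, ys, gs)))  # < 0 for some triple
```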

We are interested in algorithms that take as input a convex set [math]\cX[/math] and a convex function [math]f[/math] and output an approximate minimum of [math]f[/math] over [math]\cX[/math]. We write the problem of finding the minimum of [math]f[/math] over [math]\cX[/math] compactly as

[[math]] \begin{align*} & \mathrm{min.} \; f(x) \\ & \text{s.t.} \; x \in \cX . \end{align*} [[/math]]

In the following we will make more precise how the set of constraints [math]\cX[/math] and the objective function [math]f[/math] are specified to the algorithm. Before that we proceed to give a few important examples of convex optimization problems in machine learning.

Some convex optimization problems in machine learning

Many fundamental convex optimization problems in machine learning take the following form:

[[math]] \begin{equation} \label{eq:veryfirst} \underset{x \in \R^n}{\mathrm{min.}} \; \sum_{i=1}^m f_i(x) + \lambda \cR(x) , \end{equation} [[/math]]

where the functions [math]f_1, \dots, f_m, \cR[/math] are convex and [math]\lambda \geq 0[/math] is a fixed parameter. The interpretation is that [math]f_i(x)[/math] represents the cost of using [math]x[/math] on the [math]i^{th}[/math] element of some data set, and [math]\cR(x)[/math] is a regularization term which enforces some “simplicity” in [math]x[/math]. We now discuss major instances of \eqref{eq:veryfirst}. In all cases one has a data set of the form [math](w_i, y_i) \in \R^n \times \cY, i=1, \dots, m[/math], and the cost function [math]f_i[/math] depends only on the pair [math](w_i, y_i)[/math]. We refer to [1][2][3] for more details on the origin of these important problems. The objective of this section is simply to expose the reader to a few concrete convex optimization problems which are routinely solved.
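To fix ideas, here is a literal transcription of \eqref{eq:veryfirst} in Python (a toy sketch of ours; the data and the choice of squared costs and squared Euclidean regularizer are illustrative assumptions, not prescribed by the text):

```python
import numpy as np

def erm_objective(x, losses, reg, lam):
    """Evaluate sum_i f_i(x) + lam * R(x), each f_i given as a callable."""
    return sum(f(x) for f in losses) + lam * reg(x)

# Toy instance: squared costs f_i(x) = (x . w_i - y_i)^2 and R(x) = ||x||_2^2.
rng = np.random.default_rng(0)
W, y = rng.normal(size=(50, 5)), rng.normal(size=50)
losses = [lambda x, w=w, yi=yi: (x @ w - yi) ** 2 for w, yi in zip(W, y)]
reg = lambda x: x @ x

print(erm_objective(rng.normal(size=5), losses, reg, lam=0.1))
```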

In classification one has [math]\cY = \{-1,1\}[/math]. Taking [math]f_i(x) = \max(0, 1- y_i x^{\top} w_i)[/math] (the so-called hinge loss) and [math]\cR(x) = \|x\|_2^2[/math] one obtains the SVM problem. On the other hand taking [math]f_i(x) = \log(1 + \exp(-y_i x^{\top} w_i) )[/math] (the logistic loss) and again [math]\cR(x) = \|x\|_2^2[/math] one obtains the (regularized) logistic regression problem. In regression one has [math]\cY = \R[/math]. Taking [math]f_i(x) = (x^{\top} w_i - y_i)^2[/math] and [math]\cR(x) = 0[/math] one obtains the vanilla least-squares problem, which can be rewritten in vector notation as

[[math]] \underset{x \in \R^n}{\mathrm{min.}} \; \|W x - Y\|_2^2 , [[/math]]

where [math]W \in \R^{m \times n}[/math] is the matrix with [math]w_i^{\top}[/math] on the [math]i^{th}[/math] row and [math]Y =(y_1, \dots, y_m)^{\top}[/math]. With [math]\cR(x) = \|x\|_2^2[/math] one obtains the ridge regression problem, while with [math]\cR(x) = \|x\|_1[/math] this is the LASSO problem [4]. Our last two examples are of a slightly different flavor. In particular the design variable [math]x[/math] is now best viewed as a matrix, and thus we denote it by a capital letter [math]X[/math]. The sparse inverse covariance estimation problem can be written as follows, given some empirical covariance matrix [math]Y[/math],

[[math]] \begin{align*} & \mathrm{min.} \; \mathrm{Tr}(X Y) - \mathrm{logdet}(X) + \lambda \|X\|_1 \\ & \text{s.t.} \; X \in \R^{n \times n}, X^{\top} = X, X \succeq 0 . \end{align*} [[/math]]

Intuitively the above problem is simply a regularized maximum likelihood estimator (under a Gaussian assumption). Finally we introduce the convex version of the matrix completion problem. Here our data set consists of observations of some of the entries of an unknown matrix [math]Y[/math], and we want to “complete” the unobserved entries of [math]Y[/math] in such a way that the resulting matrix is “simple” (in the sense that it has low rank). After some massaging (see [5]) the (convex) matrix completion problem can be formulated as follows:

[[math]] \begin{align*} & \mathrm{min.} \; \mathrm{Tr}(X) \\ & \text{s.t.} \; X \in \R^{n \times n}, X^{\top} = X, X \succeq 0, X_{i,j} = Y_{i,j} \; \text{for} \; (i,j) \in \Omega , \end{align*} [[/math]]

where [math]\Omega \subset [n]^2[/math] and [math](Y_{i,j})_{(i,j) \in \Omega}[/math] are given.
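For the reader who wants to see such a problem solved end to end, here is a sketch of the matrix completion program above using the modeling package cvxpy (our choice of tool, assuming an SDP-capable solver such as SCS is installed; the observed entries are synthetic):

```python
import numpy as np
import cvxpy as cp

n = 5
rng = np.random.default_rng(0)
u = rng.normal(size=(n, 1))
Ytrue = u @ u.T                                   # synthetic PSD "ground truth"
Omega = [(0, 0), (1, 2), (3, 3), (2, 4)]          # observed index set

X = cp.Variable((n, n), PSD=True)                 # X symmetric and positive semi-definite
constraints = [X[i, j] == Ytrue[i, j] for (i, j) in Omega]
prob = cp.Problem(cp.Minimize(cp.trace(X)), constraints)
prob.solve()
print(prob.status, np.round(np.linalg.eigvalsh(X.value), 3))
```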

Basic properties of convexity

A basic result about convex sets that we shall use extensively is the Separation Theorem.

Theorem (Separation Theorem)

Let [math]\mathcal{X} \subset \R^n[/math] be a closed convex set, and [math]x_0 \in \R^n \setminus \mathcal{X}[/math]. Then, there exists [math]w \in \R^n[/math] and [math]t \in \R[/math] such that

[[math]] w^{\top} x_0 \lt t, \; \text{and} \; \forall x \in \mathcal{X}, w^{\top} x \geq t. [[/math]]

Note that if [math]\mathcal{X}[/math] is not closed then one can only guarantee that [math]w^{\top} x_0 \leq w^{\top} x, \forall x \in \mathcal{X}[/math] (and [math]w \neq 0[/math]). This immediately implies the Supporting Hyperplane Theorem ([math]\partial \cX[/math] denotes the boundary of [math]\cX[/math], that is the closure without the interior):
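As a concrete illustration (ours), take [math]\cX[/math] to be the closed Euclidean unit ball and [math]x_0[/math] a point outside of it: the Euclidean projection of [math]x_0[/math] onto the ball yields a valid pair [math](w,t)[/math], which the following Python sketch checks numerically:

```python
import numpy as np

rng = np.random.default_rng(1)
x0 = rng.normal(size=3)
x0 *= 2.0 / np.linalg.norm(x0)          # a point with ||x0|| = 2, outside the unit ball

p = x0 / np.linalg.norm(x0)             # Euclidean projection of x0 onto the unit ball
w = p - x0                              # normal of a separating hyperplane
t = 0.5 * (w @ x0 - np.linalg.norm(w))  # any t strictly between w.x0 and inf over the ball of w.x

# inf over the ball of w.x equals -||w||, attained at x = -w / ||w||.
assert w @ x0 < t < -np.linalg.norm(w)
xs = rng.normal(size=(1000, 3))
xs /= np.maximum(1.0, np.linalg.norm(xs, axis=1, keepdims=True))  # sampled points of the ball
assert (xs @ w >= t).all()              # w.x >= t for every sampled x in the ball
```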

Theorem (Supporting Hyperplane Theorem)

Let [math]\mathcal{X} \subset \R^n[/math] be a convex set, and [math]x_0 \in \partial \mathcal{X}[/math]. Then, there exists [math]w \in \R^n, w \neq 0[/math] such that

[[math]] \forall x \in \mathcal{X}, w^{\top} x \geq w^{\top} x_0. [[/math]]

We introduce now the key notion of subgradients.

Definition (Subgradients)

Let [math]\mathcal{X} \subset \R^n[/math], and [math]f : \mathcal{X} \rightarrow \R[/math]. Then [math]g \in \R^n[/math] is a subgradient of [math]f[/math] at [math]x \in \mathcal{X}[/math] if for any [math]y \in \mathcal{X}[/math] one has

[[math]] f(x) - f(y) \leq g^{\top} (x - y) . [[/math]]
The set of subgradients of [math]f[/math] at [math]x[/math] is denoted [math]\partial f (x)[/math].

To put it differently, for any [math]x \in \cX[/math] and [math]g \in \partial f(x)[/math], [math]f[/math] is above the linear function [math]y \mapsto f(x) + g^{\top} (y-x)[/math]. The next result shows (essentially) that a convex function always admits subgradients.
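For a concrete example (ours): [math]f(x) = |x|[/math] on [math]\R[/math] is not differentiable at [math]0[/math], yet [math]\partial f(0) = [-1,1][/math]. The quick Python check below tests candidate subgradients against the definition on a grid:

```python
import numpy as np

f = np.abs
ys = np.linspace(-2, 2, 401)            # grid standing in for "all y"

def is_subgradient(g, x):
    # g is a subgradient of f at x iff f(y) >= f(x) + g (y - x) for all y.
    return np.all(f(ys) >= f(x) + g * (ys - x) - 1e-12)

print([g for g in np.linspace(-2, 2, 9) if is_subgradient(g, 0.0)])  # the g in [-1, 1]
print([g for g in np.linspace(-2, 2, 9) if is_subgradient(g, 1.0)])  # only g = 1
```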

Proposition (Existence of subgradients)

Let [math]\mathcal{X} \subset \R^n[/math] be convex, and [math]f : \mathcal{X} \rightarrow \R[/math]. If [math]\forall x \in \mathcal{X}, \partial f(x) \neq \emptyset[/math] then [math]f[/math] is convex. Conversely if [math]f[/math] is convex then for any [math]x \in \mathrm{int}(\mathcal{X}), \partial f(x) \neq \emptyset[/math]. Furthermore if [math]f[/math] is convex and differentiable at [math]x[/math] then [math]\nabla f(x) \in \partial f(x)[/math].

Before going to the proof we recall the definition of the epigraph of a function [math]f : \mathcal{X} \rightarrow \R[/math]:

[[math]] \mathrm{epi}(f) = \{(x,t) \in \mathcal{X} \times \R : t \geq f(x) \} . [[/math]]
It is obvious that a function is convex if and only if its epigraph is a convex set.


Show Proof

The first claim is almost trivial: let [math]g \in \partial f((1-\gamma) x + \gamma y)[/math], then by definition one has

[[math]] \begin{eqnarray*} & & f((1-\gamma) x + \gamma y) \leq f(x) + \gamma g^{\top} (y - x) , \\ & & f((1-\gamma) x + \gamma y) \leq f(y) + (1-\gamma) g^{\top} (x - y) , \end{eqnarray*} [[/math]]
which clearly shows that [math]f[/math] is convex by adding the two (appropriately rescaled) inequalities.

Now let us prove that a convex function [math]f[/math] has subgradients in the interior of [math]\mathcal{X}[/math]. We build a subgradient by using a supporting hyperplane to the epigraph of the function. Let [math]x \in \mathcal{X}[/math]. Then clearly [math](x,f(x)) \in \partial \mathrm{epi}(f)[/math], and [math]\mathrm{epi}(f)[/math] is a convex set. Thus by using the Supporting Hyperplane Theorem, there exists [math](a,b) \in \R^n \times \R[/math] such that

[[math]] \begin{equation} \label{eq:supphyp} a^{\top} x + b f(x) \geq a^{\top} y + b t, \forall (y,t) \in \mathrm{epi}(f) . \end{equation} [[/math]]
Clearly, by letting [math]t[/math] tend to infinity, one can see that [math]b \leq 0[/math]. Now let us assume that [math]x[/math] is in the interior of [math]\mathcal{X}[/math]. Then for [math]\epsilon \gt 0[/math] small enough, [math]y=x + \epsilon a \in \mathcal{X}[/math], which implies that [math]b[/math] cannot be equal to [math]0[/math] (indeed if [math]b=0[/math] then necessarily [math]a \neq 0[/math], and plugging [math]y = x + \epsilon a[/math] into \eqref{eq:supphyp} yields [math]a^{\top} x \geq a^{\top} x + \epsilon \|a\|_2^2[/math], a contradiction). Thus rewriting \eqref{eq:supphyp} for [math]t=f(y)[/math] one obtains

[[math]] f(x) - f(y) \leq \frac{1}{|b|} a^{\top} (x - y) . [[/math]]
Thus [math]a / |b| \in \partial f(x)[/math] which concludes the proof of the second claim.

Finally let [math]f[/math] be a convex and differentiable function. Then by definition:

[[math]] \begin{eqnarray*} f(y) & \geq & \frac{f((1-\gamma) x + \gamma y) - (1- \gamma) f(x)}{\gamma} \\ & = & f(x) + \frac{f(x + \gamma (y - x)) - f(x)}{\gamma} \\ & \underset{\gamma \to 0}{\to} & f(x) + \nabla f(x)^{\top} (y-x), \end{eqnarray*} [[/math]]
which shows that [math]\nabla f(x) \in \partial f(x)[/math].

In several cases of interest the set of constraints can have an empty interior, in which case the above proposition does not yield any information. However it is easy to replace [math]\mathrm{int}(\cX)[/math] by [math]\mathrm{ri}(\cX)[/math], the relative interior of [math]\cX[/math], which is defined as the interior of [math]\cX[/math] when we view it as a subset of the affine subspace it generates. Other notions of convex analysis will prove to be useful in some parts of this text. In particular the notion of “closed convex functions” is convenient to exclude pathological cases: these are the convex functions with closed epigraphs. Sometimes it is also useful to consider the extension of a convex function [math]f: \cX \rightarrow \R[/math] to a function from [math]\R^n[/math] to [math]\overline{\R}[/math] by setting [math]f(x)= + \infty[/math] for [math]x \not\in \cX[/math]. In convex analysis one uses the term “proper convex function” to denote a convex function with values in [math]\R \cup \{+\infty\}[/math] such that there exists [math]x \in \R^n[/math] with [math]f(x) \lt +\infty[/math]. From now on all convex functions will be closed, and if necessary we consider also their proper extension. We refer the reader to [6] for an extensive discussion of these notions.

Why convexity?

The key to the algorithmic success in minimizing convex functions is that these functions exhibit a local to global phenomenon. We have already seen one instance of this in the proposition on the existence of subgradients, where we showed that [math]\nabla f(x) \in \partial f(x)[/math]: the gradient [math]\nabla f(x)[/math] contains a priori only local information about the function [math]f[/math] around [math]x[/math], while the subdifferential [math]\partial f(x)[/math] gives global information in the form of a linear lower bound on the entire function. Another instance of this local to global phenomenon is that local minima of convex functions are in fact global minima:

Proposition (Local minima are global minima)

Let [math]f[/math] be convex. If [math]x[/math] is a local minimum of [math]f[/math] then [math]x[/math] is a global minimum of [math]f[/math]. Furthermore this happens if and only if [math]0 \in \partial f(x)[/math].

Show Proof

Clearly [math]0 \in \partial f(x)[/math] if and only if [math]x[/math] is a global minimum of [math]f[/math]. Now assume that [math]x[/math] is a local minimum of [math]f[/math]. Then for [math]\gamma[/math] small enough one has for any [math]y[/math],

[[math]] f(x) \leq f((1-\gamma) x + \gamma y) \leq (1-\gamma) f(x) + \gamma f(y) , [[/math]]

which implies [math]f(x) \leq f(y)[/math] and thus [math]x[/math] is a global minimum of [math]f[/math].

The nice behavior of convex functions will allow for very fast algorithms to optimize them. This alone would not be sufficient to justify the importance of this class of functions (after all constant functions are pretty easy to optimize). However it turns out that surprisingly many optimization problems admit a convex (re)formulation. The excellent book [7] describes in great detail the various methods that one can employ to uncover the convex aspects of an optimization problem. We will not repeat these arguments here, but we have already seen that many famous machine learning problems (SVM, ridge regression, logistic regression, LASSO, sparse covariance estimation, and matrix completion) are formulated as convex problems.

We conclude this section with a simple extension of the optimality condition “[math]0 \in \partial f(x)[/math]” to the case of constrained optimization. We state this result in the case of a differentiable function for sake of simplicity.

Proposition (First order optimality condition)

Let [math]f[/math] be convex and [math]\cX[/math] a closed convex set on which [math]f[/math] is differentiable. Then

[[math]] x^* \in \argmin_{x \in \cX} f(x) , [[/math]]

if and only if one has

[[math]] \nabla f(x^*)^{\top}(x^*-y) \leq 0, \forall y \in \cX . [[/math]]

Show Proof

The “if” direction is trivial by using that a gradient is also a subgradient. For the “only if” direction it suffices to note that if [math]\nabla f(x)^{\top} (y-x) \lt 0[/math], then [math]f[/math] is locally decreasing around [math]x[/math] on the line to [math]y[/math] (simply consider [math]h(t) = f(x + t (y-x))[/math] and note that [math]h'(0) = \nabla f(x)^{\top} (y-x)[/math]).
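The condition is easy to check numerically. In the sketch below (ours), we minimize [math]f(x) = \|x - c\|_2^2[/math] over the unit Euclidean ball with [math]\|c\|_2 \gt 1[/math], where the minimizer is the projection [math]x^* = c/\|c\|_2[/math], and we verify [math]\nabla f(x^*)^{\top}(x^* - y) \leq 0[/math] on sampled points of the ball:

```python
import numpy as np

rng = np.random.default_rng(2)
c = np.array([2.0, 1.0, -2.0])               # ||c|| = 3, outside the unit ball
xstar = c / np.linalg.norm(c)                # argmin of ||x - c||^2 over the unit ball
grad = 2 * (xstar - c)                       # gradient of f at x*

ys = rng.normal(size=(2000, 3))
ys /= np.maximum(1.0, np.linalg.norm(ys, axis=1, keepdims=True))  # points of the ball
print(((ys - xstar) @ grad).min())           # >= 0, i.e. grad . (x* - y) <= 0 for all sampled y
```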

Black-box model

We now describe our first model of “input” for the objective function and the set of constraints. In the black-box model we assume that we have unlimited computational resources, the set of constraints [math]\cX[/math] is known, and the objective function [math]f: \cX \rightarrow \R[/math] is unknown but can be accessed through queries to oracles (a code sketch of both oracles follows the list below):

  • A zeroth order oracle takes as input a point [math]x \in \cX[/math] and outputs the value of [math]f[/math] at [math]x[/math].
  • A first order oracle takes as input a point [math]x \in \cX[/math] and outputs a subgradient of [math]f[/math] at [math]x[/math].
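A minimal code sketch of these two oracles (ours, for the concrete non-smooth convex function [math]f(x) = \|x\|_1[/math]) could look as follows; an algorithm in the black-box model only ever interacts with [math]f[/math] through such calls:

```python
import numpy as np

def zeroth_order_oracle(x):
    return np.abs(x).sum()      # the value of f(x) = ||x||_1 at x

def first_order_oracle(x):
    # A subgradient of the l1 norm at x; at coordinates where x(i) = 0
    # any value in [-1, 1] would be valid, and sign(0) = 0 is one choice.
    return np.sign(x)

x = np.array([1.0, -2.0, 0.0])
print(zeroth_order_oracle(x), first_order_oracle(x))
```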

In this context we are interested in understanding the oracle complexity of convex optimization, that is how many queries to the oracles are necessary and sufficient to find an [math]\epsilon[/math]-approximate minimum of a convex function. To show an upper bound on the oracle complexity we need to propose an algorithm, while lower bounds are obtained by information theoretic reasoning (we need to argue that if the number of queries is “too small” then we do not have enough information about the function to identify an [math]\epsilon[/math]-approximate solution).

From a mathematical point of view, the strength of the black-box model is that it will allow us to derive a complete theory of convex optimization, in the sense that we will obtain matching upper and lower bounds on the oracle complexity for various subclasses of interesting convex functions. While the model by itself does not limit our computational resources (for instance any operation on the constraint set [math]\cX[/math] is allowed) we will of course pay special attention to the algorithms' computational complexity (i.e., the number of elementary operations that the algorithm needs to do). We will also be interested in the situation where the set of constraints [math]\cX[/math] is unknown and can only be accessed through a separation oracle: given [math]x \in \R^n[/math], it outputs either that [math]x[/math] is in [math]\mathcal{X}[/math], or if [math]x \not\in \mathcal{X}[/math] then it outputs a separating hyperplane between [math]x[/math] and [math]\mathcal{X}[/math].

The black-box model was essentially developed in the early days of convex optimization (in the Seventies), with [8] being still an important reference for this theory (see also [9]). In recent years this model and the corresponding algorithms have regained a lot of popularity, essentially for two reasons:

  • It is possible to develop algorithms with dimension-free oracle complexity which is quite attractive for optimization problems in very high dimension.
  • Many algorithms developed in this model are robust to noise in the output of the oracles. This is especially interesting for stochastic optimization, and very relevant to machine learning applications. We will explore this in detail in Chapter.

Chapter, Chapter and Chapter are dedicated to the study of the black-box model (noisy oracles are discussed in Chapter). We do not cover the setting where only a zeroth order oracle is available, also called derivative free optimization, and we refer to [10][11] for further references on this.

Structured optimization

The black-box model described in the previous section seems extremely wasteful for the applications we discussed in Section Some convex optimization problems in machine learning. Consider for instance the LASSO objective: [math]x \mapsto \|W x - y\|_2^2 + \lambda \|x\|_1[/math]. We know this function globally, and assuming that we can only make local queries through oracles seems like an artificial constraint for the design of algorithms. Structured optimization tries to address this observation. Ultimately one would like to take into account the global structure of both [math]f[/math] and [math]\cX[/math] in order to propose the most efficient optimization procedure. An extremely powerful hammer for this task is the class of Interior Point Methods. We will describe this technique in Chapter alongside other more recent techniques such as FISTA or Mirror Prox.

We now briefly describe two classes of optimization problems for which we will be able to exploit the structure very efficiently; these are the LPs (Linear Programs) and SDPs (Semi-Definite Programs). [12] describes a more general class of Conic Programs but we will not go in that direction here.

The class LP consists of problems where [math]f(x) = c^{\top} x[/math] for some [math]c \in \R^n[/math], and [math]\mathcal{X} = \{x \in \R^n : A x \leq b \}[/math] for some [math]A \in \R^{m \times n}[/math] and [math]b \in \R^m[/math].

The class SDP consists of problems where the optimization variable is a symmetric matrix [math]X \in \R^{n \times n}[/math]. Let [math]\mathbb{S}^n[/math] be the space of [math]n\times n[/math] symmetric matrices (respectively [math]\mathbb{S}^n_+[/math] is the space of positive semi-definite matrices), and let [math]\langle \cdot, \cdot \rangle[/math] be the Frobenius inner product (recall that it can be written as [math]\langle A, B \rangle = \tr(A^{\top} B)[/math]). In the class SDP the problems are of the following form: [math]f(X) = \langle X, C \rangle[/math] for some [math]C \in \R^{n \times n}[/math], and [math]\mathcal{X} = \{X \in \mathbb{S}^n_+ : \langle X, A_i \rangle \leq b_i, i \in \{1, \dots, m\} \}[/math] for some [math]A_1, \dots, A_m \in \R^{n \times n}[/math] and [math]b \in \R^m[/math]. Note that the matrix completion problem described in Section Some convex optimization problems in machine learning is an example of an SDP.
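To make the LP class concrete, here is a toy instance solved with scipy.optimize.linprog (our choice of tool; the data is made up). The SDP class can be handled analogously with a modeling package such as cvxpy, as in the matrix completion sketch of the first section:

```python
import numpy as np
from scipy.optimize import linprog

# An LP in the form of the text: min c^T x  s.t.  A x <= b (plus x >= 0 here).
c = np.array([-1.0, -2.0])                          # i.e. maximize x1 + 2 x2
A = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
b = np.array([4.0, 3.0, 3.0])

res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None), (0, None)])
print(res.x, res.fun)                               # optimum (1, 3) with value -7
```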

Overview of the results and disclaimer

The overarching aim of this monograph is to present the main complexity theorems in convex optimization and the corresponding algorithms. We focus on five major results in convex optimization which give the overall structure of the text: the existence of efficient cutting-plane methods with optimal oracle complexity (Chapter), a complete characterization of the relation between first order oracle complexity and curvature in the objective function (Chapter), first order methods beyond Euclidean spaces (Chapter), non-black box methods (such as interior point methods) which can give a quadratic improvement in the number of iterations with respect to optimal black-box methods (Chapter), and finally noise robustness of first order methods (Chapter). Table can be used as a quick reference to the results proved in Chapter to Chapter, as well as some of the results of Chapter (this last chapter is the most relevant to machine learning, but its results are also slightly more specific, which makes them harder to summarize).

An important disclaimer is that the above selection leaves out methods derived from duality arguments, as well as the two most popular research avenues in convex optimization: (i) using convex optimization in non-convex settings, and (ii) practical large-scale algorithms. Entire books have been written on these topics, and new books have yet to be written on the impressive collection of new results obtained for both (i) and (ii) in the past five years.

A few of the blatant omissions regarding (i) include (a) the theory of submodular optimization (see [13]), (b) convex relaxations of combinatorial problems (a short example is given in Section Convex relaxation and randomized rounding), and (c) methods inspired by convex optimization for non-convex problems such as low-rank matrix factorization (see e.g. [14] and references therein), neural network optimization, etc.

With respect to (ii) the most glaring omissions include (a) heuristics (the only heuristic briefly discussed here is the non-linear conjugate gradient in Section Conjugate gradient), (b) methods for distributed systems, and (c) adaptivity to unknown parameters. Regarding (a) we refer to [15] where the most practical algorithms are discussed in great detail (e.g., quasi-Newton methods such as BFGS and L-BFGS, primal-dual interior point methods, etc.). The recent survey [16] discusses the alternating direction method of multipliers (ADMM), which is a popular method to address (b). Finally (c) is a subtle and important issue. In the entire monograph the emphasis is on presenting the algorithms and proofs in the simplest way, and thus for sake of convenience we assume that the relevant parameters describing the regularity and curvature of the objective function (Lipschitz constant, smoothness constant, strong convexity parameter) are known and can be used to tune the algorithm's own parameters. Line search is a powerful technique to replace the knowledge of these parameters and it is heavily used in practice, see again [15]. We observe however that from a theoretical point of view (c) is only a matter of logarithmic factors, as one can always run in parallel several copies of the algorithm with different guesses for the values of the parameters[Notes 1] (a schematic illustration is given below). Overall the attitude of this text with respect to (ii) is best summarized by a quote of Thomas Cover: “theory is the first term in the Taylor series of practice” [17].
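A schematic illustration of this parallel-guesses trick (entirely ours, with projected subgradient descent on [math]f(x) = \|x\|_1[/math] over the unit ball standing in for an algorithm whose step size needs a Lipschitz constant [math]L[/math]):

```python
import numpy as np

def projected_subgradient_descent(L_guess, T=200):
    # Step size tuned as if L_guess were the true Lipschitz constant.
    x, best = np.ones(3) / np.sqrt(3.0), np.inf
    for _ in range(T):
        best = min(best, np.abs(x).sum())                    # zeroth order oracle
        x = x - (1.0 / (L_guess * np.sqrt(T))) * np.sign(x)  # first order oracle
        x /= max(1.0, np.linalg.norm(x))                     # projection onto the ball
    return best

# Run copies with geometric guesses for L and keep the best value found;
# this loses only a logarithmic factor over knowing L exactly.
print(min(projected_subgradient_descent(2.0 ** k) for k in range(-3, 4)))
```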

Notation. We always denote by [math]x^*[/math] a point in [math]\cX[/math] such that [math]f(x^*) = \min_{x \in \cX} f(x)[/math] (note that the optimization problem under consideration will always be clear from the context). In particular we always assume that [math]x^*[/math] exists. For a vector [math]x \in \R^n[/math] we denote by [math]x(i)[/math] its [math]i^{th}[/math] coordinate. The dual of a norm [math]\|\cdot\|[/math] (defined later) will be denoted either [math]\|\cdot\|_*[/math] or [math]\|\cdot\|^*[/math] (depending on whether the norm already comes with a subscript). Other notation is standard (e.g., [math]\mI_n[/math] for the [math]n \times n[/math] identity matrix, [math]\succeq[/math] for the positive semi-definite order on matrices, etc).

General references

Bubeck, Sébastien (2015). "Convex Optimization: Algorithms and Complexity". arXiv:1405.4980 [math.OC].

References

  1. T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning. Springer, 2001.
  2. B. Schölkopf and A. Smola. Learning with Kernels. MIT Press, 2002.
  3. S. Shalev-Shwartz and S. Ben-David. Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press, 2014.
  4. R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B (Methodological), 58 (1): 267--288, 1996.
  5. E. Candès and B. Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9 (6): 717--772, 2009.
  6. R. Rockafellar. Convex Analysis. Princeton University Press, 1970.
  7. S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
  8. A. Nemirovski and D. Yudin. Problem Complexity and Method Efficiency in Optimization. Wiley Interscience, 1983.
  9. A. Nemirovski. Information-Based Complexity of Convex Programming. Lecture Notes, 1995.
  10. A. Conn, K. Scheinberg, and L. Vicente. Introduction to Derivative-Free Optimization. Society for Industrial and Applied Mathematics (SIAM), 2009.
  11. J.-Y. Audibert, S. Bubeck, and R. Munos. Bandit view on noisy optimization. In S. Sra, S. Nowozin, and S. Wright, editors, Optimization for Machine Learning. MIT Press, 2011.
  12. A. Ben-Tal and A. Nemirovski. Lectures on Modern Convex Optimization: Analysis, Algorithms, and Engineering Applications. Society for Industrial and Applied Mathematics (SIAM), 2001.
  13. F. Bach. Learning with submodular functions: A convex optimization perspective. Foundations and Trends in Machine Learning, 6 (2-3): 145--373, 2013.
  14. P. Jain, P. Netrapalli, and S. Sanghavi. Low-rank matrix completion using alternating minimization. In Proceedings of the Forty-Fifth Annual ACM Symposium on Theory of Computing, STOC '13, pages 665--674, 2013.
  15. J. Nocedal and S. J. Wright. Numerical Optimization. Springer, 2006.
  16. S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3 (1): 1--122, 2011.
  17. T. M. Cover. 1990 Shannon lecture. IEEE Information Theory Society Newsletter, 42 (4), 1992.

Notes

  1. Note that this trick does not work in the context of Chapter.