Introduction
The central objects of our study are convex functions and convex sets in [math]\R^n[/math].
A set [math]\cX \subset \R^n[/math] is said to be convex if it contains all of its segments, that is

[math]\forall (x,y,\gamma) \in \cX \times \cX \times [0,1], \quad (1-\gamma) x + \gamma y \in \cX .[/math]

A function [math]f : \cX \rightarrow \R[/math] is said to be convex if it always lies below its chords, that is

[math]\forall (x,y,\gamma) \in \cX \times \cX \times [0,1], \quad f((1-\gamma) x + \gamma y) \leq (1-\gamma) f(x) + \gamma f(y) .[/math]
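For example, a Euclidean ball [math]\{x \in \R^n : \|x - c\|_2 \leq r\}[/math] is convex, since [math](1-\gamma) x + \gamma y - c = (1-\gamma)(x-c) + \gamma (y-c)[/math] and the triangle inequality gives [math]\|(1-\gamma)(x-c) + \gamma (y-c)\|_2 \leq (1-\gamma)\|x-c\|_2 + \gamma \|y-c\|_2 \leq r[/math]. Similarly any norm is a convex function, directly from the triangle inequality and homogeneity.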
We are interested in algorithms that take as input a convex set [math]\cX[/math] and a convex function [math]f[/math] and output an approximate minimum of [math]f[/math] over [math]\cX[/math]. We write compactly the problem of finding the minimum of [math]f[/math] over [math]\cX[/math] as

[math]\min_{x \in \cX} f(x) .[/math]
In the following we will make more precise how the set of constraints [math]\cX[/math] and the objective function [math]f[/math] are specified to the algorithm. Before that we proceed to give a few important examples of convex optimization problems in machine learning.
Some convex optimization problems in machine learning
Many fundamental convex optimization problems in machine learning take the following form:

[math]\min_{x \in \R^n} \sum_{i=1}^m f_i(x) + \lambda \cR(x),[/math]
where the functions [math]f_1, \dots, f_m, \cR[/math] are convex and [math]\lambda \geq 0[/math] is a fixed parameter. The interpretation is that [math]f_i(x)[/math] represents the cost of using [math]x[/math] on the [math]i^{th}[/math] element of some data set, and [math]\cR(x)[/math] is a regularization term which enforces some "simplicity" in [math]x[/math]. We now discuss major instances of this formulation. In all cases one has a data set of the form [math](w_i, y_i) \in \R^n \times \cY, i=1, \dots, m[/math], and the cost function [math]f_i[/math] depends only on the pair [math](w_i, y_i)[/math]. We refer to [1][2][3] for more details on the origin of these important problems. The mere objective of this section is to expose the reader to a few concrete convex optimization problems which are routinely solved.
In classification one has [math]\cY = \{-1,1\}[/math]. Taking [math]f_i(x) = \max(0, 1- y_i x^{\top} w_i)[/math] (the so-called hinge loss) and [math]\cR(x) = \|x\|_2^2[/math] one obtains the SVM problem. On the other hand taking [math]f_i(x) = \log(1 + \exp(-y_i x^{\top} w_i) )[/math] (the logistic loss) and again [math]\cR(x) = \|x\|_2^2[/math] one obtains the (regularized) logistic regression problem. In regression one has [math]\cY = \R[/math]. Taking [math]f_i(x) = (x^{\top} w_i - y_i)^2[/math] and [math]\cR(x) = 0[/math] one obtains the vanilla least-squares problem, which can be rewritten in vector notation as

[math]\min_{x \in \R^n} \|W x - Y\|_2^2,[/math]
where [math]W \in \R^{m \times n}[/math] is the matrix with [math]w_i^{\top}[/math] on the [math]i^{th}[/math] row and [math]Y =(y_1, \dots, y_m)^{\top}[/math]. With [math]\cR(x) = \|x\|_2^2[/math] one obtains the ridge regression problem, while with [math]\cR(x) = \|x\|_1[/math] this is the LASSO problem [4].

Our last two examples are of a slightly different flavor. In particular the design variable [math]x[/math] is now best viewed as a matrix, and thus we denote it by a capital letter [math]X[/math]. The sparse inverse covariance estimation problem can be written as follows, given some empirical covariance matrix [math]Y[/math]:

[math]\min_{X \in \mathbb{S}^n_+} \tr(X Y) - \mathrm{logdet}(X) + \lambda \|X\|_1,[/math]

where [math]\|X\|_1 = \sum_{i,j} |X_{i,j}|[/math].
Intuitively the above problem is simply a regularized maximum likelihood estimator (under a Gaussian assumption). Finally we introduce the convex version of the matrix completion problem. Here our data set consists of observations of some of the entries of an unknown matrix [math]Y[/math], and we want to "complete" the unobserved entries of [math]Y[/math] in such a way that the resulting matrix is "simple" (in the sense that it has low rank). After some massaging (see [5]) the (convex) matrix completion problem can be formulated as follows:

[math]\min_{X \in \mathbb{S}^n_+} \tr(X) \quad \text{subject to} \quad X_{i,j} = Y_{i,j} \ \text{for} \ (i,j) \in \Omega,[/math]
where [math]\Omega \subset [n]^2[/math] and [math](Y_{i,j})_{(i,j) \in \Omega}[/math] are given.
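To make these objectives concrete, here is a minimal NumPy sketch (the function names and synthetic data are our own illustration, not part of the text) evaluating the SVM and LASSO objectives from the template above:

```python
import numpy as np

def svm_objective(x, W, y, lam):
    # sum_i max(0, 1 - y_i * w_i^T x) + lam * ||x||_2^2  (hinge loss + ridge regularizer)
    margins = 1.0 - y * (W @ x)
    return np.maximum(0.0, margins).sum() + lam * np.dot(x, x)

def lasso_objective(x, W, y, lam):
    # ||W x - y||_2^2 + lam * ||x||_1  (least squares + l1 regularizer)
    residual = W @ x - y
    return np.dot(residual, residual) + lam * np.abs(x).sum()

# tiny synthetic data set (the w_i are the rows of W)
rng = np.random.default_rng(0)
m, n = 50, 5
W = rng.standard_normal((m, n))
y_class = np.sign(W @ rng.standard_normal(n))              # labels in {-1, +1}
y_reg = W @ rng.standard_normal(n) + 0.1 * rng.standard_normal(m)

x = np.zeros(n)
print(svm_objective(x, W, y_class, lam=1.0))   # = m, since every margin equals 1 at x = 0
print(lasso_objective(x, W, y_reg, lam=1.0))   # = ||y_reg||_2^2 at x = 0
```

Both functions are convex in [math]x[/math], being sums of convex terms.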
Basic properties of convexity
A basic result about convex sets that we shall use extensively is the Separation Theorem.
Let [math]\mathcal{X} \subset \R^n[/math] be a closed convex set, and [math]x_0 \in \R^n \setminus \mathcal{X}[/math]. Then, there exists [math]w \in \R^n[/math] and [math]t \in \R[/math] such that

[math]w^{\top} x_0 \lt t, \ \text{and} \ \forall x \in \mathcal{X}, \ w^{\top} x \geq t .[/math]
Note that if [math]\mathcal{X}[/math] is not closed then one can only guarantee that [math]w^{\top} x_0 \leq w^{\top} x, \forall x \in \mathcal{X}[/math] (and [math]w \neq 0[/math]). This immediately implies the Supporting Hyperplane Theorem ([math]\partial \cX[/math] denotes the boundary of [math]\cX[/math], that is the closure without the interior):
Let [math]\mathcal{X} \subset \R^n[/math] be a convex set, and [math]x_0 \in \partial \mathcal{X}[/math]. Then, there exists [math]w \in \R^n, w \neq 0[/math], such that

[math]\forall x \in \mathcal{X}, \ w^{\top} x_0 \geq w^{\top} x .[/math]
We introduce now the key notion of subgradients.
Let [math]\mathcal{X} \subset \R^n[/math], and [math]f : \mathcal{X} \rightarrow \R[/math]. Then [math]g \in \R^n[/math] is a subgradient of [math]f[/math] at [math]x \in \mathcal{X}[/math] if for any [math]y \in \mathcal{X}[/math] one has

[math]f(x) - f(y) \leq g^{\top} (x - y) .[/math]

The set of subgradients of [math]f[/math] at [math]x[/math] is denoted [math]\partial f (x)[/math].
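A canonical one-dimensional example: for [math]f(x) = |x|[/math] on [math]\R[/math], the inequality [math]|x| - |y| \leq g(x-y)[/math] for all [math]y[/math] holds at [math]x = 0[/math] exactly when [math]g \in [-1,1][/math], so [math]\partial f(0) = [-1,1][/math], while for [math]x \neq 0[/math] one has [math]\partial f(x) = \{\mathrm{sign}(x)\}[/math].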
To put it differently, for any [math]x \in \cX[/math] and [math]g \in \partial f(x)[/math], [math]f[/math] is above the linear function [math]y \mapsto f(x) + g^{\top} (y-x)[/math]. The next result shows (essentially) that convex functions always admit subgradients.
Let [math]\mathcal{X} \subset \R^n[/math] be convex, and [math]f : \mathcal{X} \rightarrow \R[/math]. If [math]\forall x \in \mathcal{X}, \partial f(x) \neq \emptyset[/math] then [math]f[/math] is convex. Conversely if [math]f[/math] is convex then for any [math]x \in \mathrm{int}(\mathcal{X}), \partial f(x) \neq \emptyset[/math]. Furthermore if [math]f[/math] is convex and differentiable at [math]x[/math] then [math]\nabla f(x) \in \partial f(x)[/math].

Before going to the proof we recall the definition of the epigraph of a function [math]f : \mathcal{X} \rightarrow \R[/math]:

[math]\mathrm{epi}(f) = \{(x,t) \in \mathcal{X} \times \R : t \geq f(x)\} .[/math]

It is obvious that a function is convex if and only if its epigraph is a convex set.
Proof. The first claim is almost trivial: let [math]g \in \partial f((1-\gamma) x + \gamma y)[/math], then by definition one has

[math]f((1-\gamma) x + \gamma y) \leq f(x) + \gamma g^{\top} (y-x),[/math]

[math]f((1-\gamma) x + \gamma y) \leq f(y) + (1-\gamma) g^{\top} (x-y),[/math]

which clearly shows that [math]f[/math] is convex by adding the two (appropriately rescaled) inequalities.
Now let us prove that a convex function [math]f[/math] has subgradients in the interior of [math]\mathcal{X}[/math]. We build a subgradient by using a supporting hyperplane to the epigraph of the function. Let [math]x \in \mathcal{X}[/math]. Then clearly [math](x,f(x)) \in \partial \mathrm{epi}(f)[/math], and [math]\mathrm{epi}(f)[/math] is a convex set. Thus by using the Supporting Hyperplane Theorem, there exists [math](a,b) \in \R^n \times \R[/math] such that

[math]a^{\top} x + b f(x) \geq a^{\top} y + b t, \ \forall (y,t) \in \mathrm{epi}(f) .[/math]

Letting [math]t[/math] tend to [math]+\infty[/math] in this inequality shows that [math]b \leq 0[/math]. Now assume that [math]x \in \mathrm{int}(\mathcal{X})[/math]; then one cannot have [math]b = 0[/math], since otherwise [math]a \neq 0[/math] and the inequality [math]a^{\top} x \geq a^{\top} y, \forall y \in \mathcal{X}[/math], would fail at [math]y = x + \epsilon a \in \mathcal{X}[/math] for [math]\epsilon \gt 0[/math] small enough. Thus [math]b \lt 0[/math], and taking [math]t = f(y)[/math] and dividing the inequality by [math]-b[/math] one obtains, for all [math]y \in \mathcal{X}[/math], [math]f(y) \geq f(x) + \left(\tfrac{a}{-b}\right)^{\top} (y-x)[/math], that is [math]a/(-b) \in \partial f(x)[/math].
Finally let [math]f[/math] be a convex and differentiable function. Then by definition:

[math]f(y) \geq \frac{f((1-\gamma) x + \gamma y) - (1-\gamma) f(x)}{\gamma} = f(x) + \frac{f(x + \gamma (y-x)) - f(x)}{\gamma} \underset{\gamma \to 0}{\longrightarrow} f(x) + \nabla f(x)^{\top} (y-x),[/math]

which shows that [math]\nabla f(x) \in \partial f(x)[/math].
In several cases of interest the set of constraints can have an empty interior, in which case the above proposition does not yield any information. However it is easy to replace [math]\mathrm{int}(\cX)[/math] by [math]\mathrm{ri}(\cX)[/math] (the relative interior of [math]\cX[/math]), which is defined as the interior of [math]\cX[/math] when we view it as a subset of the affine subspace it generates. Other notions of convex analysis will prove to be useful in some parts of this text. In particular the notion of closed convex functions is convenient to exclude pathological cases: these are the convex functions with closed epigraphs. Sometimes it is also useful to consider the extension of a convex function [math]f: \cX \rightarrow \R[/math] to a function from [math]\R^n[/math] to [math]\overline{\R}[/math] by setting [math]f(x)= + \infty[/math] for [math]x \not\in \cX[/math]. In convex analysis one uses the term proper convex function to denote a convex function with values in [math]\R \cup \{+\infty\}[/math] such that there exists [math]x \in \R^n[/math] with [math]f(x) \lt +\infty[/math]. From now on all convex functions will be closed, and if necessary we consider also their proper extension. We refer the reader to [6] for an extensive discussion of these notions.
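As an illustration of the pathology being excluded (our own example, not from the text), consider the function on [math][0,1][/math] defined by [math]f(0) = 1[/math] and [math]f(x) = 0[/math] for [math]x \in (0,1][/math]: it is convex (it lies below its chords), yet its epigraph is not closed since the limit point [math](0,0)[/math] of the epigraph is excluded; such functions fail to be lower semicontinuous, which is exactly what closedness rules out.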
Why convexity?
The key to the algorithmic success in minimizing convex functions is that these functions exhibit a local to global phenomenon. We have already seen one instance of this in the proposition above, where we showed that [math]\nabla f(x) \in \partial f(x)[/math]: the gradient [math]\nabla f(x)[/math] contains a priori only local information about the function [math]f[/math] around [math]x[/math], while the subdifferential [math]\partial f(x)[/math] gives global information in the form of a linear lower bound on the entire function. Another instance of this local to global phenomenon is that local minima of convex functions are in fact global minima:

Proposition (Local minima are global minima). Let [math]f[/math] be convex. If [math]x[/math] is a local minimum of [math]f[/math] then [math]x[/math] is a global minimum of [math]f[/math]. Furthermore this happens if and only if [math]0 \in \partial f(x)[/math].

Proof. Clearly [math]0 \in \partial f(x)[/math] if and only if [math]x[/math] is a global minimum of [math]f[/math]. Now assume that [math]x[/math] is a local minimum of [math]f[/math]. Then for [math]\gamma[/math] small enough one has for any [math]y[/math],

[math]f(x) \leq f\big((1-\gamma) x + \gamma y\big) \leq (1-\gamma) f(x) + \gamma f(y),[/math]
which implies [math]f(x) \leq f(y)[/math] and thus [math]x[/math] is a global minimum of [math]f[/math].

The nice behavior of convex functions will allow for very fast algorithms to optimize them. This alone would not be sufficient to justify the importance of this class of functions (after all constant functions are pretty easy to optimize). However it turns out that surprisingly many optimization problems admit a convex (re)formulation. The excellent book [7] describes in great detail the various methods that one can employ to uncover the convex aspects of an optimization problem. We will not repeat these arguments here, but we have already seen that many famous machine learning problems (SVM, ridge regression, logistic regression, LASSO, sparse covariance estimation, and matrix completion) are formulated as convex problems.

We conclude this section with a simple extension of the optimality condition "[math]0 \in \partial f(x)[/math]" to the case of constrained optimization. We state this result in the case of a differentiable function for the sake of simplicity.

Proposition (First order optimality condition). Let [math]f[/math] be convex and [math]\cX[/math] a closed convex set on which [math]f[/math] is differentiable. Then

[math]x^* \in \mathrm{argmin}_{x \in \cX} f(x)[/math]

if and only if one has

[math]\nabla f(x^*)^{\top} (x^* - y) \leq 0, \ \forall y \in \cX .[/math]

Proof. The "if" direction is trivial by using that a gradient is also a subgradient. For the "only if" direction it suffices to note that if [math]\nabla f(x)^{\top} (y-x) \lt 0[/math], then [math]f[/math] is locally decreasing around [math]x[/math] on the line to [math]y[/math] (simply consider [math]h(t) = f(x + t (y-x))[/math] and note that [math]h'(0) = \nabla f(x)^{\top} (y-x)[/math]).
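As a worked illustration of this proposition, apply it to [math]f(x) = \frac12 \|x - y\|_2^2[/math] for a fixed [math]y \in \R^n[/math], whose minimizer over [math]\cX[/math] is the Euclidean projection [math]\Pi_{\cX}(y)[/math]; since [math]\nabla f(x) = x - y[/math], one obtains the standard characterization of the projection:

[math](\Pi_{\cX}(y) - y)^{\top} (\Pi_{\cX}(y) - x) \leq 0, \ \forall x \in \cX .[/math]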
Black-box model
We now describe our first model of "input" for the objective function and the set of constraints. In the black-box model we assume that we have unlimited computational resources, the constraint set [math]\cX[/math] is known, and the objective function [math]f: \cX \rightarrow \R[/math] is unknown but can be accessed through queries to oracles:
- A zeroth order oracle takes as input a point [math]x \in \cX[/math] and outputs the value of [math]f[/math] at [math]x[/math].
- A first order oracle takes as input a point [math]x \in \cX[/math] and outputs a subgradient of [math]f[/math] at [math]x[/math] (a minimal sketch of both oracles is given below).
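As an illustration, here is a minimal NumPy sketch of both oracles for the LASSO-type objective [math]f(x) = \|Wx - Y\|_2^2 + \lambda \|x\|_1[/math], together with a separation oracle for a Euclidean ball (the code is our own, not part of the text; the subgradient chosen below is one valid element of [math]\partial f(x)[/math]):

```python
import numpy as np

def zeroth_order_oracle(x, W, Y, lam):
    # value of f(x) = ||W x - Y||_2^2 + lam * ||x||_1
    r = W @ x - Y
    return np.dot(r, r) + lam * np.abs(x).sum()

def first_order_oracle(x, W, Y, lam):
    # one subgradient of f at x: the quadratic part is differentiable, and for
    # the l1 part we pick sign(x_i) coordinate-wise; this is valid since
    # np.sign(0) = 0 and 0 lies in the subdifferential [-1, 1] of |.| at 0
    return 2.0 * W.T @ (W @ x - Y) + lam * np.sign(x)

def separation_oracle_ball(x, radius=1.0):
    # separation oracle for the Euclidean ball {u : ||u||_2 <= radius}:
    # either reports membership, or returns (w, t) with w^T u <= t on the
    # ball (by Cauchy-Schwarz, since ||w||_2 = 1) while w^T x = ||x||_2 > t
    norm = np.linalg.norm(x)
    if norm <= radius:
        return None
    return x / norm, radius
```

An algorithm operating in this model only ever sees the returned values, subgradients, and separating hyperplanes, never the data [math](W, Y)[/math] itself.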
In this context we are interested in understanding the oracle complexity of convex optimization, that is how many queries to the oracles are necessary and sufficient to find an [math]\epsilon[/math]-approximate minimum of a convex function. To show an upper bound on the oracle complexity we need to propose an algorithm, while lower bounds are obtained by information-theoretic reasoning (we need to argue that if the number of queries is "too small" then we don't have enough information about the function to identify an [math]\epsilon[/math]-approximate solution).

From a mathematical point of view, the strength of the black-box model is that it will allow us to derive a complete theory of convex optimization, in the sense that we will obtain matching upper and lower bounds on the oracle complexity for various subclasses of interesting convex functions. While the model by itself does not limit our computational resources (for instance any operation on the constraint set [math]\cX[/math] is allowed) we will of course pay special attention to the algorithms' computational complexity (i.e., the number of elementary operations that the algorithm needs to do). We will also be interested in the situation where the constraint set [math]\cX[/math] is unknown and can only be accessed through a separation oracle: given [math]x \in \R^n[/math], it outputs either that [math]x[/math] is in [math]\mathcal{X}[/math], or if [math]x \not\in \mathcal{X}[/math] then it outputs a separating hyperplane between [math]x[/math] and [math]\mathcal{X}[/math].

The black-box model was essentially developed in the early days of convex optimization (in the Seventies) with [8] being still an important reference for this theory (see also [9]). In recent years this model and the corresponding algorithms have regained a lot of popularity, essentially for two reasons:
- It is possible to develop algorithms with dimension-free oracle complexity, which is quite attractive for optimization problems in very high dimension.
- Many algorithms developed in this model are robust to noise in the output of the oracles. This is especially interesting for stochastic optimization, and very relevant to machine learning applications. We will explore this in detail in Chapter.
Chapter, Chapter and Chapter are dedicated to the study of the black-box model (noisy oracles are discussed in Chapter). We do not cover the setting where only a zeroth order oracle is available, also called derivative free optimization, and we refer to [10][11] for further references on this.
Structured optimization
The black-box model described in the previous section seems extremely wasteful for the applications we discussed in Section Some convex optimization problems in machine learning. Consider for instance the LASSO objective: [math]x \mapsto \|W x - Y\|_2^2 + \lambda \|x\|_1[/math]. We know this function globally, and assuming that we can only make local queries through oracles seems like an artificial constraint for the design of algorithms. Structured optimization tries to address this observation. Ultimately one would like to take into account the global structure of both [math]f[/math] and [math]\cX[/math] in order to propose the most efficient optimization procedure. An extremely powerful hammer for this task is the class of Interior Point Methods. We will describe this technique in Chapter alongside other more recent techniques such as FISTA or Mirror Prox.

We briefly describe now two classes of optimization problems for which we will be able to exploit the structure very efficiently: LPs (Linear Programs) and SDPs (Semi-Definite Programs). [12] describes a more general class of Conic Programs but we will not go in that direction here.

The class LP consists of problems where [math]f(x) = c^{\top} x[/math] for some [math]c \in \R^n[/math], and [math]\mathcal{X} = \{x \in \R^n : A x \leq b \}[/math] for some [math]A \in \R^{m \times n}[/math] and [math]b \in \R^m[/math].

The class SDP consists of problems where the optimization variable is a symmetric matrix [math]X \in \R^{n \times n}[/math]. Let [math]\mathbb{S}^n[/math] be the space of [math]n\times n[/math] symmetric matrices (respectively [math]\mathbb{S}^n_+[/math] is the space of positive semi-definite matrices), and let [math]\langle \cdot, \cdot \rangle[/math] be the Frobenius inner product (recall that it can be written as [math]\langle A, B \rangle = \tr(A^{\top} B)[/math]). In the class SDP the problems are of the following form: [math]f(X) = \langle X, C \rangle[/math] for some [math]C \in \R^{n \times n}[/math], and [math]\mathcal{X} = \{X \in \mathbb{S}^n_+ : \langle X, A_i \rangle \leq b_i, i \in \{1, \dots, m\} \}[/math] for some [math]A_1, \dots, A_m \in \R^{n \times n}[/math] and [math]b \in \R^m[/math]. Note that the matrix completion problem described in Section Some convex optimization problems in machine learning is an example of an SDP.
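For concreteness, a small LP in exactly the above form can be solved off the shelf; the sketch below (our own, using SciPy's linprog) minimizes [math]c^{\top} x[/math] over [math]\{x \in \R^2 : Ax \leq b, x \geq 0\}[/math]:

```python
import numpy as np
from scipy.optimize import linprog

# min c^T x  subject to  A x <= b  (linprog additionally imposes x >= 0 by default)
c = np.array([-1.0, -2.0])          # equivalent to maximizing x1 + 2*x2
A = np.array([[-1.0, 1.0],
              [ 1.0, 1.0]])
b = np.array([1.0, 3.0])

res = linprog(c, A_ub=A, b_ub=b)
print(res.x, res.fun)               # optimum at x = (1, 2) with value -5
```

An SDP requires a conic solver rather than linprog, but the input data [math](C, A_1, \dots, A_m, b)[/math] plays exactly the same role.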
Overview of the results and disclaimer
The overarching aim of this monograph is to present the main complexity theorems in convex optimization and the corresponding algorithms. We focus on five major results in convex optimization which give the overall structure of the text: the existence of efficient cutting-plane methods with optimal oracle complexity (Chapter), a complete characterization of the relation between first order oracle complexity and curvature in the objective function (Chapter), first order methods beyond Euclidean spaces (Chapter), the fact that non-black-box methods (such as interior point methods) can give a quadratic improvement in the number of iterations with respect to optimal black-box methods (Chapter), and finally the noise robustness of first order methods (Chapter). Table can be used as a quick reference to the results proved in Chapter to Chapter, as well as some of the results of Chapter (this last chapter is the most relevant to machine learning, but the results are also slightly more specific, which makes them harder to summarize).
An important disclaimer is that the above selection leaves out methods derived from duality arguments, as well as the two most popular research avenues in convex optimization: (i) using convex optimization in non-convex settings, and (ii) practical large-scale algorithms. Entire books have been written on these topics, and new books have yet to be written on the impressive collection of new results obtained for both (i) and (ii) in the past five years.
A few of the blatant omissions regarding (i) include (a) the theory of submodular optimization (see [13]), (b) convex relaxations of combinatorial problems (a short example is given in Section Convex relaxation and randomized rounding), and (c) methods inspired by convex optimization for non-convex problems such as low-rank matrix factorization (see e.g. [14] and references therein), neural network optimization, etc.
With respect to (ii) the most glaring omissions include (a) heuristics (the only heuristic briefly discussed here is the non-linear conjugate gradient in Section Conjugate gradient), (b) methods for distributed systems, and (c) adaptivity to unknown parameters. Regarding (a) we refer to [15] where the most practical algorithms are discussed in great detail (e.g., quasi-Newton methods such as BFGS and L-BFGS, primal-dual interior point methods, etc.). The recent survey [16] discusses the alternating direction method of multipliers (ADMM), which is a popular method to address (b). Finally (c) is a subtle and important issue. In the entire monograph the emphasis is on presenting the algorithms and proofs in the simplest way, and thus for the sake of convenience we assume that the relevant parameters describing the regularity and curvature of the objective function (Lipschitz constant, smoothness constant, strong convexity parameter) are known and can be used to tune the algorithm's own parameters. Line search is a powerful technique to replace the knowledge of these parameters and it is heavily used in practice, see again [15]. We observe however that from a theoretical point of view (c) is only a matter of logarithmic factors as one can always run in parallel several copies of the algorithm with different guesses for the values of the parameters[Notes 1]. Overall the attitude of this text with respect to (ii) is best summarized by a quote of Thomas Cover: "theory is the first term in the Taylor series of practice" [17].
Notation. We always denote by [math]x^*[/math] a point in [math]\cX[/math] such that [math]f(x^*) = \min_{x \in \cX} f(x)[/math] (note that the optimization problem under consideration will always be clear from the context). In particular we always assume that [math]x^*[/math] exists. For a vector [math]x \in \R^n[/math] we denote by [math]x(i)[/math] its [math]i^{th}[/math] coordinate. The dual of a norm [math]\|\cdot\|[/math] (defined later) will be denoted either [math]\|\cdot\|_*[/math] or [math]\|\cdot\|^*[/math] (depending on whether the norm already comes with a subscript). Other notation is standard (e.g., [math]\mI_n[/math] for the [math]n \times n[/math] identity matrix, [math]\succeq[/math] for the positive semi-definite order on matrices, etc).
General references
Bubeck, Sébastien (2015). "Convex Optimization: Algorithms and Complexity". arXiv:1405.4980 [math.OC].
References
1. T. Hastie, R. Tibshirani and J. Friedman. The Elements of Statistical Learning. Springer, 2001.
2. B. Schölkopf and A. Smola. Learning with Kernels. MIT Press, 2002.
3. S. Shalev-Shwartz and S. Ben-David. Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press, 2014.
4. R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B, 58(1):267–288, 1996.
5. E. Candès and B. Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9(6):717–772, 2009.
6. R.T. Rockafellar. Convex Analysis. Princeton University Press, 1970.
7. S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
8. A. Nemirovski and D. Yudin. Problem Complexity and Method Efficiency in Optimization. Wiley Interscience, 1983.
9. A. Nemirovski. Information-based complexity of convex programming. Lecture notes, 1995.
10. A. Conn, K. Scheinberg and L. Vicente. Introduction to Derivative-Free Optimization. SIAM, 2009.
11. J.-Y. Audibert, S. Bubeck and R. Munos. Bandit view on noisy optimization. In Optimization for Machine Learning, MIT Press, 2011.
12. A. Ben-Tal and A. Nemirovski. Lectures on Modern Convex Optimization. SIAM, 2001.
13. F. Bach. Learning with submodular functions: a convex optimization perspective. Foundations and Trends in Machine Learning, 6(2-3):145–373, 2013.
14. P. Jain, P. Netrapalli and S. Sanghavi. Low-rank matrix completion using alternating minimization. In Proceedings of the 45th Annual ACM Symposium on Theory of Computing (STOC), 2013.
15. J. Nocedal and S. Wright. Numerical Optimization. Springer, 2006.
16. S. Boyd, N. Parikh, E. Chu, B. Peleato and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1):1–122, 2011.
17. T. Cover. 1990 Shannon lecture. IEEE Information Theory Society Newsletter, 42(4), 1992.
Notes
- Note that this trick does not work in the context of Chapter.