  <div class="d-none">
  <div class="d-none">
<math>
<math>
\(
% Generic syms
% Generic syms
\newcommand\defeq{:=}
\newcommand\defeq{:=}


* <span class="mw-gls" data-name ="data">data</span> as collections of individual <span class="mw-gls mw-gls-first" data-name ="datapoint">data point</span>s that are characterized by <span class="mw-gls mw-gls-first" data-name ="features">features</span> (see Section [[#sec_feature_space |Features]]) and <span class="mw-gls mw-gls-first" data-name ="label">label</span>s (see Section [[#sec_labels | Labels]])  
* <span class="mw-gls" data-name ="data">data</span> as collections of individual <span class="mw-gls mw-gls-first" data-name ="datapoint">data point</span>s that are characterized by <span class="mw-gls mw-gls-first" data-name ="features">features</span> (see Section [[#sec_feature_space |Features]]) and <span class="mw-gls mw-gls-first" data-name ="label">label</span>s (see Section [[#sec_labels | Labels]])  
* a <span class="mw-gls" data-name ="model">model</span> or <span class="mw-gls mw-gls-first" data-name ="hypospace">hypothesis space</span> that consists of computationally feasible hypothesis maps from feature space to <span class="mw-gls mw-gls-first" data-name ="labelspace">label space</span> (see Section [[guide:B85f6bf6f2#sec_hypo_space | The Model ]])  
* a <span class="mw-gls" data-name ="model">model</span> or <span class="mw-gls mw-gls-first" data-name ="hypospace">hypothesis space</span> that consists of computationally feasible hypothesis maps from feature space to <span class="mw-gls mw-gls-first" data-name ="labelspace">label space</span> (see Section [[#sec_hypo_space | The Model ]])  
* a <span class="mw-gls" data-name ="lossfunc">loss function</span> (see Section [[guide:B85f6bf6f2#sec_lossfct | The Loss ]]) to measure the quality of a hypothesis map.  
* a <span class="mw-gls" data-name ="lossfunc">loss function</span> (see Section [[#sec_lossfct | The Loss ]]) to measure the quality of a hypothesis map.  


An ML problem involves specific design choices for the <span class="mw-gls" data-name ="datapoint">data point</span>s, their features and labels,
the <span class="mw-gls" data-name ="hypospace">hypothesis space</span>, and the <span class="mw-gls" data-name ="lossfunc">loss function</span> used to measure the quality of a particular hypothesis.
Similar to ML problems (or applications), we can also characterize ML methods as combinations of these three components. This chapter discusses in some depth each of the above ML components and their combination within some widely-used ML methods <ref name="JMLR:v12:pedregosa11a">F. Pedregosa. Scikit-learn: Machine learning in Python. ''Journal of Machine Learning Research'' 12(85):2825--2830, 2011</ref>.


We detail in Chapter [[guide:013ef4b5cd | The Landscape of ML ]] how some of the most popular ML methods,
including <span class="mw-gls mw-gls-first" data-name ="linreg">linear regression</span> (see Section [[guide:013ef4b5cd#sec_lin_reg | Linear Regression ]]) as well as deep learning methods (see Section [[guide:013ef4b5cd#sec_deep_learning | Deep Learning ]]), are obtained by specific design choices for the three components. Chapter [[guide:2c0f621d22 | Empirical Risk Minimization ]] will then introduce <span class="mw-gls" data-name ="erm">ERM</span> as a main principle for how to operationally combine the individual ML components. Within the <span class="mw-gls" data-name ="erm">ERM</span> principle, ML problems become optimization problems and ML methods become optimization methods.


==<span id="sec_the_data"/>The Data==


''' Data as Collections of <span class="mw-gls mw-gls-first" data-name ="datapoint">Data point</span>s.''' Perhaps the most important component of any ML problem
<div class="d-flex justify-content-center">
<div class="d-flex justify-content-center">
<span id="fig:image "></span>
<span id="fig:image "></span>
[[File:BergSee.jpg | 500px | thumb | A snapshot taken at the beginning of a mountain hike. ]]
[[File:BergSee.jpg | 300px | thumb | A snapshot taken at the beginning of a mountain hike. ]]
</div>
</div>


Consider a <span class="mw-gls" data-name ="datapoint">data point</span> that represents a vaccine. A full characterization of such a <span class="mw-gls" data-name ="datapoint">data point</span> would require specifying its chemical composition down to the level of molecules and atoms. Moreover, there are properties of a vaccine that depend on the patient who received the vaccine.


We find it useful to distinguish between two different groups of properties of a <span class="mw-gls" data-name ="datapoint">data point</span>. The first group of properties is referred to as <span class="mw-gls" data-name ="features">features</span> and the second group of properties is referred to as <span class="mw-gls" data-name ="label">label</span>s. Roughly speaking, features are low-level properties of a <span class="mw-gls" data-name ="datapoint">data point</span> that can be measured or computed easily in an automated fashion. In contrast, labels are high-level properties of a <span class="mw-gls" data-name ="datapoint">data point</span> that represent some quantity of interest. Determining the label value of a <span class="mw-gls" data-name ="datapoint">data point</span> often requires human labour, e.g., a domain expert who has to examine the <span class="mw-gls" data-name ="datapoint">data point</span>. Some widely used synonyms for <span class="mw-gls" data-name ="features">features</span> are “covariate”, “explanatory variable”, “independent variable”, “input (variable)”, “predictor (variable)” or “regressor” <ref name="Gujarati2021">D. Gujarati and D. Porter. ''Basic Econometrics'' Mc-Graw Hill, 2009</ref><ref name="Dodge2003">Y. Dodge. ''The Oxford Dictionary of Statistical Terms'' Oxford University Press, 2003</ref><ref name="Everitt2022">B. Everitt. ''Cambridge Dictionary of Statistics'' Cambridge University Press, 2002</ref>. Some widely used synonyms for the label of a <span class="mw-gls" data-name ="datapoint">data point</span> are “response variable”, “output variable” or “target” <ref name="Gujarati2021"/><ref name="Dodge2003"/><ref name="Everitt2022"/>.


We will discuss the concepts of features and labels in somewhat more detail in Sections [[#sec_feature_space |Features]] and [[#sec_labels | Labels]].
Assume we have a list of <span class="mw-gls" data-name ="datapoint">data point</span>s, each of which is characterized by several properties that could in principle be measured easily (by sensors). These properties would be natural first candidates for being used as features of the <span class="mw-gls" data-name ="datapoint">data point</span>s. However, some of these properties are unknown (missing) for a small set of <span class="mw-gls" data-name ="datapoint">data point</span>s
(e.g., due to broken sensors). We could then define the properties which are missing for some <span class="mw-gls" data-name ="datapoint">data point</span>s as labels and try to predict these labels using the remaining properties (which are known for all <span class="mw-gls" data-name ="datapoint">data point</span>s)
as features. The task of determining missing values of properties that could in principle be measured easily is referred to as imputation <ref name="Abayomi2008DiagnosticsFM">K. Abayomi, A. Gelman, and M. A. Levy. Diagnostics for multivariate imputations. ''Journal of The Royal Statistical Society Series C-applied Statistics'' 57:273--291, 2008</ref>.
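
As a rough illustration of imputation (a sketch only, not taken from the cited reference), the following Python snippet replaces missing feature values with the mean of the corresponding feature. It assumes the NumPy and scikit-learn <ref name="JMLR:v12:pedregosa11a"/> libraries are available; the feature matrix is synthetic.

<syntaxhighlight lang="python">
import numpy as np
from sklearn.impute import SimpleImputer

# synthetic feature matrix: four data points, three features; np.nan marks missing values
X = np.array([[1.0, 2.0, np.nan],
              [3.0, np.nan, 6.0],
              [5.0, 4.0, 2.0],
              [np.nan, 8.0, 4.0]])

# replace each missing value by the mean of the corresponding feature (column)
imputer = SimpleImputer(strategy="mean")
X_imputed = imputer.fit_transform(X)
print(X_imputed)
</syntaxhighlight>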


Missing data might also arise in image processing applications. Consider <span class="mw-gls" data-name ="datapoint">data point</span>s being images (snapshots)
<span class="mw-gls" data-name ="datapoint">data point</span>s representing audio recording (of a given duration) we might use the signal amplitudes at regular sampling instants (e.g., using sampling frequency <math>44</math> kHz) as features.  For <span class="mw-gls" data-name ="datapoint">data point</span>s representing images it seems natural to use the colour (red, green and blue) intensity levels of each pixel as a feature (see Figure [[#fig_snapshot_pixels|fig_snapshot_pixels]]).
<span class="mw-gls" data-name ="datapoint">data point</span>s representing audio recording (of a given duration) we might use the signal amplitudes at regular sampling instants (e.g., using sampling frequency <math>44</math> kHz) as features.  For <span class="mw-gls" data-name ="datapoint">data point</span>s representing images it seems natural to use the colour (red, green and blue) intensity levels of each pixel as a feature (see Figure [[#fig_snapshot_pixels|fig_snapshot_pixels]]).


The feature construction for images depicted in Figure [[#fig_snapshot_pixels|fig_snapshot_pixels]] can be extended to other types of <span class="mw-gls" data-name ="datapoint">data point</span>s as long as they can be visualized efficiently <ref name="Friendly:06:hbook">M. Friendly. A brief history of data visualization. In C. Chen, W. Härdle, and A. Unwin, editors, ''Handbook of Computational Statistics: Data Visualization'' volume III. Springer-Verlag, 2006</ref>.
Audio recordings are typically available as a sequence of signal amplitudes <math>a_{\timeidx}</math> collected regularly at time instants <math>\timeidx=1,\ldots,\featuredim</math> with a sampling frequency of <math>\approx  44</math> kHz. From a signal processing perspective, it seems natural to directly use the signal amplitudes as features, <math>\feature_{\featureidx} = a_{\featureidx}</math> for <math>\featureidx=1,\ldots,\featurelen</math>. However, another choice for the features would be the pixel RGB values of some visualization of the audio recording.


</div>


Figure [[#fig_visualization_audio|fig_visualization_audio]] depicts two possible visualizations of an audio signal. The first visualization is obtained from a line plot of the signal amplitudes as a function of time <math>\timeidx</math>. Another visualization of an audio recording is obtained from an intensity plot of its <span class="mw-gls" data-name ="spectogram">spectrogram</span> <ref name="TimeFrequencyAnalysisBoashash">B. Boashash, editor. ''Time Frequency Signal Analysis and Processing: A Comprehensive Reference'' Elsevier, Amsterdam, The Netherlands, 2003</ref><ref name="MallatBook">S. G. Mallat. ''A Wavelet Tour of Signal Processing -- The Sparse Way'' Academic Press, San Diego, CA, 3 edition, 2009</ref>. We can then use the pixel RGB intensities of these visualizations as the features for an audio recording. Using this trick we can transform any ML method for image data into an ML method for audio data. We can, for example, apply ML methods for image segmentation to the <span class="mw-gls mw-gls-first" data-name ="scatterplot">scatterplot</span> of a data set in order to <span class="mw-gls mw-gls-first" data-name ="cluster">cluster</span> the <span class="mw-gls" data-name ="dataset">dataset</span> (see Chapter [[guide:1a7f020d42 | Clustering ]]).
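
A small sketch of this idea (assuming the NumPy and SciPy libraries; the audio signal is synthetic) computes a spectrogram and flattens its intensity values into a feature vector:

<syntaxhighlight lang="python">
import numpy as np
from scipy.signal import spectrogram

fs = 44100                        # sampling frequency of roughly 44 kHz
t = np.arange(3 * fs) / fs        # three seconds worth of sampling instants
a = np.sin(2 * np.pi * 440 * t)   # synthetic recording: a pure 440 Hz tone

# the intensity values of the spectrogram ("image") can serve as features of the recording
freqs, times, Sxx = spectrogram(a, fs=fs)
x = Sxx.reshape(-1)               # flatten the spectrogram into a feature vector
print(Sxx.shape, x.shape)
</syntaxhighlight>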


<div class="d-flex justify-content-center">
<div class="d-flex justify-content-center">
Line 288: Line 291:
The feature space is a design choice as it depends on what properties of a <span class="mw-gls" data-name ="datapoint">data point</span> we use as its features. This design choice should take into account the statistical properties of the data as well as the available computational infrastructure. If the computational infrastructure allows for efficient numerical linear algebra, then using <math>\featurespace = \mathbb{R}^{\featurelen}</math> might be a good choice.


The <span class="mw-gls mw-gls-first" data-name ="euclidspace">Euclidean space</span> <math>\mathbb{R}^{\featuredim}</math> is an example of a feature space with a rich geometric and algebraic structure <ref name="RudinBookPrinciplesMatheAnalysis"/>.  
The <span class="mw-gls mw-gls-first" data-name ="euclidspace">Euclidean space</span> <math>\mathbb{R}^{\featuredim}</math> is an example of a feature space with a rich geometric and algebraic structure <ref name="RudinBookPrinciplesMatheAnalysis">W. Rudin. ''Principles of Mathematical Analysis'' McGraw-Hill, New York, 3 edition, 1976</ref>.  
The algebraic structure of <math>\mathbb{R}^{\featuredim}</math> is defined by vector addition and multiplication of vectors with scalars. The geometric structure of <math>\mathbb{R}^{\featuredim}</math> is obtained by the Euclidean norm as a measure for the distance between two elements of <math>\mathbb{R}^{\featuredim}</math>.  
The algebraic structure of <math>\mathbb{R}^{\featuredim}</math> is defined by vector addition and multiplication of vectors with scalars. The geometric structure of <math>\mathbb{R}^{\featuredim}</math> is obtained by the Euclidean norm as a measure for the distance between two elements of <math>\mathbb{R}^{\featuredim}</math>.  
The algebraic and geometric structure of <math>\mathbb{R}^{\featuredim}</math> often enables an efficient search over <math>\mathbb{R}^{\featuredim}</math> to find elements with desired properties. Chapter [[guide:2c0f621d22#sec_ERM_lin_reg | ERM for Linear Regression ]] discusses examples of such search problems in the context of learning an optimal hypothesis.  
The algebraic and geometric structure of <math>\mathbb{R}^{\featuredim}</math> often enables an efficient search over <math>\mathbb{R}^{\featuredim}</math> to find elements with desired properties. Chapter [[guide:2c0f621d22#sec_ERM_lin_reg | ERM for Linear Regression ]] discusses examples of such search problems in the context of learning an optimal hypothesis.  
Modern information technology, including smartphones or wearables, allows us to measure a huge number of properties about <span class="mw-gls" data-name ="datapoint">data point</span>s in many application domains. Consider a <span class="mw-gls" data-name ="datapoint">data point</span> representing the book author “Alex Jung”. Alex uses a smartphone to take roughly five snapshots per day (sometimes more, e.g., during a mountain hike), resulting in more than <math>1000</math> snapshots per year. Each snapshot contains around <math>10^{6}</math> pixels whose greyscale levels we can use as features of the <span class="mw-gls" data-name ="datapoint">data point</span>. This results in more than <math>10^{9}</math> features (per year!).


As indicated above, many important ML applications involve <span class="mw-gls" data-name ="datapoint">data point</span>s represented by very long feature vectors. To process such high-dimensional data, modern ML methods rely on concepts from high-dimensional statistics <ref name="BuhlGeerBook">P. Bühlmann and S. van de Geer. ''Statistics for High-Dimensional Data'' Springer, New York, 2011</ref><ref name="Wain2019">M. Wainwright. ''High-Dimensional Statistics: A Non-Asymptotic Viewpoint'' Cambridge: Cambridge University Press, 2019</ref>. One such concept from high-dimensional statistics is the notion of sparsity. Section [[guide:013ef4b5cd#sec_lasso | The Lasso ]] discusses methods that exploit the tendency of high-dimensional <span class="mw-gls" data-name ="datapoint">data point</span>s, which are characterized by a large number <math>\featuredim</math> of features, to concentrate near low-dimensional subspaces in the <span class="mw-gls" data-name ="featurespace">feature space</span> <ref name="VidalMag">R. Vidal. Subspace clustering. ''IEEE Signal Processing Magazine'' 52, March 2011</ref>.
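
As a rough preview of such sparsity-exploiting methods (a sketch only; the scikit-learn Lasso implementation, the synthetic data, and the regularization parameter are assumptions), consider a dataset with many more features than data points, of which only three features are relevant:

<syntaxhighlight lang="python">
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
m, n = 50, 200                                   # far more features than data points
X = rng.standard_normal((m, n))
w_true = np.zeros(n)
w_true[:3] = [1.0, -2.0, 3.0]                    # only three relevant features
y = X @ w_true + 0.1 * rng.standard_normal(m)

# the Lasso favours sparse weight vectors and tends to recover the few relevant features
lasso = Lasso(alpha=0.1).fit(X, y)
print(np.flatnonzero(lasso.coef_))               # indices of the non-zero learned weights
</syntaxhighlight>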


At first sight it might seem that “the more features the better” since using more features might convey more relevant information to achieve the overall goal. However, as we discuss in Chapter [[guide:50be9327aa | Regularization ]], it can be detrimental for the performance of ML methods to use an excessive amount of (irrelevant) features.
Besides the available computational infrastructure, the statistical properties of datasets must also be taken into account when choosing the feature space. The linear algebraic structure of <math>\mathbb{R}^{\featuredim}</math> allows us to efficiently represent and approximate datasets that are well aligned along linear subspaces. Section [[guide:4f25e79970#sec_pca | Principal Component Analysis]] discusses a basic method to optimally approximate datasets by linear subspaces of a given dimension. The geometric structure of <math>\mathbb{R}^{\featuredim}</math> is also used in Chapter [[guide:1a7f020d42 | Clustering ]] to decompose a <span class="mw-gls" data-name ="dataset">dataset</span> into a few groups or clusters that consist of similar <span class="mw-gls" data-name ="datapoint">data point</span>s.


Throughout this book we will mainly use the feature space <math>\mathbb{R}^{\featuredim}</math> with the dimension <math>\featuredim</math> being the number of features of a <span class="mw-gls" data-name ="datapoint">data point</span>. This <span class="mw-gls" data-name ="featurespace">feature space</span> has proven useful in many ML applications due to the availability of efficient software and hardware for numerical linear algebra. Moreover, the algebraic and geometric structure of <math>\mathbb{R}^{\featuredim}</math> reflects the intrinsic structure of data arising in many important application domains. This should not be too surprising as the <span class="mw-gls" data-name ="euclidspace">Euclidean space</span> has evolved as a useful mathematical abstraction of physical phenomena <ref name="KibbleBerkshireBook">T. Kibble and F. Berkshire. ''Classical Mechanics'' Imperial College Press, 5 edition, 2011</ref>.


In general, there is no mathematically correct choice for which properties of a <span class="mw-gls" data-name ="datapoint">data point</span> to use as its features. Most application domains allow for some design freedom in the choice of features. Let us illustrate this design freedom with a personalized healthcare application. This application involves <span class="mw-gls" data-name ="datapoint">data point</span>s that represent audio recordings with a fixed duration of three seconds. These recordings are obtained via smartphone microphones and are used to detect coughing <ref name="CougDetection2019">F. Barata, K. Kipfer, M. Weber, P. Tinschert, E. Fleisch, and T. Kowatsch. Towards device-agnostic mobile cough detection with convolutional neural networks. In ''2019 IEEE International Conference on Healthcare Informatics (ICHI)'' pages 1--11, 2019</ref>.


===<span id="sec_labels"/>Labels===
===<span id="sec_labels"/>Labels===
Line 359: Line 364:


We refer to a <span class="mw-gls" data-name ="datapoint">data point</span> as being ''labeled'' if, besides its features <math>\featurevec</math>, the value of its label <math>\truelabel</math> is known. The acquisition of labeled <span class="mw-gls" data-name ="datapoint">data point</span>s typically involves human labour,
such as verifying if an image shows a cat. In other applications, acquiring labels might require sending out a team of marine biologists to the Baltic Sea <ref name="MLMarineBiology">S. Smoliski and K. Radtke. Spatial prediction of demersal fish diversity in the Baltic Sea: comparison of machine learning and regression-based techniques. ''ICES Journal of Marine Science'' 74(1):102--111, 2017</ref>, running a particle physics experiment at the European Organization for Nuclear Research (CERN) <ref name="MLCERN">S. Carrazza. Machine learning challenges in theoretical HEP. ''arXiv'' 2018</ref>, or conducting animal trials in pharmacology <ref name="MLPharma">M. Gao, H. Igata, A. Takeuchi, K. Sato, and Y. Ikegaya. Machine learning-based prediction of adverse drug effects: An example of seizure-inducing compounds. ''Journal of Pharmacological Sciences'' 133(2):70 -- 78, 2017</ref>.


Let us also point out online marketplaces for a human labelling workforce <ref name="Mort2018">K. Mortensen and T. Hughes. Comparing Amazon's Mechanical Turk platform to conventional data collection methods in the health and medical research literature. ''J. Gen. Intern Med.'' 33(4):533--538, 2018</ref>. These marketplaces
allow users to upload <span class="mw-gls" data-name ="datapoint">data point</span>s, such as collections of images or audio recordings, and then offer an hourly rate to humans who label the <span class="mw-gls" data-name ="datapoint">data point</span>s. This labelling work might amount to marking images that show a cat.


Many applications involve <span class="mw-gls" data-name ="datapoint">data point</span>s whose features can be determined easily, but whose labels are known only for a few <span class="mw-gls" data-name ="datapoint">data point</span>s. Labeled data is a scarce resource. Some of the most successful ML methods have been devised in application domains where label information can be acquired easily <ref name="UnreasonableData">A. Halevy, P. Norvig, and F. Pereira. The unreasonable effectiveness of data. ''IEEE Intelligent Systems'' March/April 2009</ref>. ML methods for speech recognition and machine translation can make use of massive labeled datasets that are freely available <ref name="Koehn2005">P. Koehn. Europarl: A parallel corpus for statistical machine translation. In ''The 10th Machine Translation Summit'' pages 79--86, AAMT, Phuket, Thailand, 2005</ref>.


In the extreme case, we do not know the label of any single <span class="mw-gls" data-name ="datapoint">data point</span>. Even in the absence of any labeled data, ML methods can be useful for extracting relevant information from features only. We refer to ML methods which do not require any labeled <span class="mw-gls" data-name ="datapoint">data point</span>s as “unsupervised” ML methods. We discuss some of the most important unsupervised ML methods in Chapter [[guide:1a7f020d42 | Clustering ]]
ML methods learn (or search for) a “good” predictor <math>h: \featurespace \rightarrow \labelspace</math> which takes the features <math>\featurevec \in \featurespace</math> of a <span class="mw-gls" data-name ="datapoint">data point</span> as its input and outputs a predicted label (or output, or target) <math>\hat{\truelabel} = h(\featurevec) \in \labelspace</math>. A good predictor should be such that <math>\hat{\truelabel} \approx \truelabel</math>, i.e., the predicted label <math>\hat{\truelabel}</math> is close (with small error <math>\hat{\truelabel} - \truelabel</math>) to the true underlying label <math>\truelabel</math>.


===<span id="equ_subsection_scatterplot"/>Scatterplot===


Consider <span class="mw-gls" data-name ="datapoint">data point</span>s characterized by a single numeric feature <math>\feature</math> and single numeric label <math>\truelabel</math>.  
Consider <span class="mw-gls" data-name ="datapoint">data point</span>s characterized by a single numeric feature <math>\feature</math> and single numeric label <math>\truelabel</math>.  
Line 391: Line 400:
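
A minimal sketch of such a scatterplot (assuming the NumPy and Matplotlib libraries; the data points are synthetic) depicts each data point as a dot whose coordinates are its feature value and its label value:

<syntaxhighlight lang="python">
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=30)         # single numeric feature of each data point
y = 2.0 * x + rng.standard_normal(30)   # single numeric label of each data point

# scatterplot: one dot per data point, located at (feature value, label value)
plt.scatter(x, y)
plt.xlabel("feature x")
plt.ylabel("label y")
plt.show()
</syntaxhighlight>
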
However, this interpretation allows us to use the properties of the <span class="mw-gls" data-name ="probdist">probability distribution</span> to characterize overall properties of entire datasets, i.e., large collections of <span class="mw-gls" data-name ="datapoint">data point</span>s.


The <span class="mw-gls" data-name ="probdist">probability distribution</span> <math>p(\feature)</math> underlying the <span class="mw-gls" data-name ="datapoint">data point</span>s within the <span class="mw-gls mw-gls-first" data-name ="iidasspt">i.i.d. assumption</span> is either known (based on some domain expertise) or estimated from data. It is often enough to estimate only some parameters of the distribution <math>p(\feature)</math>. Section [[guide:013ef4b5cd#sec_max_iikelihood | Maximum Likelihood ]] discusses a principled approach to estimate the parameters of a <span class="mw-gls" data-name ="probdist">probability distribution</span> from a given set of <span class="mw-gls" data-name ="datapoint">data point</span>s. This approach is sometimes referred to as maximum likelihood and aims at finding (parameter of) a <span class="mw-gls" data-name ="probdist">probability distribution</span> <math>p(\feature)</math> such that the probability (density) of observing the given <span class="mw-gls" data-name ="datapoint">data point</span>s is maximized <ref name="LC"/><ref name="kay"/><ref name="BertsekasProb"/>.
The <span class="mw-gls" data-name ="probdist">probability distribution</span> <math>p(\feature)</math> underlying the <span class="mw-gls" data-name ="datapoint">data point</span>s within the <span class="mw-gls mw-gls-first" data-name ="iidasspt">i.i.d. assumption</span> is either known (based on some domain expertise) or estimated from data. It is often enough to estimate only some parameters of the distribution <math>p(\feature)</math>. Section [[guide:013ef4b5cd#sec_max_iikelihood | Maximum Likelihood ]] discusses a principled approach to estimate the parameters of a <span class="mw-gls" data-name ="probdist">probability distribution</span> from a given set of <span class="mw-gls" data-name ="datapoint">data point</span>s. This approach is sometimes referred to as maximum likelihood and aims at finding (parameter of) a <span class="mw-gls" data-name ="probdist">probability distribution</span> <math>p(\feature)</math> such that the probability (density) of observing the given <span class="mw-gls" data-name ="datapoint">data point</span>s is maximized <ref name="LC">E. L. Lehmann and G. Casella. ''Theory of Point Estimation'' Springer, New York, 2nd edition, 1998</ref><ref name="kay">S. M. Kay. ''Fundamentals of Statistical Signal Processing: Estimation
  Theory'' Prentice Hall, Englewood Cliffs, NJ, 1993</ref><ref name="BertsekasProb">D. Bertsekas and J. Tsitsiklis. ''Introduction to Probability'' Athena Scientific, 2 edition, 2008</ref>.


Two of the most basic and widely used parameters of a <span class="mw-gls" data-name ="probdist">probability distribution</span> <math>p(\feature)</math> are the expected value or <span class="mw-gls mw-gls-first" data-name ="mean">mean</span> <ref name="BillingsleyProbMeasure">P. Billingsley. ''Probability and Measure'' Wiley, New York, 3 edition, 1995</ref>
<math>\mu_{\feature} = \expect\{\feature\} \defeq \int_{\feature'} \feature' p(\feature') d\feature'</math> and the <span class="mw-gls mw-gls-first" data-name ="variance">variance</span> <math>\sigma_{\feature}^{2} \defeq \expect\big\{\big(\feature-\expect\{\feature\}\big)^{2}\big\}.</math> These parameters can be estimated using the sample mean (average) and sample variance,


<math display = "block">
\begin{equation}
\label{equ_sample_mean_var}
\hat{\mu}_{\feature} \defeq \frac{1}{m} \sum_{i=1}^{m} \feature^{(i)}\,,\quad
\hat{\sigma}_{\feature}^{2} \defeq \frac{1}{m} \sum_{i=1}^{m} \big( \feature^{(i)} - \hat{\mu}_{\feature} \big)^{2},
\end{equation}
</math>
computed from the feature values <math>\feature^{(1)},\ldots,\feature^{(m)}</math> of <math>m</math> <span class="mw-gls" data-name ="datapoint">data point</span>s.


The sample mean and sample variance \eqref{equ_sample_mean_var} are the <span class="mw-gls mw-gls-first" data-name ="ml">maximum likelihood</span> estimators for the mean and variance of a normal (Gaussian) distribution <math>p(\feature)</math> (see <ref name="BishopBook">C. M. Bishop. ''Pattern Recognition and Machine Learning'' Springer, 2006</ref>{{rp | at=Chapter 2.3.4}}).
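
As a small numerical illustration (a sketch with synthetic Gaussian data, assuming the NumPy library), the sample mean and sample variance can be computed as follows:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=2.0, size=1000)   # i.i.d. realizations of a Gaussian RV

# sample mean and sample variance as estimates for the mean and variance
mu_hat = np.mean(x)
sigma2_hat = np.mean((x - mu_hat) ** 2)         # divides by the number of data points
print(mu_hat, sigma2_hat)
</syntaxhighlight>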


Most of the ML methods discussed in this book are motivated by an <span class="mw-gls" data-name ="iidasspt">i.i.d. assumption</span>. It is important to note that this <span class="mw-gls" data-name ="iidasspt">i.i.d. assumption</span> is only a modelling assumption (or hypothesis). There is no way to verify if an arbitrary set of <span class="mw-gls" data-name ="datapoint">data point</span>s are “exactly” realizations of <span class="mw-gls" data-name ="iid">iid</span> <span class="mw-gls" data-name ="rv">RV</span>s. However, there are principled statistical methods (hypothesis tests) that allow us to verify if a given set of <span class="mw-gls" data-name ="datapoint">data point</span>s can be well approximated as realizations of <span class="mw-gls" data-name ="iid">iid</span> <span class="mw-gls" data-name ="rv">RV</span>s <ref name="Luetkepol2005">H. Lütkepohl. ''New Introduction to Multiple Time Series Analysis'' Springer, New York, 2005</ref>. Alternatively, we can enforce the <span class="mw-gls" data-name ="iidasspt">i.i.d. assumption</span> if we generate synthetic data using a random number generator. Such synthetic <span class="mw-gls" data-name ="iid">iid</span> <span class="mw-gls" data-name ="datapoint">data point</span>s could be obtained by sampling algorithms that incrementally build a synthetic dataset by adding randomly chosen raw <span class="mw-gls" data-name ="datapoint">data point</span>s <ref name="Efron97">B. Efron and R. Tibshirani. Improvements on cross-validation: The 632+ bootstrap method. ''Journal of the American Statistical Association'' 92(438):548--560, 1997</ref>.
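
The following sketch (assuming the NumPy library; the raw data points are made up) builds such a synthetic i.i.d. dataset by repeatedly drawing raw data points uniformly at random with replacement:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
raw_data = np.array([2.1, 0.4, 3.3, 1.8, 2.7])   # some given raw data points

# incrementally build a synthetic dataset by adding randomly chosen raw data points
synthetic = rng.choice(raw_data, size=100, replace=True)
print(synthetic[:10])
</syntaxhighlight>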




==<span id="sec_hypo_space"/>The Model==


Consider some ML application that generates <span class="mw-gls" data-name ="datapoint">data point</span>s, each characterized by features <math>\featurevec \in \featurespace</math> and label <math>\truelabel \in \labelspace</math>. The goal of an ML method is to learn a hypothesis map <math>h: \featurespace \rightarrow \labelspace</math> such that
<math display = "block">\begin{equation}
\label{equ_approx_label_pred}
\truelabel \approx \underbrace{h(\featurevec)}_{\hat{\truelabel}} \mbox{ for any data point}.
\end{equation}</math> The informal goal \eqref{equ_approx_label_pred} will be made precise in several aspects throughout the rest of our book. First, we need to quantify the approximation error \eqref{equ_approx_label_pred} incurred by a given hypothesis map <math>h</math>. Second, we need to make precise what we actually mean by requiring \eqref{equ_approx_label_pred} to hold for “any” <span class="mw-gls" data-name ="datapoint">data point</span>. We address the first issue with the concept of a loss function in Section [[#sec_lossfct | The Loss ]]. The second issue is then addressed in Chapter [[guide:2c0f621d22 | Empirical Risk Minimization ]] by using a simple probabilistic model for data.


Let us assume for the time being that we have found a reasonable hypothesis <math>h</math> in the sense of \eqref{equ_approx_label_pred}. We can then use this hypothesis to predict the label of any <span class="mw-gls" data-name ="datapoint">data point</span> for which we know its features. The prediction <math>\hat{\truelabel}=h(\featurevec)</math> is obtained by evaluating the hypothesis for the features <math>\featurevec</math> of a <span class="mw-gls" data-name ="datapoint">data point</span> (see Figure [[#fig_feature_map_eval|fig_feature_map_eval]] and [[#fig:Hypothesis_Map|fig:Hypothesis_Map]]).
   
   
In general, the set <math>\labelspace^{\featurespace}</math> is way too large to be searched over by practical ML methods.
As a case in point, consider <span class="mw-gls" data-name ="datapoint">data point</span>s characterized by a single numeric feature <math>\feature \in \mathbb{R}</math> and label <math>\truelabel \in \mathbb{R}</math>. The set of all real-valued maps <math>h(\feature)</math> of a real-valued argument already contains uncountably many different hypothesis maps <ref name="HalmosSet">P. Halmos. ''Naive set theory'' Springer-Verlag, 1974</ref>.


Practical ML methods can search and evaluate only a (tiny) subset of all possible hypothesis maps. This subset of computationally feasible (“affordable”) hypothesis maps is referred to as the <span class="mw-gls" data-name ="hypospace">hypothesis space</span> or <span class="mw-gls" data-name ="model">model</span> underlying an ML method. As depicted in Figure [[#fig_hypo_space|fig_hypo_space]], ML methods typically use a <span class="mw-gls" data-name ="hypospace">hypothesis space</span> <math>\hypospace</math> that is a tiny subset of <math>\labelspace^{\featurespace}</math>. Similar to the features and labels used to characterize
   
   
* It has to be sufficiently large such that it contains at least one accurate predictor map <math>\hat{h} \in \hypospace</math>. A hypothesis space <math>\hypospace</math> that is too small might fail to include a predictor map required to reproduce the (potentially highly non-linear) relation between features and label. Consider the task of grouping or classifying images into “cat” images and “no cat” images. The classification of each image is based solely on the feature vector obtained from the pixel colour intensities. The relation between features and label (<math>\truelabel \in \{ \mbox{cat}, \mbox{no cat} \}</math>) is highly non-linear. Any ML method that uses a <span class="mw-gls" data-name ="hypospace">hypothesis space</span> consisting only of linear maps will most likely fail to learn a good predictor (classifier). We say that an ML method is underfitting if it uses a <span class="mw-gls" data-name ="hypospace">hypothesis space</span> that does not contain any hypothesis map that can accurately predict the labels of the <span class="mw-gls" data-name ="datapoint">data point</span>s.
* It has to be sufficiently small such that its processing fits the available computational resources (memory, bandwidth, processing time). We must be able to efficiently search over the <span class="mw-gls" data-name ="hypospace">hypothesis space</span> to find good predictors (see Section [[#sec_lossfct | The Loss ]] and Chapter [[guide:2c0f621d22 | Empirical Risk Minimization ]]). This requirement also implies that the maps <math>h(\featurevec)</math> contained in <math>\hypospace</math> can be evaluated (computed) efficiently <ref name="Austin2018">P. Austin, P. Kaski, and K. Kubjas. Tensor network complexity of multilinear maps. ''arXiv'' 2018</ref>. Another important reason for using a <span class="mw-gls" data-name ="hypospace">hypothesis space</span> <math>\hypospace</math> that is not too large is to avoid overfitting (see Chapter [[guide:50be9327aa | Regularization ]]). If the <span class="mw-gls" data-name ="hypospace">hypothesis space</span> <math>\hypospace</math> is too large, then we can easily find a hypothesis which (almost) perfectly predicts the labels of <span class="mw-gls" data-name ="datapoint">data point</span>s in a <span class="mw-gls mw-gls-first" data-name ="trainset">training set</span> which is used to learn a hypothesis. However, such a hypothesis might deliver poor predictions for labels of <span class="mw-gls" data-name ="datapoint">data point</span>s outside the <span class="mw-gls" data-name ="trainset">training set</span>. We say that the hypothesis does not generalize well.


===Parametrized Hypothesis spaces===
A wide range of current scientific computing environments allow for efficient numerical linear algebra. This hardware and software makes it possible to efficiently process data that is provided in the form of numeric arrays such as vectors, matrices or tensors <ref name="JMLR:v12:pedregosa11a"/>. To take advantage of such computational infrastructure, many ML methods use the <span class="mw-gls" data-name ="hypospace">hypothesis space</span>   
<span id="equ_def_hypo_linear_pred"/>
<math display = "block">\begin{align}
\label{equ_def_hypo_linear_pred}
\hypospace^{(\featuredim)} \defeq \big\{ h^{(\weights)}: \mathbb{R}^{\featuredim} \rightarrow \mathbb{R}: h^{(\weights)}(\featurevec) = \weights^{T} \featurevec \mbox{ with some parameter vector } \weights \in \mathbb{R}^{\featuredim} \big\}.
\end{align}</math>
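As a minimal illustration (a sketch using made-up numbers, not part of the surrounding derivation), the following Python snippet implements one particular hypothesis from this space, i.e., a linear map <math>h^{(\weights)}(\featurevec) = \weights^{T} \featurevec</math> determined by a parameter vector <math>\weights</math>:

<syntaxhighlight lang="python">
import numpy as np

# Each parameter vector w selects one linear hypothesis map h(x) = w^T x.
def linear_hypothesis(w):
    return lambda x: np.dot(w, x)

x = np.array([1.0, -2.0, 0.5])   # feature vector of a data point (n = 3 features)
w = np.array([0.2, 0.1, -0.4])   # parameter vector of the hypothesis
h = linear_hypothesis(w)
print(h(x))                      # predicted label w^T x = -0.2
</syntaxhighlight>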
The construction \eqref{equ_def_enlarged_hypospace} can be used for arbitrary combinations of a feature map <math>\featuremap(\cdot)</math> and a “base” <span class="mw-gls" data-name ="hypospace">hypothesis space</span> <math>\hypospace</math>. The only requirement is that the output of the feature map can be used as input for a hypothesis <math>h \in \hypospace</math>. More formally, the range of the feature map must belong to the domain of the maps in <math>\hypospace</math>. Examples of ML methods that use a <span class="mw-gls" data-name ="hypospace">hypothesis space</span> of the form \eqref{equ_def_enlarged_hypospace} include polynomial regression (see Section [[guide:013ef4b5cd#sec_polynomial_regression | Polynomial Regression ]]),  
Gaussian basis regression (see Section [[guide:013ef4b5cd#sec_linbasreg | Gaussian Basis Regression ]]) and the important family of kernel methods (see Section [[guide:013ef4b5cd#sec_kernel_methods | Kernel Methods ]]).  
The feature map in \eqref{equ_def_enlarged_hypospace} might also be obtained from <span class="mw-gls mw-gls-first" data-name ="clustering">clustering</span> or feature learning methods (see Section [[guide:1a7f020d42#sec_clust_preproc | Clustering as Preprocessing ]] and Section [[guide:4f25e79970#sec_comb_PcA_Linreg|Combining PCA with linear regression]]).
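To make this construction concrete, the following sketch (with an arbitrarily chosen feature map and parameters) combines a simple polynomial feature map with a linear base hypothesis, which is essentially how polynomial regression enlarges the linear <span class="mw-gls" data-name ="hypospace">hypothesis space</span>:

<syntaxhighlight lang="python">
import numpy as np

# Feature map phi: maps a single raw feature x to the monomials (1, x, x^2).
def phi(x):
    return np.array([1.0, x, x ** 2])

# Base hypothesis: a linear map w^T z applied to the transformed features z = phi(x).
def hypothesis(w):
    return lambda x: np.dot(w, phi(x))

w = np.array([1.0, -0.5, 2.0])   # parameters of the base linear map
h = hypothesis(w)
print(h(3.0))                    # 1 - 0.5*3 + 2*9 = 17.5, a degree-2 polynomial in x
</syntaxhighlight>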


For the special case of the linear <span class="mw-gls" data-name ="hypospace">hypothesis space</span> \eqref{equ_def_hypo_linear_pred}, the resulting enlarged <span class="mw-gls" data-name ="hypospace">hypothesis space</span> consists of maps of the form <math>h(\featurevec) = \weights^{T} \featuremap(\featurevec)</math>, i.e., linear maps applied to the transformed features <math>\featuremap(\featurevec)</math>.  
''' Non-Numeric Features.'''
The <span class="mw-gls" data-name ="hypospace">hypothesis space</span> \eqref{equ_def_hypo_linear_pred} can only be used for <span class="mw-gls" data-name ="datapoint">data point</span>s whose features are numeric vectors <math>\featurevec = (\feature_{1},\ldots,\feature_{\featuredim})^{T} \in \mathbb{R}^{\featuredim}</math>. In some application domains, such as natural language processing, there is no obvious natural choice for numeric features. However, since ML methods based on the <span class="mw-gls" data-name ="hypospace">hypothesis space</span> \eqref{equ_def_hypo_linear_pred} are well developed (using numerical linear algebra),  
it might be useful to construct numerical features even for non-numeric data (such as text). For text data, there has been significant progress recently on methods that map a human-generated text into sequences of vectors (see <ref name="Goodfellow-et-al-2016">I. Goodfellow, Y. Bengio, and A. Courville. ''Deep Learning'' MIT Press, 2016</ref>{{rp|at=Chap. 12}} for more details). Moreover, Section [[guide:4f25e79970#sec_discrete_embeddings | Feature Learning for Non-Numeric Data]] will discuss an approach to generate numeric features for <span class="mw-gls" data-name ="datapoint">data point</span>s that have an intrinsic notion of similarity.  


===The Size of a Hypothesis Space===
We define the <span class="mw-gls" data-name ="effdim">effective dimension</span> <math>\effdim{\hypospace}</math> of <math>\hypospace</math> as the maximum number <math>\sizehypospace \in \mathbb{N}</math> such that for any set <math>\dataset = \big\{ \big(\featurevec^{(1)},\truelabel^{(1)}\big), \ldots, \big(\featurevec^{(\sizehypospace)},\truelabel^{(\sizehypospace)}\big) \}</math> of <math>\sizehypospace</math> <span class="mw-gls" data-name ="datapoint">data point</span>s with different features, we can always find a hypothesis <math>h \in \hypospace</math> that perfectly fits the labels, <math>\truelabel^{(\sampleidx)} = h\big( \featurevec^{(\sampleidx)} \big)</math> for <math>\sampleidx=1,\ldots,\sizehypospace</math>.  
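The following small numerical experiment (a sketch with arbitrary label values) illustrates this notion of <span class="mw-gls" data-name ="effdim">effective dimension</span>: a hypothesis space of polynomials with <math>\sizehypospace</math> parameters (degree <math>\sizehypospace-1</math>) can perfectly fit any <math>\sizehypospace</math> <span class="mw-gls" data-name ="datapoint">data point</span>s with distinct features.

<syntaxhighlight lang="python">
import numpy as np

m = 5
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])      # m distinct feature values
y = np.array([1.3, -0.7, 2.1, 0.0, 5.5])      # arbitrary label values
coeffs = np.polyfit(x, y, deg=m - 1)          # fit a polynomial with m parameters
print(np.allclose(np.polyval(coeffs, x), y))  # True: the polynomial fits all labels exactly
</syntaxhighlight>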


The <span class="mw-gls" data-name ="effdim">effective dimension</span> of a <span class="mw-gls" data-name ="hypospace">hypothesis space</span> is closely related to the <span class="mw-gls mw-gls-first" data-name ="vcdim">Vapnik–Chervonenkis (VC) dimension</span> <ref name="VapnikBook">V. N. Vapnik. ''The Nature of Statistical Learning Theory'' Springer, 1999</ref>.  
The <span class="mw-gls" data-name ="vcdim">VC dimension</span> is maybe the most widely used concept for measuring the size of infinite <span class="mw-gls" data-name ="hypospace">hypothesis space</span>s  <ref name="VapnikBook"/><ref name="ShalevMLBook">S. Shalev-Shwartz and S. Ben-David. ''Understanding Machine Learning -- from Theory to Algorithms'' Cambridge University Press, 2014</ref><ref name="BishopBook"/><ref name="hastie01statisticallearning">T. Hastie, R. Tibshirani, and J. Friedman. ''The Elements of Statistical Learning'' Springer Series in Statistics. Springer, New York, NY, USA, 2001</ref>.  
However, the precise definition of the <span class="mw-gls" data-name ="vcdim">VC dimension</span> is beyond the scope of this book. Moreover, the 
<span class="mw-gls" data-name ="effdim">effective dimension</span> captures most of the relevant properties of the <span class="mw-gls" data-name ="vcdim">VC dimension</span> for our purposes. For a precise definition of the <span class="mw-gls" data-name ="vcdim">VC dimension</span> and a discussion of its applications in ML, we refer to <ref name="ShalevMLBook"/>. Let us illustrate the concept of <span class="mw-gls" data-name ="effdim">effective dimension</span> as a measure for the size of a <span class="mw-gls" data-name ="hypospace">hypothesis space</span> with two examples: <span class="mw-gls" data-name ="linreg">linear regression</span> and polynomial regression.  
</div>
==<span id="sec_lossfct"/>The Loss==  


Every ML method uses a (more or less explicit) <span class="mw-gls" data-name ="hypospace">hypothesis space</span> <math>\hypospace</math> which consists of all computationally feasible predictor maps <math>h</math>. Which predictor map <math>h</math> out of all the maps in the <span class="mw-gls" data-name ="hypospace">hypothesis space</span> <math>\hypospace</math> is the best for the ML problem at hand? To answer this question,  
The loss value <math>\loss{(\featurevec,\truelabel)}{h}</math> quantifies the discrepancy between the true label <math>\truelabel</math> and the predicted label <math>h(\featurevec)</math>. A small (close to zero) value <math>\loss{(\featurevec,\truelabel)}{h}</math> indicates  a low discrepancy between predicted <span class="mw-gls" data-name ="label">label</span> and true label of a <span class="mw-gls" data-name ="datapoint">data point</span>. Figure [[#fig_loss_function|fig_loss_function]] depicts a loss function for a given <span class="mw-gls" data-name ="datapoint">data point</span>, with features <math>\featurevec</math> and label <math>\truelabel</math>, as a function of the hypothesis <math>h \in \hypospace</math>. The basic principle of ML methods can then be formulated as: Learn (find) a hypothesis out of a given <span class="mw-gls" data-name ="hypospace">hypothesis space</span> <math>\hypospace</math> that incurs a minimum loss <math>\loss{(\featurevec,\truelabel)}{h}</math> for any <span class="mw-gls" data-name ="datapoint">data point</span> (see Chapter [[guide:2c0f621d22 | Empirical Risk Minimization ]]).  
Much like the choice of the <span class="mw-gls" data-name ="hypospace">hypothesis space</span> <math>\hypospace</math> used in a ML method, the loss function is also a design choice. We will discuss some widely used examples of <span class="mw-gls" data-name ="lossfunc">loss function</span>s in Section [[#sec_loss_numeric_label |Loss Functions for Numeric Labels]] and Section [[#sec_loss_categorical |Loss Functions for Categorical Labels]]. The choice of <span class="mw-gls" data-name ="lossfunc">loss function</span> should take into account the computational complexity of searching the <span class="mw-gls" data-name ="hypospace">hypothesis space</span> for a hypothesis with minimum loss. Consider a ML method that uses a parametrized <span class="mw-gls" data-name ="hypospace">hypothesis space</span> and a <span class="mw-gls" data-name ="lossfunc">loss function</span> that is a convex and differentiable (smooth) function of the parameters of a hypothesis. In this case, searching for a hypothesis with small <span class="mw-gls mw-gls-first" data-name ="loss">loss</span> can be done efficiently using the <span class="mw-gls mw-gls-first" data-name ="gdmethods">gradient-based methods</span> discussed in Chapter [[guide:Cc42ad1ea4 | Gradient-Based Learning ]]. The minimization of a <span class="mw-gls" data-name ="lossfunc">loss function</span> that is either non-convex or non-differentiable is typically computationally much more difficult. Section [[guide:2c0f621d22#sec_comp_stat_ERM | Computational and Statistical Aspects of ERM ]] discusses the computational complexities of different types of <span class="mw-gls" data-name ="lossfunc">loss function</span>s in more detail.
Besides computational aspects, the choice for the <span class="mw-gls" data-name ="lossfunc">loss function</span> should also take into account statistical aspects. For example, some <span class="mw-gls" data-name ="lossfunc">loss function</span>s result in ML methods that are more robust against <span class="mw-gls mw-gls-first" data-name ="outlier">outlier</span>s (see Section [[guide:013ef4b5cd#sec_lad | Least Absolute Deviation Regression ]] and Section [[guide:013ef4b5cd#sec_SVM | Support Vector Machines ]]). The choice of <span class="mw-gls" data-name ="lossfunc">loss function</span> might also be guided by probabilistic models for the data generated in an ML application. Section [[guide:013ef4b5cd#sec_max_iikelihood | Maximum Likelihood ]] details how the <span class="mw-gls" data-name ="ml">maximum likelihood</span> principle of statistical inference provides an explicit construction of <span class="mw-gls" data-name ="lossfunc">loss function</span>s in terms of an (assumed)  
<span class="mw-gls" data-name ="probdist">probability distribution</span> for <span class="mw-gls" data-name ="datapoint">data point</span>s.  
<div class="d-flex justify-content-center">
<span id="fig_loss_function_and_metric "></span>
[[File:fig_loss_function_and_metric.jpg | 500px | thumb | Two different loss functions for a given <span class="mw-gls" data-name ="datapoint">data point</span> and varying hypothesis <math>h</math>. One of these <span class="mw-gls" data-name ="loss">loss</span> functions (solid curve) is used to learn a good hypothesis by minimizing the loss.  
The other <span class="mw-gls" data-name ="loss">loss</span> function (dashed curve) is used to evaluate the performance of the learnt hypothesis. The <span class="mw-gls" data-name ="lossfunc">loss function</span> used for this final performance evaluation is sometimes referred to as a <span class="mw-gls" data-name ="metric">metric</span>. ]]
</div>
=== <span id="sec_loss_numeric_label "/>Loss Functions for Numeric Labels===  
 
For ML problems involving <span class="mw-gls" data-name ="datapoint">data point</span>s with a numeric label <math>\truelabel \in \mathbb{R}</math>, i.e., for regression problems (see Section [[#sec_labels | Labels]]), a widely used (first) choice for the loss function is the squared error <span class="mw-gls" data-name ="loss">loss</span>

<span id = "equ_squared_loss"/>
<math display="block">\begin{equation}  
\label{equ_squared_loss}
\loss{(\featurevec,\truelabel)}{h} \defeq \big(\truelabel - \underbrace{h(\featurevec)}_{=\predictedlabel} \big)^{2}.  
\end{equation}</math>
</div>


The squared error loss \eqref{equ_squared_loss} has appealing computational and statistical properties. For linear predictor maps <math>h(\featurevec) = \weights^{T} \featurevec</math>, the squared error loss is a convex and differentiable function of the parameter vector <math>\weights</math>. This, in turn, allows us to efficiently search for the optimal linear predictor using iterative optimization methods (see Chapter [[guide:Cc42ad1ea4 | Gradient-Based Learning ]]). The squared error loss also has a useful interpretation in terms of a probabilistic model for the features and labels. Minimizing the squared error loss is equivalent to maximum likelihood estimation within a linear Gaussian model <ref name="hastie01statisticallearning"/> {{rp | at=Sec. 2.6.3}}.
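As a small illustration of these properties, the following sketch (with made-up numbers) evaluates the squared error loss of a linear hypothesis <math>h(\featurevec)=\weights^{T}\featurevec</math> together with its gradient with respect to the parameter vector <math>\weights</math>, which is exactly the quantity used by <span class="mw-gls" data-name ="gdmethods">gradient-based methods</span>:

<syntaxhighlight lang="python">
import numpy as np

# Squared error loss of a linear hypothesis h(x) = w^T x and its gradient w.r.t. w.
def squared_error_loss(w, x, y):
    return (y - np.dot(w, x)) ** 2

def squared_error_gradient(w, x, y):
    return -2.0 * (y - np.dot(w, x)) * x

w = np.array([0.5, -1.0])
x = np.array([2.0, 1.0])
y = 3.0
print(squared_error_loss(w, x, y))      # (3 - 0)^2 = 9.0
print(squared_error_gradient(w, x, y))  # -2 * (3 - 0) * x = [-12.  -6.]
</syntaxhighlight>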


Another loss function used in regression problems is the absolute error loss <math>|\hat{y} - y|</math>.  
Using this loss function to guide the learning of a predictor results in methods that are robust against a few <span class="mw-gls" data-name ="outlier">outlier</span>s in the <span class="mw-gls" data-name ="trainset">training set</span> (see Section [[guide:013ef4b5cd#sec_lad | Least Absolute Deviation Regression ]]). However, this improved robustness comes at the expense of increased computational complexity of minimizing the (non-differentiable) absolute error loss compared to the (differentiable) squared error loss \eqref{equ_squared_loss}.  
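The following sketch (with a made-up prediction error that mimics an <span class="mw-gls" data-name ="outlier">outlier</span>) indicates why the absolute error loss is more robust: a single large error dominates the average squared error loss much more than the average absolute error loss.

<syntaxhighlight lang="python">
import numpy as np

# Average squared vs. absolute error loss when one prediction error is very large.
errors = np.array([0.1, -0.2, 0.3, 10.0])   # the last entry mimics an outlier
print(np.mean(errors ** 2))                  # ~ 25.04: dominated by the outlier
print(np.mean(np.abs(errors)))               # 2.65: much less affected
</syntaxhighlight>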
=== <span id="sec_loss_categorical"/>Loss Functions for Categorical Labels===  
To avoid the non-convexity of the <math>0/1</math> loss \eqref{equ_def_0_1} we might approximate it by a convex <span class="mw-gls" data-name ="lossfunc">loss function</span>. One popular convex approximation of the <math>0/1</math> loss is the <span class="mw-gls mw-gls-first" data-name ="hingeloss">hinge loss</span>
 
<span id = "equ_hinge_loss" />
<math display="block">
\begin{equation}  
\label{equ_hinge_loss}
\loss{(\featurevec,\truelabel)}{h} \defeq \max \{ 0 , 1 - \truelabel h(\featurevec) \}.  
\end{equation}</math>
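A small sketch of the two loss functions, assuming the common convention that a <span class="mw-gls" data-name ="datapoint">data point</span> with label <math>\truelabel \in \{-1,1\}</math> counts as misclassified whenever <math>\truelabel h(\featurevec) \leq 0</math>:

<syntaxhighlight lang="python">
# 0/1 loss and hinge loss for a label y in {-1, 1} and a real-valued prediction h(x).
def zero_one_loss(y, h_x):
    return float(y * h_x <= 0)

def hinge_loss(y, h_x):
    return max(0.0, 1.0 - y * h_x)

y, h_x = 1, 0.3                 # correctly classified, but only with a small margin
print(zero_one_loss(y, h_x))    # 0.0
print(hinge_loss(y, h_x))       # 0.7: the hinge loss still penalizes the small margin
</syntaxhighlight>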
However, in contrast to the <span class="mw-gls" data-name ="hingeloss">hinge loss</span>, the <span class="mw-gls" data-name ="logloss">logistic loss</span> \eqref{equ_log_loss} is also a differentiable function of the <math>\weights</math>.  
The convex and differentiable <span class="mw-gls" data-name ="logloss">logistic loss</span> function can be minimized using simple <span class="mw-gls" data-name ="gdmethods">gradient-based methods</span> such as <span class="mw-gls mw-gls-first" data-name ="gd">gradient descent (GD)</span> (see Chapter [[guide:Cc42ad1ea4#sec_GD_logistic_regression | GD for Logistic Regression ]]). In contrast, we cannot use basic <span class="mw-gls" data-name ="gdmethods">gradient-based methods</span> to minimize the <span class="mw-gls" data-name ="hingeloss">hinge loss</span> since it is not differentiable (it does not have a <span class="mw-gls" data-name ="gradient">gradient</span> everywhere). However,  
we can apply a generalization of <span class="mw-gls" data-name ="gd">GD</span> which is known as <span class="mw-gls mw-gls-first" data-name ="sgd">subgradient descent</span> <ref name="CvxBubeck2015">S. Bubeck. Convex optimization: algorithms and complexity. In ''Foundations and Trends in Machine Learning'' volume 8. Now Publishers, 2015</ref>.  
<span class="mw-gls mw-gls-first" data-name ="sgd">Subgradient descent</span> is obtained from <span class="mw-gls" data-name ="gd">GD</span> by generalizing the concept of a <span class="mw-gls" data-name ="gradient">gradient</span> to that of a <span class="mw-gls mw-gls-first" data-name ="subgradient">subgradient</span>.  
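The following sketch (with made-up numbers) shows a single <span class="mw-gls" data-name ="sgd">subgradient descent</span> step for the <span class="mw-gls" data-name ="hingeloss">hinge loss</span> \eqref{equ_hinge_loss} of a linear classifier; whenever the hinge loss is zero, the zero vector is used as a valid <span class="mw-gls" data-name ="subgradient">subgradient</span>.

<syntaxhighlight lang="python">
import numpy as np

# One subgradient descent step for the hinge loss of a linear classifier h(x) = w^T x.
def hinge_subgradient(w, x, y):
    if y * np.dot(w, x) < 1:
        return -y * x                # valid subgradient of max{0, 1 - y w^T x}
    return np.zeros_like(w)          # loss is zero (and flat), so 0 is a subgradient

w = np.array([0.0, 0.0])
x = np.array([1.0, 2.0])
y = 1.0
step_size = 0.1
w = w - step_size * hinge_subgradient(w, x, y)
print(w)                             # [0.1 0.2]
</syntaxhighlight>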
===Loss Functions for Ordinal Label Values===


Some <span class="mw-gls" data-name ="lossfunc">loss function</span>s are particularly suited for predicting ordinal label values (see Section [[#sec_the_data | The Data ]]).  
Consider <span class="mw-gls" data-name ="datapoint">data point</span>s representing aerial images of rectangular areas of size <math>1 \mbox{km}</math> by <math>1 \mbox{km}</math>.  
We characterize each <span class="mw-gls" data-name ="datapoint">data point</span> (rectangular area) by the feature vector <math>\featurevec</math> obtained by stacking the RGB values of each image pixel (see Figure [[#fig_snapshot_pixels|fig_snapshot_pixels]]). Besides the feature vector,  
\end{equation}</math>  
===<span id="sec_empirical_risk"/>Empirical Risk===  
The basic idea of ML methods (including those discussed in Chapter [[guide:013ef4b5cd | The Landscape of ML ]]) is to find (or learn) a hypothesis (out of a given <span class="mw-gls" data-name ="hypospace">hypothesis space</span> <math>\hypospace</math>) that incurs minimum loss when applied to arbitrary <span class="mw-gls" data-name ="datapoint">data point</span>s. To make this informal goal precise we need to specify what we mean by “arbitrary <span class="mw-gls" data-name ="datapoint">data point</span>”.


One of the most successful approaches to define the notion of “arbitrary <span class="mw-gls" data-name ="datapoint">data point</span>” is by probabilistic models for the
observed <span class="mw-gls" data-name ="datapoint">data point</span>s. The most basic and widely-used probabilistic model interprets <span class="mw-gls" data-name ="datapoint">data point</span>s <math>\big( \featurevec^{(\sampleidx)},\truelabel^{(\sampleidx)} \big)</math> as realizations of <span class="mw-gls" data-name ="iid">iid</span> <span class="mw-gls" data-name ="rv">RV</span>s with a common <span class="mw-gls" data-name ="probdist">probability distribution</span> <math>p(\featurevec,\truelabel)</math>. Given such a probabilistic model, it seems natural to measure the quality of a hypothesis <math>h</math> by the expected loss or <span class="mw-gls mw-gls-first" data-name ="bayesrisk">Bayes risk</span> <ref name="LC"/>  
<math display="block">\begin{equation} 
\expect \big\{ \loss{(\featurevec,\truelabel)}{h} \big\}.
\end{equation}</math>
A hypothesis that solves \eqref{equ_def_bayes_risk_int}, i.e., that achieves the minimum possible <span class="mw-gls" data-name ="bayesrisk">Bayes risk</span>,  
is referred to as a <span class="mw-gls mw-gls-first" data-name ="bayesestimator">Bayes estimator</span> <ref name="LC"/>{{rp | at=Chapter 4}}. The main computational challenge for learning the optimal hypothesis is the efficient (numerical) solution of the optimization problem \eqref{equ_def_bayes_risk_int}.  
Efficient methods to solve the optimization problem \eqref{equ_def_bayes_risk_int} are studied within estimation theory <ref name="LC"/><ref name="GraphModExpFamVarInfWainJor">M. J. Wainwright and M. I. Jordan. ''Graphical Models, Exponential Families, and Variational Inference'' volume 1 of ''Foundations and Trends in Machine Learning'' Now Publishers, Hanover, MA, 2008</ref>.   
The focus of this book is on ML methods which do not require knowledge of the underlying <span class="mw-gls" data-name ="probdist">probability distribution</span>  
<math>p(\featurevec,\truelabel)</math>. One of the most widely used principles for these ML methods is to approximate the  
<span class="mw-gls" data-name ="bayesrisk">Bayes risk</span> by an empirical (sample) average over a finite set of labeled <span class="mw-gls" data-name ="datapoint">data point</span>s <math>\dataset=\big(\featurevec^{(1)},\truelabel^{(1)}\big),\ldots,\big(\featurevec^{(\samplesize)},\truelabel^{(\samplesize)}\big)</math>.  
In particular, we define the <span class="mw-gls mw-gls-first" data-name ="emprisk">empirical risk</span> of a hypothesis <math>h \in \hypospace</math> for a dataset <math>\dataset</math> as
<math display="block">\begin{equation}  
 
\label{eq_def_emp_error_101}
<span id="eq_def_emp_error_101"/>
<math display="block">\begin{equation}  
\label{eq_def_emp_error_101}
\emperror(h|\dataset) = (1/\samplesize)\sum_{\sampleidx=1}^{\samplesize} \loss{(\featurevec^{(\sampleidx)},\truelabel^{(\sampleidx)})}{h}.   
\end{equation}
</math>  
 
The <span class="mw-gls" data-name ="emprisk">empirical risk</span> of the hypothesis <math>h \in \hypospace</math> is the average loss on the <span class="mw-gls" data-name ="datapoint">data point</span>s in <math>\dataset</math>.  
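A minimal sketch of the <span class="mw-gls" data-name ="emprisk">empirical risk</span> \eqref{eq_def_emp_error_101}, here using the squared error loss and a linear hypothesis on a tiny made-up dataset:

<syntaxhighlight lang="python">
import numpy as np

# Empirical risk = average loss over the dataset (squared error loss, linear hypothesis).
def empirical_risk(w, X, y):
    predictions = X @ w              # one feature vector per row of X
    return np.mean((y - predictions) ** 2)

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([1.0, 2.0, 2.5])
w = np.array([1.0, 2.0])
print(empirical_risk(w, X, y))       # ((0)^2 + (0)^2 + (0.5)^2) / 3 = 0.0833...
</syntaxhighlight>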
To ease notational burden, we use <math>\emperror(h)</math> as a shorthand for <math>\emperror(h|\dataset)</math> if the underlying dataset <math>\dataset</math> is clear from the context. Note that in general the <span class="mw-gls" data-name ="emprisk">empirical risk</span> depends on both  
the hypothesis <math>h</math> and the (features and labels of the) <span class="mw-gls" data-name ="datapoint">data point</span>s in the dataset <math>\dataset</math>.  
For <span class="mw-gls" data-name ="datapoint">data point</span>s that are realizations of <span class="mw-gls" data-name ="iid">iid</span> <span class="mw-gls" data-name ="rv">RV</span>s with common <span class="mw-gls" data-name ="probdist">probability distribution</span> <math>p(\featurevec,\truelabel)</math>, the <span class="mw-gls" data-name ="emprisk">empirical risk</span> \eqref{eq_def_emp_error_101} approximates the expected loss,
<math display="block">\begin{equation} 
\label{equ_emp_risk_approximates_Bayes_risk}
\expect \big\{ \loss{(\featurevec,\truelabel)}{h} \big\} \approx  (1/\samplesize)\sum_{\sampleidx=1}^{\samplesize} \loss{(\featurevec^{(\sampleidx)},\truelabel^{(\sampleidx)})}{h} \mbox{ for sufficiently large sample size } \samplesize.
\end{equation}</math> The approximation error in \eqref{equ_emp_risk_approximates_Bayes_risk} can be quantified precisely by some of the most basic results of probability theory. These results are often summarized under the umbrella term  
<span class="mw-gls" data-name ="lln">law of large numbers</span> <ref name="BillingsleyProbMeasure"/><ref name="GrayProbBook">R. Gray. ''Probability, Random Processes, and Ergodic Properties'' Springer, New York, 2 edition, 2009</ref><ref name="BertsekasProb"/>.  
Many (if not most) ML methods are motivated by \eqref{equ_emp_risk_approximates_Bayes_risk} which suggests that a hypothesis with small <span class="mw-gls" data-name ="emprisk">empirical risk</span> \eqref{eq_def_emp_error_101} will also result in a small expected loss. The minimum possible expected loss is achieved by the <span class="mw-gls" data-name ="bayesestimator">Bayes estimator</span> of the label <math>\truelabel</math>, given the features <math>\featurevec</math>.  
However, to actually compute the optimal estimator we would need to know the (joint) <span class="mw-gls" data-name ="probdist">probability distribution</span> <math>p(\featurevec,\truelabel)</math> of features <math>\featurevec</math> and label <math>\truelabel</math>.  
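The following simulation sketch (with a synthetic probabilistic model chosen purely for illustration) shows the approximation \eqref{equ_emp_risk_approximates_Bayes_risk} at work: as the sample size grows, the <span class="mw-gls" data-name ="emprisk">empirical risk</span> of a fixed hypothesis approaches its expected loss.

<syntaxhighlight lang="python">
import numpy as np

# Empirical risk of a fixed hypothesis for growing sample size m (iid data points).
rng = np.random.default_rng(0)
h = lambda x: 2.0 * x                         # a fixed hypothesis
for m in [10, 1000, 100000]:
    x = rng.normal(size=m)                    # features
    y = 2.0 * x + rng.normal(size=m)          # labels = 2x + unit-variance noise
    print(m, np.mean((y - h(x)) ** 2))        # approaches the expected loss, which is 1
</syntaxhighlight>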
====<span id="sec_confustion_matrix"/>Confusion Matrix====
 
Consider a dataset <math>\dataset</math> with <span class="mw-gls" data-name ="datapoint">data point</span>s characterized by feature vectors <math>\featurevec^{(\sampleidx)}</math> and labels <math>\truelabel^{(\sampleidx)} \in \{1,\ldots,\nrcluster\}</math>. We might interpret the label value of a <span class="mw-gls" data-name ="datapoint">data point</span> as the index of a category or class to which the <span class="mw-gls" data-name ="datapoint">data point</span> belongs. Multi-class classification problems aim at learning a hypothesis <math>h</math> such that <math>h(\featurevec) \approx \truelabel</math> for any <span class="mw-gls" data-name ="datapoint">data point</span>.
In principle, we could measure the quality of a given hypothesis <math>h</math> by the average <math>0/1</math> loss incurred on the labeled <span class="mw-gls" data-name ="datapoint">data point</span>s in (the <span class="mw-gls" data-name ="trainset">training set</span>) <math>\dataset</math>. However, if the dataset <math>\dataset</math> contains mostly <span class="mw-gls" data-name ="datapoint">data point</span>s with one specific label value, the average <math>0/1</math> loss might obscure the performance of <math>h</math> for <span class="mw-gls" data-name ="datapoint">data point</span>s having one of the rare label values. Indeed, even if the average <math>0/1</math> loss is very small, the hypothesis might perform poorly for <span class="mw-gls" data-name ="datapoint">data point</span>s of a minority category.  
The <span class="mw-gls mw-gls-first" data-name ="cm">confusion matrix</span> generalizes the concept of the <math>0/1</math> loss to application domains where the relative frequency (fraction) of <span class="mw-gls" data-name ="datapoint">data point</span>s with a specific label value varies significantly (imbalanced data).  
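A sketch of one common way to compute a <span class="mw-gls" data-name ="cm">confusion matrix</span> (here with raw counts; the entries can also be normalized to fractions per true label value):

<syntaxhighlight lang="python">
import numpy as np

# Confusion matrix for label values 1,...,k: the entry in row i and column j counts
# data points with true label i+1 that the hypothesis classifies as j+1.
def confusion_matrix(y_true, y_pred, k):
    cm = np.zeros((k, k), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t - 1, p - 1] += 1
    return cm

y_true = [1, 1, 1, 2, 2, 3]
y_pred = [1, 1, 2, 2, 2, 1]
print(confusion_matrix(y_true, y_pred, k=3))
</syntaxhighlight>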
====Precision, Recall and F-Measure====
 
Consider an object detection application where <span class="mw-gls" data-name ="datapoint">data point</span>s are images. The label of <span class="mw-gls" data-name ="datapoint">data point</span>s might indicate the presence (<math>\truelabel\!=\!1</math>) or absence (<math>\truelabel\!=\!-1</math>) of an object. It is then customary to define the <ref name="InfoRetrievaBook">R. Baeza-Yates and B. Ribeiro-Neto. ''Modern Information Retrieval'' Addison-Wesley, 1999</ref>
<math display="block">\begin{equation} 
\mbox{recall } \defeq \emperror^{(1 \rightarrow 1)}(h|\dataset) \mbox{, and the precision } \defeq \frac{\emperror^{(1 \rightarrow 1)}(h|\dataset)} {\emperror^{(1 \rightarrow 1)}(h|\dataset)  + \emperror^{(-1 \rightarrow 1)}(h|\dataset)  }.  
\end{equation}</math> Clearly, we would like to find a hypothesis with both a large recall and a large precision. However, these two goals are typically conflicting: a hypothesis with a high recall will have a small precision. Depending on the application, we might prefer having a high recall and tolerate a lower precision.  

It might be convenient to combine the recall and precision of a hypothesis into a single quantity,  
<math display="block">\begin{equation}  
\label{equ_f1_score}
\fscore \defeq 2  \frac{\mbox{precision} \times \mbox{recall}}{\mbox{precision} +\mbox{recall}}
\end{equation}</math> The <math>F</math> measure \eqref{equ_f1_score} is the harmonic mean <ref name="AbramowitzStegun">M. Abramowitz and I. A. Stegun, editors. ''Handbook of Mathematical Functions'' Dover, New York, 1965</ref> of the precision and recall of a hypothesis <math>h</math>. It is a special case of the <math>F_{\beta}</math>-score
<math display="block">\begin{equation} 
\label{equ_fbeta_score}
F_{\beta} \defeq (1+\beta^{2}) \frac{\mbox{precision} \times \mbox{recall}}{\beta^{2} \times \mbox{precision} + \mbox{recall}}.
\end{equation}</math>
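A minimal sketch (using made-up labels and predictions, and assuming the standard counting convention precision <math>=\mbox{TP}/(\mbox{TP}+\mbox{FP})</math> and recall <math>=\mbox{TP}/(\mbox{TP}+\mbox{FN})</math>) that computes precision, recall and the <math>F</math> measure \eqref{equ_f1_score}:

<syntaxhighlight lang="python">
import numpy as np

# Precision, recall and F_1 score of a binary classifier with labels in {-1, 1}.
def precision_recall_f1(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))   # true positives
    fp = np.sum((y_true == -1) & (y_pred == 1))  # false positives
    fn = np.sum((y_true == 1) & (y_pred == -1))  # false negatives
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

print(precision_recall_f1([1, 1, 1, -1, -1], [1, 1, -1, 1, -1]))  # (0.667, 0.667, 0.667)
</syntaxhighlight>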
===Regret===  
In some ML applications, we might have access to the predictions obtained from some reference methods which are referred to as  <span class="mw-gls mw-gls-first" data-name ="expert">expert</span>s <ref name="PredictionLearningGames">N. Cesa-Bianchi and G. Lugosi. ''Prediction, Learning, and Games'' Cambridge University Press, New York, NY, USA, 2006</ref><ref name="HazanOCO">E. Hazan. ''Introduction to Online Convex Optimization'' Now Publishers Inc., 2016</ref>. The quality of a hypothesis <math>h</math> is measured via the difference between the loss incurred by its predictions <math>h(\featurevec)</math> and the loss incurred by the predictions of the experts. This difference, which is referred to as the regret,
measures by how much we <span class="mw-gls mw-gls-first" data-name ="regret">regret</span> having used the prediction <math>h(\featurevec)</math> instead of using (or following) the prediction of the expert. The goal of <span class="mw-gls" data-name ="regret">regret</span> minimization is to learn a hypothesis with a small <span class="mw-gls" data-name ="regret">regret</span> compared to a given set of <span class="mw-gls" data-name ="expert">expert</span>s.
The concept of <span class="mw-gls" data-name ="regret">regret</span> minimization is useful when we do not make any probabilistic assumptions (see Section [[#equ_prob_models_data |Probabilistic Models for Data]]) about the data. Without a probabilistic model we cannot use the <span class="mw-gls" data-name ="bayesrisk">Bayes risk</span> of the (optimal) <span class="mw-gls" data-name ="bayesestimator">Bayes estimator</span> as a baseline (or benchmark). The concept of <span class="mw-gls" data-name ="regret">regret</span> minimization avoids the need for a probabilistic model of the data to obtain a baseline <ref name="PredictionLearningGames"/>. This approach replaces the <span class="mw-gls" data-name ="bayesrisk">Bayes risk</span> with the <span class="mw-gls" data-name ="regret">regret</span> relative to given reference methods (the <span class="mw-gls" data-name ="expert">expert</span>s).  
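A small sketch (with made-up predictions and the squared error loss) of how the <span class="mw-gls" data-name ="regret">regret</span> relative to a single <span class="mw-gls" data-name ="expert">expert</span> could be computed:

<syntaxhighlight lang="python">
import numpy as np

# Regret of a hypothesis relative to an expert: difference of the accumulated losses
# (here using the squared error loss on three made-up data points).
y      = np.array([1.0, 2.0, 1.5])    # true labels
y_hat  = np.array([0.5, 2.5, 1.0])    # predictions of our hypothesis
expert = np.array([1.0, 1.5, 1.5])    # predictions of the expert
regret = np.sum((y - y_hat) ** 2) - np.sum((y - expert) ** 2)
print(regret)                          # 0.75 - 0.25 = 0.5
</syntaxhighlight>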
===Rewards as Partial Feedback===   
Some applications involve <span class="mw-gls" data-name ="datapoint">data point</span>s whose labels are so difficult or costly to determine that we cannot assume to have any labeled data available. Without any labeled data, we cannot evaluate the loss function for different choices for the hypothesis. Indeed, the evaluation of the loss function typically amounts to measuring the distance between predicted label and true label of a <span class="mw-gls" data-name ="datapoint">data point</span>.  
 
Instead of evaluating a loss function, we must rely on some indirect feedback or “reward” that indicates the usefulness of a particular prediction <ref name="PredictionLearningGames"/><ref name="SuttonEd2"/>.  
Some applications involve <span class="mw-gls" data-name ="datapoint">data point</span>s whose labels are so difficult or costly to determine that we cannot assume to have any labeled data available. Without any labeled data, we cannot evaluate the loss function for different choices for the hypothesis. Indeed, the evaluation of the loss function typically amounts to measuring the distance between predicted label and true label of a <span class="mw-gls" data-name ="datapoint">data point</span>. Instead of evaluating a loss function, we must rely on some indirect feedback or “reward” that indicates the usefulness of a particular prediction <ref name="PredictionLearningGames"/><ref name="SuttonEd2">R. Sutton and A. Barto. ''Reinforcement learning: An introduction'' MIT press, Cambridge, MA, 2 edition, 2018</ref>.  
Consider the ML problem of predicting the optimal steering directions for an autonomous car. The prediction has to be recalculated for each new state of the car. ML methods can sense the state via a feature vector <math>\featurevec</math> whose entries are pixel intensities of a snapshot. The goal is to learn a hypothesis map from the feature vector <math>\featurevec</math> to a guess <math>\hat{\truelabel} = h(\featurevec)</math> for the optimal steering direction <math>\truelabel</math>
(true label). Unless the car circles around in a small area with fixed obstacles, we have no access to labeled <span class="mw-gls" data-name ="datapoint">data point</span>s or reference driving scenes for which we already know the optimum steering direction.
Instead, the car (control unit) needs to learn the hypothesis <math>h(\featurevec)</math> based solely on the feedback signals obtained from various sensing devices (cameras, distance sensors).
==Putting Together the Pieces== 


A guiding theme of this book is that ML methods are obtained by different combinations of data, model and loss. We will discuss some key principles behind these methods in depth in the following chapters. Let us develop some intuition for how ML methods operate by considering a very simple ML problem. This problem involves <span class="mw-gls" data-name ="datapoint">data point</span>s that are characterized by a single numeric feature <math>\feature \in \mathbb{R}</math> and a numeric label <math>\truelabel  \in \mathbb{R}</math>. We assume to have access to <math>\samplesize</math> labeled <span class="mw-gls" data-name ="datapoint">data point</span>s  
<math display="block">\begin{equation}  
<math display="block">\begin{equation}  
\big(\feature^{(1)},\truelabel^{(1)}\big),\ldots, \big(\feature^{(\samplesize)},\truelabel^{(\samplesize)}\big)  \label{equ_labeled_data_putting_together}
\big(\feature^{(1)},\truelabel^{(1)}\big),\ldots, \big(\feature^{(\samplesize)},\truelabel^{(\samplesize)}\big)  \label{equ_labeled_data_putting_together}
\end{equation}</math> for which we know the true label values <math>\truelabel^{(\sampleidx)}</math>.  
\end{equation}</math> for which we know the true label values <math>\truelabel^{(\sampleidx)}
 
</math>.  
The assumption of knowing the exact true label values <math>\truelabel^{(\sampleidx)}</math> for any <span class="mw-gls" data-name ="datapoint">data point</span> is an idealization. We might often face labelling or measurement errors such that the observed labels are noisy versions of the true label. Later on, we will discuss techniques that allow ML methods to cope with noisy labels in Chapter [[guide:50be9327aa | Regularization ]].  
Our goal is to learn a (hypothesis) map <math>h: \mathbb{R} \rightarrow \mathbb{R}</math> such that <math>h(\feature) \approx \truelabel</math> for any <span class="mw-gls" data-name ="datapoint">data point</span>. Given a <span class="mw-gls" data-name ="datapoint">data point</span> with feature <math>\feature</math>, the function value <math>h(\feature)</math> should be an accurate approximation of its label value <math>\truelabel</math>. We require the map to belong to the <span class="mw-gls" data-name ="hypospace">hypothesis space</span> <math>\hypospace</math> of linear maps,  
<math display="block">\begin{equation}  
<math display="block">\begin{equation}  
\label{equ_def_lin_pred_intercept}
\label{equ_def_lin_pred_intercept}
h^{(\weight_{0},\weight_{1})}(\feature) = \weight_{1} \feature + \weight_{0}.  
h^{(\weight_{0},\weight_{1})}(\feature) = \weight_{1} \feature + \weight_{0}.  
\end{equation}</math> The predictor \eqref{equ_def_lin_pred_intercept} is parameterized by the slope  
\end{equation}</math> The predictor \eqref{equ_def_lin_pred_intercept} is parameterized by the slope  
<math>\weight_{1}</math> and the intercept (bias or offset) <math>\weight_{0}</math>. We indicate this by the notation <math>h^{(\weight_{0},\weight_{1})}</math>. A particular choice for the weights <math>\weight_{1},\weight_{0}</math> defines a linear hypothesis <math>h^{(\weight_{0},\weight_{1})}(\feature) = \weight_{1}\feature +\weight_{0}</math>.  
Let us use the linear hypothesis map <math>h^{(\weight_{0},\weight_{1})}(\feature)</math> to predict the labels of training <span class="mw-gls" data-name ="datapoint">data point</span>s. In general, the predictions <math>\hat{\truelabel}^{(\sampleidx)} = h^{(\weight_{0},\weight_{1})}\big(\feature^{(\sampleidx)}\big)</math> will not be perfect and incur a non-zero prediction error <math>\hat{\truelabel}^{(\sampleidx)} - \truelabel^{(\sampleidx)}</math> (see Figure [[#fig_emp_error|fig_emp_error]]).  
 
We measure the goodness of the predictor map <math>h^{(\weight_{0},\weight_{1})}</math> using the average squared error loss (see \eqref{equ_squared_loss})

<math display = "block">
\begin{align}
f(\weight_{0},\weight_{1}) & \defeq (1/\samplesize) \sum_{\sampleidx=1}^{\samplesize} \big( \truelabel^{(\sampleidx)} - h^{(\weight_{0},\weight_{1})}(\feature^{(\sampleidx)})  \big)^{2} \nonumber \\ \label{equ_def_cost_fun_putting_togheter}
& \stackrel{\eqref{equ_def_lin_pred_intercept}}{=}  (1/\samplesize) \sum_{\sampleidx=1}^{\samplesize} \big( \truelabel^{(\sampleidx)} - ( \weight_{1} \feature^{(\sampleidx)} + \weight_{0}) \big)^{2}. 
\end{align}
</math> The <span class="mw-gls mw-gls-first" data-name ="trainerr">training error</span> <math>f(\weight_{0},\weight_{1})</math> is the average of the squared prediction errors incurred by the predictor <math>h^{(\weight_{0},\weight_{1})}(\feature)</math> on the labeled <span class="mw-gls" data-name ="datapoint">data point</span>s \eqref{equ_labeled_data_putting_together}. 
 
It seems natural to learn a good predictor \eqref{equ_def_lin_pred_intercept} by choosing the parameters <math>\weight_{0},\weight_{1}</math> to minimize the <span class="mw-gls" data-name ="trainerr">training error</span>  
 
<math display="block">\begin{equation}
<math display="block">\begin{equation}
\min_{\weight_{0},\weight_{1} \in \mathbb{R}} f(\weight_{0},\weight_{1}) \stackrel{\eqref{equ_def_cost_fun_putting_togheter}}{=} \min_{\weight_{1},\weight_{0} \in \mathbb{R}}  (1/\samplesize) \sum_{\sampleidx=1}^{\samplesize} \big( \truelabel^{(\sampleidx)} - ( \weight_{1} \feature^{(\sampleidx)} + \weight_{0}) \big)^{2}.
\min_{\weight_{0},\weight_{1} \in \mathbb{R}} f(\weight_{0},\weight_{1}) \stackrel{\eqref{equ_def_cost_fun_putting_togheter}}{=} \min_{\weight_{1},\weight_{0} \in \mathbb{R}}  (1/\samplesize) \sum_{\sampleidx=1}^{\samplesize} \big( \truelabel^{(\sampleidx)} - ( \weight_{1} \feature^{(\sampleidx)} + \weight_{0}) \big)^{2}.
\end{equation}</math> The optimal parameters <math>\hat{\weight}_{0},\hat{\weight}_{1}</math> are characterized by the zero-gradient condition,\footnote{A necessary and sufficient condition for <math>\widehat{\weights}</math> to minimize a convex differentiable function <math>f(\weights)</math> is <math>\nabla f(\widehat{\weights}) = \mathbf{0}</math> <ref name="BoydConvexBook"/>{{rp | at=Sec. 4.2.3}}.}  
\end{equation}</math> The optimal parameters <math>\hat{\weight}_{0},\hat{\weight}_{1}</math> are characterized by the zero-gradient condition,\footnote{A necessary and sufficient condition for <math>\widehat{\weights}</math> to minimize a convex differentiable function <math>f(\weights)</math> is <math>\nabla f(\widehat{\weights}) = \mathbf{0}</math> <ref name="BoydConvexBook">S. Boyd and L. Vandenberghe. ''Convex Optimization'' Cambridge Univ. Press, Cambridge, UK, 2004</ref>{{rp | at=Sec. 4.2.3}}.}  
<math display="block">\begin{equation}
<math display="block">\begin{equation}
\label{equ_zero_grad_condition_putting_together}
\label{equ_zero_grad_condition_putting_together}
Line 933: Line 943:
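
The following Python sketch illustrates this procedure on a small synthetic dataset: it computes the parameter values that minimize the <span class="mw-gls" data-name ="trainerr">training error</span> by solving the zero-gradient condition numerically via a least-squares routine. The toy data values are assumptions made purely for illustration.

<syntaxhighlight lang="python">
import numpy as np

# toy dataset: m = 5 data points with a single numeric feature x and label y
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.1, 0.9, 2.1, 2.9, 4.2])

# training error f(w0, w1) = (1/m) * sum_i (y_i - (w1 * x_i + w0))**2
def training_error(w0, w1):
    return np.mean((y - (w1 * x + w0)) ** 2)

# the zero-gradient condition leads to a linear system ("normal equations");
# np.linalg.lstsq solves it for the optimal intercept w0 and slope w1
X = np.column_stack([np.ones_like(x), x])      # each row is (1, x_i)
w_opt, *_ = np.linalg.lstsq(X, y, rcond=None)  # w_opt = (w0_hat, w1_hat)

print("optimal intercept and slope:", w_opt)
print("minimum training error:", training_error(*w_opt))
</syntaxhighlight>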


==Notes==
{{notelist}}
==General References==
<div class="d-flex mt-3 mb-3">
<div class="me-3">
[[File:978-981-16-8193-6.jpg | 150px | link=]]
</div>
<div>
{{cite book | title = Machine Learning: The Basics |first = Alexander | last = Jung | publisher = Springer | location = Singapore | date = 2022 | doi = 10.1007/978-981-16-8193-6 }}
{{cite arXiv |last=Jung |first=Alexander |title=Machine Learning: The Basics |eprint=1805.05052 |date=2022}}
</div>
</div>
==References==



ML methods fit a model to data via minimizing a loss function (see Figure fig_AlexMLBP). We obtain a variety of ML methods from different design choices for the model, loss function and data representation (see Chapter The Landscape of ML ). A principled approach to combine these three components is empirical risk minimization (ERM) (see Chapter Empirical Risk Minimization ).

This book portrays ML as combinations of three components as depicted in Figure fig_ml_problem. These components are

  • data as collections of individual data points that are characterized by features (see Section Features) and labels (see Section Labels)
  • a model or hypothesis space that consists of computationally feasible hypothesis maps from feature space to label space (see Section The Model )
  • a loss function (see Section The Loss ) to measure the quality of a hypothesis map.

A ML problem involves specific design choices for data points, its features and labels, the hypothesis space and the loss function to measure the quality of a particular hypothesis. Similar to ML problems (or applications), we can also characterize ML methods as combinations of these three components. This chapter discusses in some depth each of the above ML components and their combination within some widely-used ML methods [1].

We detail in Chapter The Landscape of ML how some of the most popular ML methods, including linear regression (see Section Linear Regression ) as well as deep learning methods (see Section Deep Learning ), are obtained by specific design choices for the three components. Chapter Empirical Risk Minimization will then introduce ERM as a main principle for how to operationally combine the individual ML components. Within the ERM principle, ML problems become optimization problems and ML methods become optimization methods.

The Data

Data as Collections of Data points. Maybe the most important component of any ML problem (and method) is data. We consider data as collections of individual data points which are atomic units of “information containers”. Data points can represent text documents, signal samples of time series generated by sensors, entire time series generated by collections of sensors, frames within a single video, random variables, videos within a movie database, cows within a herd, trees within a forest, or forests within a collection of forests. Mountain hikers might be interested in data points that represent different hiking tours (see Figure fig:image).

A snapshot taken at the beginning of a mountain hike.

We use the concept of data points in a very abstract and therefore highly flexible manner. Data points can represent very different types of objects that arise in fundamentally different application domains. For an image processing application it might be useful to define data points as images. When developing a recommendation system we might define data points to represent customers. In the development of new drugs we might use data points to represent different diseases. Ultimately, the choice or definition of data points is a design choice. We might refer to the task of finding a useful definition of data points as “data point engineering”.

One practical requirement for a useful definition of data points is that we should have access to many of them. Many ML methods construct estimates for a quantity of interest (such as a prediction or forecast) by averaging over a set of reference (or training) data points. These estimates become more accurate for an increasing number of data points used for computing the average.

A key property of a dataset is the number [math]\samplesize[/math] of individual data points it contains. The number of data points within a dataset is also referred to as the sample size. From a statistical point of view, the larger the sample size [math]\samplesize[/math] the better. However, there might be restrictions on computational resources (such as memory size) that limit the maximum sample size [math]\samplesize[/math] that can be processed.

For most applications, it is impossible to have full access to every single microscopic property of a data point. Consider a data point that represents a vaccine. A full characterization of such a data point would require specifying its chemical composition down to the level of molecules and atoms. Moreover, there are properties of a vaccine that depend on the patient who received the vaccine.

We find it useful to distinguish between two different groups of properties of a data point. The first group of properties is referred to as features and the second group of properties is referred to as labels. Roughly speaking, features are low-level properties of a data point that can be measured or computed easily in an automated fashion. In contrast, labels are high-level properties of a data point that represent some quantity of interest. Determining the label value of a data point often requires human labour, e.g., a domain expert who has to examine the data point. Some widely used synonyms for features are “covariate”, “explanatory variable”, “independent variable”, “input (variable)”, “predictor (variable)” or “regressor” [2][3][4]. Some widely used synonyms for the label of a data point are "response variable", "output variable" or "target" [2][3][4].

We will discuss the concepts of features and labels in somewhat more detail in Sections Features and Labels. However, we would like to point out that the distinction between features and labels is blurry. The same property of a data point might be used as a feature in one application, while it might be used as a label in another application. Let us illustrate this blurry distinction between features and labels using the problem of missing data.

Assume we have a list of data points, each of which is characterized by several properties that could be measured easily in principle (by sensors). These properties would be first candidates for being used as features of the data points. However, some of these properties are unknown (missing) for a small set of data points (e.g., due to broken sensors). We could then define the properties which are missing for some data points as labels and try to predict these labels using the remaining properties (which are known for all data points) as features. The task of determining missing values of properties that could be measured easily in principle is referred to as imputation [5].
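
A minimal sketch of this idea, using scikit-learn's SimpleImputer to fill in missing feature values with the average of the observed values of that feature; the toy data below is made up purely for illustration.

<syntaxhighlight lang="python">
import numpy as np
from sklearn.impute import SimpleImputer

# toy dataset: four data points with two features; np.nan marks a missing value
X = np.array([[1.0, 2.0],
              [np.nan, 3.0],
              [4.0, np.nan],
              [5.0, 6.0]])

# replace each missing entry by the mean of the observed values of that feature
imputer = SimpleImputer(strategy="mean")
X_filled = imputer.fit_transform(X)
print(X_filled)
</syntaxhighlight>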

Missing data might also arise in image processing applications. Consider data points being images (snapshots) generated by a smartphone camera. Maybe the most intuitive choice for the features of a (bitmap) image are the colour intensities for each pixel (see Figure fig_snapshot_pixels). Due to hardware failures some of the image pixels might be corrupted or (their colour intensities) even completely missing. We could then try to learn to predict the colour intensities of a pixel based on the colour intensities of the neighbouring pixels. To this end, we might define new data points as small rectangular regions (patches) of the image and use the colour intensity of the centre pixel (“target pixel”) as the label of such a patch.

Figure fig:two_main_parameters illustrates two key properties of a dataset. The first property is the sample size [math]\samplesize[/math], i.e., the number of individual data points that constitute the dataset. The second key property is the number [math]\featurelen[/math] of features that are used to characterize an individual data point. The behaviour of ML methods often depends crucially on the ratio [math]\samplesize/\featurelen[/math]. The performance of ML methods typically improves with increasing [math]\samplesize/\featurelen[/math]. As a rule of thumb, we should use datasets for which [math]\samplesize/\featurelen \gg 1[/math]. We will make the informal condition [math]\samplesize / \featurelen \gg 1[/math] more precise in Chapter Model Validation and Selection .

Two key properties of a dataset are the number (sample size) [math]\samplesize[/math] of individual data points that constitute the dataset and the number [math]\featuredim[/math] of features used to characterize individual data points. The behaviour of ML methods typically depends crucially on the ratio [math]\samplesize/\featuredim[/math].

Features

Similar to the choice (or definition) of data points within an ML application, the choice of which properties to use as their features is also a design choice. In general, features are (low-level) properties of a data point that can be computed or measured easily. This is obviously a highly informal characterization since there is no universal criterion for the difficulty of computing or measuring a specific property of data points. The task of choosing which properties to use as features of data points might be the most challenging part in the application of ML methods. Chapter Feature Learning discusses feature learning methods that automate (to some extent) the construction of good features.

In some application domains there is a rather natural choice for the features of a data point. For data points representing audio recordings (of a given duration) we might use the signal amplitudes at regular sampling instants (e.g., using a sampling frequency of [math]44[/math] kHz) as features. For data points representing images it seems natural to use the colour (red, green and blue) intensity levels of each pixel as a feature (see Figure fig_snapshot_pixels).

The feature construction for images depicted in Figure fig_snapshot_pixels can be extended to other types of data points as long as they can be visualized efficiently [6]. Audio recordings are typically available as a sequence of signal amplitudes [math]a_{\timeidx}[/math] collected regularly at time instants [math]\timeidx=1,\ldots,\featuredim[/math] with a sampling frequency [math]\approx 44[/math] kHz. From a signal processing perspective, it seems natural to directly use the signal amplitudes as features, [math]\feature_{\featureidx} = a_{\featureidx}[/math] for [math]\featureidx=1,\ldots,\featurelen[/math]. However, another choice for the features would be the pixel RGB values of some visualization of the audio recording.

Two visualizations of a data point that represents an audio recording. The left figure shows a line plot of the audio signal amplitudes. The right figure shows a spectrogram of the audio recording.

Figure fig_visualization_audio depicts two possible visualizations of an audio signal. The first visualization is obtained from a line plot of the signal amplitudes as a function of time [math]\timeidx[/math]. Another visualization of an audio recording is obtained from an intensity plot of its spectrogram [7][8]. We can then use the pixel RGB intensities of these visualizations as the features for an audio recording. Using this trick we can transform any ML method for image data to an ML method for audio data. Similarly, we can apply ML methods for image segmentation to the scatterplot of a dataset in order to cluster the dataset (see Chapter Clustering ).
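
The following sketch illustrates the two feature constructions discussed above, using a synthetic audio signal and SciPy's signal-processing routines; the signal and parameter values are assumptions chosen purely for illustration.

<syntaxhighlight lang="python">
import numpy as np
from scipy import signal

fs = 44_000                      # sampling frequency (Hz), roughly 44 kHz
t = np.arange(0, 1.0, 1 / fs)    # one second of audio
a = np.sin(2 * np.pi * 440 * t)  # synthetic recording: a 440 Hz tone

# option 1: use the raw signal amplitudes as features
x_amplitudes = a

# option 2: compute a spectrogram (a time-frequency "image") of the recording;
# its pixel intensities could then serve as features
freqs, times, Sxx = signal.spectrogram(a, fs=fs)
x_spectrogram = Sxx.flatten()

print(x_amplitudes.shape, x_spectrogram.shape)
</syntaxhighlight>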

If the snapshot [math]\datapoint^{(\sampleidx)}[/math] is stored as a [math]512 \times 512[/math] RGB bitmap, we could use as features [math]\featurevec^{(\sampleidx)} \in \mathbb{R}^{\featuredim}[/math] the red-, green- and blue component of each pixel in the snapshot. The length of the feature vector would then be [math]\featuredim= 3 \times 512 \times 512 \approx 786000[/math].

Many important ML application domains generate data points that are characterized by several numeric features [math]x_{1},\ldots,x_{\featuredim}[/math]. We represent numeric features by real numbers [math]x_{1},\ldots,x_{\featuredim} \in \mathbb{R}[/math], which might seem impractical. Indeed, digital computers cannot store a real number exactly as this would require an infinite number of bits. However, numeric linear algebra soft- and hardware allows us to approximate real numbers with sufficient accuracy. The majority of ML methods discussed in this book assume that data points are characterized by real-valued features. Section Feature Learning for Non-Numeric Data discusses methods for constructing numeric features of data points whose natural representation is non-numeric.

We assume that data points arising in a given ML application are characterized by the same number [math]\featurelen[/math] of individual features [math]\feature_{1},\ldots,\feature_{\featurelen}[/math]. It is convenient to stack the individual features of a data point into a single feature vector

[[math]]\begin{equation} \nonumber \featurevec=\big(\feature_{1},\ldots,\feature_{\featuredim}\big)^{T}. \end{equation}[[/math]]

Each data point is then characterized by its feature vector [math]\featurevec[/math]. Note that stacking the features of a data point into a column vector [math]\featurevec[/math] is pure convention. We could also arrange the features as a row vector or even as a matrix, which might be even more natural for features obtained by the pixels of an image (see Figure fig_snapshot_pixels).
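
As a minimal illustration of this convention, the following sketch stacks the colour intensities of a hypothetical 512-by-512 RGB bitmap into a single feature vector; the random image merely stands in for an actual snapshot.

<syntaxhighlight lang="python">
import numpy as np

# a hypothetical 512 x 512 RGB snapshot with random colour intensities
image = np.random.rand(512, 512, 3)

# stack all red, green and blue intensities into one feature vector
x = image.reshape(-1)
print(x.shape)   # (786432,), i.e., n = 3 * 512 * 512 features
</syntaxhighlight>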

We refer to the set of possible feature vectors of data points arising in some ML application as the feature space and denote it as [math]\featurespace[/math]. The feature space is a design choice as it depends on what properties of a data point we use as its features. This design choice should take into account the statistical properties of the data as well as the available computational infrastructure. If the computational infrastructure allows for efficient numerical linear algebra, then using [math]\featurespace = \mathbb{R}^{\featurelen}[/math] might be a good choice.

The Euclidean space [math]\mathbb{R}^{\featuredim}[/math] is an example of a feature space with a rich geometric and algebraic structure [9]. The algebraic structure of [math]\mathbb{R}^{\featuredim}[/math] is defined by vector addition and multiplication of vectors with scalars. The geometric structure of [math]\mathbb{R}^{\featuredim}[/math] is obtained by the Euclidean norm as a measure for the distance between two elements of [math]\mathbb{R}^{\featuredim}[/math]. The algebraic and geometric structure of [math]\mathbb{R}^{\featuredim}[/math] often enables an efficient search over [math]\mathbb{R}^{\featuredim}[/math] to find elements with desired properties. Chapter ERM for Linear Regression discusses examples of such search problems in the context of learning an optimal hypothesis.

Modern information-technology, including smartphones or wearables, allows us to measure a huge number of properties about data points in many application domains. Consider a data point representing the book author “Alex Jung”. Alex uses a smartphone to take roughly five snapshots per day (sometimes more, e.g., during a mountain hike) resulting in more than [math]1000[/math] snapshots per year. Each snapshot contains around [math]10^{6}[/math] pixels whose greyscale levels we can use as features of the data point. This results in more than [math]10^{9}[/math] features (per year!).

As indicated above, many important ML applications involve data points represented by very long feature vectors. To process such high-dimensional data, modern ML methods rely on concepts from high-dimensional statistics [10][11]. One such concept from high-dimensional statistics is the notion of sparsity. Section The Lasso discusses methods that exploit the tendency of high-dimensional data points, which are characterized by a large number [math]\featuredim[/math] of features, to concentrate near low-dimensional subspaces in the feature space [12].

At first sight it might seem that “the more features the better” since using more features might convey more relevant information to achieve the overall goal. However, as we discuss in Chapter Regularization , it can be detrimental for the performance of ML methods to use an excessive amount of (irrelevant) features. Computationally, using too many features might result in prohibitive computational resource requirements (such as processing time). Statistically, each additional feature typically introduces an additional amount of noise (due to measurement or modelling errors) which is harmful for the accuracy of the ML method.

It is difficult to give a precise and broadly applicable characterization of the maximum number of features that should be used to characterize the data points. As a rule of thumb, the number [math]\samplesize[/math] of (labeled) data points used to train a ML method should be much larger than the number [math]\featurelen[/math] of numeric features (see Figure fig:two_main_parameters). The informal condition [math]\samplesize/\featurelen \gg 1[/math] can be ensured by either collecting a sufficiently large number [math]\samplesize[/math] of data points or by using a sufficiently small number [math]\featurelen[/math] of features. We next discuss implementations for each of these two complementary approaches.

The acquisition of (labeled) data points might be costly, requiring human expert labour. Instead of collecting more raw data, it might be more efficient to generate new artificial (synthetic) data via data augmentation techniques. Section Data Augmentation shows how intrinsic symmetries in the data can be used to augment the raw data with synthetic data. As an example for an intrinsic symmetry of data, consider data points being images. We assign each image the label [math]\truelabel=1[/math] if it shows a cat and [math]\truelabel=-1[/math] otherwise. For each image with known label we can generate several augmented (additional) images with the same label. These additional images might be obtained by simple image transformations such as rotations or re-scaling (zoom-in or zoom-out) that do not change the depicted objects (the meaning of the image). Chapter Regularization shows that some basic regularization techniques can be interpreted as an implicit form of data augmentation.
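
A minimal sketch of such an augmentation, assuming images are stored as numpy arrays; mirroring and rotating a labeled image yields additional labeled images with the same label.

<syntaxhighlight lang="python">
import numpy as np

def augment(image, label):
    """Return additional (image, label) pairs obtained from symmetries of the image."""
    return [
        (np.fliplr(image), label),       # mirror the image horizontally
        (np.rot90(image), label),        # rotate by 90 degrees
        (np.rot90(image, k=2), label),   # rotate by 180 degrees
    ]

# toy example: one labeled "cat" image (y = 1) yields three extra labeled images
image = np.random.rand(64, 64, 3)
extra = augment(image, label=1)
print(len(extra), "augmented data points")
</syntaxhighlight>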

The informal condition [math]\samplesize/\featurelen \gg 1[/math] can also be ensured by reducing the number [math]\featurelen[/math] of features used to characterize data points. In some applications, we might use some domain knowledge to choose the most relevant features. For other applications, it might be difficult to tell which quantities are the best choice for features. Chapter Feature Learning discusses methods that learn, based on some given dataset, to determine a small number of relevant features of data points.

Besides the available computational infrastructure, the statistical properties of datasets must also be taken into account for the choice of the feature space. The linear algebraic structure of [math]\mathbb{R}^{\featuredim}[/math] allows us to efficiently represent and approximate datasets that are well aligned along linear subspaces. Section Principal Component Analysis discusses a basic method to optimally approximate datasets by linear subspaces of a given dimension. The geometric structure of [math]\mathbb{R}^{\featuredim}[/math] is also used in Chapter Clustering to decompose a dataset into a few groups or clusters that consist of similar data points.

Throughout this book we will mainly use the feature space [math]\mathbb{R}^{\featuredim}[/math] with dimension [math]\featuredim[/math] being the number of features of a data point. This feature space has proven useful in many ML applications due to availability of efficient soft- and hardware for numerical linear algebra. Moreover, the algebraic and geometric structure of [math]\mathbb{R}^{\featuredim}[/math] reflect the intrinsic structure of data arising in many important application domains. This should not be too surprising as the Euclidean space has evolved as a useful mathematical abstraction of physical phenomena [13].

In general there is no mathematically correct choice for which properties of a data point to use as its features. Most application domains allow for some design freedom in the choice of features. Let us illustrate this design freedom with a personalized health-care application. This application involves data points that represent audio recordings with a fixed duration of three seconds. These recordings are obtained via smartphone microphones and used to detect coughing [14].

Labels

Besides its features, a data point might have properties of a different kind. These properties represent a higher-level fact or quantity of interest that is associated with the data point. We refer to such properties of a data point as its label (or “output” or “target”) and typically denote it by [math]\truelabel[/math] (if it is a single number) or by [math]\mathbf{\truelabel}[/math] (if it is a vector of different label values, such as in multi-label classification). We refer to the set of all possible label values of data points arising in a ML application as the label space [math]\labelspace[/math]. In general, determining the label of a data point is more difficult (to automate) compared to determining its features. Many ML methods revolve around finding efficient ways to predict (estimate or approximate) the label of a data point based solely on its features.

The distinction of data point properties into labels and features is blurry. Roughly speaking, labels are properties of data points that might only be determined with the help of human experts. For data points representing humans, we could define the label [math]\truelabel[/math] as an indicator of whether the person has the flu ([math]\truelabel=1[/math]) or not ([math]\truelabel=0[/math]). This label value can typically only be determined by a physician. However, in another application we might have enough resources to determine the flu status of any person of interest and could use it as a feature that characterizes a person.

Consider a data point that represents a hike, at the start of which the snapshot in Figure fig:image has been taken. The features of this data point could be the red, green and blue (RGB) intensities of each pixel in the snapshot in Figure fig:image. We stack these RGB values into a vector [math]\featurevec \in \mathbb{R}^{\featurelen}[/math] whose length [math]\featurelen[/math] is three times the number of pixels in the image. The label [math]\truelabel[/math] associated with a data point (a hike) could be the expected hiking time to reach the mountain in the snapshot. Alternatively, we could define the label [math]\truelabel[/math] as the water temperature of the lake that is depicted in the snapshot.

Numeric Labels - Regression.

For a given ML application, the label space [math]\labelspace[/math] contains all possible label values of data points. In general, the label space is not just a set of different elements but is also equipped with an (algebraic or geometric) structure. To obtain efficient ML methods, we should exploit such structure. Maybe the most prominent example of such a structured label space is the real numbers [math]\labelspace = \mathbb{R}[/math]. This label space is useful for ML applications involving data points with numeric labels that can be modelled by real numbers. ML methods that aim at predicting a numeric label are referred to as regression methods.

Categorical Labels - Classification. Many important ML applications involve data points whose label indicates the category or class to which the data point belongs. ML methods that aim at predicting such categorical labels are referred to as classification methods. Examples for classification problems include the diagnosis of tumours as benign or malignant, the classification of persons into age groups or detecting the current floor conditions (“grass”, “tiles” or “soil”) for a mower robot.

The simplest type of classification problem is a binary classification problem. Within binary classification, each data point belongs to exactly one out of two different classes. Thus, the label of a data point takes on values from a set that contains two different elements such as [math]\{0,1\}[/math] or [math]\{-1,1\}[/math] or [math]\{\mbox{“shows cat”}, \mbox{“shows no cat”}\}[/math].

We speak of a multi-class classification problem if data points belong to exactly one out of more than two categories (e.g., image categories “no cat shown” vs. “one cat shown” and “more than one cat shown”). If there are [math]\nrcategories[/math] different categories we might use the label values [math]\{1,2,\ldots, \nrcategories\}[/math].

There are also applications where data points can belong to several categories simultaneously. For example, an image can be a cat image and a dog image at the same time if it contains a dog and a cat. Multi-label classification problems and methods use several labels [math]\truelabel_{1},\truelabel_{2},\ldots,[/math] for different categories to which a data point can belong. The label [math]\truelabel_{\featureidx}[/math] represents the [math]\featureidx[/math]-th category and its value is [math]\truelabel_{\featureidx}=1[/math] if the data point belongs to the [math]\featureidx[/math]-th category and [math]\truelabel_{\featureidx}=0[/math] if not.

Ordinal Labels. Ordinal label values are somewhat in between numeric and categorical labels. Similar to categorical labels, ordinal labels take on values from a finite set. Moreover, similar to numeric labels, ordinal labels take on values from an ordered set. As an example of such an ordered label space, consider data points representing rectangular areas of size [math]1[/math] km by [math]1[/math] km. The features [math]\featurevec[/math] of such a data point can be obtained by stacking the RGB pixel values of a satellite image depicting that area (see Figure fig_snapshot_pixels). Besides the feature vector, each rectangular area is characterized by a label [math]\truelabel \in \{1,2,3\}[/math] where

  • [math]\truelabel=1[/math] means that the area contains no trees.
  • [math]\truelabel=2[/math] means that the area is partially covered by trees.
  • [math]\truelabel=3[/math] means that the area is entirely covered by trees.

Thus we might say that label value [math]\truelabel=2[/math] is “larger” than label value [math]\truelabel=1[/math] and label value [math]\truelabel=3[/math] is “larger” than label value [math]\truelabel=2[/math].

The distinction between regression and classification problems and methods is somewhat blurry. Consider a binary classification problem based on data points whose label [math]\truelabel[/math] takes on values [math]-1[/math] or [math]1[/math]. We could turn this into a regression problem by using a new label [math]\truelabel'[/math] which is defined as the confidence in the label [math]\truelabel[/math] being equal to [math]1[/math]. On the other hand, given a prediction [math]\hat{\truelabel'}[/math] for the numeric label [math]\truelabel' \in \mathbb{R}[/math] we can obtain a prediction [math]\hat{\truelabel}[/math] for the binary label [math]\truelabel \in \{-1,1\}[/math] by setting [math]\hat{\truelabel} \defeq 1[/math] if [math]\hat{\truelabel'} \geq0[/math] and [math]\hat{\truelabel}\defeq-1[/math] otherwise. A prominent example for this link between regression and classification is logistic regression which is discussed in Section Logistic Regression . Logistic regression is a binary classification method that uses the same model as linear regression but a different loss function.
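
The following snippet sketches this thresholding step, assuming we already have numeric predictions for a few data points; the values are made up for illustration.

<syntaxhighlight lang="python">
import numpy as np

# hypothetical numeric predictions (confidences) for the label being 1
y_prime_hat = np.array([2.3, -0.7, 0.0, -1.5])

# turn them into binary label predictions from {-1, 1}
y_hat = np.where(y_prime_hat >= 0, 1, -1)
print(y_hat)   # [ 1 -1  1 -1]
</syntaxhighlight>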

We refer to a data point as being labeled if, besides its features [math]\featurevec[/math], the value of its label [math]\truelabel[/math] is known. The acquisition of labeled data points typically involves human labour, such as verifying if an image shows a cat. In other applications, acquiring labels might require sending out a team of marine biologists to the Baltic sea [15], running a particle physics experiment at the European organization for nuclear research (CERN) [16], or conducting animal trials in pharmacology [17].

Let us also point out online market places for a human labelling workforce [18]. These market places allow us to upload data points, such as collections of images or audio recordings, and then offer an hourly rate to humans that label the data points. This labeling work might amount to marking images that show a cat.

Many applications involve data points whose features can be determined easily, but whose labels are known for few data points only. Labeled data is a scarce resource. Some of the most successful ML methods have been devised in application domains where label information can be acquired easily [19]. ML methods for speech recognition and machine translation can make use of massive labeled datasets that are freely available [20].

In the extreme case, we do not know the label of any single data point. Even in the absence of any labeled data, ML methods can be useful for extracting relevant information from features only. We refer to ML methods which do not require any labeled data points as “unsupervised” ML methods. We discuss some of the most important unsupervised ML methods in Chapter Clustering and Chapter Feature Learning .

ML methods learn (or search for) a “good” predictor [math]h: \featurespace \rightarrow \labelspace[/math] which takes the features [math]\featurevec \in \featurespace[/math] of a data point as its input and outputs a predicted label (or output, or target) [math]\hat{\truelabel} = h(\featurevec) \in \labelspace[/math]. A good predictor should be such that [math]\hat{\truelabel} \approx \truelabel[/math], i.e., the predicted label [math]\hat{\truelabel}[/math] is close (with small error [math]\hat{\truelabel} - \truelabel[/math]) to the true underlying label [math]\truelabel[/math].

Scatterplot

Consider data points characterized by a single numeric feature [math]\feature[/math] and single numeric label [math]\truelabel[/math]. To gain more insight into the relation between the features and label of a data point, it can be instructive to generate a scatterplot as shown in Figure fig_scatterplot_temp_FMI. A scatterplot depicts the data points [math]\datapoint^{(\sampleidx)}=(\feature^{(\sampleidx)},\truelabel^{(\sampleidx)})[/math] in a two-dimensional plane with the axes representing the values of feature [math]\feature[/math] and label [math]\truelabel[/math].

The visual inspection of a scatterplot might suggest potential relationships between feature [math]\feature[/math] (minimum daytime temperature) and label [math]\truelabel[/math] (maximum daytime temperature). From Figure fig_scatterplot_temp_FMI, it seems that there might be a relation between feature [math]\feature[/math] and label [math]\truelabel[/math] since data points with larger [math]\feature[/math] tend to have larger [math]\truelabel[/math]. This makes sense since having a larger minimum daytime temperature typically implies also a larger maximum daytime temperature.
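
A scatterplot like the one in Figure fig_scatterplot_temp_FMI can be generated with a few lines of Python; the temperature values below are made up purely for illustration.

<syntaxhighlight lang="python">
import matplotlib.pyplot as plt

# hypothetical daily weather data (in degrees Celsius)
x = [-10.2, -5.3, -1.1, 0.4, 3.8, 7.9]   # feature: minimum daytime temperature
y = [-5.0, -1.2, 2.3, 4.1, 8.5, 12.0]    # label: maximum daytime temperature

plt.scatter(x, y)
plt.xlabel("feature x (minimum daytime temperature)")
plt.ylabel("label y (maximum daytime temperature)")
plt.show()
</syntaxhighlight>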

To construct a scatterplot for data points with more than two features we can use feature learning methods (see Chapter Feature Learning ). These methods transform high-dimensional data points, having billions of raw features, to three or two new features. These new features can then be used as the coordinates of the data points in a scatterplot.

Probabilistic Models for Data

A powerful idea in ML is to interpret each data point as the realization of a random variable (RV). For ease of exposition let us consider data points that are characterized by a single feature [math]\feature[/math]. The following concepts can be extended easily to data points characterized by a feature vector [math]\featurevec[/math] and a label [math]\truelabel[/math].

One of the most basic examples of a probabilistic model for data points in ML is the independent and identically distributed (iid) assumption. This assumption interprets data points [math]\feature^{(1)},\ldots,\feature^{(\samplesize)}[/math] as realizations of statistically independent RVs with the same probability distribution [math]p(\feature)[/math]. It might not be immediately clear why it is a good idea to interpret data points as realizations of RVs with the common probability distribution [math]p(\feature)[/math]. However, this interpretation allows us to use the properties of the probability distribution to characterize overall properties of entire datasets, i.e., large collections of data points.

The probability distribution [math]p(\feature)[/math] underlying the data points within the i.i.d. assumption is either known (based on some domain expertise) or estimated from data. It is often enough to estimate only some parameters of the distribution [math]p(\feature)[/math]. Section Maximum Likelihood discusses a principled approach to estimate the parameters of a probability distribution from a given set of data points. This approach is sometimes referred to as maximum likelihood and aims at finding (the parameters of) a probability distribution [math]p(\feature)[/math] such that the probability (density) of observing the given data points is maximized [21][22][23].

Two of the most basic and widely used parameters of a probability distribution [math]p(\feature)[/math] are the expected value or mean [24] [math]\mu_{\feature} = \expect\{\feature\} \defeq \int_{\feature'} \feature' p(\feature') dx'[/math] and the variance [math]\sigma_{\feature}^{2} \defeq \expect\big\{\big(\feature-\expect\{\feature\}\big)^{2}\big\}.[/math] These parameters can be estimated using the sample mean (average) and sample variance,

[[math]] \begin{align} \hat{\mu}_{\feature} & \defeq (1/\samplesize) \sum_{\sampleidx=1}^{\samplesize} \feature^{(\sampleidx)}\mbox{ , and } \nonumber \\ \widehat{\sigma^{2}_{\feature}}& \label{equ_sample_mean_var}\defeq (1/\samplesize) \sum_{\sampleidx=1}^{\samplesize} \big( \feature^{(\sampleidx)} - \hat{\mu}_{\feature} \big)^{2}. \end{align} [[/math]]

The sample mean and sample variance \eqref{equ_sample_mean_var} are the maximum likelihood estimators for the mean and variance of a normal (Gaussian) distribution [math]p(\feature)[/math] (see [25](Chapter 2.3.4)).
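
A minimal sketch that computes the estimates \eqref{equ_sample_mean_var} for a synthetic i.i.d. dataset; the Gaussian toy data is an assumption made purely for illustration.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(seed=0)

# m = 1000 data points drawn i.i.d. from a Gaussian with mean 2 and variance 4
x = rng.normal(loc=2.0, scale=2.0, size=1000)

mu_hat = np.mean(x)                      # sample mean
sigma2_hat = np.mean((x - mu_hat) ** 2)  # sample variance
print(mu_hat, sigma2_hat)
</syntaxhighlight>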

Most of the ML methods discussed in this book are motivated by an i.i.d. assumption. It is important to note that this i.i.d. assumption is only a modelling assumption (or hypothesis). There is no means to verify if an arbitrary set of data points are “exactly” realizations of iid RVs. There are principled statistical methods (hypothesis tests) that allow us to verify if a given set of data points can be well approximated as realizations of iid RVs [26]. Alternatively, we can enforce the i.i.d. assumption if we generate synthetic data using a random number generator. Such synthetic iid data points could be obtained by sampling algorithms that incrementally build a synthetic dataset by adding randomly chosen raw data points [27].


The Model

Consider some ML application that generates data points, each characterized by features [math]\featurevec \in \featurespace[/math] and label [math]\truelabel \in \labelspace[/math]. The goal of a ML method is to learn a hypothesis map [math]h: \featurespace \rightarrow \labelspace[/math] such that

[[math]]\begin{equation} \label{equ_approx_label_pred} \truelabel \approx \underbrace{h(\featurevec)}_{\hat{\truelabel}} \mbox{ for any data point}. \end{equation}[[/math]]

The informal goal \eqref{equ_approx_label_pred} will be made precise in several aspects throughout the rest of our book. First, we need to quantify the approximation error \eqref{equ_approx_label_pred} incurred by a given hypothesis map [math]h[/math]. Second, we need to make precise what we actually mean by requiring \eqref{equ_approx_label_pred} to hold for “any” data point. We solve the first issue by the concept of a loss function in Section The Loss . The second issue is then solved in Chapter Empirical Risk Minimization by using a simple probabilistic model for data.

Let us assume for the time being that we have found a reasonable hypothesis [math]h[/math] in the sense of \eqref{equ_approx_label_pred}. We can then use this hypothesis to predict the label of any data point for which we know its features. The prediction [math]\hat{\truelabel}=h(\featurevec)[/math] is obtained by evaluating the hypothesis for the features [math]\featurevec[/math] of a data point (see Figure fig_feature_map_eval and fig:Hypothesis_Map). We refer to a hypothesis map also as a predictor map since it is used to compute the prediction [math]\hat{\truelabel}[/math] of a (true) label [math]\truelabel[/math].


For ML problems using a finite label space [math]\labelspace[/math] (e.g., [math]\labelspace=\{-1,1\}[/math]), we refer to a hypothesis also as a classifier. For a finite label space [math]\labelspace[/math], we can characterize a particular classifier map [math]h[/math] using its different decision regions

[[math]]\begin{equation} \label{equ_decision_region} \decreg{a} \defeq \big\{ \featurevec \in \mathbb{R}^{\featuredim}: h(\featurevec) = a \big\} \subseteq \featurespace. \end{equation}[[/math]]

Each label value [math]a \in \labelspace[/math] is associated with a specific decision region [math]\decreg{a}[/math]. For a given label value [math]a \in \labelspace[/math], the decision region [math]\decreg{a}[/math] is constituted by all feature vectors [math]\featurevec \in \featurespace[/math] which are mapped to this label value, [math]h(\featurevec)=a[/math].
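
The sketch below evaluates a simple hand-crafted classifier on a grid of feature vectors to trace out its decision regions; the classifier itself is just an illustrative assumption.

<syntaxhighlight lang="python">
import numpy as np

# a simple classifier for feature vectors x = (x1, x2): predict label 1 on or
# above the line x2 = x1 and label -1 below it
def h(x):
    return 1 if x[1] >= x[0] else -1

# evaluate the classifier on a grid of feature vectors
grid = [(x1, x2) for x1 in np.linspace(-1, 1, 5) for x2 in np.linspace(-1, 1, 5)]
region_plus = [x for x in grid if h(x) == 1]    # decision region for label  1
region_minus = [x for x in grid if h(x) == -1]  # decision region for label -1

print(len(region_plus), len(region_minus))
</syntaxhighlight>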

A hypothesis (predictor) [math]h[/math] maps features [math]\featurevec\!\in\!\featurespace[/math], of an on-board camera snapshot, to the prediction [math]\hat{\truelabel}\!=\! h(\featurevec)\!\in\!\labelspace[/math] for the coordinate of the current location of a cleaning robot. ML methods use data to learn predictors [math]h[/math] such that [math]\hat{\truelabel}\!\approx\!\truelabel[/math] (with true label [math]\truelabel[/math]).

In principle, ML methods could use any possible map [math]h: \featurespace \rightarrow \labelspace[/math] to predict the label [math]\truelabel \in \labelspace[/math] via computing [math]\hat{\truelabel} = h(\featurevec)[/math]. The set of all maps from the feature space [math]\featurespace[/math] to the label space is typically denoted as [math]\labelspace^{\featurespace}[/math].[a]

In general, the set [math]\labelspace^{\featurespace}[/math] is way too large to be searched over by practical ML methods. As a case in point, consider data points characterized by a single numeric feature [math]\feature \in \mathbb{R}[/math] and label [math]\truelabel \in \mathbb{R}[/math]. The set of all real-valued maps [math]h(\feature)[/math] of a real-valued argument already contains uncountably many different hypothesis maps [28].

Practical ML methods can search and evaluate only a (tiny) subset of all possible hypothesis maps. This subset of computationally feasible (“affordable”) hypothesis maps is referred to as the hypothesis space or model underlying a ML method. As depicted in Figure fig_hypo_space, ML methods typically use a hypothesis space [math]\hypospace[/math] that is a tiny subset of [math]\labelspace^{\featurespace}[/math]. Similar to the features and labels used to characterize data points, also the hypothesis space underlying a ML method is a design choice. As we will see, the choice for the hypothesis space involves a trade-off between computational complexity and statistical properties of the resulting ML methods.

The preference for a particular hypothesis space often depends on the computational infrastructure that is available to a ML method. Different computational infrastructures favour different hypothesis spaces. ML methods implemented in a small embedded system might prefer a linear hypothesis space, which results in algorithms that require a small number of arithmetic operations. Deep learning methods implemented in a cloud computing environment typically use much larger hypothesis spaces obtained from a large artificial neural network (ANN) (see Section Deep Learning ).

ML methods can also be implemented using a spreadsheet software. Here, we might use a hypothesis space consisting of maps [math]h: \featurespace \rightarrow \labelspace[/math] that are represented by look-up tables (see Table table-lookup). If we instead use the programming language Python to implement a ML method, we can obtain a hypothesis space by collecting all possible Python subroutines with one input (a scalar feature [math]\feature[/math]), one output argument (the predicted label [math]\hat{\truelabel}[/math]) and consisting of less than [math]100[/math] lines of code.

A look-up table defines a hypothesis map [math]h[/math]. The value [math]h(\feature)[/math] is given by the entry in the second column of the row whose first column entry is [math]\feature[/math]. We can construct a hypothesis space [math]\hypospace[/math] by using a collection of different look-up tables.
{| class="wikitable"
! feature [math]\feature[/math] !! prediction [math]h(\feature)[/math]
|-
| 0 || 0
|-
| 1/10 || 10
|-
| 2/10 || 3
|-
| [math]\vdots[/math] || [math]\vdots[/math]
|-
| 1 || 22.3
|}
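To make the look-up-table idea concrete, the following minimal Python sketch (with hypothetical table entries) represents one such hypothesis map as a dictionary; collecting several such tables yields a finite hypothesis space.

<syntaxhighlight lang="python">
# A hypothesis map defined by a look-up table (hypothetical entries).
lookup_table = {0.0: 0.0, 0.1: 10.0, 0.2: 3.0, 1.0: 22.3}

def h_lookup(x, table=lookup_table, default=0.0):
    """Predict the label for feature x by reading it off the look-up table."""
    return table.get(x, default)

print(h_lookup(0.1))    # 10.0
print(h_lookup(0.35))   # 0.0 (feature value not listed in the table)
</syntaxhighlight>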

Broadly speaking, the design choice for the hypothesis space [math]\hypospace[/math] of a ML method has to balance between two conflicting requirements.

* It has to be sufficiently large such that it contains at least one accurate predictor map [math]\hat{h} \in \hypospace[/math]. A hypothesis space [math]\hypospace[/math] that is too small might fail to include a predictor map required to reproduce the (potentially highly non-linear) relation between features and label. Consider the task of grouping or classifying images into “cat” images and “no cat” images. The classification of each image is based solely on the feature vector obtained from the pixel colour intensities. The relation between features and label ([math]\truelabel \in \{ \mbox{cat}, \mbox{no cat} \}[/math]) is highly non-linear. Any ML method that uses a hypothesis space consisting only of linear maps will most likely fail to learn a good predictor (classifier). We say that a ML method is underfitting if it uses a hypothesis space that does not contain any hypothesis map that can accurately predict the label of any data point.
* It has to be sufficiently small such that its processing fits the available computational resources (memory, bandwidth, processing time). We must be able to efficiently search over the hypothesis space to find good predictors (see Section The Loss and Chapter Empirical Risk Minimization ). This requirement also implies that the maps [math]h(\featurevec)[/math] contained in [math]\hypospace[/math] can be evaluated (computed) efficiently [29]. Another important reason for using a hypothesis space [math]\hypospace[/math] that is not too large is to avoid overfitting (see Chapter Regularization ). If the hypothesis space [math]\hypospace[/math] is too large, then we can easily find a hypothesis which (almost) perfectly predicts the labels of data points in the training set used to learn a hypothesis. However, such a hypothesis might deliver poor predictions for labels of data points outside the training set. We say that the hypothesis does not generalize well.

Parametrized Hypothesis Spaces

A wide range of current scientific computing environments allow for efficient numerical linear algebra. This hardware and software allows us to efficiently process data that is provided in the form of numeric arrays, such as vectors, matrices or tensors [1]. To take advantage of such computational infrastructure, many ML methods use the hypothesis space

[[math]]\begin{align} \label{equ_def_hypo_linear_pred} \hypospace^{(\featuredim)} \!\defeq\!\{ h^{(\weights)}\!:\!\mathbb{R}^{\featuredim}\!\rightarrow\!\mathbb{R}: h^{(\weights)}(\featurevec)\!\defeq\!\featurevec^{T} \weights \mbox{ with some parameter vector } \weights\!\in\!\mathbb{R}^{\featuredim} \}. \end{align}[[/math]]

The hypothesis space \eqref{equ_def_hypo_linear_pred} is constituted by linear maps (functions)

[[math]]\begin{equation} \label{equ_def_linear_map} h^{(\weights)}\big(\featurevec\big): \mathbb{R}^{\featuredim} \rightarrow \mathbb{R}: \featurevec \mapsto \weights^{T} \featurevec. \end{equation}[[/math]]

The function [math]h^{(\weights)}[/math] \eqref{equ_def_linear_map} maps, in a linear fashion, the feature vector [math]\featurevec \in \mathbb{R}^{\featuredim}[/math] to the predicted label [math]h^{(\weights)}(\featurevec) = \featurevec^{T} \weights \in \mathbb{R}[/math]. For [math]\featuredim\!=\!1[/math] the feature vector reduces to a single feature [math]\feature[/math] and the hypothesis space \eqref{equ_def_hypo_linear_pred} consists of all maps [math]h^{(\weight)}(\feature) = \weight \feature[/math] with weight [math]\weight \in \mathbb{R}[/math] (see Figure scalar_lin_space).
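As a minimal illustration of the parametrized hypothesis space \eqref{equ_def_hypo_linear_pred}, the following Python/NumPy sketch evaluates a linear hypothesis [math]h^{(\weights)}(\featurevec) = \featurevec^{T} \weights[/math] for a hypothetical parameter vector and feature vector.

<syntaxhighlight lang="python">
import numpy as np

def h_linear(w, x):
    """Linear hypothesis h^{(w)}(x) = x^T w."""
    return np.dot(w, x)

w = np.array([0.5, -1.0, 2.0])   # hypothetical parameter vector (n = 3)
x = np.array([1.0, 2.0, 0.5])    # hypothetical feature vector
y_hat = h_linear(w, x)           # predicted label
print(y_hat)                     # 0.5*1 - 1*2 + 2*0.5 = -0.5
</syntaxhighlight>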


Consider a hypothesis [math]h: \featurespace \rightarrow \labelspace[/math] that is used for locating a cleaning robot. The hypothesis [math]h[/math] reads in the feature vector [math]\featurevec^{(\timeidx)} \in \featurespace[/math], that might be RGB pixel intensities of an on-board camera snapshot, at time [math]\timeidx[/math]. It then outputs a prediction [math]\hat{\truelabel}^{(\timeidx)}=h(\featurevec^{(\timeidx)} )[/math] for the [math]\truelabel[/math]-coordinate [math]\truelabel^{(\timeidx)}[/math] of the cleaning robot at time [math]\timeidx[/math]. A key problem studied within ML is how to automatically learn a good (accurate) predictor [math]h[/math] such that [math]\truelabel^{(\timeidx)} \approx h(\featurevec^{(\timeidx)} )[/math].

Three particular members of the hypothesis space [math]\hypospace =\{ h^{(\weight)}: \mathbb{R} \rightarrow \mathbb{R}, h^{(\weight)}(\feature) = \weight \feature \}[/math] which consists of all linear functions of the scalar feature [math]\feature[/math]. We can parametrize this hypothesis space conveniently using the weight [math] \weight \in \mathbb{R}[/math] as [math]h^{(\weight)}(\feature) = \weight \feature[/math].

The elements of the hypothesis space [math]\hypospace[/math] in \eqref{equ_def_hypo_linear_pred} are parametrized by the parameter vector [math]\weights \in \mathbb{R}^{\featuredim}[/math]. Each map [math]h^{(\weights)} \in \hypospace[/math] is fully specified by the parameter vector [math]\weights \in \mathbb{R}^{\featuredim}[/math]. This parametrization of the hypothesis space [math]\hypospace[/math] allows us to process and manipulate hypothesis maps by vector operations. In particular, instead of searching over the function space [math]\hypospace[/math] (its elements are functions!) to find a good hypothesis, we can equivalently search over all possible parameter vectors [math]\weights \in \mathbb{R}^{\featuredim}[/math].

The search space [math]\mathbb{R}^{\featuredim}[/math] is still (uncountably) infinite but it has a rich geometric and algebraic structure that allows us to efficiently search over this space. Chapter Gradient-Based Learning discusses methods that use the concept of a gradient to implement an efficient search for useful parameter vectors [math]\weights \in \mathbb{R}^{\featuredim}[/math].

The hypothesis space \eqref{equ_def_hypo_linear_pred} is also appealing because of the broad availability of computing hardware such as graphics processing units. Another factor boosting the widespread use of \eqref{equ_def_hypo_linear_pred} is the availability of optimized software libraries for numerical linear algebra.

The hypothesis space \eqref{equ_def_hypo_linear_pred} can also be used for classification problems, e.g., with label space [math]\labelspace = \{-1,1\}[/math]. Indeed, given a linear predictor map [math]h^{(\weights)}[/math] we can classify data points according to [math]\hat{\truelabel}\!=\!1[/math] for [math]h^{(\weights)}(\featurevec) \geq 0[/math] and [math]\hat{\truelabel}\!=\!-1[/math] otherwise. We refer to a classifier that computes the predicted label by first applying a linear map to the features as a linear classifier.
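A minimal sketch of such a linear classifier, using hypothetical parameter values:

<syntaxhighlight lang="python">
import numpy as np

def linear_classifier(w, x):
    """Predict the label 1 if w^T x >= 0 and -1 otherwise."""
    return 1 if np.dot(w, x) >= 0 else -1

w = np.array([1.0, -2.0])                            # hypothetical parameter vector
print(linear_classifier(w, np.array([3.0, 1.0])))    # w^T x = 1  >= 0 -> predicts 1
print(linear_classifier(w, np.array([1.0, 2.0])))    # w^T x = -3 <  0 -> predicts -1
</syntaxhighlight>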

Figure fig_lin_dec_boundary illustrates the decision regions \eqref{equ_decision_region} of a linear classifier for binary labels. The decision regions are half-spaces and, in turn, the decision boundary is a hyperplane [math]\{ \featurevec: \weights^{T} \featurevec = b \}[/math]. Note that each linear classifier corresponds to a particular linear hypothesis map from the hypothesis space \eqref{equ_def_hypo_linear_pred}. However, we can use different loss functions to measure the quality of a linear classifier. Three widely-used examples for ML methods that learn a linear classifier are logistic regression (see Section Logistic Regression ), the support vector machine (SVM) (see Section Support Vector Machines ) and the naive Bayes classifier (see Section Bayes Classifier ).

A hypothesis [math]h: \featurespace\!\rightarrow\!\labelspace[/math] for a binary classification problem, with label space [math]\labelspace=\{ -1,1\}[/math] and feature space [math]\featurespace=\mathbb{R}^{2}[/math], can be represented conveniently via the decision boundary (dashed line) which separates all feature vectors [math]\featurevec[/math] with [math]h(\featurevec) \geq 0 [/math] from the region of feature vectors with [math]h(\featurevec)\lt0[/math]. If the decision boundary is a hyperplane [math]\{ \featurevec : \weights^{T} \featurevec = b \}[/math] (with normal vector [math]\weights \in \mathbb{R}^{\featuredim}[/math]), we refer to the map [math]h[/math] as a linear classifier.

In some application domains, the relation between features [math]\featurevec[/math] and label [math]\truelabel[/math] of a data point is highly non-linear. As a case in point, consider data points that are images of animals. The map that relates the pixel intensities of an image to a label value indicating if the image shows a cat is highly non-linear. For such applications, the hypothesis space \eqref{equ_def_hypo_linear_pred} is not suitable as it only contains linear maps. The second main example for a parametrized hypothesis space studied in this book contains also non-linear maps. This parametrized hypothesis space is obtained from a parametrized signal flow diagram which is referred to as an ANN. Section Deep Learning discusses the construction of non-linear parametrized hypothesis spaces using an ANN.

Upgrading a Hypothesis Space via Feature Maps. Let us discuss a simple but powerful technique for enlarging (“upgrading”) a given hypothesis space [math]\hypospace[/math] to a larger hypothesis space [math]\hypospace' \supseteq \hypospace[/math] that offers a wider selection of hypothesis maps. The idea is to replace the original features [math]\featurevec[/math] of a data point with new (transformed) features [math]\rawfeaturevec = \featuremapvec (\featurevec)[/math]. The transformed features are obtained by applying a feature map [math]\featuremap(\cdot)[/math] to the original features [math]\featurevec[/math]. This upgraded hypothesis space [math]\hypospace'[/math] consists of all concatenations of the feature map [math]\featuremap[/math] and some hypothesis [math]h \in \hypospace[/math],

[[math]]\begin{equation} \label{equ_def_enlarged_hypospace} \hypospace' \defeq \big\{ h'(\featurevec) \defeq h\big( \featuremapvec (\featurevec) \big) : h \in \hypospace\big\}. \end{equation}[[/math]]

The construction \eqref{equ_def_enlarged_hypospace} can be used for arbitrary combinations of a feature map [math]\featuremap(\cdot)[/math] and a “base” hypothesis space [math]\hypospace[/math]. The only requirement is that the output of the feature map can be used as input for a hypothesis [math]h \in \hypospace[/math]. More formally, the range of the feature map must belong to the domain of the maps in [math]\hypospace[/math]. Examples for ML methods that use a hypothesis space of the form \eqref{equ_def_enlarged_hypospace} include polynomial regression (see Section Polynomial Regression ), Gaussian basis regression (see Section Gaussian Basis Regression ) and the important family of kernel methods (see Section Kernel Methods ). The feature map in \eqref{equ_def_enlarged_hypospace} might also be obtained from clustering or feature learning methods (see Section Clustering as Preprocessing and Section Combining PCA with linear regression).

For the special case of the linear hypothesis space \eqref{equ_def_hypo_linear_pred}, the resulting enlarged hypothesis space \eqref{equ_def_enlarged_hypospace} is given by all linear maps [math]\weights^{T} \rawfeaturevec[/math] of the transformed features [math]\featuremapvec(\featurevec)[/math]. Combining the hypothesis space \eqref{equ_def_hypo_linear_pred} with a non-linear feature map results in a hypothesis space that contains non-linear maps from the original feature vector [math]\featurevec[/math] to the predicted label [math]\hat{\truelabel}[/math],

[[math]]\begin{equation} \label{equ_conc_featuremap_linear} \hat{\truelabel} = \weights^{T} \rawfeaturevec = \weights^{T} \featuremapvec (\featurevec). \end{equation}[[/math]]
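As a sketch of the construction \eqref{equ_conc_featuremap_linear}, the following code combines a hypothetical polynomial feature map with a linear map of the transformed features; the resulting hypothesis is a non-linear function of the original (scalar) feature.

<syntaxhighlight lang="python">
import numpy as np

def feature_map(x):
    """Hypothetical polynomial feature map phi(x) = (1, x, x^2)^T for a scalar feature x."""
    return np.array([1.0, x, x ** 2])

def h_upgraded(w, x):
    """Hypothesis from the enlarged space: a linear map applied to the transformed features."""
    return np.dot(w, feature_map(x))

w = np.array([1.0, -0.5, 2.0])   # hypothetical parameter vector for the transformed features
print(h_upgraded(w, 3.0))        # 1 - 1.5 + 18 = 17.5, a non-linear function of x
</syntaxhighlight>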


Non-Numeric Features. The hypothesis space \eqref{equ_def_hypo_linear_pred} can only be used for data points whose features are numeric vectors [math]\featurevec = (\feature_{1},\ldots,\feature_{\featuredim})^{T} \in \mathbb{R}^{\featuredim}[/math]. In some application domains, such as natural language processing, there is no obvious natural choice for numeric features. However, since ML methods based on the hypothesis space \eqref{equ_def_hypo_linear_pred} are well developed (using numerical linear algebra), it might be useful to construct numerical features even for non-numeric data (such as text). For text data, there has been significant progress recently on methods that map human-generated text into sequences of vectors (see [30](Chap. 12) for more details). Moreover, Section Feature Learning for Non-Numeric Data will discuss an approach to generate numeric features for data points that have an intrinsic notion of similarity.

The Size of a Hypothesis Space

The notion of a hypothesis space being too small or being too large can be made precise in different ways. The size of a finite hypothesis space [math]\hypospace[/math] can be defined as its cardinality [math]|\hypospace|[/math] which is simply the number of its elements. For example, consider data points represented by [math]100 \times 10\!=\!1000[/math] black-and-white pixels and characterized by a binary label [math]\truelabel \in \{0,1\}[/math]. We can model such data points using the feature space [math]\featurespace = \{0,1\}^{1000}[/math] and label space [math]\labelspace = \{0,1\}[/math]. The largest possible hypothesis space [math]\hypospace = \labelspace^\featurespace[/math] consists of all maps from [math]\featurespace[/math] to [math]\labelspace[/math]. The size or cardinality of this space is [math]|\hypospace| = 2^{2^{1000}}[/math].

Many ML methods use a hypothesis space which contains infinitely many different predictor maps (see, e.g., \eqref{equ_def_hypo_linear_pred}). For an infinite hypothesis space, we cannot use the number of its elements as a measure for its size. Indeed, for an infinite hypothesis space, the number of elements is not well-defined. Therefore, we measure the size of a hypothesis space [math]\hypospace[/math] using its effective dimension [math]\effdim{\hypospace}[/math].

Consider a hypothesis space [math]\hypospace[/math] consisting of maps [math]h: \featurespace\rightarrow \labelspace[/math] that read in the features [math]\featurevec \in \featurespace[/math] and output a predicted label [math]\hat{\truelabel} = h(\featurevec) \in \labelspace[/math]. We define the effective dimension [math]\effdim{\hypospace}[/math] of [math]\hypospace[/math] as the maximum number [math]\sizehypospace \in \mathbb{N}[/math] such that for any set [math]\dataset = \big\{ \big(\featurevec^{(1)},\truelabel^{(1)}\big), \ldots, \big(\featurevec^{(\sizehypospace)},\truelabel^{(\sizehypospace)}\big) \big\}[/math] of [math]\sizehypospace[/math] data points with different features, we can always find a hypothesis [math]h \in \hypospace[/math] that perfectly fits the labels, [math]\truelabel^{(\sampleidx)} = h\big( \featurevec^{(\sampleidx)} \big)[/math] for [math]\sampleidx=1,\ldots,\sizehypospace[/math].

The effective dimension of a hypothesis space is closely related to the Vapnik–Chervonenkis (VC) dimension [31]. The VC dimension is maybe the most widely used concept for measuring the size of infinite hypothesis spaces [31][32][25][33]. However, the precise definition of the VC dimension is beyond the scope of this book. Moreover, the effective dimension captures most of the relevant properties of the VC dimension for our purposes. For a precise definition of the VC dimension and a discussion of its applications in ML we refer to [32]. Let us illustrate the concept of effective dimension as a measure for the size of a hypothesis space with two examples: linear regression and polynomial regression.

Linear regression uses the hypothesis space [math]\hypospace^{(\featuredim)} = \{ h: \mathbb{R}^{\featuredim} \rightarrow \mathbb{R}: h(\featurevec) = \weights^{T} \featurevec \mbox{ with some vector } \weights \in \mathbb{R}^{\featuredim}\}.[/math] Consider a dataset [math]\dataset=\big\{ \big(\featurevec^{(1)} ,\truelabel^{(1)}\big),\ldots, \big(\featurevec^{(\samplesize)} ,\truelabel^{(\samplesize)}\big) \big\} [/math] consisting of [math]\samplesize[/math] data points. We refer to this number also as the sample size of the dataset. Each data point is characterized by a feature vector [math]\featurevec^{(\sampleidx)} \in \mathbb{R}^{\featuredim}[/math] and a numeric label [math]\truelabel^{(\sampleidx)} \in \mathbb{R}[/math]. Let us further assume that data points are realizations of iid RVs with a common probability distribution.

Under the iid assumption, the matrix [math]\featuremtx = \big(\featurevec^{(1)},\ldots,\featurevec^{(\samplesize)}\big) \in \mathbb{R}^{\featuredim\times \samplesize},[/math] which is obtained by stacking (column-wise) the feature vectors [math]\featurevec^{(\sampleidx)}[/math] (for [math]\sampleidx=1,\ldots,\samplesize[/math]), is full (column-) rank with probability one. Basic results of linear algebra allow us to show that the data points in [math]\dataset[/math] can be perfectly fit by a linear map [math]h \in \hypospace^{(\featuredim)}[/math] as long as [math]\samplesize \leq \featuredim[/math].

As long as the number [math]\samplesize[/math] of data points does not exceed the number of features characterizing each data point, i.e., as long as [math]\samplesize \leq \featuredim[/math], we can find (with probability one) a parameter vector [math]\widehat{\weights}[/math] such that [math]\truelabel^{(\sampleidx)} = \widehat{\weights}^{T} \featurevec^{(\sampleidx)}[/math] for all [math]\sampleidx=1,\ldots,\samplesize[/math]. The effective dimension of the linear hypothesis space [math]\hypospace^{(\featuredim)}[/math] is therefore [math]\sizehypospace = \featuredim[/math].
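The following sketch illustrates this property numerically: for a hypothetical dataset with [math]\samplesize=3[/math] data points and [math]\featuredim=5[/math] features, a least-squares solve yields a parameter vector that fits all labels exactly (up to numerical precision).

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 5                        # sample size m <= number of features n
X = rng.normal(size=(m, n))        # rows are (hypothetical) feature vectors x^(i)
y = rng.normal(size=m)             # (hypothetical) labels y^(i)

# find a parameter vector w with X w = y (least-squares solution)
w, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.allclose(X @ w, y))       # True: all m labels are fit exactly
</syntaxhighlight>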

As a second example, consider the hypothesis space [math]\hypospace_{\rm poly}^{(\featuredim)}[/math] which is constituted by the set of polynomials with maximum degree [math]\featuredim[/math]. The fundamental theorem of algebra tells us that any set of [math]\samplesize[/math] data points with different features can be perfectly fit by a polynomial of degree [math]\featuredim[/math] as long as [math]\featuredim \geq \samplesize[/math]. Therefore, the effective dimension of the hypothesis space [math]\hypospace_{\rm poly}^{(\featuredim)}[/math] is [math]\sizehypospace=\featuredim[/math]. Section Polynomial Regression discusses polynomial regression in more detail.

The hypothesis space [math]\hypospace[/math] is a (typically very small) subset of the (typically very large) set [math]\labelspace^{\featurespace}[/math] of all possible maps from feature space [math]\featurespace[/math] into the label space [math]\labelspace[/math].

The Loss

Every ML method uses a (more or less explicit) hypothesis space [math]\hypospace[/math] which consists of all computationally feasible predictor maps [math]h[/math]. Which predictor map [math]h[/math] out of all the maps in the hypothesis space [math]\hypospace[/math] is the best for the ML problem at hand? To answer this question, ML methods use the concept of a loss function. Formally, a loss function is a map [math]\lossfun: \featurespace \times \labelspace \times \hypospace \rightarrow \mathbb{R}_{+}: \big( \big(\featurevec,\truelabel\big), h\big) \mapsto \loss{(\featurevec,\truelabel)}{h}[/math] which assigns to a pair, consisting of a data point with features [math]\featurevec[/math] and label [math]\truelabel[/math] and a hypothesis [math]h \in \hypospace[/math], the non-negative real number [math]\loss{(\featurevec,\truelabel)}{h}[/math].

Some loss function [math]\loss{(\featurevec,\truelabel)}{h}[/math] for a fixed data point, with features [math]\featurevec[/math] and label [math]\truelabel[/math], and varying hypothesis [math]h[/math]. ML methods try to find (learn) a hypothesis that incurs minimum loss.

The loss value [math]\loss{(\featurevec,\truelabel)}{h}[/math] quantifies the discrepancy between the true label [math]\truelabel[/math] and the predicted label [math]h(\featurevec)[/math]. A small (close to zero) value [math]\loss{(\featurevec,\truelabel)}{h}[/math] indicates a low discrepancy between predicted label and true label of a data point. Figure fig_loss_function depicts a loss function for a given data point, with features [math]\featurevec[/math] and label [math]\truelabel[/math], as a function of the hypothesis [math]h \in \hypospace[/math]. The basic principle of ML methods can then be formulated as: Learn (find) a hypothesis out of a given hypothesis space [math]\hypospace[/math] that incurs a minimum loss [math]\loss{(\featurevec,\truelabel)}{h}[/math] for any data point (see Chapter Empirical Risk Minimization ).

Much like the choice for the hypothesis space [math]\hypospace[/math] used in a ML method, the loss function is also a design choice. We will discuss some widely used examples of loss functions in Section Loss Functions for Numeric Labels and Section Loss Functions for Categorical Labels. The choice for the loss function should take into account the computational complexity of searching the hypothesis space for a hypothesis with minimum loss. Consider a ML method that uses a parametrized hypothesis space and a loss function that is a convex and differentiable (smooth) function of the parameters of a hypothesis. In this case, searching for a hypothesis with small loss can be done efficiently using the gradient-based methods discussed in Chapter Gradient-Based Learning . The minimization of a loss function that is either non-convex or non-differentiable is typically computationally much more difficult. Section Computational and Statistical Aspects of ERM discusses the computational complexities of different types of loss functions in more detail.

Besides computational aspects, the choice for the loss function should also take into account statistical aspects. For example, some loss functions result in ML methods that are more robust against outliers (see Section Least Absolute Deviation Regression and Section Support Vector Machines ). The choice of loss function might also be guided by probabilistic models for the data generated in an ML application. Section Maximum Likelihood details how the maximum likelihood principle of statistical inference provides an explicit construction of loss functions in terms of an (assumed) probability distribution for data points.

The choice for the loss function used to evaluate the quality of a hypothesis might also be influenced by its interpretability. Section Loss Functions for Categorical Labels discusses loss functions for hypotheses that are used to classify data points into two categories. It seems natural to measure the quality of such a hypothesis by the average number of wrongly classified data points, which is precisely the average [math]0/1[/math] loss \eqref{equ_def_0_1} (see Section Loss Functions for Categorical Labels).

In contrast to its appealing interpretation as error-rate, the computational aspects of the average [math]0/1[/math] loss are less pleasant. Minimizing the average [math]0/1[/math] loss to learn an accurate hypothesis amounts to a non-convex and non-smooth optimization problem which is computationally challenging. Section Loss Functions for Categorical Labels introduces the logistic loss as a computationally attractive alternative choice for the loss function in binary classification problems. Learning a hypothesis that minimizes a (average) logistic loss amounts to a smooth convex optimization problem. Chapter Gradient-Based Learning discusses computationally cheap gradient-based methods for solving smooth convex optimization problems.

The above aspects (computation, statistics, interpretability) typically result in conflicting goals for the choice of a loss function. A loss function that has favourable statistical properties might incur a high computational complexity of the resulting ML method. Loss functions that result in computationally efficient ML methods might not allow for an easy interpretation (what does it mean intuitively if the logistic loss of a hypothesis in a binary classification problem is [math]10^{-1}[/math]?). It might therefore be useful to use different loss functions for the search of a good hypothesis (see Chapter Empirical Risk Minimization ) and for its final evaluation. Figure fig_loss_function_and_metric depicts an example for two such loss functions, one of them used for learning a hypothesis by minimizing the loss and the other one used for the final performance evaluation.

For example, in a binary classification problem, we might use the logistic loss to search for (learn) an accurate hypothesis using the optimization methods in Chapter Empirical Risk Minimization . The logistic loss is appealing for this purpose as it can be minimized via efficient gradient-based methods (see Chapter Gradient-Based Learning ). After having found (learnt) an accurate hypothesis, we use the average [math]0/1[/math] loss for the final performance evaluation. The [math]0/1[/math] loss is appealing for this purpose as it can be interpreted as an error or misclassification rate. The loss function used for the final performance evaluation of a learnt hypothesis is sometimes referred to as metric.

Two different loss functions for a given data point and varying hypothesis [math]h[/math]. One of these loss functions (solid curve) is used to learn a good hypothesis by minimizing the loss. The other loss function (dashed curve) is used to evaluate the performance of the learnt hypothesis. The loss function used for this final performance evaluation is sometimes referred to as a metric.

Loss Functions for Numeric Labels

For ML problems involving data points with a numeric label [math]\truelabel \in \mathbb{R}[/math], i.e., for regression problems (see Section Labels), a widely used (first) choice for the loss function is the squared error loss

[[math]]\begin{equation} \label{equ_squared_loss} \loss{(\featurevec,\truelabel)}{h} \defeq \big(\truelabel - \underbrace{h(\featurevec)}_{=\predictedlabel} \big)^{2}. \end{equation}[[/math]]

The squared error loss \eqref{equ_squared_loss} depends on the features [math]\featurevec[/math] only via the predicted label value [math]\predictedlabel= h(\featurevec)[/math]. We can evaluate the squared error loss solely using the prediction [math]h(\featurevec)[/math] and the true label value [math]\truelabel[/math]. Besides the prediction [math]h(\featurevec)[/math], no other properties of the features [math]\featurevec[/math] are required to determine the squared error loss. We will slightly abuse notation and use the shorthand [math]\loss{\truelabel}{\predictedlabel}[/math] for any loss function that depends on the features [math]\featurevec[/math] only via the predicted label [math]\predictedlabel=h(\featurevec)[/math]. Figure fig_squarederror_loss depicts the squared error loss as a function of the prediction error [math]\truelabel - \predictedlabel[/math].

A widely used choice for the loss function in regression problems (with data points having numeric labels) is the squared error loss \eqref{equ_squared_loss}. Note that, for a given hypothesis [math]h[/math], we can evaluate the squared error loss only if we know the features [math]\featurevec[/math] and the label [math]\truelabel[/math] of the data point.

The squared error loss \eqref{equ_squared_loss} has appealing computational and statistical properties. For linear predictor maps [math]h(\featurevec) = \weights^{T} \featurevec[/math], the squared error loss is a convex and differentiable function of the parameter vector [math]\weights[/math]. This allows, in turn, to efficiently search for the optimal linear predictor using efficient iterative optimization methods (see Chapter Gradient-Based Learning ). The squared error loss also has a useful interpretation in terms of a probabilistic model for the features and labels. Minimizing the squared error loss is equivalent to maximum likelihood estimation within a linear Gaussian model [33] (Sec. 2.6.3).

Another loss function used in regression problems is the absolute error loss [math]|\truelabel - \predictedlabel|[/math]. Using this loss function to guide the learning of a predictor results in methods that are robust against a few outliers in the training set (see Section Least Absolute Deviation Regression ). However, this improved robustness comes at the expense of increased computational complexity of minimizing the (non-differentiable) absolute error loss compared to the (differentiable) squared error loss \eqref{equ_squared_loss}.
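A minimal sketch comparing the squared error loss \eqref{equ_squared_loss} and the absolute error loss for a hypothetical label and prediction:

<syntaxhighlight lang="python">
def squared_error(y, y_hat):
    """Squared error loss (y - h(x))^2."""
    return (y - y_hat) ** 2

def absolute_error(y, y_hat):
    """Absolute error loss |y - h(x)|; more robust against outliers."""
    return abs(y - y_hat)

y, y_hat = 1.0, 4.0               # hypothetical label and prediction
print(squared_error(y, y_hat))    # 9.0 -- large errors are punished heavily
print(absolute_error(y, y_hat))   # 3.0
</syntaxhighlight>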

Loss Functions for Categorical Labels

Classification problems involve data points whose labels take on values from a discrete label space [math]\labelspace[/math]. In what follows, unless stated otherwise, we focus on binary classification problems. Moreover, without loss of generality, we use the label space [math]\labelspace = \{-1,1\}[/math]. Classification methods aim at learning a hypothesis or classifier that maps the features [math]\featurevec[/math] of a data point to a predicted label [math]\hat{\truelabel} \in \labelspace[/math].

It is often convenient to implement a classifier by thresholding the value [math]h(\featurevec) \in \mathbb{R}[/math] of a hypothesis map that can deliver arbitrary real numbers. We then classify a data point as [math]\predictedlabel=1[/math] if [math]h(\featurevec)\gt0[/math] and [math]\predictedlabel=-1[/math] otherwise. Thus, the predicted label is obtained from the sign of the value [math]h(\featurevec)[/math]. While the sign of [math]h(\featurevec)[/math] determines the predicted label [math]\predictedlabel[/math], we can interpret the absolute value [math]|h(\featurevec)|[/math] as the confidence in this classification. It is customary to abuse notation and refer to both, the final classification rule (obtained by a thresholding step) [math]\featurevec \mapsto \predictedlabel[/math] and the hypothesis [math]h(\featurevec)[/math] (whose output is thresholded), as a binary classifier.

In principle, we can measure the quality of a hypothesis when used to classify data points using the squared error loss \eqref{equ_squared_loss}. However, the squared error is typically a poor measure for the quality of a hypothesis [math]h(\featurevec)[/math] that is used to classify a data point with binary label [math]\truelabel \in \{-1,1\}[/math]. Figure fig_squarederrornotgoodclass illustrates how the squared error loss of a hypothesis can be (highly) misleading for binary classification.

A training set consisting of four data points with binary labels [math]\truelabel^{(\sampleidx)} \in \{-1,1\}[/math]. Minimizing the squared error loss \eqref{equ_squared_loss} would prefer the (poor) classifier [math]h^{(1)}[/math] over the (reasonable) classifier [math]h^{(2)}[/math].

Figure fig_squarederrornotgoodclass depicts a dataset [math]\dataset[/math] consisting of [math]\samplesize=4[/math] data points with binary labels [math]\truelabel^{(\sampleidx)} \in \{-1,1\}[/math], for [math]\sampleidx=1,\ldots,\samplesize[/math]. The figure also depicts two candidate hypotheses [math]h^{(1)}(\feature)[/math] and [math]h^{(2)}(\feature)[/math] that can be used for classifying data points. The classifications [math]\predictedlabel[/math] obtained with the hypothesis [math]h^{(2)}(\feature)[/math] would perfectly match the labels of the four training data points since [math]h^{(2)}\big(\feature^{(\sampleidx)}\big)\geq 0[/math] if and only if [math]\truelabel^{(\sampleidx)}=1[/math]. In contrast, the classifications [math]\predictedlabel^{(\sampleidx)}[/math] obtained by thresholding [math]h^{(1)}(\feature)[/math] are wrong for data points with [math]\truelabel=-1[/math].

Looking at [math]\dataset[/math], we might prefer using [math]h^{(2)}(\feature)[/math] over [math]h^{(1)}[/math] to classify data points. However, the squared error loss incurred by the (reasonable) classifier [math]h^{(2)}[/math] is much larger than the squared error loss incurred by the (poor) classifier [math]h^{(1)}[/math]. The squared error loss is typically a bad choice for assessing the quality of a hypothesis map that is used for classifying data points into different categories.

Generally speaking, we want the loss function to punish (deliver large values for) a hypothesis that is very confident ([math]|h(\featurevec)|[/math] is large) in a wrong classification ([math]\predictedlabel \neq \truelabel[/math]). Moreover, a useful loss function should not punish (should deliver small values for) a hypothesis that is very confident ([math]|h(\featurevec)|[/math] is large) in a correct classification ([math]\predictedlabel = \truelabel[/math]). However, by its very definition, the squared error loss \eqref{equ_squared_loss} yields large values if the confidence [math]|h(\featurevec)|[/math] is large, no matter if the resulting (after thresholding) classification is correct or wrong.

We now discuss some loss functions which have proven useful for assessing the quality of a hypothesis that is used to classify data points. Unless noted otherwise, the formulas for these loss functions are valid only if the label values are the real numbers [math]-1[/math] and [math]1[/math] (the label space is [math]\labelspace = \{-1,1\}[/math]). These formulas need to be modified accordingly if different label values are used. For example, instead of the label space [math]\labelspace = \{-1,1\}[/math], we could equally well use the label space [math]\labelspace =\{0,1\}[/math], or [math]\labelspace = \{ \square, \triangle \}[/math] or [math]\labelspace = \{ \mbox{ “Class 1”}, \mbox{ “Class 2”} \}[/math].

A natural choice for the loss function can be based on the requirement that a reasonable hypothesis should deliver a correct classification, [math]\predictedlabel = \truelabel[/math], for any data point. This suggests to learn a hypothesis [math]h(\featurevec)[/math] by minimizing the [math]0/1[/math] loss

[[math]]\begin{equation} \label{equ_def_0_1} \loss{(\featurevec,\truelabel)}{h} \defeq \begin{cases} 1 &\mbox{ if } \truelabel \neq \predictedlabel \\ 0 & \mbox{ else,} \end{cases} \mbox{ with } \predictedlabel = 1 \mbox{ for }h(\featurevec) \geq 0\mbox{, and } \predictedlabel = -1 \mbox{ for }h(\featurevec) \lt 0. \end{equation}[[/math]]

Figure fig_class_loss illustrates the [math]0/1[/math] loss \eqref{equ_def_0_1} for a data point with features [math]\featurevec[/math] and label [math]\truelabel\!=\!1[/math] as a function of the hypothesis value [math]h(\featurevec)[/math]. The [math]0/1[/math] loss is equal to zero if the hypothesis yields a correct classification [math]\predictedlabel=\truelabel[/math]. For a wrong classification [math]\predictedlabel\neq\truelabel[/math], the [math]0/1[/math] loss yields the value one.

The [math]0/1[/math] loss \eqref{equ_def_0_1} is conceptually appealing when data points are interpreted as realizations of iid RVs with the same probability distribution [math]p(\featurevec,\truelabel)[/math]. Given [math]\samplesize[/math] realizations [math]\big\{ (\featurevec^{(\sampleidx)},\truelabel^{(\sampleidx)}) \big\}_{\sampleidx=1}^{\samplesize}[/math] of such iid RVs,

[[math]]\begin{equation} \label{equ_0_1_approx_prob} (1/\samplesize) \sum_{\sampleidx=1}^{\samplesize} \loss{(\featurevec^{(\sampleidx)},\truelabel^{(\sampleidx)})}{h} \approx \prob{ \truelabel \neq \predictedlabel } \end{equation}[[/math]]

with high probability for sufficiently large sample size [math]\samplesize[/math]. A precise formulation of the approximation \eqref{equ_0_1_approx_prob} can be obtained from the law of large numbers [24](Section 1). We can apply the law of large numbers since the loss values [math]\loss{(\featurevec^{(\sampleidx)},\truelabel^{(\sampleidx)})}{h}[/math] are realizations of iid RVs. It is customary to refer to the quantity [math]1 - (1/\samplesize) \sum_{\sampleidx=1}^{\samplesize} \loss{(\featurevec^{(\sampleidx)},\truelabel^{(\sampleidx)})}{h}[/math], i.e., one minus the average [math]0/1[/math] loss, as the accuracy of the hypothesis [math]h[/math].

In view of \eqref{equ_0_1_approx_prob}, the [math]0/1[/math] loss \eqref{equ_def_0_1} seems a very natural choice for assessing the quality of a classifier if our goal is to enforce correct classifications [math]\hat{\truelabel}=\truelabel[/math]. This appealing statistical property of the [math]0/1[/math] loss comes at the cost of a high computational complexity. Indeed, for a given data point [math](\featurevec,\truelabel)[/math], the [math]0/1[/math] loss \eqref{equ_def_0_1} is non-convex and non-differentiable when viewed as a function of the hypothesis [math]h[/math]. Thus, ML methods that use the [math]0/1[/math] loss to learn a hypothesis map typically involve advanced optimization methods to solve the resulting learning problem (see Section Bayes Classifier ).

To avoid the non-convexity of the [math]0/1[/math] loss \eqref{equ_def_0_1} we might approximate it by a convex loss function. One popular convex approximation of the [math]0/1[/math] loss is the hinge loss

[[math]] \begin{equation} \label{equ_hinge_loss} \loss{(\featurevec,\truelabel)}{h} \defeq \max \{ 0 , 1 - \truelabel h(\featurevec) \}. \end{equation}[[/math]]

Figure fig_class_loss depicts the hinge loss \eqref{equ_hinge_loss} as a function of the hypothesis [math]h(\featurevec)[/math]. The hinge loss \eqref{equ_hinge_loss} becomes minimal (equal to zero) for a correct classification ([math]\hat{\truelabel} = \truelabel[/math]) with sufficient confidence, i.e., when [math]\truelabel h(\featurevec) \geq 1[/math]. For a wrong classification ([math]\hat{\truelabel} \neq \truelabel[/math]), the hinge loss increases monotonically with the confidence [math]|h(\featurevec)|[/math] in the wrong classification. While the hinge loss avoids the non-convexity of the [math]0/1[/math] loss, it still is a non-differentiable function of [math]h(\featurevec)[/math]. A non-differentiable loss function cannot be minimized by simple gradient-based methods (see Chapter Gradient-Based Learning ) but requires more advanced optimization methods.

Beside the [math]0/1[/math] loss and the hinge loss, another popular loss function for binary classification problems is the logistic loss

[[math]]\begin{equation} \label{equ_log_loss} \loss{(\featurevec,\truelabel)}{h} \defeq \log ( 1 + \exp(- \truelabel h(\featurevec))). \end{equation}[[/math]]

The logistic loss \eqref{equ_log_loss} is used in logistic regression (see Section Logistic Regression ) to measure the usefulness of a linear hypothesis [math]h(\featurevec) = \weights^{T} \featurevec[/math]. Figure fig_class_loss depicts the logistic loss \eqref{equ_log_loss} as a function of the hypothesis [math]h(\featurevec)[/math]. The logistic loss \eqref{equ_log_loss} is a convex and differentiable function of [math]h(\featurevec)[/math]. For a correct classification ([math]\hat{\truelabel}\!=\!\truelabel[/math]), the logistic loss \eqref{equ_log_loss} decreases monotonically with increasing confidence [math]h(\featurevec)[/math]. For a wrong classification ([math]\hat{\truelabel}\!\neq\!\truelabel[/math]), the logistic loss increases monotonically with increasing confidence [math]|h(\featurevec)|[/math] in the wrong classification.

Both the hinge loss \eqref{equ_hinge_loss} and the logistic loss \eqref{equ_log_loss} are convex functions of the weights [math]\weights \in \mathbb{R}^{\featuredim}[/math] in a linear hypothesis map [math]h(\featurevec) = \weights^{T} \featurevec[/math]. However, in contrast to the hinge loss, the logistic loss \eqref{equ_log_loss} is also a differentiable function of the [math]\weights[/math]. The convex and differentiable logistic loss function can be minimized using simple gradient-based methods such as gradient descent (GD) (see Chapter GD for Logistic Regression ). In contrast, we cannot use basic gradient-based methods to minimize the hinge loss since it is not differentiable (it does not have a gradient everywhere). However, we can apply a generalization of GD which is known as subgradient descent [34]. Subgradient descent is obtained from GD by generalizing the concept of a gradient to that of a subgradient.
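The following sketch evaluates the [math]0/1[/math] loss \eqref{equ_def_0_1}, the hinge loss \eqref{equ_hinge_loss} and the logistic loss \eqref{equ_log_loss} for a hypothetical data point with label [math]\truelabel \in \{-1,1\}[/math] and hypothesis value [math]h(\featurevec)[/math]:

<syntaxhighlight lang="python">
import numpy as np

def zero_one_loss(y, h_x):
    """0/1 loss: 1 for a wrong classification, 0 otherwise."""
    y_hat = 1 if h_x >= 0 else -1
    return 0 if y_hat == y else 1

def hinge_loss(y, h_x):
    """Hinge loss max{0, 1 - y h(x)} (convex, non-differentiable)."""
    return max(0.0, 1.0 - y * h_x)

def logistic_loss(y, h_x):
    """Logistic loss log(1 + exp(-y h(x))) (convex and differentiable)."""
    return np.log1p(np.exp(-y * h_x))

y, h_x = 1, -0.5                  # hypothetical label and hypothesis value
print(zero_one_loss(y, h_x), hinge_loss(y, h_x), logistic_loss(y, h_x))
</syntaxhighlight>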


The solid curves depict three widely-used loss functions for binary classification. A data point with features [math]\featurevec[/math] and label [math]\truelabel=1[/math] is classified as [math]\predictedlabel=1[/math] if [math]h(\featurevec)\geq 0[/math] and classified as [math]\predictedlabel=-1[/math] if [math]h(\featurevec) \lt 0[/math]. We can interpret the absolute value [math]|h(\featurevec)|[/math] as the confidence in the classification [math]\hat{\truelabel}[/math]. The more confident we are in a correct classification ([math]\predictedlabel\!=\!\truelabel\!=\!1[/math]), i.e., the more positive [math]h(\featurevec)[/math], the smaller the loss. Note that each of the three loss functions for binary classification tends monotonically towards [math]0[/math] for increasing [math]h(\featurevec)[/math]. The dashed curve depicts the squared error loss \eqref{equ_squared_loss}, which increases for increasing [math]h(\featurevec)[/math].


Loss Functions for Ordinal Label Values

Some loss functions are particularly suited for predicting ordinal label values (see Section The Data ). Consider data points representing aerial images of rectangular areas of size [math]1 \mbox{km}[/math] by [math]1 \mbox{km}[/math]. We characterize each data point (rectangular area) by the feature vector [math]\featurevec[/math] obtained by stacking the RGB values of each image pixel (see Figure fig_snapshot_pixels). Besides the feature vector, each rectangular area is characterized by a label [math]\truelabel \in \{1,2,3\}[/math] where

* [math]\truelabel=1[/math] means that the area contains no trees.
* [math]\truelabel=2[/math] means that the area is partially covered by trees.
* [math]\truelabel=3[/math] means that the area is entirely covered by trees.

We might consider the label value [math]\truelabel=2[/math] to be “larger” than label value [math]\truelabel=1[/math] and label value [math]\truelabel=3[/math] to be “larger” than label value [math]\truelabel=2[/math]. Let us construct a loss function that takes such an ordering of label values into account when evaluating the quality of the predictions [math]h(\featurevec)[/math].

Consider a data point with feature vector [math]\featurevec[/math] and label [math]\truelabel=1[/math] as well as two different hypotheses [math]h^{(a)}, h^{(b)} \in \hypospace[/math]. The hypothesis [math]h^{(a)}[/math] delivers the predicted label [math]\hat{\truelabel}^{(a)}= h^{(a)}(\featurevec) =2[/math], while the other hypothesis [math]h^{(b)}[/math] delivers the predicted label [math]\hat{\truelabel}^{(b)}= h^{(b)}(\featurevec) =3[/math]. Both predictions are wrong, since they are different from the true label value [math]\truelabel=1[/math]. It seems reasonable to consider the prediction [math]\hat{\truelabel}^{(a)}[/math] to be less wrong than the prediction [math]\hat{\truelabel}^{(b)}[/math] and therefore we would prefer the hypothesis [math]h^{(a)}[/math] over [math]h^{(b)}[/math]. However, the [math]0/1[/math] loss is the same for [math]h^{(a)}[/math] and [math]h^{(b)}[/math] and therefore does not reflect our preference for [math]h^{(a)}[/math]. We need to modify (or tailor) the [math]0/1[/math] loss to take into account the application-specific ordering of label values. For the above application, we might define a loss function via

[[math]]\begin{equation} \loss{(\featurevec,\truelabel)}{h} \defeq \begin{cases} 0 &\mbox{ if } \truelabel = h(\featurevec), \\ 10 &\mbox{ if } |\truelabel-h(\featurevec)|=1, \\ 100 & \mbox{ otherwise.} \end{cases} \end{equation}[[/math]]
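A minimal Python sketch of this tailored ordinal loss:

<syntaxhighlight lang="python">
def ordinal_loss(y, y_hat):
    """Tailored 0/1-type loss for ordinal labels y, y_hat in {1, 2, 3}."""
    if y == y_hat:
        return 0
    if abs(y - y_hat) == 1:
        return 10
    return 100

print(ordinal_loss(1, 2))   # 10  -- "less wrong" prediction
print(ordinal_loss(1, 3))   # 100 -- "more wrong" prediction
</syntaxhighlight>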

Empirical Risk

The basic idea of ML methods (including those discussed in Chapter The Landscape of ML ) is to find (or learn) a hypothesis (out of a given hypothesis space [math]\hypospace[/math]) that incurs minimum loss when applied to arbitrary data points. To make this informal goal precise we need to specify what we mean by “arbitrary data point”.

One of the most successful approaches to define the notion of “arbitrary data point” is by probabilistic models for the observed data points. The most basic and widely-used probabilistic model interprets data points [math]\big( \featurevec^{(\sampleidx)},\truelabel^{(\sampleidx)} \big)[/math] as realizations of iid RVs with a common probability distribution [math]p(\featurevec,\truelabel)[/math]. Given such a probabilistic model, it seems natural to measure the quality of a hypothesis [math]h[/math] by the expected loss or Bayes risk [21]

[[math]]\begin{equation} \expect \big \{ \loss{(\featurevec,\truelabel)}{h} \} \defeq \int_{\featurevec,\truelabel} \loss{(\featurevec,\truelabel)}{h} d p(\featurevec,\truelabel). \end{equation}[[/math]]

The Bayes risk of [math]h[/math] is the expected value of the loss [math]\loss{(\featurevec,\truelabel)}{h}[/math] incurred when applying the hypothesis [math]h[/math] to (the realization of) a random data point with features [math]\featurevec[/math] and label [math]\truelabel[/math]. Note that the computation of the Bayes risk \eqref{equ_def_bayes_risk_int} requires the joint probability distribution [math]p(\featurevec,\truelabel)[/math] of the (random) features and label of data points.

The Bayes risk \eqref{equ_def_bayes_risk_int} seems to be a reasonable performance measure for a hypothesis [math]h[/math]. Indeed, the Bayes risk of a hypothesis is small only if the hypothesis incurs a small loss on average for data points drawn from the probability distribution [math]p(\featurevec,\truelabel)[/math]. However, it might be challenging to verify if the data points generated in a particular application domain can be accurately modelled as realizations (draws) from a probability distribution [math]p(\featurevec,\truelabel)[/math]. Moreover, it is also often the case that we do not know the correct probability distribution [math]p(\featurevec,\truelabel)[/math].

Let us assume for the moment, that data points are generated as iid realizations of a common probability distribution [math]p(\featurevec,\truelabel)[/math] which is known. It seems reasonable to learn a hypothesis [math]\bayeshypothesis[/math] that incurs minimum Bayes risk,

[[math]]\begin{equation} \label{equ_def_bayes_risk_int} \expect \big \{ \loss{(\featurevec,\truelabel)}{\bayeshypothesis} \} \defeq \min_{h \in \hypospace} \expect \big \{ \loss{(\featurevec,\truelabel)}{h} \}. \end{equation}[[/math]]

A hypothesis that solves \eqref{equ_def_bayes_risk_int}, i.e., that achieves the minimum possible Bayes risk, is referred to as a Bayes estimator [21](Chapter 4). The main computational challenge for learning the optimal hypothesis is the efficient (numerical) solution of the optimization problem \eqref{equ_def_bayes_risk_int}. Efficient methods to solve the optimization problem \eqref{equ_def_bayes_risk_int} are studied within estimation theory [21][35].

The focus of this book is on ML methods which do not require knowledge of the underlying probability distribution [math]p(\featurevec,\truelabel)[/math]. One of the most widely used principles for these ML methods is to approximate the Bayes risk by an empirical (sample) average over a finite set of labeled data points [math]\dataset=\big\{\big(\featurevec^{(1)},\truelabel^{(1)}\big),\ldots,\big(\featurevec^{(\samplesize)},\truelabel^{(\samplesize)}\big)\big\}[/math]. In particular, we define the empirical risk of a hypothesis [math]h \in \hypospace[/math] for a dataset [math]\dataset[/math] as

[[math]]\begin{equation} \label{eq_def_emp_error_101} \emperror(h|\dataset) = (1/\samplesize)\sum_{\sampleidx=1}^{\samplesize} \loss{(\featurevec^{(\sampleidx)},\truelabel^{(\sampleidx)})}{h}. \end{equation} [[/math]]

The empirical risk of the hypothesis [math]h \in \hypospace[/math] is the average loss on the data points in [math]\dataset[/math]. To ease notational burden, we use [math]\emperror(h)[/math] as a shorthand for [math]\emperror(h|\dataset)[/math] if the underlying dataset [math]\dataset[/math] is clear from the context. Note that in general the empirical risk depends on both, the hypothesis [math]h[/math] and the (features and labels of the) data points in the dataset [math]\dataset[/math].
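A minimal sketch of the empirical risk \eqref{eq_def_emp_error_101}, here combined with the squared error loss and a small hypothetical dataset:

<syntaxhighlight lang="python">
import numpy as np

def empirical_risk(h, X, y, loss):
    """Average loss of hypothesis h over the data points in the dataset (X, y)."""
    return np.mean([loss(y_i, h(x_i)) for x_i, y_i in zip(X, y)])

squared_loss = lambda y, y_hat: (y - y_hat) ** 2

X = np.array([[1.0], [2.0], [3.0]])   # hypothetical feature vectors
y = np.array([2.1, 3.9, 6.2])         # hypothetical labels
h = lambda x: 2.0 * x[0]              # a linear hypothesis h(x) = 2 x

print(empirical_risk(h, X, y, squared_loss))
</syntaxhighlight>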

If the data points used to compute the empirical risk \eqref{eq_def_emp_error_101} are (can be modelled as) realizations of iid RVs whose common distribution is [math]p(\featurevec,\truelabel)[/math], basic results of probability theory tell us that

[[math]]\begin{equation} \label{equ_emp_risk_approximates_Bayes_risk} \expect \big \{ \loss{(\featurevec,\truelabel)}{h} \} \approx (1/\samplesize)\sum_{\sampleidx=1}^{\samplesize} \loss{(\featurevec^{(\sampleidx)},\truelabel^{(\sampleidx)})}{h} \mbox{ for sufficiently large sample size } \samplesize. \end{equation}[[/math]]

The approximation error in \eqref{equ_emp_risk_approximates_Bayes_risk} can be quantified precisely by some of the most basic results of probability theory. These results are often summarized under the umbrella term law of large numbers [24][36][23].

Many (if not most) ML methods are motivated by \eqref{equ_emp_risk_approximates_Bayes_risk} which suggests that a hypothesis with small empirical risk \eqref{eq_def_emp_error_101} will also result in a small expected loss. The minimum possible expected loss is achieved by the Bayes estimator of the label [math]\truelabel[/math], given the features [math]\featurevec[/math]. However, to actually compute the optimal estimator we would need to know the (joint) probability distribution [math]p(\featurevec,\truelabel)[/math] of features [math]\featurevec[/math] and label [math]\truelabel[/math].

Confusion Matrix

Consider a dataset [math]\dataset[/math] with data points characterized by feature vectors [math]\featurevec^{(\sampleidx)}[/math] and labels [math]\truelabel^{(\sampleidx)} \in \{1,\ldots,\nrcluster\}[/math]. We might interpret the label value of a data point as the index of a category or class to which the data point belongs. Multi-class classification problems aim at learning a hypothesis [math]h[/math] such that [math]h(\featurevec) \approx \truelabel[/math] for any data point.

In principle, we could measure the quality of a given hypothesis [math]h[/math] by the average [math]0/1[/math] loss incurred on the labeled data points in (the training set) [math]\dataset[/math]. However, if the dataset [math]\dataset[/math] contains mostly data points with one specific label value, the average [math]0/1[/math] loss might obscure the performance of [math]h[/math] for data points having one of the rare label values. Indeed, even if the average [math]0/1[/math] loss is very small, the hypothesis might perform poorly for data points of a minority category.

The confusion matrix generalizes the concept of the [math]0/1[/math] loss to application domains where the relative frequency (fraction) of data points with a specific label value varies significantly (imbalanced data). Instead of considering only the average [math]0/1[/math] loss incurred by a hypothesis on a dataset [math]\dataset[/math], we use a whole family of loss functions. In particular, for each pair of label values [math]\clusteridx,\clusteridx' \in \{1,\ldots,\nrcluster\}[/math], we define the loss

[[math]]\begin{equation} \label{equ_def_loss_funs_conf_matrix} \lossfun^{(\clusteridx\!\rightarrow\!\clusteridx')}\big( \big(\featurevec, \truelabel \big),h \big) \defeq \begin{cases} 1 & \mbox{ if } \truelabel= \clusteridx \mbox{ and } h(\featurevec) = \clusteridx' \\ 0 &\mbox{ otherwise} \end{cases}. \end{equation}[[/math]]

We then compute the average loss \eqref{equ_def_loss_funs_conf_matrix} incurred on the dataset [math]\dataset[/math],

[[math]]\begin{equation} \label{eq_def_emp_error_conf_matrix} \emperror^{(\clusteridx\!\rightarrow\!\clusteridx')}(h|\dataset) \defeq (1/\samplesize)\sum_{\sampleidx=1}^{\samplesize} \lossfun^{(\clusteridx\!\rightarrow\!\clusteridx')} \big( (\featurevec^{(\sampleidx)},\truelabel^{(\sampleidx)}),h \big) \mbox{ for } \clusteridx,\clusteridx' \in \{1,\ldots,\nrcluster\}. \end{equation}[[/math]]

It is convenient to arrange the values \eqref{eq_def_emp_error_conf_matrix} as a matrix which is referred to as a confusion matrix. The rows of a confusion matrix correspond to different values [math]\clusteridx[/math] of the true label of a data point. The columns of a confusion matrix correspond to different values [math]\clusteridx'[/math] delivered by the hypothesis [math]h(\featurevec)[/math]. The [math](\clusteridx,\clusteridx')[/math]-th entry of the confusion matrix is [math]\emperror^{(\clusteridx\!\rightarrow\!\clusteridx')}(h|\dataset)[/math].
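A minimal sketch for computing the confusion matrix \eqref{eq_def_emp_error_conf_matrix} from true labels and predictions (hypothetical label values in [math]\{1,\ldots,\nrcluster\}[/math]):

<syntaxhighlight lang="python">
import numpy as np

def confusion_matrix(y_true, y_pred, k):
    """Entry (c, c') is the fraction of data points with true label c and predicted label c'."""
    m = len(y_true)
    C = np.zeros((k, k))
    for y, y_hat in zip(y_true, y_pred):
        C[y - 1, y_hat - 1] += 1.0 / m
    return C

y_true = [1, 1, 2, 3, 3, 3]     # hypothetical true labels
y_pred = [1, 2, 2, 3, 3, 1]     # hypothetical predictions h(x^(i))
print(confusion_matrix(y_true, y_pred, k=3))
</syntaxhighlight>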

Precision, Recall and F-Measure

Consider an object detection application where data points are images. The label of a data point might indicate the presence ([math]\truelabel\!=\!1[/math]) or absence ([math]\truelabel\!=\!-1[/math]) of an object. It is then customary to define the recall and the precision of a hypothesis as [37]

[[math]]\begin{equation} \mbox{recall } \defeq \emperror^{(1 \rightarrow 1)}(h|\dataset) \mbox{, and the precision } \defeq \frac{\emperror^{(1 \rightarrow 1)}(h|\dataset)} {\emperror^{(1 \rightarrow 1)}(h|\dataset) + \emperror^{(-1 \rightarrow 1)}(h|\dataset) }. \end{equation}[[/math]]

Clearly, we would like to find a hypothesis with both a large recall and a large precision. However, these two goals are typically conflicting; a hypothesis with a high recall will typically have a small precision. Depending on the application, we might prefer having a high recall and tolerate a lower precision.

It might be convenient to combine the recall and precision of a hypothesis into a single quantity,

[[math]]\begin{equation} \label{equ_f1_score} \fscore \defeq 2 \frac{\mbox{precision} \times \mbox{recall}}{\mbox{precision} +\mbox{recall}} \end{equation}[[/math]]

The [math]F[/math] measure \eqref{equ_f1_score} is the harmonic mean [38] of the precision and recall of a hypothesis [math]h[/math]. It is a special case of the [math]F_{\beta}[/math]-score

[[math]]\begin{equation} \label{equ_fbeta_score} F_{\beta} \defeq \big( 1 + \beta^2 \big) \frac{\mbox{precision} \times \mbox{recall}}{\beta^{2} \mbox{precision} +\mbox{recall}}. \end{equation}[[/math]]

The [math]F[/math] measure \eqref{equ_f1_score} is obtained from \eqref{equ_fbeta_score} for the choice [math]\beta=1[/math]. It is therefore customary to refer to \eqref{equ_f1_score} as the [math]F_{1}[/math]-score of a hypothesis [math]h[/math].
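A minimal sketch of the [math]F_{\beta}[/math]-score \eqref{equ_fbeta_score} as a function of hypothetical precision and recall values; the choice [math]\beta=1[/math] recovers the [math]F_{1}[/math]-score \eqref{equ_f1_score}:

<syntaxhighlight lang="python">
def f_beta_score(precision, recall, beta=1.0):
    """F_beta-score: a weighted harmonic mean of precision and recall."""
    return (1 + beta ** 2) * precision * recall / (beta ** 2 * precision + recall)

precision, recall = 0.8, 0.5                      # hypothetical values
print(f_beta_score(precision, recall))            # F1-score
print(f_beta_score(precision, recall, beta=2.0))  # weights recall more heavily
</syntaxhighlight>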

Regret

In some ML applications, we might have access to the predictions obtained from some reference methods which are referred to as experts [39][40]. The quality of a hypothesis [math]h[/math] is measured via the difference between the loss incurred by its predictions [math]h(\featurevec)[/math] and the loss incurred by the predictions of the experts. This difference, which is referred to as the regret, measures by how much we regret having used the prediction [math]h(\featurevec)[/math] instead of using (or following) the prediction of the expert. The goal of regret minimization is to learn a hypothesis with a small regret compared to a given set of experts.

The concept of regret minimization is useful when we do not make any probabilistic assumptions (see Section Probabilistic Models for Data) about the data. Without a probabilistic model we cannot use the Bayes risk of the (optimal) Bayes estimator as a baseline (or benchmark). The concept of regret minimization avoids the need for a probabilistic model of the data to obtain a baseline [39]. This approach replaces the Bayes risk with the regret relative to given reference methods (the experts).

Rewards as Partial Feedback

Some applications involve data points whose labels are so difficult or costly to determine that we cannot assume any labeled data to be available. Without labeled data, we cannot evaluate the loss function for different choices of the hypothesis. Indeed, evaluating the loss function typically amounts to measuring the distance between the predicted label and the true label of a data point. Instead of evaluating a loss function, we must rely on some indirect feedback or “reward” that indicates the usefulness of a particular prediction [39][41].

Consider the ML problem of predicting the optimal steering direction for an autonomous car. The prediction has to be recalculated for each new state of the car. ML methods can sense the state via a feature vector [math]\featurevec[/math] whose entries are the pixel intensities of a snapshot. The goal is to learn a hypothesis map from the feature vector [math]\featurevec[/math] to a guess [math]\hat{\truelabel} = h(\featurevec)[/math] for the optimal steering direction [math]\truelabel[/math] (the true label). Unless the car circles around in a small area with fixed obstacles, we have no access to labeled data points or reference driving scenes for which we already know the optimal steering direction. Instead, the car (control unit) needs to learn the hypothesis [math]h(\featurevec)[/math] based solely on the feedback signals obtained from various sensing devices (cameras, distance sensors).
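To give a flavour of learning from reward signals alone (a hypothetical toy example, not the method used in actual autonomous driving systems), the sketch below implements an epsilon-greedy strategy for a simple multi-armed bandit: the learner never observes the true label (the best action), only the reward of the action it actually chose.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
true_mean_rewards = np.array([0.2, 0.5, 0.8])   # unknown to the learner
nr_actions = len(true_mean_rewards)

reward_sums = np.zeros(nr_actions)
counts = np.zeros(nr_actions)
epsilon = 0.1                                   # exploration probability

for step in range(1000):
    # explore until every action has been tried, then mostly act greedily
    if counts.min() == 0 or rng.random() < epsilon:
        action = rng.integers(nr_actions)
    else:
        action = int(np.argmax(reward_sums / counts))
    # partial feedback: only the reward of the chosen action is observed
    reward = true_mean_rewards[action] + 0.1 * rng.normal()
    reward_sums[action] += reward
    counts[action] += 1

print("estimated mean rewards:", reward_sums / np.maximum(counts, 1))
</syntaxhighlight>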

Putting Together the Pieces

A guiding theme of this book is that ML methods are obtained by different combinations of data, model and loss. We will discuss some key principles behind these methods in depth in the following chapters. Let us develop some intuition for how ML methods operate by considering a very simple ML problem. This problem involves data points that are characterized by a single numeric feature [math]\feature \in \mathbb{R}[/math] and a numeric label [math]\truelabel \in \mathbb{R}[/math]. We assume we have access to [math]\samplesize[/math] labeled data points

[[math]]\begin{equation} \big(\feature^{(1)},\truelabel^{(1)}\big),\ldots, \big(\feature^{(\samplesize)},\truelabel^{(\samplesize)}\big) \label{equ_labeled_data_putting_together} \end{equation}[[/math]]

for which we know the true label values [math]\truelabel^{(\sampleidx)} [/math].

The assumption of knowing the exact true label value [math]\truelabel^{(\sampleidx)}[/math] of each data point is an idealization. In practice we often face labelling or measurement errors, such that the observed labels are only noisy versions of the true labels. Techniques that allow ML methods to cope with noisy labels are discussed in Chapter Regularization.

Our goal is to learn a (hypothesis) map [math]h: \mathbb{R} \rightarrow \mathbb{R}[/math] such that [math]h(\feature) \approx \truelabel[/math] for any data point. Given a data point with feature [math]\feature[/math], the function value [math]h(\feature)[/math] should be an accurate approximation of its label value [math]\truelabel[/math]. We require the map to belong to the hypothesis space [math]\hypospace[/math] of linear maps,

[[math]]\begin{equation} \label{equ_def_lin_pred_intercept} h^{(\weight_{0},\weight_{1})}(\feature) = \weight_{1} \feature + \weight_{0}. \end{equation}[[/math]]

The predictor \eqref{equ_def_lin_pred_intercept} is parameterized by the slope [math]\weight_{1}[/math] and the intercept (bias or offset) [math]\weight_{0}[/math]. We indicate this by the notation [math]h^{(\weight_{0},\weight_{1})}[/math]. A particular choice for the weights [math]\weight_{1},\weight_{0}[/math] defines a linear hypothesis [math]h^{(\weight_{0},\weight_{1})}(\feature) = \weight_{1}\feature +\weight_{0} [/math].

Let us use the linear hypothesis map [math]h^{(\weight_{0},\weight_{1})}(\feature)[/math] to predict the labels of training data points. In general, the predictions [math]\hat{\truelabel}^{(\sampleidx)} = h^{(\weight_{0},\weight_{1})}\big(\feature^{(\sampleidx)}\big)[/math] will not be perfect and incur a non-zero prediction error [math]\hat{\truelabel}^{(\sampleidx)} - \truelabel^{(\sampleidx)}[/math] (see Figure fig_emp_error).  

We measure the goodness of the predictor map [math]h^{(\weight_{0},\weight_{1})}[/math] using the average squared error loss (see \eqref{equ_squared_loss})

[[math]] \begin{align} f(\weight_{0},\weight_{1}) & \defeq (1/\samplesize) \sum_{\sampleidx=1}^{\samplesize} \big( \truelabel^{(\sampleidx)} - h^{(\weight_{0},\weight_{1})}(\feature^{(\sampleidx)}) \big)^{2} \nonumber \\ \label{equ_def_cost_fun_putting_togheter} & \stackrel{\eqref{equ_def_lin_pred_intercept}}{=} (1/\samplesize) \sum_{\sampleidx=1}^{\samplesize} \big( \truelabel^{(\sampleidx)} - ( \weight_{1} \feature^{(\sampleidx)} + \weight_{0}) \big)^{2}. \end{align} [[/math]]

The training error [math]f(\weight_{0},\weight_{1})[/math] is the average of the squared prediction errors incurred by the predictor [math]h^{(\weight_{0},\weight_{1})}(\feature)[/math] on the labeled data points \eqref{equ_labeled_data_putting_together}.
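The training error for a given choice of slope and intercept can be computed directly; the following few lines of Python are a minimal sketch (the toy data and variable names are ours, not from the text).

<syntaxhighlight lang="python">
import numpy as np

def training_error(w0, w1, x, y):
    """Average squared error of the linear hypothesis h(x) = w1 * x + w0."""
    y_hat = w1 * x + w0
    return np.mean((y - y_hat) ** 2)

# toy dataset: m = 4 data points with a single numeric feature and label
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.1, 0.9, 2.1, 2.9])
print(training_error(w0=0.0, w1=1.0, x=x, y=y))
</syntaxhighlight>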

It seems natural to learn a good predictor \eqref{equ_def_lin_pred_intercept} by choosing the parameters [math]\weight_{0},\weight_{1}[/math] to minimize the training error

[[math]]\begin{equation} \min_{\weight_{0},\weight_{1} \in \mathbb{R}} f(\weight_{0},\weight_{1}) \stackrel{\eqref{equ_def_cost_fun_putting_togheter}}{=} \min_{\weight_{1},\weight_{0} \in \mathbb{R}} (1/\samplesize) \sum_{\sampleidx=1}^{\samplesize} \big( \truelabel^{(\sampleidx)} - ( \weight_{1} \feature^{(\sampleidx)} + \weight_{0}) \big)^{2}. \end{equation}[[/math]]

The optimal parameters [math]\hat{\weight}_{0},\hat{\weight}_{1}[/math] are characterized by the zero-gradient condition (a necessary and sufficient condition for [math]\widehat{\weights}[/math] to minimize a convex differentiable function [math]f(\weights)[/math] is [math]\nabla f(\widehat{\weights}) = \mathbf{0}[/math] [42](Sec. 4.2.3)),

[[math]]\begin{equation} \label{equ_zero_grad_condition_putting_together} \frac{\partial f(\weight_{0},\weight_{1})}{\partial \weight_{0}}\bigg|_{\weight_{0} = \hat{\weight}_{0}, \weight_{1} = \hat{\weight}_{1}} = 0 \mbox{, and }\frac{\partial f(\weight_{0},\weight_{1})}{\partial \weight_{1}}\bigg|_{\weight_{0} = \hat{\weight}_{0}, \weight_{1} = \hat{\weight}_{1}} = 0. \end{equation}[[/math]]

Inserting \eqref{equ_def_cost_fun_putting_togheter} into \eqref{equ_zero_grad_condition_putting_together} and using basic rules for calculating derivatives (see, e.g., [9]), we obtain the optimality conditions

[[math]]\begin{align} \label{equ_zero_grad_condition_putting_together_special_form} & (1/\samplesize) \sum_{\sampleidx=1}^{\samplesize} \big( \truelabel^{(\sampleidx)} - ( \hat{\weight}_{1} \feature^{(\sampleidx)} + \hat{\weight}_{0}) \big)= 0 \mbox{, and } \\ & (1/\samplesize) \sum_{\sampleidx=1}^{\samplesize} \feature^{(\sampleidx)} \big( \truelabel^{(\sampleidx)} - ( \hat{\weight}_{1} \feature^{(\sampleidx)} + \hat{\weight}_{0}) \big) = 0. \nonumber \end{align}[[/math]]

Any parameter values [math]\hat{\weight}_{0},\hat{\weight}_{1} \in \mathbb{R}[/math] that satisfy \eqref{equ_zero_grad_condition_putting_together_special_form} define a hypothesis map [math]h^{(\hat{\weight}_{0},\hat{\weight}_{1})}(\feature) = \hat{\weight}_{1}\feature + \hat{\weight}_{0}[/math] that is optimal in the sense of incurring a minimum training error, [math]f(\hat{\weight}_{0},\hat{\weight}_{1}) = \min_{\weight_{0},\weight_{1} \in \mathbb{R}} f(\weight_{0},\weight_{1}).[/math]

Let us rewrite the optimality condition \eqref{equ_zero_grad_condition_putting_together_special_form} using matrices and vectors. To this end, we first rewrite the hypothesis \eqref{equ_def_lin_pred_intercept} as [math] h(\featurevec) = \weights^{T} \featurevec \mbox{ with } \weights = \big(\weight_{0},\weight_{1}\big)^{T}, \featurevec = \big(1,\feature\big)^{T}.[/math] We then stack the feature vectors [math]\featurevec^{(\sampleidx)} = \big(1,\feature^{(\sampleidx)} \big)^{T}[/math] and labels [math]\truelabel^{(\sampleidx)}[/math] of the data points \eqref{equ_labeled_data_putting_together} into a feature matrix and a label vector,

[[math]]\begin{equation} \featuremtx = \big(\featurevec^{(1)},\ldots,\featurevec^{(\samplesize)}\big)^{T} \in \mathbb{R}^{\samplesize \times 2}, \labelvec = \big(\truelabel^{(1)},\ldots,\truelabel^{(\samplesize)}\big)^{T} \in \mathbb{R}^{\samplesize}. \end{equation}[[/math]]

We can then reformulate \eqref{equ_zero_grad_condition_putting_together_special_form} as

[[math]]\begin{equation} \label{equ_zero_grad_condition_putting_together_special_form_mtx} \featuremtx^{T} \big( \labelvec - \featuremtx \widehat{\weights} \big) = \mathbf{0}. \end{equation}[[/math]]

The optimality conditions \eqref{equ_zero_grad_condition_putting_together_special_form_mtx} and \eqref{equ_zero_grad_condition_putting_together_special_form} are equivalent in the following sense: the entries of any parameter vector [math]\widehat{\weights} = \big(\hat{\weight}_{0},\hat{\weight}_{1}\big)^{T}[/math] that satisfies \eqref{equ_zero_grad_condition_putting_together_special_form_mtx} solve \eqref{equ_zero_grad_condition_putting_together_special_form}, and vice versa.
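As a hedged numerical illustration (one possible way to solve the problem; the toy data are ours), the snippet below stacks the feature vectors [math]\big(1,\feature^{(\sampleidx)}\big)^{T}[/math] into the feature matrix, solves the normal equations \eqref{equ_zero_grad_condition_putting_together_special_form_mtx} for [math]\widehat{\weights}[/math], and verifies that the zero-gradient condition holds up to numerical precision.

<syntaxhighlight lang="python">
import numpy as np

# toy dataset (same as above): single feature x and numeric label y
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.1, 0.9, 2.1, 2.9])

# feature matrix X with rows (1, x^(i)) and label vector y
X = np.column_stack([np.ones_like(x), x])

# solve the normal equations X^T X w = X^T y, i.e., X^T (y - X w) = 0
w_hat = np.linalg.solve(X.T @ X, X.T @ y)
print("intercept w0 and slope w1:", w_hat)

# the zero-gradient condition should hold up to numerical precision
print("X^T (y - X w_hat) =", X.T @ (y - X @ w_hat))
</syntaxhighlight>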

We can evaluate the quality of a particular predictor [math]h \in \hypospace[/math] by measuring the prediction error [math]\truelabel- h(\feature)[/math] obtained for a labeled data point [math](\feature,\truelabel)[/math].

Notes

  1. The notation [math]\labelspace^{\featurespace}[/math] is a symbolic shorthand and should not be read literally as a power such as [math]4^5[/math].

General References

Jung, Alexander (2022). Machine Learning: The Basics. Singapore: Springer. doi:10.1007/978-981-16-8193-6.

Jung, Alexander (2022). "Machine Learning: The Basics". arXiv:1805.05052.

References

  1. F. Pedregosa. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12(85):2825--2830, 2011.
  2. D. Gujarati and D. Porter. Basic Econometrics. McGraw-Hill, 2009.
  3. Y. Dodge. The Oxford Dictionary of Statistical Terms. Oxford University Press, 2003.
  4. B. Everitt. Cambridge Dictionary of Statistics. Cambridge University Press, 2002.
  5. K. Abayomi, A. Gelman, and M. A. Levy. Diagnostics for multivariate imputations. Journal of the Royal Statistical Society, Series C (Applied Statistics), 57:273--291, 2008.
  6. M. Friendly. A brief history of data visualization. In C. Chen, W. Härdle, and A. Unwin, editors, Handbook of Computational Statistics: Data Visualization, volume III. Springer-Verlag, 2006.
  7. B. Boashash, editor. Time Frequency Signal Analysis and Processing: A Comprehensive Reference. Elsevier, Amsterdam, The Netherlands, 2003.
  8. S. G. Mallat. A Wavelet Tour of Signal Processing -- The Sparse Way. Academic Press, San Diego, CA, 3rd edition, 2009.
  9. W. Rudin. Principles of Mathematical Analysis. McGraw-Hill, New York, 3rd edition, 1976.
  10. P. Bühlmann and S. van de Geer. Statistics for High-Dimensional Data. Springer, New York, 2011.
  11. M. Wainwright. High-Dimensional Statistics: A Non-Asymptotic Viewpoint. Cambridge University Press, Cambridge, 2019.
  12. R. Vidal. Subspace clustering. IEEE Signal Processing Magazine, 52, March 2011.
  13. T. Kibble and F. Berkshire. Classical Mechanics. Imperial College Press, 5th edition, 2011.
  14. F. Barata, K. Kipfer, M. Weber, P. Tinschert, E. Fleisch, and T. Kowatsch. Towards device-agnostic mobile cough detection with convolutional neural networks. In 2019 IEEE International Conference on Healthcare Informatics (ICHI), pages 1--11, 2019.
  15. S. Smoliski and K. Radtke. Spatial prediction of demersal fish diversity in the Baltic Sea: comparison of machine learning and regression-based techniques. ICES Journal of Marine Science, 74(1):102--111, 2017.
  16. S. Carrazza. Machine learning challenges in theoretical HEP. arXiv, 2018.
  17. M. Gao, H. Igata, A. Takeuchi, K. Sato, and Y. Ikegaya. Machine learning-based prediction of adverse drug effects: An example of seizure-inducing compounds. Journal of Pharmacological Sciences, 133(2):70--78, 2017.
  18. K. Mortensen and T. Hughes. Comparing Amazon's Mechanical Turk platform to conventional data collection methods in the health and medical research literature. J. Gen. Intern. Med., 33(4):533--538, 2018.
  19. A. Halevy, P. Norvig, and F. Pereira. The unreasonable effectiveness of data. IEEE Intelligent Systems, March/April 2009.
  20. P. Koehn. Europarl: A parallel corpus for statistical machine translation. In The 10th Machine Translation Summit, pages 79--86, AAMT, Phuket, Thailand, 2005.
  21. E. L. Lehmann and G. Casella. Theory of Point Estimation. Springer, New York, 2nd edition, 1998.
  22. S. M. Kay. Fundamentals of Statistical Signal Processing: Estimation Theory. Prentice Hall, Englewood Cliffs, NJ, 1993.
  23. D. Bertsekas and J. Tsitsiklis. Introduction to Probability. Athena Scientific, 2nd edition, 2008.
  24. P. Billingsley. Probability and Measure. Wiley, New York, 3rd edition, 1995.
  25. C. M. Bishop. Pattern Recognition and Machine Learning. Springer, 2006.
  26. H. Lütkepohl. New Introduction to Multiple Time Series Analysis. Springer, New York, 2005.
  27. B. Efron and R. Tibshirani. Improvements on cross-validation: The .632+ bootstrap method. Journal of the American Statistical Association, 92(438):548--560, 1997.
  28. P. Halmos. Naive Set Theory. Springer-Verlag, 1974.
  29. P. Austin, P. Kaski, and K. Kubjas. Tensor network complexity of multilinear maps. arXiv, 2018.
  30. I. Goodfellow, Y. Bengio, and A. Courville. Deep Learning. MIT Press, 2016.
  31. V. N. Vapnik. The Nature of Statistical Learning Theory. Springer, 1999.
  32. S. Shalev-Shwartz and S. Ben-David. Understanding Machine Learning -- From Theory to Algorithms. Cambridge University Press, 2014.
  33. T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning. Springer Series in Statistics. Springer, New York, NY, USA, 2001.
  34. S. Bubeck. Convex optimization: Algorithms and complexity. In Foundations and Trends in Machine Learning, volume 8. Now Publishers, 2015.
  35. M. J. Wainwright and M. I. Jordan. Graphical Models, Exponential Families, and Variational Inference, volume 1 of Foundations and Trends in Machine Learning. Now Publishers, Hanover, MA, 2008.
  36. R. Gray. Probability, Random Processes, and Ergodic Properties. Springer, New York, 2nd edition, 2009.
  37. R. Baeza-Yates and B. Ribeiro-Neto. Modern Information Retrieval. Addison-Wesley, 1999.
  38. M. Abramowitz and I. A. Stegun, editors. Handbook of Mathematical Functions. Dover, New York, 1965.
  39. N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, New York, NY, USA, 2006.
  40. E. Hazan. Introduction to Online Convex Optimization. Now Publishers Inc., 2016.
  41. R. Sutton and A. Barto. Reinforcement Learning: An Introduction. MIT Press, Cambridge, MA, 2nd edition, 2018.
  42. S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, Cambridge, UK, 2004.