By Admin
Jun 11'23

Exercise

Section Logistic Regression discussed logistic regression as an ML method that learns a linear hypothesis map by minimizing the logistic loss. The logistic loss has computationally pleasant properties: it is smooth and convex. However, in some applications we are ultimately interested in the accuracy or, equivalently, the average [math]0/1[/math] loss.

Can we upper bound the average [math]0/1[/math] loss using the average logistic loss incurred by a given hypothesis on a given training set?
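As a numerical sanity check, the following sketch compares the two losses on random data. It assumes labels [math]y \in \{-1, +1\}[/math] and the common form of the logistic loss, [math]\log(1 + \exp(-y h(x)))[/math]; under these assumptions, whenever a hypothesis misclassifies a point (so [math]y h(x) \le 0[/math]) the logistic loss is at least [math]\log 2[/math], which suggests the pointwise bound [math]0/1 \text{ loss} \le \text{logistic loss} / \log 2[/math], and hence the same bound for the averages.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed setup: labels y in {-1, +1}, hypothesis values h(x),
# logistic loss log(1 + exp(-y*h(x))), 0/1 loss = I(y*h(x) <= 0).
y = rng.choice([-1.0, 1.0], size=1000)
h = rng.normal(size=1000)

margins = y * h
logistic_loss = np.log1p(np.exp(-margins))
zero_one_loss = (margins <= 0).astype(float)

# Pointwise: a misclassified point has margin <= 0, so its logistic
# loss is >= log 2; dividing by log 2 dominates the 0/1 loss.
assert np.all(zero_one_loss <= logistic_loss / np.log(2) + 1e-12)

# The bound therefore also holds for the averages over the training set.
print("avg 0/1 loss:     ", zero_one_loss.mean())
print("avg logistic/log2:", logistic_loss.mean() / np.log(2))
```

The names `y`, `h`, and the random data here are purely illustrative; the sketch only demonstrates the candidate bound numerically, not a proof.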