exercise:D10bcb88bf

[math] \newcommand{\smallfrac}[2]{\frac{#1}{#2}} \newcommand{\medfrac}[2]{\frac{#1}{#2}} \newcommand{\textfrac}[2]{\frac{#1}{#2}} \newcommand{\tr}{\operatorname{tr}} \newcommand{\e}{\operatorname{e}} \newcommand{\B}{\operatorname{B}} \newcommand{\Bbar}{\overline{\operatorname{B}}} \newcommand{\pr}{\operatorname{pr}} \newcommand{\dd}{\operatorname{d}\hspace{-1pt}} \newcommand{\E}{\operatorname{E}} \newcommand{\V}{\operatorname{V}} \newcommand{\Cov}{\operatorname{Cov}} \newcommand{\Bigsum}[2]{\mathop{\textstyle\sum}_{#1}^{#2}} \newcommand{\ran}{\operatorname{ran}} \newcommand{\card}{\#} \renewcommand{\P}{\operatorname{P}} \renewcommand{\L}{\operatorname{L}} \newcommand{\mathds}{\mathbb}[/math]

Implement the naive separation algorithm, which picks one data point at random and then labels the half of the data set closest to that point as [math]0[/math] and the rest as [math]1[/math]. Test the algorithm on the data set from [[guide:6588b14666#PROBL-6-1 |Problem]]. When generating the data, mark the data points with their true labels [math]0[/math] and [math]1[/math], and after running the separation algorithm, have your code count how many data points were classified correctly.
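
A minimal sketch of one possible implementation in Python with NumPy is given below. The data generator is only a stand-in for the data set of the referenced Problem, whose exact parameters are not restated here; the dimension, cluster separation, and the names <code>generate_data</code> and <code>naive_separation</code> are illustrative assumptions, not part of the exercise.

<syntaxhighlight lang="python">
import numpy as np

def generate_data(n=200, d=10, shift=3.0, rng=None):
    """Stand-in for the data set of the referenced Problem:
    two Gaussian clusters of n points each in R^d, with true labels 0 and 1."""
    rng = np.random.default_rng() if rng is None else rng
    cluster0 = rng.normal(loc=0.0, scale=1.0, size=(n, d))
    cluster1 = rng.normal(loc=shift, scale=1.0, size=(n, d))
    X = np.vstack([cluster0, cluster1])
    y = np.concatenate([np.zeros(n, dtype=int), np.ones(n, dtype=int)])
    return X, y

def naive_separation(X, rng=None):
    """Pick one data point at random, label the half of the data set
    closest to it as 0 and the remaining half as 1."""
    rng = np.random.default_rng() if rng is None else rng
    pivot = X[rng.integers(len(X))]
    dist = np.linalg.norm(X - pivot, axis=1)   # distances to the random point
    labels = np.ones(len(X), dtype=int)
    closest_half = np.argsort(dist)[: len(X) // 2]
    labels[closest_half] = 0
    return labels

X, y = generate_data()
pred = naive_separation(X)
# The algorithm cannot know which cluster should be called 0, so compare
# against both labelings and report the better match.
correct = max(np.sum(pred == y), np.sum(pred == 1 - y))
print(f"{correct} of {len(y)} data points classified correctly")
</syntaxhighlight>

Since the labels produced by the random pivot are arbitrary (the pivot may land in either cluster), the final count compares the prediction against both possible labelings and keeps the larger number of matches.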