exercise:D4f9286df0

From Stochiki
Latest revision as of 02:47, 2 June 2024

[math] \newcommand{\smallfrac}[2]{\frac{#1}{#2}} \newcommand{\medfrac}[2]{\frac{#1}{#2}} \newcommand{\textfrac}[2]{\frac{#1}{#2}} \newcommand{\tr}{\operatorname{tr}} \newcommand{\e}{\operatorname{e}} \newcommand{\B}{\operatorname{B}} \newcommand{\Bbar}{\overline{\operatorname{B}}} \newcommand{\pr}{\operatorname{pr}} \newcommand{\dd}{\operatorname{d}\hspace{-1pt}} \newcommand{\E}{\operatorname{E}} \newcommand{\V}{\operatorname{V}} \newcommand{\Cov}{\operatorname{Cov}} \newcommand{\Bigsum}[2]{\mathop{\textstyle\sum}_{#1}^{#2}} \newcommand{\ran}{\operatorname{ran}} \newcommand{\card}{\#} \newcommand{\mathds}{\mathbb} \renewcommand{\P}{\operatorname{P}} \renewcommand{\L}{\operatorname{L}} [/math]

Let [math]d \gt k[/math] be integers.

  • Implement a random projection [math]T\colon\mathbb{R}^d\rightarrow\mathbb{R}^k[/math] and test it for small [math]d[/math] and [math]k[/math].
  • Put [math]d=1000[/math], generate 10,000 points in [math]\mathbb{R}^d[/math] at random, project them via [math]T[/math] to [math]\mathbb{R}^k[/math] and compute, for different values of [math]k[/math], the worst distortion of a pairwise distance
    [[math]] \epsilon:=\max\Bigl(1-\min_{x\not=y}\medfrac{\|Tx-Ty\|}{\sqrt{k}\|x-y\|}, \max_{x\not=y}\medfrac{\|Tx-Ty\|}{\sqrt{k}\|x-y\|}-1\Bigr) [[/math]]
    that occurs.
  • Compare the outcome of your experiment with the relation between [math]\epsilon[/math] and [math]k[/math] given in the Johnson-Lindenstrauss Lemma, by replicating the Figure based on the data from (ii).
  • Explain how you would pick [math]\epsilon[/math] and [math]k[/math] if you are given a dataset in [math]\mathbb{R}^d[/math] and want to do a dimensionality reduction that does not corrupt a classifier based on nearest neighbors.
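A minimal NumPy sketch of parts (i) and (ii) is given below. The function names are my own, the choice of i.i.d. Gaussian entries for [math]T[/math] is one standard option (the exercise leaves the distribution open), and the demo uses far fewer than 10,000 points so that it runs in seconds; `jl_target_dim` encodes one common sufficient bound from the Johnson-Lindenstrauss Lemma as a starting point for parts (iii) and (iv).

```python
import numpy as np

def random_projection(d, k, rng):
    """A random projection T: R^d -> R^k with i.i.d. N(0,1) entries
    (one standard choice; the exercise leaves the distribution open)."""
    return rng.standard_normal((k, d))

def pairwise_dists(X):
    """All Euclidean pairwise distances of the rows of X, via the Gram matrix."""
    sq = np.einsum('ij,ij->i', X, X)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    return np.sqrt(np.clip(d2, 0.0, None))   # clip guards against tiny negatives

def worst_distortion(X, T):
    """epsilon = max(1 - min ratio, max ratio - 1), where
    ratio = ||Tx - Ty|| / (sqrt(k) ||x - y||) over all pairs x != y."""
    k = T.shape[0]
    dX = pairwise_dists(X)
    dY = pairwise_dists(X @ T.T)
    iu = np.triu_indices(len(X), k=1)         # each unordered pair once
    ratio = dY[iu] / (np.sqrt(k) * dX[iu])
    return max(1.0 - ratio.min(), ratio.max() - 1.0)

def jl_target_dim(n, eps):
    """One common sufficient JL bound: k >= 4 ln n / (eps^2/2 - eps^3/3)
    preserves pairwise distances of n points up to a factor 1 +/- eps."""
    return int(np.ceil(4.0 * np.log(n) / (eps**2 / 2.0 - eps**3 / 3.0)))

# Demo: d = 1000 as in part (ii), but fewer points so this runs quickly.
rng = np.random.default_rng(0)
d, k, n = 1000, 400, 200
X = rng.standard_normal((n, d))
T = random_projection(d, k, rng)
eps = worst_distortion(X, T)
```

For part (ii) proper, one would loop this over a range of values of [math]k[/math] (recomputing `T` each time) and record `eps`; plotting `eps` against `k` is then the data needed for part (iii).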