
Finding the distribution of some Random Variables

[math] \newcommand{\R}{\mathbb{R}} \newcommand{\A}{\mathcal{A}} \newcommand{\B}{\mathcal{B}} \newcommand{\N}{\mathbb{N}} \newcommand{\C}{\mathbb{C}} \newcommand{\Rbar}{\overline{\mathbb{R}}} \newcommand{\Bbar}{\overline{\mathcal{B}}} \newcommand{\Q}{\mathbb{Q}} \newcommand{\E}{\mathbb{E}} \newcommand{\p}{\mathbb{P}} \newcommand{\one}{\mathds{1}} \newcommand{\0}{\mathcal{O}} \newcommand{\mat}{\textnormal{Mat}} \newcommand{\sign}{\textnormal{sign}} \newcommand{\CP}{\mathcal{P}} \newcommand{\CT}{\mathcal{T}} \newcommand{\CY}{\mathcal{Y}} \newcommand{\F}{\mathcal{F}} \newcommand{\mathds}{\mathbb}[/math]

The case of Sums of independent Random Variables

Let [math]X[/math] be a Poisson distributed r.v. with parameter [math]\lambda \gt 0[/math]. Recall that for all [math]\xi\in \R[/math] we have [math]\E\left[e^{i\xi X}\right]=\exp(-\lambda(1-e^{i\xi}))[/math]. Let [math]X_1,...,X_n[/math] be [math]n[/math] independent r.v.'s, [math]n\in\N[/math], with [math]X_j[/math] Poisson distributed with parameter [math]\lambda_j \gt 0[/math] for all [math]1\leq j\leq n[/math], and let [math]S_n=\sum_{j=1}^nX_j[/math]. We want to determine the law of [math]S_n[/math]. Using the independence of the [math]X_j[/math], we have

[[math]] \begin{align*} \E\left[e^{i\xi S_n}\right]&=\E\left[ e^{i\xi(X_1+...+X_n)}\right]=\E\left[e^{i\xi X_1}\dotsm e^{i\xi X_n}\right]=\E\left[e^{i\xi X_1}\right]\dotsm\E\left[e^{i\xi X_n}\right]\\ &=\exp(-\lambda_1(1-e^{i\xi}))\dotsm \exp(-\lambda_n(1-e^{i\xi}))\\ &=\exp(-(\lambda_1+...+\lambda_n)(1-e^{i\xi}))\\ &=\exp(-Y(1-e^{i\xi})), \end{align*} [[/math]]

with [math]Y=\lambda_1+...+\lambda_n[/math]. Since the characteristic function uniquely determines the probability distribution, we can conclude that

[[math]] S_n\sim \Pi(Y)=\Pi(\lambda_1+...+\lambda_n). [[/math]]
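
As a quick numerical sanity check (a minimal simulation sketch, not part of the derivation; it assumes NumPy and SciPy, and the parameter values are arbitrary), one can compare the empirical distribution of a simulated sum of independent Poisson r.v.'s with the Poisson pmf of parameter [math]\lambda_1+...+\lambda_n[/math]:

import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(0)
lambdas = [0.5, 1.2, 2.3]   # hypothetical parameters lambda_1, ..., lambda_n
n_samples = 200_000

# S_n = X_1 + ... + X_n with X_j ~ Poisson(lambda_j), independent
samples = sum(rng.poisson(lam, n_samples) for lam in lambdas)

for k in range(6):
    print(f"P[S_n = {k}]  empirical: {np.mean(samples == k):.4f}"
          f"  Poisson(sum of lambda_j): {poisson.pmf(k, mu=sum(lambdas)):.4f}")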

Now let [math]X[/math] be a r.v. with [math]X\sim\mathcal{N}(m,\sigma^2)[/math]. Then we know that

[[math]] \E\left[e^{i\xi X}\right]=e^{i\xi m}e^{-\sigma^2\frac{\xi^2}{2}}. [[/math]]
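
The following minimal Monte Carlo sketch (assuming NumPy; the values of [math]m[/math], [math]\sigma[/math] and [math]\xi[/math] below are arbitrary) checks this characteristic function numerically:

import numpy as np

rng = np.random.default_rng(1)
m, sigma, xi = 1.0, 0.8, 1.5          # arbitrary illustrative values
X = rng.normal(m, sigma, 1_000_000)   # samples of X ~ N(m, sigma^2)

empirical = np.mean(np.exp(1j * xi * X))            # Monte Carlo estimate of E[exp(i xi X)]
exact = np.exp(1j * xi * m - sigma**2 * xi**2 / 2)  # e^{i xi m} e^{-sigma^2 xi^2 / 2}
print(f"empirical: {empirical:.4f}   exact: {exact:.4f}")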

Now let [math]X_1,...,X_n[/math] be [math]n[/math] independent r.v.'s for [math]n\in \N[/math], such that [math]X_j\sim\mathcal{N}(m_j,\sigma_j^2)[/math] for all [math]1\leq j\leq n[/math]. Set again [math]S_n=X_1+...+X_n[/math]. Then, by independence,

[[math]] \E\left[e^{i\xi S_n}\right]=e^{im_1\xi}e^{-\sigma^2_1\frac{\xi^2}{2}}\dotsm e^{im_n\xi}e^{-\sigma_n^2\frac{\xi^2}{2}}=e^{i(m_1+...+m_n)\xi}e^{-(\sigma_1^2+....+\sigma_n^2)\frac{\xi^2}{2}}, [[/math]]

which implies, by the same uniqueness argument as above, that

[[math]] S_n\sim \mathcal{N}(m_1+...+m_n,\sigma_1^2+....+\sigma_n^2). [[/math]]
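
Analogously to the Poisson case, this conclusion can be checked by a short simulation (a sketch assuming NumPy, with arbitrary illustrative parameters):

import numpy as np

rng = np.random.default_rng(2)
params = [(0.0, 1.0), (2.0, 0.5), (-1.0, 2.0)]   # hypothetical pairs (m_j, sigma_j^2)
n_samples = 500_000

# S_n = X_1 + ... + X_n with X_j ~ N(m_j, sigma_j^2), independent
S = sum(rng.normal(m, np.sqrt(var), n_samples) for m, var in params)

print(f"mean:     empirical {S.mean():.4f}   predicted {sum(m for m, _ in params):.4f}")
print(f"variance: empirical {S.var():.4f}   predicted {sum(v for _, v in params):.4f}")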

Using change of variables

Let [math]g:\R^n\to\R^n[/math] be a differentiable function given as [math]g(x)=(g_1(x),...,g_n(x))[/math] for [math]x\in\R^n[/math]. Then the Jacobian matrix of [math]g[/math] is given by

[[math]] J_g(x)=\left(\frac{\partial g_i(x)}{\partial x_j}\right)_{1\leq i,j\leq n}. [[/math]]

Recall that for an injective map [math]g:G\subset \R^n\to\R^n[/math] of class [math]C^1[/math], where [math]G[/math] is an open subset of [math]\R^n[/math], such that [math]\det(J_g(x))\not=0[/math] for all [math]x\in G[/math], we have for every measurable and positive map [math]f:\R^n\to\R_+[/math] (or, more generally, for every integrable [math]f[/math]) that

[[math]] \int_{g(G)}f(y)dy=\int_Gf(g(x))\vert \det(J_g(x))\vert dx, [[/math]]

where [math]g(G)=\{y\in\R^n\mid \exists x\in G; g(x)=y\}[/math].
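
As a small numerical illustration of this formula (a sketch assuming SciPy, not part of the notes), take polar coordinates [math]g(r,t)=(r\cos t,r\sin t)[/math] on [math]G=(0,1)\times(0,2\pi)[/math], so that [math]\vert\det(J_g(r,t))\vert=r[/math] and [math]g(G)[/math] is the unit disk, and [math]f(y_1,y_2)=e^{-(y_1^2+y_2^2)}[/math], whose integral over the unit disk is [math]\pi(1-e^{-1})[/math] in closed form:

import numpy as np
from scipy.integrate import dblquad

# f(y1, y2) = exp(-(y1^2 + y2^2))
f = lambda y1, y2: np.exp(-(y1**2 + y2**2))

# right-hand side: integral over G of f(g(r, t)) |det J_g(r, t)|, with |det J_g| = r;
# dblquad integrates the inner variable r over (0, 1) and the outer variable t over (0, 2 pi)
rhs, _ = dblquad(lambda r, t: f(r * np.cos(t), r * np.sin(t)) * r, 0, 2 * np.pi, 0, 1)

lhs = np.pi * (1 - np.exp(-1))   # exact integral of f over the unit disk g(G)
print(f"integral over g(G): {lhs:.6f}   integral over G: {rhs:.6f}")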

Theorem

Let [math](\Omega,\A,\p)[/math] be a probability space. Let [math]X=(X_1,...,X_n)[/math] be a r.v. on [math]\R^n[/math] for [math]n\in\N[/math], having joint density [math]f_X[/math]. Let [math]g:\R^n\to\R^n[/math] be an injective map of class [math]C^1[/math], such that [math]\det(J_g(x))\not=0[/math] for all [math]x\in\R^n[/math]. Then [math]Y=g(X)[/math] has the density

[[math]] f_Y(y)=\begin{cases}f_X(g^{-1}(y))\vert\det(J_{g^{-1}}(y))\vert,& y\in g(\R^n)\\ 0,&\text{otherwise}\end{cases} [[/math]]


Proof

Let [math]B\in\B(\R^n)[/math] be a Borel set and set [math]A=g^{-1}(B)[/math]. Applying the change of variables formula above to the map [math]g^{-1}[/math], we get

[[math]] \p[X\in A]=\int_Af_X(x)dx=\int_{g^{-1}(B)}f_X(x)dx=\int_Bf_X(g^{-1}(y))\vert \det(J_{g^{-1}}(y))\vert dy. [[/math]]
Since [math]Y=g(X)[/math], we have [math]\p[Y\in B]=\p[X\in A][/math] for all [math]B\in\B(\R^n)[/math], and therefore

[[math]] \p[Y\in B]=\int_Bf_X(g^{-1}(y))\vert\det(J_{g^{-1}}(y))\vert dy. [[/math]]

It follows that [math]Y[/math] has density given by

[[math]] f_Y(y)=f_X(g^{-1}(y))\vert \det(J_{g^{-1}}(y))\vert,\qquad y\in g(\R^n). [[/math]]
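
A one-dimensional instance of the theorem can serve as a sanity check (a sketch assuming SciPy): for [math]X\sim\mathcal{N}(0,1)[/math] and [math]g(x)=e^x[/math] we have [math]g^{-1}(y)=\log y[/math] and [math]\vert\det(J_{g^{-1}}(y))\vert=\frac{1}{y}[/math] on [math]g(\R)=(0,\infty)[/math], so [math]f_Y[/math] should be the standard log-normal density:

import numpy as np
from scipy.stats import norm, lognorm

# density of Y from the theorem: f_X(g^{-1}(y)) |det J_{g^{-1}}(y)| = f_X(log y) / y
ys = np.array([0.1, 0.5, 1.0, 2.0, 5.0])
density_theorem = norm.pdf(np.log(ys)) / ys

print(density_theorem)
print(lognorm.pdf(ys, s=1))   # reference: standard log-normal density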

Example

We consider the following examples:

  • Let [math]X[/math] and [math]Y[/math] be two independent [math]\mathcal{N}(0,1)[/math] distributed r.v.'s. We want to determine the joint distribution of [math](U,V)=(X+Y,X-Y)[/math]. Therefore, let [math]g:\R^2\to\R^2[/math] be given by [math](x,y)\mapsto (x+y,x-y)[/math]. The inverse [math]g^{-1}:\R^2\to\R^2[/math] is then given by [math](u,v)\mapsto \left(\frac{u+v}{2},\frac{u-v}{2}\right)[/math]. We have the following Jacobian
    [[math]] J_{g^{-1}}=\begin{pmatrix}\frac{1}{2}&\frac{1}{2}\\ \frac{1}{2}&-\frac{1}{2}\end{pmatrix},\det(J_{g^{-1}})=-\frac{1}{2}. [[/math]]
    Moreover we get
    [[math]] \begin{align*} f_{(U,V)}(u,v)&=f_{(X,Y)}\left(\frac{u+v}{2},\frac{u-v}{2}\right)\vert\det(J_{g^{-1}})\vert=\left(\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}\left(\frac{u+v}{2}\right)^2}\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}\left(\frac{u-v}{2}\right)^2}\right)\left(\frac{1}{2}\right)\\ &=\frac{1}{\sqrt{4\pi}}e^{-\frac{u^2}{4}}\frac{1}{\sqrt{4\pi}}e^{-\frac{v^2}{4}}. \end{align*} [[/math]]
    Thus [math]U[/math] and [math]V[/math] are independent and [math]U\stackrel{law}{=}V\sim\mathcal{N}(0,2)[/math].
  • Let [math](X,Y)[/math] be a r.v. on [math]\R^2[/math] with joint density [math]f[/math]. We want to find the density of [math]Z=XY[/math]. In this case, consider [math]h:\R^2\to\R[/math], given by [math](x,y)\mapsto xy[/math]. We then define [math]g:\R^2\to\R^2[/math], given by [math](x,y)\mapsto (xy,x)[/math], and set [math](U,V)=g(X,Y)[/math], so that [math]Z=U[/math]. We write [math]S_0=\{(x,y)\mid x=0,y\in\R\}[/math] and [math]S_1=\R^2\setminus S_0[/math]. Now [math]g[/math] is injective on [math]S_1[/math], with [math]g(S_1)=\R\times(\R\setminus\{0\})[/math] and [math]g^{-1}(u,v)=\left(v,\frac{u}{v}\right)[/math]. The Jacobian is thus given by
    [[math]] J_{g^{-1}}(u,v)=\begin{pmatrix}0&1\\ \frac{1}{v}&-\frac{u}{v^2}\end{pmatrix},\det(J_{g^{-1}}(u,v))=-\frac{1}{v}. [[/math]]
    Moreover we have
    [[math]] f_{(U,V)}(u,v)=f_{(X,Y)}\left(v,\frac{u}{v}\right)\frac{1}{\vert v\vert}\one_{v\not=0}. [[/math]]
    Therefore, integrating out [math]v[/math], we get the density of [math]Z=U[/math] (a numerical check of this formula is sketched after this list):
    [[math]] f_U(u)=\int_{\R}f_{(X,Y)}\left(v,\frac{u}{v}\right)\frac{1}{\vert v\vert}dv. [[/math]]
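
The following numerical sketch (assuming SciPy, and specializing to [math]X,Y[/math] independent [math]\mathcal{N}(0,1)[/math] r.v.'s) evaluates the integral formula for [math]f_U[/math] and compares it with the known closed form [math]\frac{K_0(\vert u\vert)}{\pi}[/math] for the product of two independent standard Gaussians, where [math]K_0[/math] is the modified Bessel function:

import numpy as np
from scipy.integrate import quad
from scipy.special import k0

def joint(x, y):
    # joint density of two independent standard Gaussians
    return np.exp(-(x * x + y * y) / 2.0) / (2.0 * np.pi)

def f_Z(u):
    integrand = lambda v: joint(v, u / v) / abs(v)
    # split the integral at the singular point v = 0
    left, _ = quad(integrand, -np.inf, 0)
    right, _ = quad(integrand, 0, np.inf)
    return left + right

for u in [0.5, 1.0, 2.0]:
    print(f"u = {u}:  formula: {f_Z(u):.6f}   K_0(|u|)/pi: {k0(abs(u)) / np.pi:.6f}")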

General references

Moshayedi, Nima (2020). "Lectures on Probability Theory". arXiv:2010.16280 [math.PR].