Finding the distribution of some Random Variables
The case of Sums of independent Random Variables
Let [math]X[/math] be a Poisson distributed r.v. with parameter [math]\lambda \gt 0[/math]. We already know that for all [math]\xi\in \R[/math], we then have [math]\E\left[e^{i\xi X}\right]=\exp(-\lambda(1-e^{i\xi}))[/math]. Let [math]X_1,...,X_n[/math] be [math]n[/math] independent r.v.'s for [math]n\in\N[/math], with [math]X_j[/math] Poisson distributed with parameter [math]\lambda_j \gt 0[/math] for all [math]1\leq j\leq n[/math]. Now let [math]S_n=\sum_{j=1}^nX_j[/math]. We want to determine the law of [math]S_n[/math]. By independence we have
[[math]] \E\left[e^{i\xi S_n}\right]=\prod_{j=1}^n\E\left[e^{i\xi X_j}\right]=\prod_{j=1}^n\exp(-\lambda_j(1-e^{i\xi}))=\exp(-Y(1-e^{i\xi})), [[/math]]
with [math]Y=\lambda_1+...+\lambda_n[/math]. Since the characteristic function uniquely determines the probability distribution, we can conclude that [math]S_n[/math] is again Poisson distributed, with parameter [math]Y=\lambda_1+...+\lambda_n[/math].
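This stability of the Poisson family under independent sums can be sanity-checked numerically. The following sketch (our own, not part of the lectures; helper names such as `poisson_pmf` are hypothetical) convolves two Poisson pmfs and compares the result with the pmf of a single Poisson variable with the summed parameter:

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """P[X = k] for X ~ Poisson(lam)."""
    return exp(-lam) * lam ** k / factorial(k)

def conv_pmf(k, lam1, lam2):
    """P[X1 + X2 = k] for independent Poisson X1, X2 (discrete convolution)."""
    return sum(poisson_pmf(j, lam1) * poisson_pmf(k - j, lam2) for j in range(k + 1))

lam1, lam2 = 1.5, 2.3  # arbitrary test parameters
for k in range(20):
    # P[X1 + X2 = k] must agree with the Poisson(lam1 + lam2) pmf
    assert abs(conv_pmf(k, lam1, lam2) - poisson_pmf(k, lam1 + lam2)) < 1e-12
```

The agreement is exact up to floating-point error, in line with the binomial expansion of [math](\lambda_1+\lambda_2)^k[/math] hidden in the convolution sum.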
Let now [math]X[/math] be a r.v. with [math]X\sim\mathcal{N}(m,\sigma^2)[/math]. Then we know that
[[math]] \E\left[e^{i\xi X}\right]=\exp\left(im\xi-\frac{\sigma^2\xi^2}{2}\right). [[/math]]
Now let [math]X_1,...,X_n[/math] be [math]n[/math] independent r.v.'s for [math]n\in \N[/math], such that [math]X_j\sim\mathcal{N}(m_j,\sigma_j^2)[/math] for all [math]1\leq j\leq n[/math]. Set again [math]S_n=X_1+...+X_n[/math]. Therefore
[[math]] \E\left[e^{i\xi S_n}\right]=\prod_{j=1}^n\exp\left(im_j\xi-\frac{\sigma_j^2\xi^2}{2}\right)=\exp\left(i\left(\sum_{j=1}^nm_j\right)\xi-\frac{1}{2}\left(\sum_{j=1}^n\sigma_j^2\right)\xi^2\right), [[/math]]
which implies, by the same argument as above, that [math]S_n\sim\mathcal{N}\left(\sum_{j=1}^nm_j,\sum_{j=1}^n\sigma_j^2\right)[/math].
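As a quick illustration (our own sketch, with arbitrary hypothetical parameter choices), one can simulate sums of independent Gaussians and check that the empirical mean and variance of [math]S_n[/math] match [math]\sum_j m_j[/math] and [math]\sum_j\sigma_j^2[/math]:

```python
import random

# Hypothetical parameters (m_j, sigma_j) for three independent Gaussians.
params = [(0.0, 1.0), (1.0, 2.0), (-2.0, 0.5)]
m_total = sum(m for m, _ in params)        # sum of the means: -1.0
var_total = sum(s ** 2 for _, s in params)  # sum of the variances: 5.25

random.seed(0)
N = 200_000
samples = [sum(random.gauss(m, s) for m, s in params) for _ in range(N)]
mean = sum(samples) / N
var = sum((x - mean) ** 2 for x in samples) / N

assert abs(mean - m_total) < 0.05   # empirical mean of S_n
assert abs(var - var_total) < 0.2   # empirical variance of S_n
```

This only checks the first two moments, of course; the full claim (that [math]S_n[/math] is again Gaussian) is what the characteristic function argument above delivers.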
Using change of variables
Let [math]g:\R^n\to\R^n[/math] be a differentiable map given as [math]g(x)=(g_1(x),...,g_n(x))[/math] for [math]x\in\R^n[/math]. Then the Jacobian matrix of [math]g[/math] is given by
[[math]] J_g(x)=\left(\frac{\partial g_i}{\partial x_j}(x)\right)_{1\leq i,j\leq n}. [[/math]]
Recall that for an injective [math]C^1[/math] map [math]g:G\subset \R^n\to\R^n[/math], where [math]G[/math] is an open subset of [math]\R^n[/math], such that [math]\det(J_g(x))\not=0[/math] for all [math]x\in G[/math], we have for every measurable and positive map [math]f:\R^n\to\R_+[/math], or more generally for every integrable [math]f[/math], that
[[math]] \int_{g(G)}f(y)dy=\int_{G}f(g(x))\vert\det(J_g(x))\vert dx, [[/math]]
where [math]g(G)=\{y\in\R^n\mid \exists x\in G; g(x)=y\}[/math].
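The change of variables formula can be checked numerically in a simple one-dimensional special case (our own illustrative choice): take [math]g(x)=x^2[/math] on [math]G=(0,2)[/math], so [math]g(G)=(0,4)[/math] and [math]\vert\det(J_g(x))\vert=\vert g'(x)\vert=2x[/math], with test integrand [math]f(y)=e^{-y}[/math]:

```python
from math import exp

def midpoint(func, a, b, n=100_000):
    """Midpoint-rule approximation of the integral of func over [a, b]."""
    h = (b - a) / n
    return sum(func(a + (i + 0.5) * h) for i in range(n)) * h

# g(x) = x^2 on G = (0, 2), hence g(G) = (0, 4) and |g'(x)| = 2x.
lhs = midpoint(lambda y: exp(-y), 0.0, 4.0)               # int over g(G) of f(y) dy
rhs = midpoint(lambda x: exp(-x ** 2) * 2 * x, 0.0, 2.0)  # int over G of f(g(x)) |g'(x)| dx
assert abs(lhs - rhs) < 1e-7  # both equal 1 - exp(-4)
```

Both quadratures agree with the closed form [math]1-e^{-4}[/math], as the substitution [math]y=x^2[/math] predicts.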
Let [math](\Omega,\A,\p)[/math] be a probability space. Let [math]X=(X_1,...,X_n)[/math] be a r.v. on [math]\R^n[/math] for [math]n\in\N[/math], having a joint density [math]f[/math]. Let [math]g:\R^n\to\R^n[/math] be an injective map of class [math]C^1(\R^n)[/math], such that [math]\det(J_g(x))\not=0[/math] for all [math]x\in\R^n[/math]. Then [math]Y=g(X)[/math] has the density
[[math]] f_Y(y)=f(g^{-1}(y))\vert\det(J_{g^{-1}}(y))\vert\one_{g(\R^n)}(y). [[/math]]
Let [math]B\in\B(\R^n)[/math] be a Borel set and [math]A=g^{-1}(B)[/math]. Then we have
[[math]] \p[Y\in B]=\p[X\in A]=\int_{g^{-1}(B)}f(x)dx=\int_{B\cap g(\R^n)}f(g^{-1}(y))\vert\det(J_{g^{-1}}(y))\vert dy, [[/math]]
where the last equality follows from the change of variables formula applied to the substitution [math]x=g^{-1}(y)[/math].
It follows that [math]Y[/math] has density given by
[[math]] f_Y(y)=f(g^{-1}(y))\vert\det(J_{g^{-1}}(y))\vert\one_{g(\R^n)}(y). [[/math]]
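To see the theorem in action in dimension one (our own illustrative choice, not an example from the text), take [math]X\sim\mathcal{N}(0,1)[/math] and [math]g(x)=e^x[/math]; then [math]g^{-1}(y)=\log y[/math] and [math]\vert\det(J_{g^{-1}}(y))\vert=1/y[/math], so [math]Y=e^X[/math] has the log-normal density [math]\varphi(\log y)/y[/math] on [math](0,\infty)[/math], which should integrate to 1:

```python
from math import exp, log, pi, sqrt

def phi(x):
    """Density of N(0, 1)."""
    return exp(-x ** 2 / 2) / sqrt(2 * pi)

def f_Y(y):
    """Density of Y = exp(X), X ~ N(0,1), via the theorem:
    f_Y(y) = phi(log y) * (1/y) on g(R) = (0, inf)."""
    return phi(log(y)) / y

# Midpoint rule over (0, 3000]; the log-normal mass beyond e^8 ~ 2981 is negligible.
a, b, n = 1e-6, 3000.0, 400_000
h = (b - a) / n
total = sum(f_Y(a + (i + 0.5) * h) for i in range(n)) * h
assert abs(total - 1.0) < 1e-3  # the transformed density integrates to 1
```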
Example
We have the following examples:
- Let [math]X[/math] and [math]Y[/math] be two independent [math]\mathcal{N}(0,1)[/math] distributed r.v.'s. We want to determine the joint distribution of [math](U,V)=(X+Y,X-Y)[/math]. Therefore, let [math]g:\R^2\to\R^2[/math] be given by [math](x,y)\mapsto (x+y,x-y)[/math]. The inverse [math]g^{-1}:\R^2\to\R^2[/math] is then given by [math](u,v)\mapsto \left(\frac{u+v}{2},\frac{u-v}{2}\right)[/math]. We have the following Jacobian matrix
[[math]] J_{g^{-1}}=\begin{pmatrix}\frac{1}{2}&\frac{1}{2}\\ \frac{1}{2}&-\frac{1}{2}\end{pmatrix},\det(J_{g^{-1}})=-\frac{1}{2}. [[/math]]Moreover we get[[math]] \begin{align*} f_{(U,V)}(u,v)&=f_{(X,Y)}\left(\frac{u+v}{2},\frac{u-v}{2}\right)\left\vert\det(J_{g^{-1}})\right\vert=\left(\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}\left(\frac{u+v}{2}\right)^2}\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}\left(\frac{u-v}{2}\right)^2}\right)\left(\frac{1}{2}\right)\\ &=\frac{1}{\sqrt{4\pi}}e^{-\frac{u^2}{4}}\frac{1}{\sqrt{4\pi}}e^{-\frac{v^2}{4}}. \end{align*} [[/math]]Thus [math]U[/math] and [math]V[/math] are independent and [math]U\stackrel{law}{=}V\sim\mathcal{N}(0,2)[/math].
- Let [math](X,Y)[/math] be a r.v. on [math]\R^2[/math] with joint density [math]f[/math]. We want to find the density of [math]Z=XY[/math]. In this case, consider [math]h:\R^2\to\R[/math], given by [math](x,y)\mapsto xy[/math]. We then define [math]g:\R^2\to\R^2[/math], given by [math](x,y)\mapsto (xy,x)[/math]. We write [math]S_0=\{(x,y)\mid x=0,y\in\R\}[/math] and [math]S_1=\R^2\setminus S_0[/math]; since [math](X,Y)[/math] has a density, [math]\p[(X,Y)\in S_0]=0[/math], so we may work on [math]S_1[/math]. Now [math]g[/math] is injective from [math]S_1[/math] onto [math]\R\times(\R\setminus\{0\})[/math] and [math]g^{-1}(u,v)=\left(v,\frac{u}{v}\right)[/math]. The Jacobian matrix is thus given by
[[math]] J_{g^{-1}}(u,v)=\begin{pmatrix}0&1\\ \frac{1}{v}&-\frac{u}{v^2}\end{pmatrix},\det(J_{g^{-1}}(u,v))=-\frac{1}{v}. [[/math]]Moreover we have[[math]] f_{(U,V)}(u,v)=f_{(X,Y)}\left(v,\frac{u}{v}\right)\frac{1}{\vert v\vert}\one_{v\not=0}. [[/math]]Therefore, we get[[math]] f_U(u)=\int_{\R}f_{(X,Y)}\left(v,\frac{u}{v}\right)\frac{1}{\vert v\vert}dv. [[/math]]
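The factorization claimed in the first example can be verified pointwise in a short script (our own check; `f_XY` and `n02` are hypothetical helper names, not from the text):

```python
from math import exp, pi, sqrt

def f_XY(x, y):
    """Joint density of independent X, Y ~ N(0, 1)."""
    return exp(-(x ** 2 + y ** 2) / 2) / (2 * pi)

def n02(t):
    """Density of N(0, 2)."""
    return exp(-t ** 2 / 4) / sqrt(4 * pi)

# f_{(U,V)}(u,v) = f_{(X,Y)}((u+v)/2, (u-v)/2) * |det J_{g^{-1}}| with |det| = 1/2,
# and it should factor as n02(u) * n02(v).
for u in (-2.0, -0.5, 0.0, 1.0, 3.0):
    for v in (-1.5, 0.0, 0.7, 2.0):
        f_UV = f_XY((u + v) / 2, (u - v) / 2) * 0.5
        assert abs(f_UV - n02(u) * n02(v)) < 1e-12
```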
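For the second example, specializing to independent [math]\mathcal{N}(0,1)[/math] variables (our own choice; the text keeps [math]f[/math] general), the formula for [math]f_U[/math] can be evaluated by a midpoint rule and compared against a Monte Carlo histogram estimate of [math]Z=XY[/math]:

```python
import random
from math import exp, pi

def f_XY(x, y):
    """Joint density of independent X, Y ~ N(0, 1)."""
    return exp(-(x ** 2 + y ** 2) / 2) / (2 * pi)

def f_U(u, n=100_000, vmax=6.0):
    """f_U(u) = int over R of f_{(X,Y)}(v, u/v) / |v| dv, by the midpoint rule."""
    h = 2 * vmax / n
    total = 0.0
    for i in range(n):
        v = -vmax + (i + 0.5) * h  # midpoints never hit v = 0
        total += f_XY(v, u / v) / abs(v)
    return total * h

# Monte Carlo density estimate of Z = XY near u (bin half-width eps).
random.seed(1)
N, eps = 400_000, 0.05
z = [random.gauss(0, 1) * random.gauss(0, 1) for _ in range(N)]
for u in (0.5, 1.0, 2.0):
    mc = sum(1 for s in z if abs(s - u) < eps) / (N * 2 * eps)
    assert abs(mc - f_U(u)) < 0.05
```

The integrand decays extremely fast near [math]v=0[/math] for [math]u\not=0[/math], so the naive quadrature is safe; note also the logarithmic blow-up of [math]f_U[/math] at [math]u=0[/math], which is why the checkpoints avoid the origin.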
General references
Moshayedi, Nima (2020). "Lectures on Probability Theory". arXiv:2010.16280 [math.PR].