Transformations
If [math]X[/math] is a random variable with cumulative distribution function [math]F_X[/math], we may produce other random variables by applying a transformation of the form [math]g(X)[/math] for suitable functions [math]g[/math]. Such transformations are used frequently in probability and statistics because the transformed random variables often have desirable properties, and those properties in turn yield results about the original random variables. On this page, we are mainly concerned with computing the probability distribution of the transformed random variable in terms of the probability distribution of the original one.
Linear Transformations
We first consider the simplest possible transformation: the linear transformation. If [math]a[/math] and [math]b[/math] are real numbers, then we may consider the random variable
[math]
Y = T(X) = aX + b.
[/math]
If [math]a[/math] is zero then there isn't anything to discuss since the transformation is just the constant [math]b[/math], so we may assume that [math]a[/math] is non-zero.
[math]a \gt 0[/math]
If [math]a[/math] is positive then [math]T[/math] is a strictly increasing function and we have:
[math]
F_Y(y) = P(aX + b \le y) = P\left(X \le \frac{y-b}{a}\right) = F_X\left(\frac{y-b}{a}\right).
[/math]
[math]a \lt 0[/math] and [math]X[/math] Continuous
If [math]a[/math] is negative and [math]X[/math] is a continuous random variable, then [math]T[/math] is a strictly decreasing function and we have:
[math]
F_Y(y) = P(aX + b \le y) = P\left(X \ge \frac{y-b}{a}\right) = 1 - F_X\left(\frac{y-b}{a}\right),
[/math]
where the last equality uses the continuity of [math]X[/math], so that [math]P(X \ge t) = 1 - F_X(t)[/math].
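As a quick numerical sanity check, the following minimal Python sketch (assuming numpy and scipy are available; the choices [math]a = -2[/math], [math]b = 3[/math] and a standard normal [math]X[/math] are purely illustrative) compares the empirical distribution of [math]aX + b[/math] with the two formulas above:
<syntaxhighlight lang="python">
# Sketch: verify the CDF formulas for Y = a*X + b by simulation.
# Illustrative choices: X standard normal, a = -2 (negative case), b = 3.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
a, b = -2.0, 3.0
x = rng.standard_normal(200_000)
y = a * x + b

for t in [-1.0, 0.0, 3.0, 5.0]:
    empirical = np.mean(y <= t)
    if a > 0:
        predicted = norm.cdf((t - b) / a)        # F_X((t - b)/a)
    else:
        predicted = 1.0 - norm.cdf((t - b) / a)  # 1 - F_X((t - b)/a)
    print(f"t={t:5.1f}  empirical={empirical:.4f}  predicted={predicted:.4f}")
</syntaxhighlight>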
Monotone Transformations
Strictly Increasing
Suppose that the transformation [math]T[/math] is strictly increasing:
[math]
x_1 \lt x_2 \Rightarrow T(x_1) \lt T(x_2).
[/math]
We denote by [math]T^{-1}[/math] the unique transformation with the property
[math]
T^{-1} \circ T = T \circ T^{-1} = I,
[/math]
with [math]I[/math] the identity function. Following the approach for linear transformations, we have
[math]
F_Y(y) = P(T(X) \le y) = P\left(X \le T^{-1}(y)\right) = F_X\left(T^{-1}(y)\right).
[/math]
Thus we have the following simple relation:
[math]
\begin{equation}
\label{transform-rel-up}
F_Y(y) = F_X\left(T^{-1}(y)\right).
\end{equation}
[/math]
Strictly Decreasing and [math]X[/math] Continuous
If the transformation [math]T[/math] is strictly decreasing and [math]F_X[/math] is continuous, then
[math]
F_Y(y) = P(T(X) \le y) = P\left(X \ge T^{-1}(y)\right) = 1 - P\left(X \lt T^{-1}(y)\right),
[/math]
and thus, by continuity of [math]F_X[/math],
[math]
\begin{equation}
\label{transform-rel-down}
F_Y(y) = 1 - F_X\left(T^{-1}(y)\right).
\end{equation}
[/math]
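Both relations are easy to probe by simulation. The following minimal sketch (again with a standard normal [math]X[/math]; the transformations [math]\exp(x)[/math] and [math]\exp(-x)[/math] are illustrative choices) checks \ref{transform-rel-up} and \ref{transform-rel-down} side by side:
<syntaxhighlight lang="python">
# Sketch: check the two monotone relations with X standard normal.
#   increasing  T(x) = exp(x):   F_Y(y) = F_X(log(y))
#   decreasing  T(x) = exp(-x):  F_Y(y) = 1 - F_X(-log(y))
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
x = rng.standard_normal(200_000)

for t in [0.2, 0.5, 1.0, 2.0, 5.0]:
    emp_up = np.mean(np.exp(x) <= t)
    emp_down = np.mean(np.exp(-x) <= t)
    pred_up = norm.cdf(np.log(t))            # F_X(T^{-1}(t)) with T^{-1} = log
    pred_down = 1.0 - norm.cdf(-np.log(t))   # 1 - F_X(T^{-1}(t)) with T^{-1} = -log
    print(f"t={t:4.1f}  up {emp_up:.4f}/{pred_up:.4f}"
          f"  down {emp_down:.4f}/{pred_down:.4f}")
</syntaxhighlight>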
Probability Density Functions
If the cumulative distribution function [math]F_X[/math] has a density, say [math]f_X[/math], and [math]T^{-1}[/math] is differentiable, then we see from \ref{transform-rel-up} and \ref{transform-rel-down} that the following relation holds:
[math]
\begin{equation}
\label{monotone-density-relation}
f_Y(y) = f_X\left(T^{-1}(y)\right) \left| \frac{d}{dy} T^{-1}(y) \right|.
\end{equation}
[/math]
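To see the relation in action, the sketch below (an illustrative choice: [math]T(x) = x^3[/math] with [math]X[/math] standard normal, so [math]T^{-1}(y) = \sqrt[3]{y}[/math]) compares the right-hand side of \ref{monotone-density-relation} with a numerical derivative of [math]F_Y[/math]:
<syntaxhighlight lang="python">
# Sketch: check f_Y(y) = f_X(T^{-1}(y)) * |d/dy T^{-1}(y)| for T(x) = x**3,
# where T^{-1}(y) = cbrt(y) and |d/dy T^{-1}(y)| = (1/3) * |y|**(-2/3), y != 0.
import numpy as np
from scipy.stats import norm

def f_Y(y):
    return norm.pdf(np.cbrt(y)) / (3.0 * np.abs(y) ** (2.0 / 3.0))

# Numerical derivative of F_Y(y) = F_X(cbrt(y)) as an independent check.
h = 1e-6
for y in [-2.0, -0.5, 0.5, 2.0]:
    numeric = (norm.cdf(np.cbrt(y + h)) - norm.cdf(np.cbrt(y - h))) / (2 * h)
    print(f"y={y:5.1f}  formula={f_Y(y):.5f}  numeric={numeric:.5f}")
</syntaxhighlight>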
Example: Exponentiation
Consider the transformation [math]T(x) = \exp(x)[/math]. By \ref{transform-rel-up}, we have [math]F_Y(y) = F_{X}(\ln(y))[/math]. If [math]F_X[/math] has a density [math]f_X[/math] then, by \ref{monotone-density-relation},
[math]
f_Y(y) = f_X(\ln(y)) \left| \frac{d}{dy} \ln(y) \right| = \frac{f_X(\ln(y))}{y},
[/math]
provided that [math]y \gt 0[/math]; for [math]y \le 0[/math], both [math]F_Y(y)[/math] and [math]f_Y(y)[/math] vanish.
For instance, let [math]X[/math] be a random variable with a standard normal distribution:
[math]
f_X(x) = \frac{1}{\sqrt{2\pi}} e^{-x^2/2}.
[/math]
The random variable [math]\exp(X)[/math] is said to have a lognormal distribution. It is fairly easy to show that
[math]
F_{\exp(X)}(y) = F_X(\ln(y)) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\ln(y)} e^{-x^2/2} \, dx, \qquad y \gt 0,
[/math]
and thus the density for the lognormal distribution is given by
[math]
f_{\exp(X)}(y) = \frac{1}{y\sqrt{2\pi}} e^{-(\ln(y))^2/2}, \qquad y \gt 0.
[/math]
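A histogram of simulated values of [math]\exp(X)[/math] should match this density. Here is a minimal sketch (numpy assumed; the bin range and sample size are arbitrary choices):
<syntaxhighlight lang="python">
# Sketch: compare a histogram of exp(X), X standard normal, with the
# lognormal density derived above.
import numpy as np

rng = np.random.default_rng(2)
y = np.exp(rng.standard_normal(500_000))

def lognormal_pdf(t):
    return np.exp(-np.log(t) ** 2 / 2) / (t * np.sqrt(2 * np.pi))

edges = np.linspace(0.05, 5.0, 41)
counts, _ = np.histogram(y, bins=edges)
hist = counts / (len(y) * np.diff(edges))    # empirical density per bin
mids = 0.5 * (edges[:-1] + edges[1:])
err = np.max(np.abs(hist - lognormal_pdf(mids)))
print(f"max |histogram - derived density|: {err:.3f}")  # small: binning + noise
</syntaxhighlight>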
General Case
For a general transformation [math]T[/math] where [math]Y = T(X)[/math], there is no simple and explicit relation between [math]F_X[/math] and [math]F_Y[/math]. That being said, there are situations when we can use conditioning, together with the relations we've already derived, to compute the distribution of [math]Y[/math]. More precisely, given a partition (splitting up) of the real line
[math]
-\infty = a_0 \lt a_1 \lt \cdots \lt a_n = \infty,
[/math]
we let [math]X_i[/math] denote a random variable with distribution equal to the conditional distribution of [math]X[/math] given that [math]X[/math] lies in the interval [math](a_{i-1},a_i][/math], and let [math]Y_i = T(X_i)[/math]. Then we have
[math]
\begin{equation}
\label{gen-case-cdf-transform}
F_Y(y) = \sum_{i=1}^{n} F_{Y_i}(y) \, P(a_{i-1} \lt X \le a_i).
\end{equation}
[/math]
If the partition is chosen in such a way that [math]T[/math], when applied to any of the [math]X_i[/math], satisfies a property that we have encountered in previous sections (linear or monotone), then we can use \ref{gen-case-cdf-transform} to derive a relatively simple expression for the distribution function of [math]Y[/math].
Example: Squaring
Let [math]T(x) = x^2[/math] and suppose that [math]0 \lt F_X(0) \lt 1[/math], so that both pieces of the partition below have positive probability. Then we set
[math]
a_0 = -\infty, \quad a_1 = 0, \quad a_2 = \infty,
[/math]
so that [math]X_1[/math] has the conditional distribution of [math]X[/math] given [math]X \le 0[/math], and [math]X_2[/math] has the conditional distribution of [math]X[/math] given [math]X \gt 0[/math].
Let [math]p = F_X(0)[/math]. Recalling \ref{gen-case-cdf-transform}, we obtain
[math]
\begin{equation}
\label{square-1}
F_Y(y) = p \, F_{Y_1}(y) + (1-p) \, F_{Y_2}(y).
\end{equation}
[/math]
By \ref{transform-rel-up} and \ref{transform-rel-down} (the latter requiring [math]F_X[/math] to be continuous), we know that, for [math]y \ge 0[/math],
[math]
\begin{equation}
\label{square-2}
F_{Y_1}(y) = 1 - F_{X_1}(-\sqrt{y}) = 1 - \frac{F_X(-\sqrt{y})}{p}
\end{equation}
[/math]
and
[math]
\begin{equation}
\label{square-3}
F_{Y_2}(y) = F_{X_2}(\sqrt{y}) = \frac{F_X(\sqrt{y}) - p}{1-p}.
\end{equation}
[/math]
Combining \ref{square-1}, \ref{square-2} and \ref{square-3}, we finally obtain the relation
[math]
F_Y(y) = p - F_X(-\sqrt{y}) + F_X(\sqrt{y}) - p = F_X(\sqrt{y}) - F_X(-\sqrt{y})
[/math]
for [math]y \ge 0[/math], while [math]F_Y(y) = 0[/math] for [math]y \lt 0[/math].
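This relation is easy to verify by simulation. In the sketch below (numpy and scipy assumed), the nonzero mean is an illustrative choice showing that the relation does not depend on symmetry of [math]X[/math]:
<syntaxhighlight lang="python">
# Sketch: verify F_{X^2}(y) = F_X(sqrt(y)) - F_X(-sqrt(y)) by simulation,
# with the illustrative choice X ~ Normal(mean=0.5, sd=1).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
mu = 0.5
x = rng.normal(loc=mu, scale=1.0, size=200_000)

for t in [0.25, 1.0, 4.0]:
    empirical = np.mean(x ** 2 <= t)
    predicted = norm.cdf(np.sqrt(t), loc=mu) - norm.cdf(-np.sqrt(t), loc=mu)
    print(f"t={t:4.2f}  empirical={empirical:.4f}  predicted={predicted:.4f}")
</syntaxhighlight>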
If [math]F_X[/math] has a density [math]f_X[/math], then for [math]y[/math] not equal to zero, the derivative of [math]F_Y[/math] equals
[math]
\frac{d}{dy} F_Y(y) = \frac{f_X(\sqrt{y}) + f_X(-\sqrt{y})}{2\sqrt{y}}
[/math]
for [math]y \gt 0[/math], and [math]0[/math] for [math]y \lt 0[/math].
Therefore we obtain the following relation:
[math]
f_Y(y) = \frac{f_X(\sqrt{y}) + f_X(-\sqrt{y})}{2\sqrt{y}}, \qquad y \gt 0,
[/math]
with [math]f_Y(y) = 0[/math] for [math]y \le 0[/math],
provided that
[math]
\begin{equation}
\label{square-density-condition}
\int_0^1 \frac{f_X(\sqrt{t}) + f_X(-\sqrt{t})}{2\sqrt{t}} \, dt \lt \infty.
\end{equation}
[/math]
This integrability condition near zero guarantees that [math]F_Y[/math] is the integral of [math]f_Y[/math], so that [math]f_Y[/math] really is a density for [math]Y[/math].
To demonstrate this technique, consider squaring a random variable [math]X[/math] with a standard normal distribution. Since the factor [math]e^{-t/2}/\sqrt{2\pi}[/math] is bounded between positive constants on [math][0,1][/math], the integrability condition \ref{square-density-condition} is equivalent to
[math]
\int_0^1 \frac{1}{\sqrt{t}} \, dt \lt \infty,
[/math]
which is definitely true (the integral equals [math]2[/math]). The distribution of [math]X^2[/math] is a chi-square distribution with 1 degree of freedom, and its density equals
[math]
f_{X^2}(y) = \frac{1}{\sqrt{2\pi y}} e^{-y/2}, \qquad y \gt 0.
[/math]
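As a final check, we can compare this formula with the chi-square density that scipy provides (a sketch; scipy.stats assumed available):
<syntaxhighlight lang="python">
# Sketch: the density derived for X^2, X standard normal, should match
# scipy's chi-square density with 1 degree of freedom.
import numpy as np
from scipy.stats import chi2

def derived_pdf(y):
    return np.exp(-y / 2) / np.sqrt(2 * np.pi * y)

ys = np.array([0.1, 0.5, 1.0, 2.0, 5.0])
print(derived_pdf(ys))       # density from the formula above
print(chi2.pdf(ys, df=1))    # scipy's chi-square(1) density; should agree
</syntaxhighlight>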