The result follows from the multivariate change of variables formula in calculus. If \( X \) and \( Y \) have discrete distributions then \( Z = X + Y \) has a discrete distribution with probability density function \( g * h \) given by \[ (g * h)(z) = \sum_{x \in D_z} g(x) h(z - x), \quad z \in T \] If \( X \) and \( Y \) have continuous distributions then \( Z = X + Y \) has a continuous distribution with probability density function \( g * h \) given by \[ (g * h)(z) = \int_{D_z} g(x) h(z - x) \, dx, \quad z \in T \] In the discrete case, suppose \( X \) and \( Y \) take values in \( \N \). The formulas above in the discrete and continuous cases are not worth memorizing explicitly; it's usually better to just work each problem from scratch.

Let \( g = g_1 \), and note that this is the probability density function of the exponential distribution with parameter 1, which was the topic of our last discussion. Also, for \( t \in [0, \infty) \), \[ g_n * g(t) = \int_0^t g_n(s) g(t - s) \, ds = \int_0^t e^{-s} \frac{s^{n-1}}{(n - 1)!} e^{-(t - s)} \, ds = e^{-t} \int_0^t \frac{s^{n-1}}{(n - 1)!} \, ds = e^{-t} \frac{t^n}{n!} = g_{n+1}(t) \] The last result means that if \(X\) and \(Y\) are independent variables, and \(X\) has the Poisson distribution with parameter \(a \gt 0\) while \(Y\) has the Poisson distribution with parameter \(b \gt 0\), then \(X + Y\) has the Poisson distribution with parameter \(a + b\).

From part (a), note that the product of \(n\) distribution functions is another distribution function. Recall that the Pareto distribution with shape parameter \(a \in (0, \infty)\) has probability density function \(f\) given by \[ f(x) = \frac{a}{x^{a+1}}, \quad 1 \le x \lt \infty \] Members of this family have already come up in several of the previous exercises. On the other hand, \(W\) has a Pareto distribution, named for Vilfredo Pareto. For the following three exercises, recall that the standard uniform distribution is the uniform distribution on the interval \( [0, 1] \). Clearly we can simulate a value of the Cauchy distribution by \( X = \tan\left(-\frac{\pi}{2} + \pi U\right) \) where \( U \) is a random number. We shine the light at the wall at an angle \( \Theta \) to the perpendicular, where \( \Theta \) is uniformly distributed on \( \left(-\frac{\pi}{2}, \frac{\pi}{2}\right) \). The normal distribution is widely used to model physical measurements of all types that are subject to small, random errors.

It must be understood that \(x\) on the right should be written in terms of \(y\) via the inverse function. Similarly, \(V\) is the lifetime of the parallel system which operates if and only if at least one component is operating. If \( \bs S \sim N(\bs \mu, \bs \Sigma) \) then it can be shown that \( \bs A \bs S \sim N(\bs A \bs \mu, \bs A \bs \Sigma \bs A^T) \). When \(n = 2\), the result was shown in the section on joint distributions. \(\bs Y\) has probability density function \(g\) given by \[ g(\bs y) = \frac{1}{\left| \det(\bs B)\right|} f\left[ \bs B^{-1}(\bs y - \bs a) \right], \quad \bs y \in T \] Vary \(n\) with the scroll bar and note the shape of the probability density function.
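To illustrate the random quantile simulation of the Cauchy distribution numerically, here is a minimal Python sketch (NumPy assumed; the sample size and seed are arbitrary choices, not part of the text):

```python
import numpy as np

rng = np.random.default_rng(2023)  # arbitrary seed for reproducibility

# Random quantile method: if U is uniform on (0, 1), then
# X = tan(-pi/2 + pi * U) has the standard Cauchy distribution.
u = rng.random(100_000)
x = np.tan(-np.pi / 2 + np.pi * u)

# The Cauchy distribution has no mean, but its median is 0 and its
# quartiles are -1 and 1; the sample quantiles should be close.
print(np.quantile(x, [0.25, 0.5, 0.75]))  # approximately [-1, 0, 1]
```

The sample mean is deliberately not used as a check here, since the Cauchy distribution has no mean.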
Location transformations arise naturally when the physical reference point is changed (measuring time relative to 9:00 AM as opposed to 8:00 AM, for example). But a linear combination of independent (one-dimensional) normal variables is another normal variable, so \( \bs a^T \bs U \) is a normal variable. Suppose that \(X\) has a continuous distribution on \(\R\) with distribution function \(F\) and probability density function \(f\). The Irwin-Hall distributions are studied in more detail in the chapter on Special Distributions. Then \(X = F^{-1}(U)\) has distribution function \(F\). Suppose that \((X_1, X_2, \ldots, X_n)\) is a sequence of independent real-valued random variables, with common distribution function \(F\). Find the probability density function of the position of the light beam \( X = \tan \Theta \) on the wall. Find the probability density function of \(Z = X + Y\) in each of the following cases. Find the probability density function of each of the following random variables. In the previous exercise, \(V\) also has a Pareto distribution but with parameter \(\frac{a}{2}\); \(Y\) has the beta distribution with parameters \(a\) and \(b = 1\); and \(Z\) has the exponential distribution with rate parameter \(a\). Then \(Y_n = X_1 + X_2 + \cdots + X_n\) has probability density function \(f^{*n} = f * f * \cdots * f \), the \(n\)-fold convolution power of \(f\), for \(n \in \N\).

Then \(Y\) has a discrete distribution with probability density function \(g\) given by \[ g(y) = \sum_{x \in r^{-1}\{y\}} f(x), \quad y \in T \] Suppose that \(X\) has a continuous distribution on a subset \(S \subseteq \R^n\) with probability density function \(f\), and that \(T\) is countable. Normal distributions are also called Gaussian distributions or bell curves because of their shape. In this particular case, the complexity is caused by the fact that \(x \mapsto x^2\) is one-to-one on part of the domain \(\{0\} \cup (1, 3]\) and two-to-one on the other part \([-1, 1] \setminus \{0\}\). Suppose that \(\bs X = (X_1, X_2, \ldots)\) is a sequence of independent and identically distributed real-valued random variables, with common probability density function \(f\). The result now follows from the change of variables theorem. Standardization is a special linear transformation: if \( \bs X \) has mean \( \bs \mu \) and covariance matrix \( \bs \Sigma \), then \( \bs \Sigma^{-1/2}(\bs X - \bs \mu) \) has mean \( \bs 0 \) and identity covariance matrix. Thus, suppose that \( X \), \( Y \), and \( Z \) are independent random variables with PDFs \( f \), \( g \), and \( h \), respectively. It's best to give the inverse transformation: \( x = r \cos \theta \), \( y = r \sin \theta \). Suppose that \( r \) is a one-to-one differentiable function from \( S \subseteq \R^n \) onto \( T \subseteq \R^n \). Suppose that \(Z\) has the standard normal distribution, and that \(\mu \in (-\infty, \infty)\) and \(\sigma \in (0, \infty)\). \( f \) is concave upward, then downward, then upward again, with inflection points at \( x = \mu \pm \sigma \). Proposition: let \( \bs X \) be a multivariate normal random vector with mean \( \bs \mu \) and covariance matrix \( \bs \Sigma \). Suppose that \(X\) and \(Y\) are random variables on a probability space, taking values in \( R \subseteq \R\) and \( S \subseteq \R \), respectively, so that \( (X, Y) \) takes values in a subset of \( R \times S \). Let \(Y = X^2\). In the previous exercise, \(Y\) has a Pareto distribution while \(Z\) has an extreme value distribution.
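For a concrete instance of the two-to-one case, here is a minimal Python sketch checking the distribution of \( Y = X^2 \) when \( X \) is standard normal (my choice of example). Integrating the two-branch density \( g(y) = [f(\sqrt{y}) + f(-\sqrt{y})]/(2\sqrt{y}) \) gives \( \P(Y \le c) = \operatorname{erf}\left(\sqrt{c/2}\right) \), which the empirical distribution function should match; the sample size and test points are arbitrary:

```python
import math
import numpy as np

rng = np.random.default_rng(7)  # arbitrary seed

# Y = X^2 with X standard normal; the map x -> x^2 is two-to-one,
# with branches x = sqrt(y) and x = -sqrt(y).
y = rng.standard_normal(200_000) ** 2

# Integrating g(y) = [f(sqrt(y)) + f(-sqrt(y))] / (2 sqrt(y)) gives
# P(Y <= c) = P(-sqrt(c) <= X <= sqrt(c)) = erf(sqrt(c / 2)).
for c in [0.5, 1.0, 2.0, 4.0]:
    print((y <= c).mean(), math.erf(math.sqrt(c / 2)))
```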
\(U = \min\{X_1, X_2, \ldots, X_n\}\) has probability density function \(g\) given by \(g(x) = n\left[1 - F(x)\right]^{n-1} f(x)\) for \(x \in \R\). Recall again that \( F^\prime = f \). Suppose that \( (X, Y) \) has a continuous distribution on \( \R^2 \) with probability density function \( f \). Recall that the Poisson distribution with parameter \( t \in (0, \infty) \) has probability density function \[ f(n) = e^{-t} \frac{t^n}{n!}, \quad n \in \N \] This distribution is named for Simeon Poisson and is widely used to model the number of random points in a region of time or space; the parameter \(t\) is proportional to the size of the region.

In probability theory, a normal (or Gaussian) distribution is a type of continuous probability distribution for a real-valued random variable. The standard normal distribution does not have a simple, closed-form quantile function, so the random quantile method of simulation does not work well. The transformation \(\bs y = \bs a + \bs B \bs x\) maps \(\R^n\) one-to-one and onto \(\R^n\). Recall that \( F^\prime = f \). The generalization of this result from \( \R \) to \( \R^n \) is basically a theorem in multivariate calculus. The sample mean can be written as a linear transformation of the sample vector, and the sample variance as a quadratic form in it; if we use the above proposition (independence between a linear transformation and a quadratic form), verifying the independence of the sample mean and the sample variance boils down to a matrix identity that can be checked by directly performing the multiplication. Let \( \bs b \) be an \( m \)-dimensional real vector and \( \bs A \) an \( m \times n \) full-rank real matrix. The distribution of \( Y_n \) is the binomial distribution with parameters \(n\) and \(p\). The distribution arises naturally from linear transformations of independent normal variables. \(X = -\frac{1}{r} \ln(1 - U)\) where \(U\) is a random number. Suppose that \(X\) and \(Y\) are independent random variables, each having the exponential distribution with parameter 1. Hence the following result is an immediate consequence of our change of variables theorem: Suppose that \( (X, Y) \) has a continuous distribution on \( \R^2 \) with probability density function \( f \), and that \( (R, \Theta) \) are the polar coordinates of \( (X, Y) \). In particular, suppose that a series system has independent components, each with an exponentially distributed lifetime. The number of bit strings of length \( n \) with 1 occurring exactly \( y \) times is \( \binom{n}{y} \) for \(y \in \{0, 1, \ldots, n\}\).
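As a sanity check on the density of the minimum, here is a small Python sketch using standard uniform variables, for which \( g(x) = n(1 - x)^{n-1} \) on \([0, 1]\) (the uniform choice, the value of \(n\), and the sample size are all arbitrary assumptions of the sketch):

```python
import numpy as np

rng = np.random.default_rng(11)  # arbitrary seed
n = 5

# Minimum of n iid standard uniforms: g(x) = n * (1 - x)^(n - 1) on [0, 1].
u_min = rng.random((100_000, n)).min(axis=1)

hist, edges = np.histogram(u_min, bins=40, range=(0, 1), density=True)
mids = (edges[:-1] + edges[1:]) / 2
g = n * (1 - mids) ** (n - 1)
print(np.max(np.abs(hist - g)))  # small: the empirical density tracks g
```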
Hence \[ \frac{\partial(x, y)}{\partial(u, w)} = \left[\begin{matrix} 1 & 0 \\ w & u\end{matrix} \right] \] and so the Jacobian is \( u \). Since \( X \) has a continuous distribution, \[ \P(U \ge u) = \P[F(X) \ge u] = \P[X \ge F^{-1}(u)] = 1 - F[F^{-1}(u)] = 1 - u \] Hence \( U \) is uniformly distributed on \( (0, 1) \). In statistical terms, \( \bs X \) corresponds to sampling from the common distribution. By convention, \( Y_0 = 0 \), so naturally we take \( f^{*0} = \delta \). Then a pair of independent, standard normal variables can be simulated by \( X = R \cos \Theta \), \( Y = R \sin \Theta \). Clearly convolution power satisfies the law of exponents: \( f^{*n} * f^{*m} = f^{*(n + m)} \) for \( m, \; n \in \N \). If \( \bs X \sim N(\bs \mu, \bs \Sigma) \) and \( \bs Y = \bs A \bs X + \bs b \), then \( \bs Y \sim N(\bs A \bs \mu + \bs b, \bs A \bs \Sigma \bs A^T) \); that is, a linear transformation of a normally distributed random variable is still a normally distributed random variable. This follows directly from the general result on linear transformations in (10). Also, a constant is independent of every other random variable.

The Poisson distribution is studied in detail in the chapter on The Poisson Process. As in the discrete case, the formula in (4) is not much help, and it's usually better to work each problem from scratch. We have seen this derivation before. In the dice experiment, select fair dice and select each of the following random variables. So the main problem is often computing the inverse images \(r^{-1}\{y\}\) for \(y \in T\). The multivariate normal distribution is mostly useful in extending the central limit theorem to multiple variables, but it also has applications to Bayesian inference and thus machine learning. Random variable \(X\) has the normal distribution with location parameter \(\mu\) and scale parameter \(\sigma\). The associative property of convolution follows from the associative property of addition: \( (X + Y) + Z = X + (Y + Z) \). In the second image, note how the uniform distribution on \([0, 1]\), represented by the thick red line, is transformed, via the quantile function, into the given distribution. Simple addition of random variables is perhaps the most important of all transformations.

\( G(y) = \P(Y \le y) = \P[r(X) \le y] = \P\left[X \le r^{-1}(y)\right] = F\left[r^{-1}(y)\right] \) for \( y \in T \). For the next exercise, recall that the floor and ceiling functions on \(\R\) are defined by \[ \lfloor x \rfloor = \max\{n \in \Z: n \le x\}, \; \lceil x \rceil = \min\{n \in \Z: n \ge x\}, \quad x \in \R\] Find the probability density function of \(Y\) and sketch the graph in each of the following cases. Compare the distributions in the last exercise. For \( y \in \R \), \[ G(y) = \P(Y \le y) = \P\left[r(X) \in (-\infty, y]\right] = \P\left[X \in r^{-1}(-\infty, y]\right] = \int_{r^{-1}(-\infty, y]} f(x) \, dx \] Keep the default parameter values and run the experiment in single step mode a few times. It is also interesting when a parametric family is closed or invariant under some transformation on the variables in the family. There is a partial converse to the previous result, for continuous distributions. Hence for \(x \in \R\), \(\P(X \le x) = \P\left[F^{-1}(U) \le x\right] = \P[U \le F(x)] = F(x)\).
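Here is a minimal Python sketch of this polar-coordinate simulation, assuming the standard Rayleigh variable \( R \) is itself simulated as \( \sqrt{-2 \ln U_1} \) and \( \Theta = 2\pi U_2 \) for random numbers \( U_1, U_2 \) (the Box-Muller form; these simulation formulas and the sample size are assumptions of the sketch, not taken from the text):

```python
import numpy as np

rng = np.random.default_rng(3)  # arbitrary seed
u1, u2 = rng.random(100_000), rng.random(100_000)

# R has the standard Rayleigh distribution, Theta is uniform on (0, 2*pi);
# then X = R cos(Theta) and Y = R sin(Theta) are independent standard normals.
r = np.sqrt(-2 * np.log(u1))
theta = 2 * np.pi * u2
x, y = r * np.cos(theta), r * np.sin(theta)

# Means near 0, variances near 1, and correlation near 0.
print(x.mean(), x.var(), y.var(), np.corrcoef(x, y)[0, 1])
```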
Then \[ \P\left(T_i \lt T_j \text{ for all } j \ne i\right) = \frac{r_i}{\sum_{j=1}^n r_j} \] Find the probability density function of \(X = \ln T\). If \(B \subseteq T\) then \[\P(\bs Y \in B) = \P[r(\bs X) \in B] = \P[\bs X \in r^{-1}(B)] = \int_{r^{-1}(B)} f(\bs x) \, d\bs x\] Using the change of variables \(\bs x = r^{-1}(\bs y)\), \(d\bs x = \left|\det \left( \frac{d \bs x}{d \bs y} \right)\right|\, d\bs y\) we have \[\P(\bs Y \in B) = \int_B f[r^{-1}(\bs y)] \left|\det \left( \frac{d \bs x}{d \bs y} \right)\right|\, d \bs y\] So it follows that \(g\) defined in the theorem is a PDF for \(\bs Y\). Let \(\bs Y = \bs a + \bs B \bs X\), where \(\bs a \in \R^n\) and \(\bs B\) is an invertible \(n \times n\) matrix. A linear transformation of a multivariate normal random vector also has a multivariate normal distribution. Note that \(\bs Y\) takes values in \(T = \{\bs a + \bs B \bs x: \bs x \in S\} \subseteq \R^n\). Both results follow from the previous result above since \( f(x, y) = g(x) h(y) \) is the probability density function of \( (X, Y) \). Then the inverse transformation is \( u = x, \; v = z - x \) and the Jacobian is 1. Note that the minimum \(U\) in part (a) has the exponential distribution with parameter \(r_1 + r_2 + \cdots + r_n\). With \(n = 5\), run the simulation 1000 times and compare the empirical density function and the probability density function. Suppose now that we have a random variable \(X\) for the experiment, taking values in a set \(S\), and a function \(r\) from \( S \) into another set \( T \).

Recall that the sign function on \( \R \) (not to be confused, of course, with the sine function) is defined as follows: \[ \sgn(x) = \begin{cases} -1, & x \lt 0 \\ 0, & x = 0 \\ 1, & x \gt 0 \end{cases} \] Suppose again that \( X \) has a continuous distribution on \( \R \) with distribution function \( F \) and probability density function \( f \), and suppose in addition that the distribution of \( X \) is symmetric about 0. Part (a) holds trivially when \( n = 1 \). More simply, \(X = \frac{1}{U^{1/a}}\), since \(1 - U\) is also a random number. In this case, the sequence of variables is a random sample of size \(n\) from the common distribution. In general, beta distributions are widely used to model random proportions and probabilities, as well as physical quantities that take values in closed bounded intervals (which after a change of units can be taken to be \( [0, 1] \)). \(g(t) = a e^{-a t}\) for \(0 \le t \lt \infty\) where \(a = r_1 + r_2 + \cdots + r_n\), \(H(t) = \left(1 - e^{-r_1 t}\right) \left(1 - e^{-r_2 t}\right) \cdots \left(1 - e^{-r_n t}\right)\) for \(0 \le t \lt \infty\), \(h(t) = n r e^{-r t} \left(1 - e^{-r t}\right)^{n-1}\) for \(0 \le t \lt \infty\). Multiplying by the positive constant \(b\) changes the size of the unit of measurement.
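The race probability at the start of this discussion is easy to corroborate by simulation; here is a brief Python sketch (the particular rates, sample size, and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(5)  # arbitrary seed
rates = np.array([1.0, 2.0, 3.0])  # r_1, r_2, r_3

# T_i has the exponential distribution with rate r_i; the probability
# that T_i is the smallest is r_i / (r_1 + ... + r_n).
t = rng.exponential(scale=1 / rates, size=(100_000, len(rates)))
winners = t.argmin(axis=1)

empirical = np.bincount(winners, minlength=len(rates)) / len(winners)
print(empirical)            # approximately [1/6, 2/6, 3/6]
print(rates / rates.sum())  # exact probabilities
```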
The precise statement of this result is the central limit theorem, one of the fundamental theorems of probability. Suppose that \((X, Y)\) has probability density function \(f\). For our next discussion, we will consider transformations that correspond to common distance-angle based coordinate systems: polar coordinates in the plane, and cylindrical and spherical coordinates in 3-dimensional space. By far the most important special case occurs when \(X\) and \(Y\) are independent. In terms of the Poisson model, \( X \) could represent the number of points in a region \( A \) and \( Y \) the number of points in a region \( B \) (of the appropriate sizes so that the parameters are \( a \) and \( b \) respectively). However, it is a well-known property of the normal distribution that linear transformations of normal random vectors are normal random vectors. \(G(z) = 1 - \frac{1}{1 + z}, \quad 0 \lt z \lt \infty\), \(g(z) = \frac{1}{(1 + z)^2}, \quad 0 \lt z \lt \infty\), \(h(z) = a^2 z e^{-a z}\) for \(0 \lt z \lt \infty\), \(h(z) = \frac{a b}{b - a} \left(e^{-a z} - e^{-b z}\right)\) for \(0 \lt z \lt \infty\).

This section studies how the distribution of a random variable changes when the variable is transformed in a deterministic way. Linear transformations (or more technically affine transformations) are among the most common and important transformations. Then \[ \P(Z \in A) = \P(X + Y \in A) = \int_C f(u, v) \, d(u, v) \] Now use the change of variables \( x = u, \; z = u + v \). In the reliability setting, where the random variables are nonnegative, the last statement means that the product of \(n\) reliability functions is another reliability function. With \(n = 4\), run the simulation 1000 times and note the agreement between the empirical density function and the probability density function. Returning to the case of general \(n\), note that \(T_i \lt T_j\) for all \(j \ne i\) if and only if \(T_i \lt \min\left\{T_j: j \ne i\right\}\).

\(f^{*2}(z) = \begin{cases} z, & 0 \lt z \lt 1 \\ 2 - z, & 1 \lt z \lt 2 \end{cases}\), \(f^{*3}(z) = \begin{cases} \frac{1}{2} z^2, & 0 \lt z \lt 1 \\ 1 - \frac{1}{2}(z - 1)^2 - \frac{1}{2}(2 - z)^2, & 1 \lt z \lt 2 \\ \frac{1}{2} (3 - z)^2, & 2 \lt z \lt 3 \end{cases}\), \( g(u) = \frac{3}{2} u^{1/2} \) for \(0 \lt u \le 1\), \( h(v) = 6 v^5 \) for \( 0 \le v \le 1 \), \( k(w) = \frac{3}{w^4} \) for \( 1 \le w \lt \infty \), \(g(c) = \frac{3}{4 \pi^4} c^2 (2 \pi - c)\) for \( 0 \le c \le 2 \pi\), \(h(a) = \frac{3}{8 \pi^2} \sqrt{a}\left(2 \sqrt{\pi} - \sqrt{a}\right)\) for \( 0 \le a \le 4 \pi\), \(k(v) = \frac{3}{\pi} \left[1 - \left(\frac{3}{4 \pi}\right)^{1/3} v^{1/3} \right]\) for \( 0 \le v \le \frac{4}{3} \pi\). These can be combined succinctly with the formula \( f(x) = p^x (1 - p)^{1 - x} \) for \( x \in \{0, 1\} \). Find the probability density function of each of the following: Suppose that \(X\), \(Y\), and \(Z\) are independent, and that each has the standard uniform distribution.
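The convolution powers of the standard uniform density are easy to check numerically; here is a brief Python sketch for the triangular density \(f^{*2}\) (the sample size, bin count, and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(13)  # arbitrary seed

# The sum of two independent standard uniforms has the triangular density
# f*2(z) = z on (0, 1) and 2 - z on (1, 2).
z = rng.random(200_000) + rng.random(200_000)

hist, edges = np.histogram(z, bins=40, range=(0, 2), density=True)
mids = (edges[:-1] + edges[1:]) / 2
f2 = np.where(mids < 1, mids, 2 - mids)
print(np.max(np.abs(hist - f2)))  # small: the histogram matches f*2
```

The same check applies to \(f^{*3}\) by adding a third uniform sample and comparing against the piecewise quadratic density above.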
For \(i \in \N_+\), the probability density function \(f\) of the trial variable \(X_i\) is \(f(x) = p^x (1 - p)^{1 - x}\) for \(x \in \{0, 1\}\). For the "only if" part, suppose \( \bs U \) is a normal random vector. It suffices to show that \( \bs V = \bs m + \bs A \bs Z \), with \( \bs Z \) as in the statement of the theorem and suitably chosen \( \bs m \) and \( \bs A \), has the same distribution as \( \bs U \). In many respects, the geometric distribution is a discrete version of the exponential distribution. \( f \) increases and then decreases, with mode \( x = \mu \). So \((U, V, W)\) is uniformly distributed on \(T\). As usual, the most important special case of this result is when \( X \) and \( Y \) are independent. In this case, \( D_z = \{0, 1, \ldots, z\} \) for \( z \in \N \). A multivariate normal distribution is a vector of normally distributed variables such that any linear combination of the variables is also normally distributed. Set \(k = 1\) (this gives the minimum \(U\)). Linear transformations (addition of a constant and multiplication by a constant) change the center (mean) and spread (standard deviation) of a distribution in a predictable way. \(\left|X\right|\) has probability density function \(g\) given by \(g(y) = f(y) + f(-y)\) for \(y \in [0, \infty)\). This general method is referred to, appropriately enough, as the distribution function method. \(U = \min\{X_1, X_2, \ldots, X_n\}\) has distribution function \(G\) given by \(G(x) = 1 - \left[1 - F_1(x)\right] \left[1 - F_2(x)\right] \cdots \left[1 - F_n(x)\right]\) for \(x \in \R\). A linear transformation of a multivariate normal random variable is still multivariate normal.

Suppose that \( (X, Y, Z) \) has a continuous distribution on \( \R^3 \) with probability density function \( f \), and that \( (R, \Theta, Z) \) are the cylindrical coordinates of \( (X, Y, Z) \). Given our previous result, the one for cylindrical coordinates should come as no surprise. Suppose that \(r\) is strictly increasing on \(S\). Conversely, any continuous distribution supported on an interval of \(\R\) can be transformed into the standard uniform distribution. Find the distribution function and probability density function of the following variables. The change of temperature measurement from Fahrenheit to Celsius is a location and scale transformation. Convolution (either discrete or continuous) satisfies the following properties, where \(f\), \(g\), and \(h\) are probability density functions of the same type. Then \(U\) is the lifetime of the series system which operates if and only if each component is operating. Suppose that \((T_1, T_2, \ldots, T_n)\) is a sequence of independent random variables, and that \(T_i\) has the exponential distribution with rate parameter \(r_i \gt 0\) for each \(i \in \{1, 2, \ldots, n\}\). \(X = a + U(b - a)\) where \(U\) is a random number. Suppose that \(X\) and \(Y\) are independent random variables, each with the standard normal distribution. The commutative property of convolution follows from the commutative property of addition: \( X + Y = Y + X \).
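To make the multivariate statement concrete, here is a small Python sketch checking that \( \bs Y = \bs a + \bs B \bs X \) has mean \( \bs a + \bs B \bs \mu \) and covariance matrix \( \bs B \bs \Sigma \bs B^T \) (the particular \( \bs a \), \( \bs B \), \( \bs \mu \), and \( \bs \Sigma \) below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(17)  # arbitrary seed

mu = np.array([1.0, -2.0])
sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
a = np.array([3.0, 4.0])
B = np.array([[1.0, 2.0],
              [0.0, 1.0]])  # invertible

# Sample X ~ N(mu, sigma) and form the affine transformation Y = a + B X.
x = rng.multivariate_normal(mu, sigma, size=200_000)
y = a + x @ B.T

print(y.mean(axis=0))   # approximately a + B mu
print(a + B @ mu)
print(np.cov(y.T))      # approximately B sigma B^T
print(B @ sigma @ B.T)
```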
The Erlang distribution is studied in more detail in the chapter on the Poisson Process, and in greater generality, the gamma distribution is studied in the chapter on Special Distributions. In the context of the Poisson model, part (a) means that the \( n \)th arrival time is the sum of the \( n \) independent interarrival times, which have a common exponential distribution. As we all know from calculus, the Jacobian of the transformation is \( r \). Recall that a Bernoulli trials sequence is a sequence \((X_1, X_2, \ldots)\) of independent, identically distributed indicator random variables. Since \(1 - U\) is also a random number, a simpler solution is \(X = -\frac{1}{r} \ln U\). When appropriately scaled and centered, the distribution of \(Y_n\) converges to the standard normal distribution as \(n \to \infty\). In the Poisson case, with \(X\) and \(Y\) independent Poisson variables with parameters \(a\) and \(b\), \[ (g * h)(z) = \sum_{x=0}^z e^{-a} \frac{a^x}{x!} e^{-b} \frac{b^{z-x}}{(z-x)!} = e^{-(a+b)} \frac{1}{z!} \sum_{x=0}^z \binom{z}{x} a^{x} b^{z - x} = e^{-(a+b)} \frac{(a + b)^z}{z!}, \quad z \in \N \] \(U = \min\{X_1, X_2, \ldots, X_n\}\) has distribution function \(G\) given by \(G(x) = 1 - \left[1 - F(x)\right]^n\) for \(x \in \R\). Vary \(n\) with the scroll bar and set \(k = n\) each time (this gives the maximum \(V\)). Suppose that \(X\) has the exponential distribution with rate parameter \(a \gt 0\), \(Y\) has the exponential distribution with rate parameter \(b \gt 0\), and that \(X\) and \(Y\) are independent. \(g(u, v) = \frac{1}{2}\) for \((u, v) \) in the square region \( T \subset \R^2 \) with vertices \(\{(0,0), (1,1), (2,0), (1,-1)\}\). The distribution of \( R \) is the (standard) Rayleigh distribution, and is named for John William Strutt, Lord Rayleigh. This distribution is often used to model random times such as failure times and lifetimes.

Once again, it's best to give the inverse transformation: \( x = r \sin \phi \cos \theta \), \( y = r \sin \phi \sin \theta \), \( z = r \cos \phi \). This follows from part (a) by taking derivatives with respect to \( y \). Chi-square distributions are studied in detail in the chapter on Special Distributions. The dice are both fair, but the first die has faces labeled 1, 2, 2, 3, 3, 4 and the second die has faces labeled 1, 3, 4, 5, 6, 8. The minimum and maximum transformations \[U = \min\{X_1, X_2, \ldots, X_n\}, \quad V = \max\{X_1, X_2, \ldots, X_n\} \] are very important in a number of applications. In the continuous case, \( R \) and \( S \) are typically intervals, so \( T \) is also an interval as is \( D_z \) for \( z \in T \). Using the theorem on quotients above, the PDF \( f \) of \( T \) is given by \[f(t) = \int_{-\infty}^\infty \phi(x) \phi(t x) |x| \, dx = \frac{1}{2 \pi} \int_{-\infty}^\infty e^{-(1 + t^2) x^2/2} |x| \, dx, \quad t \in \R\] Using symmetry and a simple substitution, \[ f(t) = \frac{1}{\pi} \int_0^\infty x e^{-(1 + t^2) x^2/2} \, dx = \frac{1}{\pi (1 + t^2)}, \quad t \in \R \]

This page titled 3.7: Transformations of Random Variables is shared under a CC BY 2.0 license and was authored, remixed, and/or curated by Kyle Siegrist (Random Services) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
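This quotient derivation is easy to corroborate numerically; here is a short Python sketch comparing the ratio of two independent standard normals with the standard Cauchy quartiles (the sample size and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(23)  # arbitrary seed

# T = X / Y with X, Y independent standard normals has the standard
# Cauchy distribution, f(t) = 1 / (pi * (1 + t^2)).
x = rng.standard_normal(200_000)
y = rng.standard_normal(200_000)
t = x / y

# The standard Cauchy quartiles are -1, 0, 1.
print(np.quantile(t, [0.25, 0.5, 0.75]))  # approximately [-1, 0, 1]
```

Quantiles are the natural check here for the same reason as before: the Cauchy distribution has no mean or variance to compare.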