Posts

Showing posts from March, 2022

Expected Values, Variance, Covariance, and Correlation

The Expected Value of a Random Variable

The concept of the expected value of a random variable parallels the notion of a weighted average. The possible values of the random variable are weighted by their probabilities, as specified in the following definition. $E(X)$ is also referred to as the mean of $X$ and is often denoted by $\mu$ or $\mu_X$. It might be helpful to think of the expected value of $X$ as the center of mass of the frequency function: imagine placing the masses $p(x_i)$ at the points $x_i$ on a beam; the balance point of the beam is the expected value of $X$. The definition of expectation for a continuous random variable is a fairly obvious extension of the discrete case—summation is replaced by integration.

Expectations of Functions of Random Variables

We often need to find $E[g(X)]$, where $X$ is a random variable and $g$ is a fixed function. Now suppose that $Y = g(X_1, \dots, X_n)$, where the $X_i$ have a joint distribution, and that we want to find $E(Y)$. We do not have to find…
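The weighted-average idea above can be sketched numerically. This is an illustrative example (a fair six-sided die is an assumption, not from the excerpt) showing both $E(X)$ and $E[g(X)]$ computed directly from the frequency function, without first deriving the distribution of $g(X)$:

```python
# Expected value of a discrete random variable as a probability-weighted
# average. Assumed example: a fair six-sided die.
values = [1, 2, 3, 4, 5, 6]
probs = [1 / 6] * 6

# E(X) = sum of x_i * p(x_i): the "center of mass" of the frequency function.
mean = sum(x * p for x, p in zip(values, probs))

# E[g(X)] for g(x) = x^2, computed directly from the distribution of X.
second_moment = sum(x**2 * p for x, p in zip(values, probs))

print(mean)           # 3.5
print(second_moment)  # 15.1666... (= 91/6)
```

The same pattern works for any discrete distribution: only the `values` and `probs` lists change.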

Joint Distributions

Joint distributions are concerned with the joint probability structure of two or more random variables defined on the same sample space. Joint distributions arise naturally in many applications:

• The joint probability distribution of the x, y, and z components of wind velocity can be experimentally measured in studies of atmospheric turbulence.
• The joint distribution of the values of various physiological variables in a population of patients is often of interest in medical studies.
• A model for the joint distribution of age and length in a population of fish can be used to estimate the age distribution from the length distribution. The age distribution is relevant to the setting of reasonable harvesting policies.

The joint behavior of two random variables, $X$ and $Y$, is determined by the cumulative distribution function $F(x, y) = P(X \le x, Y \le y)$, regardless of whether $X$ and $Y$ are continuous or discrete. The cdf gives the probability that the point $(X, Y)$ belongs to a semi-infinite…
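The joint cdf $F(x, y) = P(X \le x, Y \le y)$ can be sketched for a small discrete case. The setup here is an assumption chosen for illustration (it is not one of the examples above): toss two fair coins, let $X$ be the outcome of the first coin (0 or 1) and $Y$ the total number of heads, so $X$ and $Y$ are dependent.

```python
from itertools import product

# Assumed example: two fair coin tosses. X = first coin (0 or 1),
# Y = total number of heads. Build the joint pmf over all 4 outcomes.
pmf = {}
for c1, c2 in product([0, 1], repeat=2):
    x, y = c1, c1 + c2
    pmf[(x, y)] = pmf.get((x, y), 0) + 0.25

def F(x, y):
    """Joint cdf F(x, y) = P(X <= x, Y <= y), summed from the joint pmf."""
    return sum(p for (a, b), p in pmf.items() if a <= x and b <= y)

print(F(0, 1))  # P(X <= 0, Y <= 1) = 0.5
print(F(1, 2))  # total probability mass = 1.0
```

The same summation works whether or not $X$ and $Y$ are independent; only the joint pmf changes.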

Random Variables- Discrete and Continuous

Discrete Random Variables

A random variable is essentially a random number. As motivation for a definition, let us consider an example. A coin is tossed three times, and the sequence of heads and tails is observed; thus, $\Omega = \{hhh, hht, htt, hth, ttt, tth, thh, tht\}$. Examples of random variables defined on $\Omega$ are (1) the total number of heads, (2) the total number of tails, and (3) the number of heads minus the number of tails. Each of these is a real-valued function defined on $\Omega$; that is, each is a rule that assigns a real number to every point $\omega \in \Omega$. Since the outcome in $\Omega$ is random, the corresponding number is random as well.

In general, a random variable is a function from $\Omega$ to the real numbers. Because the outcome of the experiment with sample space $\Omega$ is random, the number produced by the function is random as well. It is conventional to denote random variables by italic uppercase letters from the end of the alphabet. For…
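The three-toss example above translates almost literally into code: each random variable is just a function on the sample space. A minimal sketch (variable names are my own):

```python
from itertools import product

# Sample space for three coin tosses: all sequences of 'h' and 't'.
omega = [''.join(s) for s in product('ht', repeat=3)]

# Random variables are real-valued functions on the sample space.
X = lambda w: w.count('h')                 # (1) total number of heads
Y = lambda w: w.count('t')                 # (2) total number of tails
Z = lambda w: w.count('h') - w.count('t')  # (3) heads minus tails

# Frequency function of X, assuming the 8 outcomes are equally likely.
pmf_X = {k: sum(1 for w in omega if X(w) == k) / len(omega) for k in range(4)}
print(pmf_X)  # {0: 0.125, 1: 0.375, 2: 0.375, 3: 0.125}
```

Note that $Z = X - Y$ pointwise on $\Omega$, which mirrors how random variables are combined algebraically.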

Central Limit Theorem

The Law of Large Numbers

It is commonly believed that if a fair coin is tossed many times and the proportion of heads is calculated, that proportion will be close to 1/2. John Kerrich, a South African mathematician, tested this belief empirically while detained as a prisoner during World War II: he tossed a coin 10,000 times and observed 5067 heads. The law of large numbers is a mathematical formulation of this belief. The successive tosses of the coin are modeled as independent random trials. The random variable $X_i$ takes on the value 0 or 1 according to whether the $i$th trial results in a tail or a head, and the proportion of heads in $n$ trials is $\bar{X}_n = \frac{1}{n}\sum_{i=1}^{n} X_i$. The law of large numbers states that $\bar{X}_n$ approaches 1/2 in a sense that is specified by the following theorem.

Convergence in Distribution and the Central Limit Theorem

In applications, we often want to find $P(a < X < b)$ when we do not know the cdf of $X$ precisely; it is sometimes…
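A Kerrich-style experiment is easy to simulate. This sketch (my own, with an arbitrary fixed seed for reproducibility) shows $\bar{X}_n$ settling toward 1/2 as $n$ grows; the simulated counts are random and will not reproduce Kerrich's 5067.

```python
import random

random.seed(0)  # arbitrary seed, only for reproducibility

def head_proportion(n):
    """Proportion of heads (X_bar_n) in n independent fair coin tosses."""
    return sum(random.randint(0, 1) for _ in range(n)) / n

# The fluctuation around 1/2 shrinks as n grows, as the law of
# large numbers predicts.
for n in (100, 10_000, 1_000_000):
    print(n, head_proportion(n))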