Determine whether each of the following statements is true (i.e., always true) or false (i.e., not always true).

1. Let X be a random variable that takes values between 0 and c only, for some c≥0, so that P(0≤X≤c)=1. Then, var(X)≤c^2/4. TRUE

2. X and Y are continuous random variables. If X∼N(μ,σ^2) (i.e., normal with mean μ and variance σ^2), Y=aX+b, and a>0, then Y∼N(aμ+b,aσ^2). FALSE

3. The expected value of a non-negative continuous random variable X, which is defined by E[X]=∫₀^∞ x fX(x) dx, also satisfies E[X]=∫₀^∞ P(X>t) dt. TRUE

Answered in full...

1. True: Since P(0≤X≤c)=1, X is confined to the interval [0, c]. For a random variable with a bounded range, the variance is maximized by pushing all probability mass to the endpoints: the extreme case is X taking the values 0 and c with probability 1/2 each. That distribution has mean c/2 and variance E[(X - c/2)^2] = (c/2)^2 = c^2/4, since both values sit at distance c/2 from the mean. No distribution on [0, c] can exceed this, so var(X) ≤ c^2/4 always.

2. False: If X follows a normal distribution with mean μ and variance σ^2, and Y is defined as Y = aX + b, then Y follows a normal distribution with mean aμ + b. However, the variance of Y is not aσ^2 in general. Instead, the variance of Y is given by Var(Y) = a^2σ^2; the claimed value agrees with this only in the special case a = 1.

3. True: The expected value of a non-negative continuous random variable X is defined as the integral of x fX(x) over the entire range of X. This integral represents the weighted average of the values of X, where the weights are given by the probability density function fX(x). On the other hand, P(X>t) represents the probability that X is greater than t. Each value x of X contributes to P(X>t) for exactly the values t in [0, x), so integrating P(X>t) over t from 0 to infinity accumulates a total contribution of x from each value x, which is precisely the expected value. Therefore, we can conclude that E[X] = ∫₀^∞ P(X>t) dt.

1. To determine whether this statement is true, we need to examine the relationship between the variance of a random variable and its range. The variance of a random variable X is defined as Var(X) = E[(X - E[X])^2], where E[X] is the expected value of X.

Expanding the square, we can rewrite Var(X) as Var(X) = E[X^2] - (E[X])^2; this identity holds for any random variable.

Since 0 ≤ X ≤ c, we have X^2 ≤ cX pointwise, and taking expectations gives E[X^2] ≤ cE[X].

Writing m = E[X], which lies between 0 and c, this yields Var(X) = E[X^2] - m^2 ≤ cm - m^2 = m(c - m).

The quadratic m(c - m) is maximized at m = c/2, where it equals (c/2)(c/2) = c^2/4.

Therefore, Var(X) = E[X^2] - (E[X])^2 ≤ m(c - m) ≤ c^2/4, with equality attained by the two-point distribution that puts probability 1/2 on each of 0 and c.

Hence, the statement is TRUE.
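As a quick sanity check, here is a minimal simulation sketch (Python with NumPy assumed; c = 2 and the sample size are arbitrary illustrative choices). It compares the extremal two-point distribution on {0, c}, which attains the bound, against a uniform distribution, which stays under it:

```python
# A minimal sketch (assuming NumPy; c = 2 and the sample size are arbitrary
# illustrative choices) checking the bound var(X) <= c^2/4 by simulation.
import numpy as np

rng = np.random.default_rng(0)
c, n = 2.0, 10**6

# Extremal case: X takes only the endpoint values 0 and c, each with
# probability 1/2. This distribution attains the bound exactly.
x_two_point = rng.choice([0.0, c], size=n)
print(np.var(x_two_point), c**2 / 4)   # both ~ 1.0: the bound is attained

# Generic case: X uniform on [0, c] has variance c^2/12, below the bound.
x_uniform = rng.uniform(0.0, c, size=n)
print(np.var(x_uniform), c**2 / 12)    # both ~ 0.333, under c^2/4 = 1.0
```

The two-point case also shows that the constant c^2/4 in the bound cannot be improved.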

2. To determine whether this statement is true, we need to examine the properties of the normal distribution.

If X ~ N(μ, σ^2), then the variable Y = aX + b follows a normal distribution with mean aμ + b and variance (a^2)(σ^2).

Therefore, Y ~ N(aμ + b, (a^2)(σ^2)).

Since this statement claims that Y ~ N(aμ+b, aσ^2), writing aσ^2 where the variance should be a^2σ^2, it is not always true.

Hence, the statement is FALSE.
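As a quick numerical check, the following sketch (Python with NumPy assumed; the parameter values mu = 1, sigma = 2, a = 3, b = -4 are arbitrary) simulates Y = aX + b and compares its sample variance against both a^2σ^2 and the claimed aσ^2:

```python
# A minimal simulation sketch (assuming NumPy; mu = 1, sigma = 2, a = 3,
# b = -4 are arbitrary illustrative values) showing var(aX + b) = a^2*sigma^2.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, a, b = 1.0, 2.0, 3.0, -4.0

x = rng.normal(mu, sigma, size=10**6)
y = a * x + b

print(np.mean(y), a * mu + b)       # ~ -1.0 vs -1.0: mean matches a*mu + b
print(np.var(y), a**2 * sigma**2)   # ~ 36.0 vs 36.0: variance is a^2*sigma^2
print(a * sigma**2)                 # 12.0: the claimed variance, which differs
```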

3. To determine whether this statement is true, we need to examine the properties of the expected value of a continuous random variable.

The expected value E[X] of a non-negative continuous random variable X is indeed defined as E[X] = ∫₀^∞ x fX(x) dx, where fX(x) is the probability density function (pdf) of X.

To show that this is equivalent to E[X] = ∫₀^∞ P(X > t) dt, we can use the definition of the complementary cumulative distribution function (CCDF).

The CCDF of X is defined as P(X > t), which represents the probability that X is greater than some given value t.

Writing P(X > t) = ∫ₜ^∞ fX(x) dx and swapping the order of integration (Fubini's theorem, which applies because the integrand is non-negative), integrating P(X > t) with respect to t from 0 to infinity gives:

∫₀^∞ P(X > t) dt = ∫₀^∞ ∫ₜ^∞ fX(x) dx dt = ∫₀^∞ fX(x) (∫₀ˣ dt) dx = ∫₀^∞ x fX(x) dx = E[X].

Hence, the statement is TRUE.
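As a concrete check of the identity, here is a small numerical sketch (Python with SciPy assumed; the exponential distribution is an arbitrary choice of a non-negative X) computing both integrals:

```python
# A minimal numerical sketch (assuming NumPy/SciPy; the choice of
# X ~ Exponential(1) is arbitrary) checking E[X] = integral of P(X > t).
import numpy as np
from scipy import integrate, stats

dist = stats.expon()  # a non-negative distribution with E[X] = 1

mean_from_pdf, _ = integrate.quad(lambda x: x * dist.pdf(x), 0, np.inf)
mean_from_tail, _ = integrate.quad(lambda t: dist.sf(t), 0, np.inf)

print(mean_from_pdf, mean_from_tail)  # both ~ 1.0
```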

In summary:
1. TRUE
2. FALSE
3. TRUE

To determine whether each statement is true or false, we can break down the statements and analyze their components.

1. Statement: Let X be a random variable that takes values between 0 and c only, for some c≥0, so that P(0≤X≤c)=1. Then, var(X)≤c^2/4.

To determine the truth of this statement, we need to understand the concepts of random variables and variance.

- A random variable is a variable that can take on different values based on the outcomes of a random process or experiment.
- Variance is a measure of how spread out the values of a random variable are around the mean.

In this statement, X is a random variable that can take values between 0 and c only, with a probability of 1. This means that X is confined to this range and will always fall within it.

The variance of X (denoted as var(X)) is defined as the average of the squared differences of each value of X from its mean.

To see how the bound plays out in a concrete case, consider X uniformly distributed on [0, c]. By symmetry, the mean of this distribution (denoted as μ) is the midpoint of the range:

μ = (0 + c)/2 = c/2.

Next, we compute the variance using the formula:

var(X) = E[(X - μ)^2],

where E represents the expectation or average.

For the uniform distribution on [0, c], the probability density function fX(x) is constant on the interval. Therefore, fX(x) = 1/c for 0 ≤ x ≤ c, and 0 otherwise.

Plugging these values into the expectation formula, we get:

var(X) = E[(X - c/2)^2]
= ∫[0,c] (x - c/2)^2 * (1/c) dx
= 1/c ∫[0,c] (x^2 - cx + c^2/4) dx
= 1/c [(x^3/3 - c/2 * x^2 + c^2/4 * x)] ∣[0,c]
= 1/c [(c^3/3 - c/2 * c^2 + c^2/4 * c) - (0)]
= 1/c [(c^3/3 - c^3/2 + c^3/4)]
= 1/c * (c^3/12)
= c^2/12.

Now we compare c^2/12 with c^2/4:

c^2/12 ≤ c^2/4.

Dividing both sides by c^2 (assuming c > 0), this reduces to:

1/12 ≤ 1/4,

which clearly holds, so the uniform distribution respects the bound. Note that this is an illustration rather than a proof: the general argument given earlier (Var(X) ≤ E[X](c - E[X]) ≤ c^2/4) covers every distribution on [0, c], with equality attained by the two-point distribution on {0, c}. Therefore, the statement is true.
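The uniform-variance integral computed above can be spot-checked numerically; here is a minimal sketch (Python with SciPy assumed, with c = 5 chosen arbitrarily):

```python
# A minimal sketch (assuming SciPy; c = 5 is an arbitrary illustrative value)
# checking the integral above by numerical quadrature.
from scipy import integrate

c = 5.0
var_x, _ = integrate.quad(lambda x: (x - c / 2) ** 2 * (1 / c), 0, c)

print(var_x)       # 2.0833... = c^2/12
print(c**2 / 12)   # 2.0833...
print(c**2 / 4)    # 6.25: the bound, comfortably above c^2/12
```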

2. Statement: X and Y are continuous random variables. If X∼N(μ,σ^2), Y=aX+b, and a>0, then Y∼N(aμ+b,aσ^2).

To determine the truth of this statement, we need to understand the concept of transformations of random variables and the properties of normal distributions.

- In this statement, X is a normally distributed random variable with mean μ and variance σ^2 (denoted as X∼N(μ,σ^2)).
- Y is a new random variable formed by transforming X, where Y=aX+b. Here, a is a scaling factor and b is a shift factor.

To find the distribution of Y, we need to determine its mean and variance.

The mean of Y (denoted as μY) can be found by substituting the transformation into the mean of X:

μY = aμ + b.

The variance of Y (denoted as var(Y)) can be found using the properties of variances and the transformation of variables:

var(Y) = a^2 * var(X).

The given statement, however, claims that Y∼N(aμ+b,aσ^2).

To verify the truth of this claim, we compare the derived mean and variance of Y with the claimed mean and variance.

The derived mean aμ + b matches the claim, but the derived variance is var(Y) = a^2 * var(X) = a^2σ^2, while the statement claims aσ^2. Since these differ whenever a ≠ 1, the given statement that Y∼N(aμ+b,aσ^2) is not always true; the correct distribution is Y∼N(aμ+b,a^2σ^2). Hence, the statement is false.
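One point the mean/variance computation leaves implicit is why Y is normal at all; for a > 0 this follows from the change-of-variables formula f_Y(y) = f_X((y - b)/a) / a. Here is a small symbolic sketch (Python with SymPy assumed) confirming that this transformed density is exactly the N(aμ+b, a^2σ^2) density:

```python
# A symbolic sketch (assuming SymPy) of the change-of-variables check:
# for a > 0, f_Y(y) = f_X((y - b)/a) / a should equal the
# N(a*mu + b, a^2*sigma^2) density.
import sympy as sp

y, mu, b = sp.symbols('y mu b', real=True)
a, sigma = sp.symbols('a sigma', positive=True)

def normal_pdf(z, m, s):
    # Density of N(m, s^2) evaluated at z.
    return sp.exp(-(z - m)**2 / (2 * s**2)) / (s * sp.sqrt(2 * sp.pi))

f_y = normal_pdf((y - b) / a, mu, sigma) / a    # density of Y = aX + b
target = normal_pdf(y, a * mu + b, a * sigma)   # N(a*mu+b, a^2*sigma^2)

print(sp.simplify(f_y - target))                # 0: the densities agree
```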

3. Statement: The expected value of a non-negative continuous random variable X, which is defined by E[X]=∫₀^∞ x fX(x) dx, also satisfies E[X]=∫₀^∞ P(X>t) dt.

To determine the truth of this statement, we need to understand the concept of expected value and the connection between expected value and the survival function of a random variable.

- The expected value (denoted as E[X]) is a measure of the center or average of a random variable X. It represents the weighted sum of all possible values of X, with the weights given by their probabilities.

In this statement, the expected value of X is defined using the integral form:

E[X] = ∫₀^∞ x fX(x) dx,

where fX(x) is the probability density function of X.

The survival function of a random variable X is defined as the probability that X is greater than some value t:

P(X > t).

To connect the expected value with the survival function, we can use integration by parts, together with the fact that the survival function's derivative is d/dx P(X > x) = -fX(x).

This lets us rewrite the integral in the statement as:

∫₀^∞ x fX(x) dx = -∫₀^∞ x d[P(X > x)].

Applying integration by parts with u = x and dv = d[P(X > x)], we get:

∫₀^∞ x fX(x) dx = -[x * P(X > x) ∣₀^∞ - ∫₀^∞ P(X > x) dx].

The boundary term x * P(X > x) ∣₀^∞ evaluates to 0: it is 0 at x = 0, and it vanishes as x → ∞ whenever E[X] is finite, since x * P(X > x) ≤ E[X · 1{X > x}] → 0.

With the boundary term gone, the two minus signs cancel, and the integral becomes:

∫₀^∞ x fX(x) dx = ∫₀^∞ P(X > x) dx.

Comparing this result with the statement, we can see that E[X] = ∫₀^∞ P(X > t) dt is indeed true. Therefore, the statement is true.
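As a final spot-check, the identity can be verified symbolically for a concrete non-negative distribution; here is a minimal sketch (Python with SymPy assumed, with X ~ Exponential(λ) as an arbitrary example):

```python
# A symbolic sketch (assuming SymPy; X ~ Exponential(lam) is an arbitrary
# concrete choice) verifying both sides of the identity give E[X] = 1/lam.
import sympy as sp

x, t = sp.symbols('x t', nonnegative=True)
lam = sp.symbols('lam', positive=True)

f = lam * sp.exp(-lam * x)                       # pdf of Exponential(lam)

e_from_pdf = sp.integrate(x * f, (x, 0, sp.oo))  # definition of E[X]
tail = sp.integrate(f, (x, t, sp.oo))            # P(X > t) = exp(-lam*t)
e_from_tail = sp.integrate(tail, (t, 0, sp.oo))  # tail-integral formula

print(sp.simplify(e_from_pdf))   # 1/lam
print(sp.simplify(e_from_tail))  # 1/lam
```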