[4:08 p. m., 10/7/2021] Ejh: 4. Let X1, …, Xn be i.i.d. normal variables following the distribution N(μ, τ), where μ is the mean and τ is the variance.

Denote by μ̂ and τ̂ the maximum likelihood estimators of μ and τ respectively, based on the i.i.d. observations X1, …, Xn.
(In our usual notation, τ = σ². We use τ in this problem to make clear that the parameter being estimated is not σ.)
(a)
Is the estimator 2μ̂² + τ̂ of 2μ² + τ asymptotically normal?
yes
no
[4:27 p. m., 10/7/2021] Ejh: Let
g(μ, τ) = 2μ² + τ
and let I(μ, τ) be the Fisher information matrix of X1 ~ N(μ, τ).
The asymptotic variance of g(μ̂, τ̂) is…
• ∇g(μ, τ)^T I(μ, τ) ∇g(μ, τ)
• ∇g(μ, τ)^T (I(μ, τ))^-1 ∇g(μ, τ)
• ∇g(μ, τ)^T I(μ, τ)
• ∇g(μ, τ)^T (I(μ, τ))^-1



c. Using the results from above and referring back to the homework solutions if necessary, compute the asymptotic variance of the estimator 2μ̂² + τ̂.
Hint: The inverse of a diagonal matrix (a 0; 0 b), where a, b ≠ 0, is the diagonal matrix (1/a 0; 0 1/b).

V(2μ² + τ) for the estimator 2μ̂² + τ̂

Use that the inverse of (a 0; 0 a) is (1/a 0; 0 1/a)
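A quick numerical check of the hint (a minimal numpy sketch; the values a = 3 and b = 5 are arbitrary examples):

import numpy as np

a, b = 3.0, 5.0                  # arbitrary example diagonal entries
D = np.diag([a, b])

# The inverse of diag(a, b) should be diag(1/a, 1/b)
print(np.linalg.inv(D))                                        # [[1/3, 0], [0, 1/5]]
print(np.allclose(np.linalg.inv(D), np.diag([1 / a, 1 / b])))  # True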

yes

second

16μ²τ + 2τ²
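As a sanity check on that answer, here is a minimal Monte Carlo sketch in Python (the values μ = 1, τ = 2 and the sample sizes are assumptions chosen for illustration): n·Var(2μ̂² + τ̂) should be close to 16μ²τ + 2τ² = 40 for these values.

import numpy as np

rng = np.random.default_rng(0)
mu, tau = 1.0, 2.0                    # example parameter values (assumed)
n, reps = 1000, 5000                  # sample size and number of replications

# reps samples of size n from N(mu, tau); note tau is the VARIANCE
X = rng.normal(mu, np.sqrt(tau), size=(reps, n))

mu_hat = X.mean(axis=1)               # MLE of mu
tau_hat = X.var(axis=1)               # MLE of tau (1/n normalization)
est = 2 * mu_hat**2 + tau_hat         # plug-in estimator of 2*mu^2 + tau

print(n * est.var())                  # empirical n * Var, roughly 40
print(16 * mu**2 * tau + 2 * tau**2)  # theoretical asymptotic variance: 40.0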

To determine if the estimator 2μ̂² + τ̂ of 2μ² + τ is asymptotically normal, we need to examine its asymptotic distribution. Since μ is the mean and τ is the variance of each observation, we are interested in the joint asymptotic behavior of (μ̂, τ̂) and, through g, of the estimator 2μ̂² + τ̂.

Under the usual regularity conditions (which the Gaussian model satisfies), the maximum likelihood estimator (μ̂, τ̂) has two key properties:

1. Consistency: (μ̂, τ̂) converges in probability to the true value (μ, τ) as the sample size increases.

2. Asymptotic normality: √n((μ̂, τ̂) − (μ, τ)) converges in distribution to N(0, (I(μ, τ))^-1).

Since g(μ, τ) = 2μ² + τ is continuously differentiable, the delta method then implies that g(μ̂, τ̂) = 2μ̂² + τ̂ is asymptotically normal as well, so the answer to (a) is yes.

To determine the asymptotic variance of the estimator 2μ̂² + τ̂, we need to calculate the Fisher information matrix I(μ, τ) and apply the delta method.

The Fisher information matrix is given by:

I(μ, τ) = E[−Hessian(log f(X; μ, τ))],

where f(x; μ, τ) is the density of the normal distribution with mean μ and variance τ, and the Hessian is taken with respect to the parameters (μ, τ).

Given that X1, …, Xn are i.i.d. N(μ, τ), the log-likelihood of the sample can be expressed as:

ℓ(μ, τ) = −(n/2) log(2πτ) − (1/(2τ)) ∑(xi − μ)²,

where xi are the observed values and the sum runs over i = 1, …, n.

By taking second partial derivatives of the log-likelihood with respect to μ and τ, we obtain the Hessian matrix:

Hessian(ℓ(μ, τ)) =
[−n/τ, −(1/τ²) ∑(xi − μ)]
[−(1/τ²) ∑(xi − μ), n/(2τ²) − (1/τ³) ∑(xi − μ)²]

Note that what enters the Fisher information is the expectation of this matrix at the true parameters, not its value at the maximum likelihood estimators.

Next, we calculate the Fisher information matrix by taking the expected value of minus the Hessian. Since E[Xi − μ] = 0 and E[(Xi − μ)²] = τ, the off-diagonal terms vanish and E[−Hessian(ℓ)] = (n/τ 0; 0 n/(2τ²)). Dividing by n gives the per-observation Fisher information of X1 ~ N(μ, τ):

I(μ, τ) =
[1/τ, 0]
[0, 1/(2τ²)]
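This expectation step can be checked symbolically. The following sympy sketch computes the Hessian of the log-density of a single observation and substitutes the moments E[(X − μ)²] = τ and E[X − μ] = 0:

import sympy as sp

x, mu = sp.symbols('x mu', real=True)
tau = sp.symbols('tau', positive=True)

# Log-density of a single observation X1 ~ N(mu, tau), tau = variance
logf = -sp.log(2 * sp.pi * tau) / 2 - (x - mu)**2 / (2 * tau)

H = sp.hessian(logf, (mu, tau))   # Hessian with respect to (mu, tau)

# Fisher information I = E[-H]: substitute E[(X - mu)^2] = tau,
# then x = mu to kill the remaining first-order term, since E[X - mu] = 0
I = (-H).subs((x - mu)**2, tau).subs(x, mu)
print(I)   # Matrix([[1/tau, 0], [0, 1/(2*tau**2)]])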

To compute the asymptotic variance of the estimator 2μ̂² + τ̂, we need the gradient of the function g(μ, τ) = 2μ² + τ:

∇g(μ, τ) = [4μ, 1]^T

Using the results from above, the asymptotic variance of the estimator is given by the delta method:

∇g(μ, τ)^T (I(μ, τ))^-1 ∇g(μ, τ),

where ∇g(μ, τ)^T is the transpose of the gradient and (I(μ, τ))^-1 is the inverse of the Fisher information matrix.

Applying the formula, and using the hint to invert the diagonal matrix, (I(μ, τ))^-1 = (τ 0; 0 2τ²), we have:

V(2μ² + τ) = ∇g(μ, τ)^T (I(μ, τ))^-1 ∇g(μ, τ) = [4μ, 1] (τ 0; 0 2τ²) [4μ, 1]^T = (4μ)²·τ + 1²·2τ².

Simplifying this expression, we get:

V(2μ² + τ) = 16μ²τ + 2τ².

Therefore, the asymptotic variance of the estimator 2μ̂² + τ̂ is 16μ²τ + 2τ², in agreement with the answer above.
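The same result can be reproduced symbolically; a short sympy sketch of the delta-method computation:

import sympy as sp

mu = sp.symbols('mu', real=True)
tau = sp.symbols('tau', positive=True)

# Per-observation Fisher information of N(mu, tau), as derived above
I = sp.Matrix([[1 / tau, 0], [0, 1 / (2 * tau**2)]])

g = 2 * mu**2 + tau
grad = sp.Matrix([sp.diff(g, mu), sp.diff(g, tau)])   # [4*mu, 1]

# Delta-method asymptotic variance: grad^T I^{-1} grad
V = sp.expand((grad.T * I.inv() * grad)[0, 0])
print(V)   # 16*mu**2*tau + 2*tau**2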

To determine whether the estimator 2μ̂² + τ̂ of 2μ² + τ is asymptotically normal, we need to consider the properties of the maximum likelihood estimator (MLE) and the central limit theorem.

The MLE is obtained by maximizing the likelihood function, which is the joint probability density function (pdf) of the i.i.d. observations. In this case, the likelihood function is the product of the pdfs of the normal variables.

To check whether the MLE-based estimator is asymptotically normal, we need to verify two conditions:

1. Consistency: the MLE should converge in probability to the true parameter value as the sample size increases, so that the estimator concentrates around the truth.

2. Asymptotic normality: √n times the estimation error should approach a normal distribution as the sample size increases. For MLEs this follows from standard asymptotic theory, which relies on the central limit theorem.

Both conditions hold here, and since g is continuously differentiable, the delta method carries them over to the estimator 2μ̂² + τ̂.

Now, moving on to the second part of the question regarding the calculation of the asymptotic variance of the estimator 2μ̂² + τ̂.

Given the function g(μ, τ) = 2μ² + τ, and using the Fisher information matrix I(μ, τ) of X1 ~ N(μ, τ), the asymptotic variance of the estimator can be calculated with the delta-method formula:

V(estimator) = ∇g(μ, τ)^T (I(μ, τ))^-1 ∇g(μ, τ),

where ∇g(μ, τ) is the gradient vector of g(μ, τ) with respect to (μ, τ). Note that it is the inverse of the Fisher information that appears here, since (I(μ, τ))^-1 is the asymptotic covariance matrix of the MLE.

In this case, ∇g(μ, τ) is the vector [4μ, 1], and I(μ, τ) is the 2×2 Fisher information matrix of the normal distribution computed above.

To compute the asymptotic variance of the estimator 2μ̂² + τ̂, we substitute these values into the formula:

V(2μ² + τ) = [4μ, 1] (I(μ, τ))^-1 [4μ, 1]^T,

where, using the hint about inverting a diagonal matrix, (I(μ, τ))^-1 = (τ 0; 0 2τ²).

Evaluating the expression gives V(2μ² + τ) = 16μ²τ + 2τ², consistent with the computation in the first solution.
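For a concrete numerical check of this second derivation (μ = 1 and τ = 2 are arbitrary example values), the formula can be evaluated directly with numpy:

import numpy as np

mu, tau = 1.0, 2.0                    # example parameter values (assumed)

# Fisher information of X1 ~ N(mu, tau) and gradient of g(mu, tau) = 2*mu^2 + tau
I = np.array([[1 / tau, 0.0],
              [0.0, 1 / (2 * tau**2)]])
grad = np.array([4 * mu, 1.0])

V = grad @ np.linalg.inv(I) @ grad    # delta-method asymptotic variance
print(V)                              # 40.0
print(16 * mu**2 * tau + 2 * tau**2)  # matches: 40.0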