\(X_1,\ldots ,X_n \sim X\) are i.i.d. random variables with density \(f_\theta\), for some unknown \(\theta \in (0,1)\):

\[
f_\theta (x)=\left\{ \begin{array}{ll} \theta ^2 & \text {if } -1 \le x<0\\ 1- \theta ^2 & \text {if } 0\leq x\leq 1\\ 0 & \text {otherwise.} \end{array}\right.
\]

Consider the following hypotheses:

\[
\begin{aligned}
H_0 &: X \sim \textsf{Unif}(-1,1)\\
H_1 &: X \text{ not distributed as } \textsf{Unif}(-1,1).
\end{aligned}
\]
Write down the test statistic \(T_n^{\text{Wald}}\) (that is quadratic in \(\hat\theta^{\text{MLE}}\)) for Wald's test (involving a chi-square distribution) for the above hypothesis. Use the value of \(\theta\) that defines \(H_0\) as the argument of the asymptotic variance \(V(\theta)\).

The maximum likelihood estimator (MLE) of θ is given by:

\(\hat{\theta}^{\text{MLE}} = \sqrt{\dfrac{n_1}{n}} = \sqrt{\dfrac{n_1}{n_1+n_2}}\)

where \(n_1\) is the number of observations in the interval \([-1, 0)\), \(n_2\) is the number of observations in the interval \([0, 1]\), and \(n = n_1 + n_2\). (The derivation of this estimator from the likelihood is carried out in detail further below.)
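As a quick numerical sanity check (a minimal sketch; the sampling helper, seed, and parameter values are illustrative and not part of the problem), the estimator can be computed directly from a simulated sample:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(theta, n):
    """Draw n observations from f_theta: a point falls in [-1, 0) with
    probability theta**2 (uniformly there) and in [0, 1] otherwise."""
    negative = rng.random(n) < theta**2
    return np.where(negative, rng.uniform(-1, 0, n), rng.uniform(0, 1, n))

def theta_mle(x):
    """MLE: square root of the fraction of observations falling in [-1, 0)."""
    return np.sqrt(np.mean(np.asarray(x) < 0))

x = sample(theta=0.7, n=100_000)
print(theta_mle(x))  # close to the true value 0.7
```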

The test statistic \(T_n^{\text{Wald}}\) is quadratic in \(\hat{\theta}^{\text{MLE}}\) and is given by:

\(T_n^{\text{Wald}} = n\,\dfrac{(\hat{\theta}^{\text{MLE}} - \theta_0)^2}{V(\theta_0)}\)

where \(\theta_0\) is the value of θ that defines \(H_0\) and \(V(\theta)\) is the asymptotic variance of the MLE, i.e. \(\sqrt{n}\,(\hat{\theta}^{\text{MLE}} - \theta) \xrightarrow{(d)} \mathcal{N}(0, V(\theta))\).

Since \(H_0\) assumes the \(\textsf{Unif}(-1,1)\) density, which equals \(\tfrac12\) on both \([-1,0)\) and \([0,1]\), we need \(\theta_0^2 = \tfrac12\), i.e. \(\theta_0 = \tfrac{1}{\sqrt{2}}\).

To find the asymptotic variance \(V(\theta)\), we first need to find the Fisher information, which is given by:

\(I(\theta) = -E\left(\frac{d^2}{d\theta^2}\log f_{\theta}(X)\right)\)

The log-density of a single observation is:

\(\log f_{\theta}(x) = \begin{cases} \log(\theta^2) & \text{if } -1 \le x < 0 \\ \log(1-\theta^2) & \text{if } 0 \le x \le 1 \end{cases}\)

Differentiating once gives \(\frac{d}{d\theta}\log f_\theta(x) = \frac{2}{\theta}\) on the first branch and \(-\frac{2\theta}{1-\theta^2}\) on the second. Differentiating again with respect to θ, we get:

\(\frac{d^2}{d\theta^2}\log f_{\theta}(x) = \begin{cases} -\dfrac{2}{\theta^2} & \text{if } -1 \le x < 0 \\[2mm] -\dfrac{2(1+\theta^2)}{(1-\theta^2)^2} & \text{if } 0 \le x \le 1 \end{cases}\)

Taking the expectation (the minus sign in the definition of \(I(\theta)\) flips the signs), we integrate against the density over the respective intervals:

\(I(\theta) = \int_{-1}^0 \frac{2}{\theta^2}\,\theta^2\, dx + \int_0^1 \frac{2(1+\theta^2)}{(1-\theta^2)^2}\,(1-\theta^2)\, dx\)

Simplifying, we get:

\(I(\theta) = 2 + \dfrac{2(1+\theta^2)}{1-\theta^2} = \dfrac{4}{1-\theta^2}\)

The asymptotic variance is therefore \(V(\theta) = \dfrac{1}{I(\theta)} = \dfrac{1-\theta^2}{4}\), and at \(\theta_0 = \tfrac{1}{\sqrt{2}}\) it equals \(V(\theta_0) = \tfrac18\).
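The closed form \(I(\theta) = \frac{4}{1-\theta^2}\) can be checked numerically by averaging the negative second derivative of the log-density over a large simulated sample (a sketch; the value of θ, the seed, and the sample size are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n = 0.6, 1_000_000

# Simulate from f_theta: [-1, 0) with probability theta**2, [0, 1] otherwise.
negative = rng.random(n) < theta**2
x = np.where(negative, rng.uniform(-1, 0, n), rng.uniform(0, 1, n))

# Second derivative of log f_theta(x), branch by branch.
d2 = np.where(x < 0, -2 / theta**2, -2 * (1 + theta**2) / (1 - theta**2)**2)

print(-d2.mean())          # Monte Carlo estimate of I(theta)
print(4 / (1 - theta**2))  # closed form: 6.25
```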

Therefore, the test statistic \(T_n^{\text{Wald}}\) simplifies to:

\(T_n^{\text{Wald}} = n\,\dfrac{(\hat{\theta}^{\text{MLE}} - \theta_0)^2}{V(\theta_0)} = 8n\left(\sqrt{\dfrac{n_1}{n}} - \dfrac{1}{\sqrt{2}}\right)^2\)

Under \(H_0\), \(T_n^{\text{Wald}}\) converges in distribution to a \(\chi^2_1\) random variable, so Wald's test rejects \(H_0\) at asymptotic level \(\alpha\) when \(T_n^{\text{Wald}}\) exceeds the \((1-\alpha)\)-quantile of \(\chi^2_1\).
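Putting the pieces together, here is a minimal sketch of the resulting test in code (the function name wald_test and the use of scipy.stats.chi2 for the p-value are my own choices, not prescribed by the problem):

```python
import numpy as np
from scipy import stats

def wald_test(x, alpha=0.05):
    """Wald test of H0: X ~ Unif(-1, 1), based on theta_hat = sqrt(n1 / n)."""
    x = np.asarray(x)
    n = len(x)
    theta_hat = np.sqrt(np.mean(x < 0))
    theta0 = 1 / np.sqrt(2)
    v0 = (1 - theta0**2) / 4                 # asymptotic variance at theta0: 1/8
    t_n = n * (theta_hat - theta0)**2 / v0   # = 8 n (theta_hat - 1/sqrt(2))^2
    p_value = stats.chi2.sf(t_n, df=1)       # right tail of chi^2 with 1 d.o.f.
    return t_n, p_value, p_value < alpha

# With data genuinely drawn from Unif(-1, 1), H0 should rarely be rejected.
rng = np.random.default_rng(1)
print(wald_test(rng.uniform(-1, 1, 1000)))
```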

In more detail, here is how the MLE and the asymptotic variance used above are obtained. We first derive the maximum likelihood estimator (MLE) of \(\theta\).

The likelihood function is given by:

\[
L(\theta) = \prod_{i=1}^{n} f_\theta(X_i)
\]

Taking the logarithm of the likelihood function, we get:

\[
\log L(\theta) = \sum_{i=1}^{n} \log f_\theta(X_i)
\]

Substituting the density \(f_\theta\): each of the \(n_1\) observations in \([-1,0)\) contributes \(\log(\theta^2)\) and each of the \(n_2\) observations in \([0,1]\) contributes \(\log(1-\theta^2)\), so

\[
\log L(\theta) = n_1 \log(\theta^2) + n_2 \log(1-\theta^2) = 2 n_1 \log\theta + n_2 \log(1-\theta^2)
\]

Taking the derivative of \(\log L(\theta)\) with respect to \(\theta\), we obtain:

\[
\frac{\partial}{\partial \theta} \log L(\theta) = \frac{2 n_1}{\theta} - \frac{2 n_2\,\theta}{1-\theta^2}
\]

Setting this derivative equal to zero to find the MLE of \(\theta\), we get:

\[
\frac{2 n_1}{\hat{\theta}} = \frac{2 n_2\,\hat{\theta}}{1-\hat{\theta}^2}
\quad\Longleftrightarrow\quad
n_1\,(1-\hat{\theta}^2) = n_2\,\hat{\theta}^2
\]

Simplifying (using \(n_1 + n_2 = n\)), we find \(n_1 = n\hat{\theta}^2\), so \(\hat{\theta}^{\text{MLE}} = \sqrt{\frac{n_1}{n}}\), as stated above.
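As a check on the algebra (the counts below are made up for illustration), the closed form agrees with a direct numerical maximization of the log-likelihood:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def log_likelihood(theta, n1, n2):
    """log L(theta) = 2*n1*log(theta) + n2*log(1 - theta**2), for theta in (0, 1)."""
    return 2 * n1 * np.log(theta) + n2 * np.log(1 - theta**2)

n1, n2 = 37, 63  # illustrative counts of observations in [-1, 0) and [0, 1]
closed_form = np.sqrt(n1 / (n1 + n2))
numerical = minimize_scalar(lambda t: -log_likelihood(t, n1, n2),
                            bounds=(1e-6, 1 - 1e-6), method="bounded").x
print(closed_form, numerical)  # both approximately 0.608
```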

The asymptotic variance of \(\hat{\theta}\), i.e. the variance of the limiting normal distribution of \(\sqrt{n}\,(\hat{\theta} - \theta)\), is given by:

\[
V(\theta) = \frac{1}{I(\theta)}
\]

where \(I(\theta)\) is the Fisher information of a single observation:

\[
I(\theta) = -E\left(\frac{\partial^2}{\partial \theta^2} \log f_\theta(X)\right)
\]

Taking the second derivative of \(\log f_\theta(x)\) with respect to \(\theta\) (computed branch by branch above), we have:

\[
\frac{\partial^2}{\partial \theta^2} \log f_\theta(x) =
\begin{cases} -\dfrac{2}{\theta^2} & \text{if } -1 \le x < 0 \\[2mm] -\dfrac{2(1+\theta^2)}{(1-\theta^2)^2} & \text{if } 0 \le x \le 1 \end{cases}
\]

Taking the expectation of this expression, we get:

\[
E\left(\frac{\partial^2}{\partial \theta^2} \log f_\theta(X)\right) = -\frac{2}{\theta^2}\,\theta^2 \;-\; \frac{2(1+\theta^2)}{(1-\theta^2)^2}\,(1-\theta^2) = -\frac{4}{1-\theta^2}
\]

Therefore, the asymptotic variance \(V(\theta)\) is:

\[
V(\theta) = \frac{1}{I(\theta)} = \frac{1-\theta^2}{4}
\]
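This formula can also be checked by simulation: for large \(n\), \(n \cdot \mathrm{Var}(\hat{\theta})\) should be close to \(V(\theta) = \frac{1-\theta^2}{4}\) (a sketch with illustrative parameter values; note that for the MLE only the binomial count \(n_1\) matters):

```python
import numpy as np

rng = np.random.default_rng(2)
theta, n, reps = 0.6, 2_000, 5_000

# n1 ~ Binomial(n, theta**2) is the number of observations in [-1, 0).
n1 = rng.binomial(n, theta**2, size=reps)
theta_hat = np.sqrt(n1 / n)

print(n * theta_hat.var())   # simulated n * Var(theta_hat)
print((1 - theta**2) / 4)    # closed form: 0.16
```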

Now, we can define the test statistic \(T_n^{\text{Wald}}\) for Wald's test as:

\[
T_n^{\text{Wald}} = \left(\frac{\sqrt{n}\,(\hat{\theta} - \theta_0)}{\sqrt{V(\theta_0)}}\right)^2 = \frac{n\,(\hat{\theta} - \theta_0)^2}{V(\theta_0)}
\]

where \(\theta_0\) is the value of \(\theta\) that defines the null hypothesis \(H_0\).

In this case, the null hypothesis \(H_0\) states that \(X\) is distributed as \(\textsf{Unif}(-1,1)\), whose density equals \(\tfrac12\) on \([-1,1]\); matching \(f_\theta\) to this density gives \(\theta_0^2 = \tfrac12\), i.e. \(\theta_0 = \tfrac{1}{\sqrt{2}}\), and hence \(V(\theta_0) = \tfrac{1-1/2}{4} = \tfrac18\).

Substituting the values, we have:

\[
T_n^{\text{Wald}} = \frac{n\left(\sqrt{\frac{n_1}{n}} - \frac{1}{\sqrt{2}}\right)^2}{1/8} = 8n\left(\sqrt{\frac{n_1}{n}} - \frac{1}{\sqrt{2}}\right)^2
\]

This is the final expression for the test statistic; under \(H_0\) it is asymptotically \(\chi^2_1\)-distributed, which is the distribution used to calibrate Wald's test.
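Finally, a small simulation (illustrative sample size, number of replications, and seed) confirms the \(\chi^2_1\) calibration under \(H_0\): the statistic exceeds the 95% quantile of \(\chi^2_1\) about 5% of the time.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n, reps = 1_000, 10_000
theta0 = 1 / np.sqrt(2)

# Under H0 the data are Unif(-1, 1), so n1 ~ Binomial(n, 1/2).
n1 = rng.binomial(n, 0.5, size=reps)
t_n = 8 * n * (np.sqrt(n1 / n) - theta0)**2

q95 = stats.chi2.ppf(0.95, df=1)
print((t_n > q95).mean())  # approximately 0.05
```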