Let \Theta be a Bernoulli random variable that indicates which one of two hypotheses is true, and let \mathbf{P}(\Theta =1)=p. Under the hypothesis \Theta =0, the random variable X has a normal distribution with mean 0 and variance 1. Under the alternative hypothesis \Theta =1, X has a normal distribution with mean 2 and variance 1.

Consider the MAP rule for deciding between the two hypotheses, given that X=x.

Suppose for this part of the problem that p=2/3. The MAP rule can choose in favor of the hypothesis \Theta =1 if and only if x\geq c_1. Find the value of c_1.

c_1=\quad
For this part, assume again that p=2/3. Find the conditional probability of error for the MAP decision rule, given that the hypothesis \Theta =0 is true.

\mathbf{P}(\mbox{error}|\Theta =0)=\quad
Find the overall (unconditional) probability of error associated with the MAP rule for p=1/2.

To find the value of c_1, we need to find the threshold value of x such that the probability of \Theta = 1 given X = x is greater than or equal to the probability of \Theta = 0 given X = x.

Using Bayes' rule (with the conditional densities of X standing in for \mathbf{P}(X = x|\Theta), since X is continuous), we have:

\mathbf{P}(\Theta = 1|X = x) = \frac{f_{X|\Theta}(x|1)\,\mathbf{P}(\Theta = 1)}{f_{X|\Theta}(x|0)\,\mathbf{P}(\Theta = 0) + f_{X|\Theta}(x|1)\,\mathbf{P}(\Theta = 1)}

Substituting the given values, we have:

\mathbf{P}(\Theta = 1|X = x) = \frac{p \cdot \frac{1}{\sqrt{2\pi}}e^{-(x-2)^2/2}}{(1 - p) \cdot \frac{1}{\sqrt{2\pi}}e^{-x^2/2} + p \cdot \frac{1}{\sqrt{2\pi}}e^{-(x-2)^2/2}}

Simplifying, we have:

\mathbf{P}(\Theta = 1|X = x) = \frac{p \cdot e^{-(x-2)^2/2}}{(1 - p) \cdot e^{-x^2/2} + p \cdot e^{-(x-2)^2/2}}

At the threshold x = c_1, the two posterior probabilities are equal, so each equals 1/2. Equivalently, the weighted likelihoods balance:

p \cdot e^{-(c_1-2)^2/2} = (1 - p) \cdot e^{-c_1^2/2}

Substituting p = 2/3, this becomes:

\frac{2}{3}e^{-(c_1-2)^2/2} = \frac{1}{3}e^{-c_1^2/2}

Taking the natural logarithm of both sides, we have:

\ln\left(\frac{2}{3}e^{-(c_1-2)^2/2}\right) = \ln\left(\frac{1}{3}e^{-c_1^2/2}\right)

Simplifying further, we get:

\ln\left(\frac{2}{3}\right) - (c_1-2)^2/2 = \ln\left(\frac{1}{3}\right) - c_1^2/2

Rearranging terms, we have:

(c_1-2)^2/2 - c_1^2/2 = \ln\left(\frac{2}{3}\right) - \ln\left(\frac{1}{3}\right) = \ln 2

Expanding the square, the quadratic terms cancel and the equation is linear in c_1:

\frac{(c_1-2)^2 - c_1^2}{2} = \frac{4 - 4c_1}{2} = 2 - 2c_1 = \ln 2

Solving for c_1, we get:

c_1 = 1 - \frac{1}{2}\ln 2 \approx 0.6534
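
As a quick numerical sanity check (a minimal sketch using NumPy and SciPy; the helper name posterior_theta1 is ours), the posterior probability of \Theta = 1 should equal 1/2 exactly at this threshold:

```python
import numpy as np
from scipy.stats import norm

p = 2 / 3                      # prior P(Theta = 1)
c1 = 1 - 0.5 * np.log(2)       # derived MAP threshold, about 0.6534

def posterior_theta1(x, p):
    """Posterior P(Theta = 1 | X = x) for N(0,1) vs N(2,1)."""
    num = p * norm.pdf(x, loc=2, scale=1)
    den = (1 - p) * norm.pdf(x, loc=0, scale=1) + num
    return num / den

print(c1)                       # ~0.6534
print(posterior_theta1(c1, p))  # ~0.5 at the threshold
```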

Now we can move on to finding the conditional probability of error for the MAP decision rule, given that the hypothesis \Theta = 0 is true.

The conditional probability of error is the probability of deciding in favor of the hypothesis \Theta = 1 when \Theta = 0 is true. In other words, it is the probability that X \geq c_1 under \Theta = 0.

Using the given distribution of X under the hypothesis \Theta = 0 (normal distribution with mean 0 and variance 1), we have:

\mathbf{P}(X \geq c_1|\Theta = 0) = 1 - \mathbf{P}(X < c_1|\Theta = 0)

Since X has a normal distribution, we can calculate this probability using the cumulative distribution function (CDF) of a standard normal distribution.

\mathbf{P}(X < c_1|\Theta = 0) = \Phi(c_1)

where \Phi denotes the CDF of a standard normal distribution.

Therefore, the conditional probability of error is:

\mathbf{P}(\mbox{error}|\Theta = 0) = 1 - \Phi(c_1) = 1 - \Phi(0.6534) \approx 0.2567
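
A quick check with SciPy (a sketch reusing the value of c_1 derived above):

```python
import numpy as np
from scipy.stats import norm

c1 = 1 - 0.5 * np.log(2)   # MAP threshold for p = 2/3
print(1 - norm.cdf(c1))    # P(error | Theta = 0), about 0.2567
```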

Finally, to find the overall (unconditional) probability of error associated with the MAP rule for p=1/2, we need to consider the two possible cases when \Theta = 0 or \Theta = 1. We can calculate this using the law of total probability:

\mathbf{P}(\mbox{error}) = \mathbf{P}(\mbox{error}|\Theta = 0)\mathbf{P}(\Theta = 0) + \mathbf{P}(\mbox{error}|\Theta = 1)\mathbf{P}(\Theta = 1)

For p = 1/2 the two hypotheses are equally likely a priori, so the threshold condition p\, e^{-(x-2)^2/2} = (1-p)\, e^{-x^2/2} reduces to (x-2)^2 = x^2, giving a threshold at x = 1, the midpoint of the two means. An error under \Theta = 0 means deciding \Theta = 1 (i.e. X \geq 1), and an error under \Theta = 1 means deciding \Theta = 0 (i.e. X < 1):

\mathbf{P}(\mbox{error}|\Theta = 0) = \mathbf{P}(X \geq 1|\Theta = 0) = 1 - \Phi(1)

\mathbf{P}(\mbox{error}|\Theta = 1) = \mathbf{P}(X < 1|\Theta = 1) = \Phi(1 - 2) = \Phi(-1) = 1 - \Phi(1)

Substituting these and p = 1/2 into the total probability formula:

\mathbf{P}(\mbox{error}) = \frac{1}{2}\left(1 - \Phi(1)\right) + \frac{1}{2}\left(1 - \Phi(1)\right) = 1 - \Phi(1) \approx 0.1587
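
As a cross-check, a short Monte Carlo sketch (the sample size and seed are arbitrary choices, not part of the problem) that simulates the model for p = 1/2 and applies the threshold at 1 should land near 1 - \Phi(1) \approx 0.1587:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n, p = 1_000_000, 0.5
theta = rng.random(n) < p                 # true hypothesis (True means Theta = 1)
x = rng.normal(loc=2 * theta, scale=1.0)  # X ~ N(0,1) under Theta=0, N(2,1) under Theta=1
decide_1 = x >= 1.0                       # MAP rule for p = 1/2
print(np.mean(decide_1 != theta))         # empirical error, ~0.1587
print(1 - norm.cdf(1))                    # analytical value
```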

In summary, for this two-Gaussian model the MAP rule reduces to a threshold test on x: decide \Theta = 1 if and only if

x \geq 1 - \frac{1}{2}\ln\frac{p}{1-p}

For p = 2/3 this gives c_1 = 1 - \frac{1}{2}\ln 2 \approx 0.6534 with \mathbf{P}(\mbox{error}|\Theta = 0) = 1 - \Phi(c_1) \approx 0.2567, and for p = 1/2 the threshold is 1, giving an overall probability of error of 1 - \Phi(1) \approx 0.1587.
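
For completeness, a small helper (our own naming, not part of the problem statement) that computes this general MAP threshold and reproduces both cases:

```python
import numpy as np
from scipy.stats import norm

def map_threshold(p):
    """MAP threshold for N(0,1) vs N(2,1) with prior P(Theta = 1) = p:
    decide Theta = 1 iff x >= 1 - 0.5 * ln(p / (1 - p))."""
    return 1 - 0.5 * np.log(p / (1 - p))

print(map_threshold(2 / 3))                # ~0.6534
print(map_threshold(1 / 2))                # 1.0
print(1 - norm.cdf(map_threshold(1 / 2)))  # overall error for p = 1/2, ~0.1587
```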