We are given a biased coin, where the probability of Heads is q. The bias q is itself the realization of a random variable Q which is uniformly distributed on the interval [0,1]. We want to estimate the bias of this coin. We flip it 5 times, and define the (observed) random variable N as the number of Heads in this experiment.
1) Given the observation N = 3, calculate the posterior distribution of the bias Q. That is, find the conditional distribution of Q, given N = 3, for 0 ≤ q ≤ 1.
2) What is the LMS estimate of Q, given N=3?
1. 140 * q^3 * (1-q)^3
2. 0.5
3. What is the resulting conditional mean squared error of the LMS estimator, given N=3?
ans. 1/36
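Those three numbers can be sanity-checked numerically: for 3 heads in 6 flips with a uniform prior the posterior is a Beta(4, 4) density, and its normalizing constant, mean, and variance are exactly 140, 1/2, and 1/36. A minimal sketch using Python's exact fractions (`beta_fn` is just my own helper name):

```python
from fractions import Fraction
from math import factorial

def beta_fn(a, b):
    # Beta function B(a, b) for positive integers, as an exact fraction:
    # B(a, b) = (a-1)! (b-1)! / (a+b-1)!
    return Fraction(factorial(a - 1) * factorial(b - 1), factorial(a + b - 1))

# 3 heads in 6 flips with a uniform prior gives a Beta(4, 4) posterior
a, b = 4, 4
norm = 1 / beta_fn(a, b)                            # normalizing constant
mean = Fraction(a, a + b)                           # posterior mean (LMS estimate)
var = Fraction(a * b, (a + b) ** 2 * (a + b + 1))   # posterior variance (MSE)

print(norm, mean, var)  # 140 1/2 1/36
```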
Any hint on how you did the calculation?
Note: the answers above correspond to flipping the coin 6 times, not 5.
Hint: use the normalization constant in Bayes' theorem.
To calculate the posterior distribution of the bias Q, given the observation N = 3, we need to use Bayes' theorem. Bayes' theorem states that:
P(Q|N) = (P(N|Q) * P(Q)) / P(N)
1) To calculate the posterior distribution of Q, given N = 3, we need to find P(Q|N=3).
P(Q|N=3) = (P(N=3|Q) * P(Q)) / P(N=3)
The likelihood, P(N=3|Q), is the probability of observing 3 heads out of 5 flips given the bias Q. Since each flip is independent, this can be calculated using the binomial probability formula:
P(N=3|Q) = (5 choose 3) * q^3 * (1-q)^2
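This binomial likelihood is easy to evaluate for any given bias; a small sketch (the `likelihood` helper is my own naming):

```python
from math import comb

def likelihood(n_heads, n_flips, q):
    # Binomial probability of n_heads heads in n_flips independent flips with bias q
    return comb(n_flips, n_heads) * q ** n_heads * (1 - q) ** (n_flips - n_heads)

print(likelihood(3, 5, 0.5))  # 10 * 0.5**5 = 0.3125
```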
The prior, P(Q), is the probability distribution of the bias Q. In this case, Q is uniformly distributed between 0 and 1, so the prior can be represented as a constant 1 over the interval [0, 1].
P(Q) = 1, for 0 ≤ Q ≤ 1
The evidence, or marginal likelihood, P(N=3), is the probability of observing 3 heads out of 5 flips regardless of the bias Q. We calculate it by integrating the likelihood times the prior over all possible values of the bias:
P(N=3) = ∫ P(N=3|Q) * P(Q) dQ
= ∫ (5 choose 3) * q^3 * (1-q)^2 dq, for 0 ≤ q ≤ 1
= 10 * (3! * 2!) / 6! = 10/60 = 1/6
Now we can substitute these values into Bayes' theorem:
P(Q|N=3) = (P(N=3|Q) * P(Q)) / P(N=3)
= [(5 choose 3) * q^3 * (1-q)^2 * 1] / P(N=3)
Evaluating the integral for the evidence gives P(N=3) = 1/6, so
P(Q|N=3) = 60 * q^3 * (1-q)^2, for 0 ≤ q ≤ 1,
which is a Beta(4,3) density.
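Both the evidence and the resulting posterior can be double-checked numerically. A sketch: the evidence comes out exactly as 10 * B(4,3) = 1/6, and the normalized posterior 60 q^3 (1-q)^2 should integrate to 1 over [0, 1] (checked here with a simple midpoint rule):

```python
from fractions import Fraction
from math import comb, factorial

# Evidence: P(N=3) = C(5,3) * integral of q^3 (1-q)^2 dq = C(5,3) * 3! * 2! / 6!
evidence = comb(5, 3) * Fraction(factorial(3) * factorial(2), factorial(6))
print(evidence)  # 1/6

# Posterior density f(q) = 10 q^3 (1-q)^2 / (1/6) = 60 q^3 (1-q)^2;
# check that it integrates to 1 with a simple midpoint rule
n = 100_000
total = sum(60 * ((k + 0.5) / n) ** 3 * (1 - (k + 0.5) / n) ** 2 for k in range(n)) / n
print(round(total, 6))  # 1.0
```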
2) The LMS (least mean squares) estimate of Q, given N=3, is not the value of q that maximizes the posterior (that would be the MAP estimate); it is the conditional expectation E[Q|N=3], since the conditional mean is the estimator that minimizes the mean squared error.
With the posterior P(Q|N=3) = 60 * q^3 * (1-q)^2, a Beta(4,3) density:
E[Q|N=3] = ∫ q * 60 * q^3 * (1-q)^2 dq, for 0 ≤ q ≤ 1
= 60 * (4! * 2!) / 7! = 60/105 = 4/7 ≈ 0.571
3) The conditional mean squared error of the LMS estimator is the posterior variance. For a Beta(a,b) distribution the variance is a*b / ((a+b)^2 * (a+b+1)), so:
Var(Q|N=3) = (4 * 3) / (7^2 * 8) = 12/392 = 3/98 ≈ 0.031
(With 6 flips and N=3 the posterior is Beta(4,4) instead, which gives the answers quoted earlier in the thread: normalizing constant 140, LMS estimate 0.5, and conditional MSE 1/36.)
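The LMS estimate is the posterior mean E[Q|N=3], and the conditional MSE is the posterior variance; both can be computed exactly for the 5-flip case. A sketch (`beta_fn` is my own helper):

```python
from fractions import Fraction
from math import factorial

def beta_fn(a, b):
    # Beta function B(a, b) for positive integers, as an exact fraction
    return Fraction(factorial(a - 1) * factorial(b - 1), factorial(a + b - 1))

# Posterior is 60 q^3 (1-q)^2, i.e. Beta(4, 3).
# LMS estimate: E[Q | N=3] = integral of q * 60 q^3 (1-q)^2 dq = 60 * B(5, 3)
lms = 60 * beta_fn(5, 3)
print(lms)  # 4/7

# Conditional MSE = posterior variance = E[Q^2 | N=3] - E[Q | N=3]^2
second_moment = 60 * beta_fn(6, 3)  # integral of q^2 * 60 q^3 (1-q)^2 dq
mse = second_moment - lms ** 2
print(mse)  # 3/98
```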