6.1.3 Problem 12
(a). Note that, when $Y$ is a discrete r.v. taking the values $y_1, \dots, y_n$, each with probability $1/n$,
\begin{align*}
P\left(\Big(\frac{1}{n}\sum_{j=1}^{n} y_j\Big)^2 \le \frac{1}{n}\sum_{j=1}^{n} y_j^2\right)
&= P\left(\Big(\frac{1}{n}\sum_{j=1}^{n} y_j\Big)^2 - \frac{1}{n}\sum_{j=1}^{n} y_j^2 \le 0\right) \\
&= P\left(\frac{1}{n}\sum_{j=1}^{n} y_j^2 - \Big(\frac{1}{n}\sum_{j=1}^{n} y_j\Big)^2 \ge 0\right) \\
&= P\left(E(Y^2) - (E(Y))^2 \ge 0\right) \\
&= P\left(\operatorname{Var}(Y) \ge 0\right) = 1,
\end{align*}
where $\operatorname{Var}(Y) = 0$ if and only if $y_1 = \dots = y_n$.
Since the $X_j$ are continuous r.v.s, $P(X_1 = \dots = X_n) = P(T_1 = T_2) = 0$; therefore
\[
P(T_1 \le T_2) = P(T_1 < T_2) = P(T_2 - T_1 > 0) = P\left(\frac{1}{n}\sum_{j=1}^{n} X_j^2 - \Big(\frac{1}{n}\sum_{j=1}^{n} X_j\Big)^2 > 0\right) = 1.
\]
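As a quick numerical sanity check of part (a), the sketch below verifies $T_1 \le T_2$ on simulated samples; the normal distribution and the sample size $n = 10$ are hypothetical choices, since any continuous distribution would do:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10  # hypothetical sample size

# T1 = (sample mean)^2, T2 = sample mean of squares.
# T1 <= T2 always, since T2 - T1 is the (nonnegative) empirical variance,
# and the inequality is strict with probability 1 for continuous samples.
for _ in range(1000):
    x = rng.normal(size=n)      # hypothetical continuous distribution
    t1 = x.mean() ** 2
    t2 = (x ** 2).mean()
    assert t1 < t2
```

Every trial satisfies the strict inequality, matching $P(T_1 < T_2) = 1$.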
(b). Note that we want to estimate $c^2$, hence $\theta = c^2$. To find the bias of $T_1$, we want to find $E(T_1) - c^2$. We have
\begin{align*}
E(T_1) = E(\bar{X}_n^2) &= \operatorname{Var}(\bar{X}_n) + (E(\bar{X}_n))^2 \\
&= \operatorname{Var}\Big(\frac{1}{n}\sum_{j=1}^{n} X_j\Big) + \Big(E\Big(\frac{1}{n}\sum_{j=1}^{n} X_j\Big)\Big)^2 \\
&= \frac{1}{n^2}\operatorname{Var}\Big(\sum_{j=1}^{n} X_j\Big) + \frac{1}{n^2}\Big(E\Big(\sum_{j=1}^{n} X_j\Big)\Big)^2.
\end{align*}
Since the $X_j$ are mutually independent, and by linearity of expectation,
\begin{align*}
E(T_1) &= \frac{1}{n^2}\sum_{j=1}^{n} \operatorname{Var}(X_j) + \frac{1}{n^2}\Big(\sum_{j=1}^{n} E(X_j)\Big)^2
= \frac{1}{n^2}\sum_{j=1}^{n} \sigma^2 + \frac{1}{n^2}\Big(\sum_{j=1}^{n} c\Big)^2 \\
&= \frac{1}{n^2}\cdot n\sigma^2 + \frac{1}{n^2}\cdot (nc)^2 = \frac{\sigma^2}{n} + c^2.
\end{align*}
Hence the bias of $T_1$ is
\[
E(T_1) - c^2 = \frac{\sigma^2}{n}.
\]
To find the bias of $T_2$, we need to find $E(T_2) - c^2$. By linearity of expectation,
\begin{align*}
E(T_2) = \frac{1}{n}\sum_{j=1}^{n} E(X_j^2) &= \frac{1}{n}\sum_{j=1}^{n}\big(\operatorname{Var}(X_j) + (E(X_j))^2\big) \\
&= \frac{1}{n}\sum_{j=1}^{n}(\sigma^2 + c^2) = \frac{1}{n}\cdot n(\sigma^2 + c^2) = \sigma^2 + c^2.
\end{align*}
Hence the bias of $T_2$ is
\[
E(T_2) - c^2 = \sigma^2.
\]
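The two bias formulas can be checked by Monte Carlo. A minimal sketch, assuming (hypothetically) $X_j \sim \mathcal{N}(c, \sigma^2)$ with $c = 2$, $\sigma = 1.5$, $n = 10$; none of these specific values come from the problem statement:

```python
import numpy as np

rng = np.random.default_rng(1)
c, sigma, n = 2.0, 1.5, 10            # hypothetical parameter values
trials = 200_000

x = rng.normal(c, sigma, size=(trials, n))
t1 = x.mean(axis=1) ** 2              # T1 = (sample mean)^2, per trial
t2 = (x ** 2).mean(axis=1)            # T2 = mean of squares, per trial

bias_t1 = t1.mean() - c ** 2          # theory predicts sigma^2 / n = 0.225
bias_t2 = t2.mean() - c ** 2          # theory predicts sigma^2     = 2.25
print(bias_t1, bias_t2)
```

With 200,000 trials, both empirical biases land within Monte Carlo noise of the derived values $\sigma^2/n$ and $\sigma^2$.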
In fact, $T_1$ is the better estimator: as $n$ grows, the bias $\sigma^2/n$ of $T_1$ decreases to $0$, whereas the bias $\sigma^2$ of $T_2$ does not depend on $n$ at all.
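To make the comparison concrete, the closed-form biases behave as follows as $n$ grows (using a hypothetical value $\sigma^2 = 2.25$):

```python
sigma2 = 2.25                          # hypothetical variance
for n in (5, 50, 500):
    bias_t1 = sigma2 / n               # bias of T1: shrinks with n
    bias_t2 = sigma2                   # bias of T2: constant in n
    print(n, bias_t1, bias_t2)
```

The bias of $T_1$ falls by a factor of 100 across these sample sizes, while the bias of $T_2$ stays at $\sigma^2$.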