Chapter 16: Introduction to Bayesian Methods of Inference
16.1

Refer to Table 16.1.
a. beta(10, 30)
b. n = 25
c. beta(10, 30), n = 25
d. Yes
e. Posterior for the beta(1, 3) prior.

16.2

a.-d. Refer to Section 16.2

16.3

a.-e. Applet exercise, so answers vary.

16.4

a.-d. Applet exercise, so answers vary.


16.5

It should take more trials with a beta(10, 30) prior.

16.6

Here, $L(y \mid p) = p(y \mid p) = \binom{n}{y}p^{y}(1-p)^{n-y}$, where y = 0, 1, …, n and 0 < p < 1. So,
$$f(y, p) = \binom{n}{y}p^{y}(1-p)^{n-y} \times \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}\,p^{\alpha-1}(1-p)^{\beta-1}$$
so that
$$m(y) = \int_0^1 \binom{n}{y}\frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}\,p^{y+\alpha-1}(1-p)^{n-y+\beta-1}\,dp = \binom{n}{y}\frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}\cdot\frac{\Gamma(y+\alpha)\Gamma(n-y+\beta)}{\Gamma(n+\alpha+\beta)}.$$
The posterior density of p is then
$$g^{*}(p \mid y) = \frac{f(y,p)}{m(y)} = \frac{\Gamma(n+\alpha+\beta)}{\Gamma(y+\alpha)\Gamma(n-y+\beta)}\,p^{y+\alpha-1}(1-p)^{n-y+\beta-1}, \quad 0 < p < 1.$$
This is identical to the beta density obtained in Example 16.1 (recall that the sum of n i.i.d. Bernoulli random variables is binomial with n trials and success probability p).
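
As a quick numerical sanity check, m(y) can be integrated directly in R and compared with the closed form above; a minimal sketch, where the values n = 10, y = 3, α = 2, β = 4 are arbitrary illustration choices (not from the text):

n <- 10; y <- 3; a <- 2; b <- 4
# joint density f(y, p) as a function of p; dbeta supplies the beta prior
joint <- function(p) choose(n, y) * p^y * (1 - p)^(n - y) * dbeta(p, a, b)
integrate(joint, 0, 1)$value                        # numerical m(y)
choose(n, y) * beta(y + a, n - y + b) / beta(a, b)  # closed form; same value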

16.7

a. With the beta(1, 3) prior, the posterior is a beta density with $\alpha^{*} = y + 1$ and $\beta^{*} = n - y + 3$. The Bayes estimator is the mean of the posterior distribution, so
$$\hat{p}_B = \frac{Y+1}{n+4} = \frac{Y}{n+4} + \frac{1}{n+4}.$$
b. $E(\hat{p}_B) = \dfrac{E(Y)+1}{n+4} = \dfrac{np+1}{n+4} \neq p$, and $V(\hat{p}_B) = \dfrac{V(Y)}{(n+4)^2} = \dfrac{np(1-p)}{(n+4)^2}$.
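
A short simulation illustrates the bias found in part (b); a sketch with arbitrary assumed values n = 25 and p = 0.4:

set.seed(1)
n <- 25; p <- 0.4
ys <- rbinom(100000, n, p)      # many binomial samples Y
mean((ys + 1) / (n + 4))        # simulated E(p.hat.B)
(n * p + 1) / (n + 4)           # theoretical mean; note it differs from p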

16.8

a. From Ex. 16.6, the Bayes estimator for p is
$$\hat{p}_B = E(p \mid Y) = \frac{Y+1}{n+2}.$$

b. The beta(1, 1) prior is the uniform distribution on the interval (0, 1).
c. We know that pˆ = Y / n is an unbiased estimator for p. However, for the Bayes
estimator,

$$E(\hat{p}_B) = \frac{E(Y)+1}{n+2} = \frac{np+1}{n+2} \quad\text{and}\quad V(\hat{p}_B) = \frac{V(Y)}{(n+2)^2} = \frac{np(1-p)}{(n+2)^2}.$$
Thus,
$$\mathrm{MSE}(\hat{p}_B) = V(\hat{p}_B) + [B(\hat{p}_B)]^2 = \frac{np(1-p)}{(n+2)^2} + \left(\frac{np+1}{n+2} - p\right)^2 = \frac{np(1-p) + (1-2p)^2}{(n+2)^2}.$$

d. For the unbiased estimator $\hat{p}$, $\mathrm{MSE}(\hat{p}) = V(\hat{p}) = p(1-p)/n$. So, holding n fixed, we must determine the values of p such that
$$\frac{np(1-p) + (1-2p)^2}{(n+2)^2} < \frac{p(1-p)}{n}.$$
The range of values of p where this is satisfied is solved in Ex. 8.17(c); the R sketch below also recovers it numerically.
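
A minimal R sketch of this comparison (n = 25 is an arbitrary choice):

n <- 25
p <- seq(0.01, 0.99, by = 0.01)
mse.mle   <- p * (1 - p) / n                                # MSE of Y/n
mse.bayes <- (n * p * (1 - p) + (1 - 2 * p)^2) / (n + 2)^2  # MSE of (Y+1)/(n+2)
range(p[mse.bayes < mse.mle])   # approximate interval of p where the Bayes estimator wins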

16.9

a. Here, $L(y \mid p) = p(y \mid p) = (1-p)^{y-1}p$, where y = 1, 2, … and 0 < p < 1. So,
$$f(y, p) = (1-p)^{y-1}p \times \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}\,p^{\alpha-1}(1-p)^{\beta-1}$$
so that
$$m(y) = \int_0^1 \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}\,p^{\alpha}(1-p)^{\beta+y-2}\,dp = \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}\cdot\frac{\Gamma(\alpha+1)\Gamma(y+\beta-1)}{\Gamma(y+\alpha+\beta)}.$$
The posterior density of p is then
$$g^{*}(p \mid y) = \frac{\Gamma(\alpha+\beta+y)}{\Gamma(\alpha+1)\Gamma(\beta+y-1)}\,p^{\alpha}(1-p)^{\beta+y-2}, \quad 0 < p < 1.$$
This is a beta density with shape parameters α* = α + 1 and β* = β + y − 1.
b. The Bayes estimators are
$$(1)\quad \hat{p}_B = E(p \mid Y) = \frac{\alpha+1}{\alpha+\beta+Y},$$
$$(2)\quad \widehat{[p(1-p)]}_B = E(p \mid Y) - E(p^2 \mid Y) = \frac{\alpha+1}{\alpha+\beta+Y} - \frac{(\alpha+2)(\alpha+1)}{(\alpha+\beta+Y+1)(\alpha+\beta+Y)} = \frac{(\alpha+1)(\beta+Y-1)}{(\alpha+\beta+Y+1)(\alpha+\beta+Y)},$$
where the second expectation was solved using the result from Ex. 4.200. (Alternately, the answer could be found by solving $E[p(1-p) \mid Y] = \int_0^1 p(1-p)\,g^{*}(p \mid Y)\,dp$.)
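
The closed form in (2) can be checked by sampling from the beta(α + 1, β + y − 1) posterior; a sketch with arbitrary assumed values α = 2, β = 3, y = 4:

set.seed(1)
a <- 2; b <- 3; y <- 4
ps <- rbeta(100000, a + 1, b + y - 1)                    # posterior draws of p
mean(ps * (1 - ps))                                      # simulated E[p(1 - p) | Y = y]
(a + 1) * (b + y - 1) / ((a + b + y + 1) * (a + b + y))  # closed form (0.2 here)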

16.10 a. The joint density of the random sample and θ is given by the product of the marginal
densities multiplied by the gamma prior:



$$f(y_1, \ldots, y_n, \theta) = \left[\prod_{i=1}^{n} \theta \exp(-\theta y_i)\right] \times \frac{1}{\Gamma(\alpha)\beta^{\alpha}}\,\theta^{\alpha-1}\exp(-\theta/\beta) = \frac{\theta^{n+\alpha-1}}{\Gamma(\alpha)\beta^{\alpha}}\exp\left[-\theta\left(\frac{\beta\sum_{i=1}^{n} y_i + 1}{\beta}\right)\right].$$

b. $m(y_1, \ldots, y_n) = \frac{1}{\Gamma(\alpha)\beta^{\alpha}}\int_0^{\infty}\theta^{n+\alpha-1}\exp\left[-\theta\left(\frac{\beta\sum_{i=1}^{n} y_i + 1}{\beta}\right)\right]d\theta$, but this integral resembles that of a gamma density with shape parameter $n+\alpha$ and scale parameter $\frac{\beta}{\beta\sum_{i=1}^{n} y_i + 1}$.

Thus, the solution is
$$m(y_1, \ldots, y_n) = \frac{\Gamma(n+\alpha)}{\Gamma(\alpha)\beta^{\alpha}}\left(\frac{\beta}{\beta\sum_{i=1}^{n} y_i + 1}\right)^{n+\alpha}.$$

c. The solution follows from parts (a) and (b) above: the posterior density of θ is gamma with shape parameter $\alpha^{*} = n+\alpha$ and scale parameter $\beta^{*} = \frac{\beta}{\beta\sum_{i=1}^{n} y_i + 1}$.
d. Using the result in Ex. 4.111,
$$\hat{\mu}_B = E(\mu \mid \mathbf{Y}) = E(1/\theta \mid \mathbf{Y}) = \frac{1}{\beta^{*}(\alpha^{*}-1)} = \left[(n+\alpha-1)\frac{\beta}{\beta\sum_{i=1}^{n} Y_i + 1}\right]^{-1} = \frac{\sum_{i=1}^{n} Y_i}{n+\alpha-1} + \frac{1}{\beta(n+\alpha-1)}.$$

e. The prior mean for 1/θ is $E(1/\theta) = \frac{1}{\beta(\alpha-1)}$ (again by Ex. 4.111). Thus, $\hat{\mu}_B$ can be written as
$$\hat{\mu}_B = \overline{Y}\left(\frac{n}{n+\alpha-1}\right) + \frac{1}{\beta(\alpha-1)}\left(\frac{\alpha-1}{n+\alpha-1}\right),$$
which is a weighted average of the MLE and the prior mean.
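
The two forms of the estimator can be checked against each other numerically; the sketch below reuses the values of Ex. 16.18 (n = 15, α = 2.3, β = 0.4, sum of the yᵢ = 30.27):

n <- 15; a <- 2.3; b <- 0.4; sum.y <- 30.27
(b * sum.y + 1) / (b * (n + a - 1))             # direct form of mu.hat.B
(sum.y / n) * (n / (n + a - 1)) +
  (1 / (b * (a - 1))) * ((a - 1) / (n + a - 1)) # weighted-average form; same value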
f. We know that $\overline{Y}$ is unbiased; thus $E(\overline{Y}) = \mu = 1/\theta$. Therefore,
$$E(\hat{\mu}_B) = E(\overline{Y})\left(\frac{n}{n+\alpha-1}\right) + \frac{1}{\beta(\alpha-1)}\left(\frac{\alpha-1}{n+\alpha-1}\right) = \frac{1}{\theta}\left(\frac{n}{n+\alpha-1}\right) + \frac{1}{\beta(\alpha-1)}\left(\frac{\alpha-1}{n+\alpha-1}\right).$$
Therefore, $\hat{\mu}_B$ is biased. However, it is asymptotically unbiased since
$$E(\hat{\mu}_B) - 1/\theta \to 0.$$
Also,


Chapter 16: Introduction to Bayesian Methods of Inference

329
Instructor’s Solutions Manual

$$V(\hat{\mu}_B) = V(\overline{Y})\left(\frac{n}{n+\alpha-1}\right)^2 = \frac{1}{\theta^2 n}\left(\frac{n}{n+\alpha-1}\right)^2 = \frac{n}{\theta^2(n+\alpha-1)^2} \to 0.$$
So, $\hat{\mu}_B \xrightarrow{p} 1/\theta$ and thus it is consistent.

16.11 a. The joint density of U and λ is

$$f(u, \lambda) = p(u \mid \lambda)\,g(\lambda) = \frac{(n\lambda)^{u}\exp(-n\lambda)}{u!}\times\frac{1}{\Gamma(\alpha)\beta^{\alpha}}\,\lambda^{\alpha-1}\exp(-\lambda/\beta) = \frac{n^{u}}{u!\,\Gamma(\alpha)\beta^{\alpha}}\,\lambda^{u+\alpha-1}\exp(-n\lambda-\lambda/\beta)$$
$$= \frac{n^{u}}{u!\,\Gamma(\alpha)\beta^{\alpha}}\,\lambda^{u+\alpha-1}\exp\left[-\lambda\left(\frac{n\beta+1}{\beta}\right)\right].$$

b. $m(u) = \frac{n^{u}}{u!\,\Gamma(\alpha)\beta^{\alpha}}\int_0^{\infty}\lambda^{u+\alpha-1}\exp\left[-\lambda\left(\frac{n\beta+1}{\beta}\right)\right]d\lambda$, but this integral resembles that of a gamma density with shape parameter $u+\alpha$ and scale parameter $\frac{\beta}{n\beta+1}$. Thus, the solution is
$$m(u) = \frac{n^{u}}{u!\,\Gamma(\alpha)\beta^{\alpha}}\,\Gamma(u+\alpha)\left(\frac{\beta}{n\beta+1}\right)^{u+\alpha}.$$

c. The result follows from parts (a) and (b) above: the posterior density of λ is gamma with shape parameter $\alpha^{*} = u+\alpha$ and scale parameter $\beta^{*} = \frac{\beta}{n\beta+1}$.

d. $\hat{\lambda}_B = E(\lambda \mid U) = \alpha^{*}\beta^{*} = (U+\alpha)\left(\dfrac{\beta}{n\beta+1}\right)$.

e. The prior mean for λ is E(λ) = αβ. From the above,
$$\hat{\lambda}_B = \left(\sum_{i=1}^{n} Y_i + \alpha\right)\left(\frac{\beta}{n\beta+1}\right) = \overline{Y}\left(\frac{n\beta}{n\beta+1}\right) + \alpha\beta\left(\frac{1}{n\beta+1}\right),$$
which is a weighted average of the MLE and the prior mean.
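
Again the direct and weighted-average forms agree; a sketch reusing the values of Ex. 16.19 (n = 25, α = 2, β = 3, sum of the yᵢ = 174):

n <- 25; a <- 2; b <- 3; sum.y <- 174
(sum.y + a) * b / (n * b + 1)                             # direct form of lambda.hat.B
(sum.y / n) * (n * b / (n * b + 1)) + a * b / (n * b + 1) # weighted-average form; same value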

f. We know that $\overline{Y}$ is unbiased; thus $E(\overline{Y}) = \lambda$. Therefore,
$$E(\hat{\lambda}_B) = E(\overline{Y})\left(\frac{n\beta}{n\beta+1}\right) + \alpha\beta\left(\frac{1}{n\beta+1}\right) = \lambda\left(\frac{n\beta}{n\beta+1}\right) + \alpha\beta\left(\frac{1}{n\beta+1}\right).$$
So, $\hat{\lambda}_B$ is biased but it is asymptotically unbiased since
$$E(\hat{\lambda}_B) - \lambda \to 0.$$
Also,


$$V(\hat{\lambda}_B) = V(\overline{Y})\left(\frac{n\beta}{n\beta+1}\right)^2 = \frac{\lambda}{n}\left(\frac{n\beta}{n\beta+1}\right)^2 = \frac{\lambda n\beta^2}{(n\beta+1)^2} \to 0.$$
So, $\hat{\lambda}_B \xrightarrow{p} \lambda$ and thus it is consistent.

16.12 First, it is given that $W = vU = v\sum_{i=1}^{n}(Y_i - \mu_0)^2$ is chi-square with n degrees of freedom. Then, the density function for U (conditioned on v) is given by
$$f_U(u \mid v) = v\,f_W(uv) = v\,\frac{1}{\Gamma(n/2)2^{n/2}}(uv)^{n/2-1}e^{-uv/2} = \frac{1}{\Gamma(n/2)2^{n/2}}\,u^{n/2-1}v^{n/2}e^{-uv/2}.$$
a. The joint density of U and v is then
$$f(u, v) = f_U(u \mid v)\,g(v) = \frac{1}{\Gamma(n/2)2^{n/2}}\,u^{n/2-1}v^{n/2}\exp(-uv/2)\times\frac{1}{\Gamma(\alpha)\beta^{\alpha}}\,v^{\alpha-1}\exp(-v/\beta)$$
$$= \frac{1}{\Gamma(n/2)\Gamma(\alpha)2^{n/2}\beta^{\alpha}}\,u^{n/2-1}v^{n/2+\alpha-1}\exp(-uv/2 - v/\beta) = \frac{1}{\Gamma(n/2)\Gamma(\alpha)2^{n/2}\beta^{\alpha}}\,u^{n/2-1}v^{n/2+\alpha-1}\exp\left[-v\left(\frac{u\beta+2}{2\beta}\right)\right].$$

b. $m(u) = \frac{u^{n/2-1}}{\Gamma(n/2)\Gamma(\alpha)2^{n/2}\beta^{\alpha}}\int_0^{\infty}v^{n/2+\alpha-1}\exp\left[-v\left(\frac{u\beta+2}{2\beta}\right)\right]dv$, but this integral resembles that of a gamma density with shape parameter $n/2+\alpha$ and scale parameter $\frac{2\beta}{u\beta+2}$. Thus, the solution is
$$m(u) = \frac{u^{n/2-1}}{\Gamma(n/2)\Gamma(\alpha)2^{n/2}\beta^{\alpha}}\,\Gamma(n/2+\alpha)\left(\frac{2\beta}{u\beta+2}\right)^{n/2+\alpha}.$$

c. The result follows from parts (a) and (b) above: the posterior density of v is gamma with shape parameter $\alpha^{*} = n/2+\alpha$ and scale parameter $\beta^{*} = \frac{2\beta}{u\beta+2}$.
d. Using the result in Ex. 4.111(e),
$$\hat{\sigma}^2_B = E(\sigma^2 \mid U) = E(1/v \mid U) = \frac{1}{\beta^{*}(\alpha^{*}-1)} = \frac{1}{n/2+\alpha-1}\left(\frac{U\beta+2}{2\beta}\right) = \frac{U\beta+2}{\beta(n+2\alpha-2)}.$$

e. The prior mean for $\sigma^2 = 1/v$ is $\frac{1}{\beta(\alpha-1)}$. From the above,
$$\hat{\sigma}^2_B = \frac{U\beta+2}{\beta(n+2\alpha-2)} = \frac{U}{n}\left(\frac{n}{n+2\alpha-2}\right) + \frac{1}{\beta(\alpha-1)}\left(\frac{2(\alpha-1)}{n+2\alpha-2}\right),$$
which is a weighted average of U/n and the prior mean.
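
As in the previous two exercises, the two forms agree numerically; a sketch reusing the values of Ex. 16.20 (n = 8, u = .8579, α = 5, β = 2):

n <- 8; u <- .8579; a <- 5; b <- 2
(u * b + 2) / (b * (n + 2 * a - 2))   # direct form of sigma2.hat.B
(u / n) * (n / (n + 2 * a - 2)) +
  (1 / (b * (a - 1))) * (2 * (a - 1) / (n + 2 * a - 2))  # weighted-average form; same value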

16.13 a. (.099, .710)
b. Both probabilities are .025.



c. P(.099 < p < .710) = .95.
d.-g. Answers vary.
h. The credible intervals should decrease in width with larger sample sizes.
16.14 a.-b. Answers vary.
16.15 With y = 4, n = 25, and a beta(1, 3) prior, the posterior distribution for p is beta(5, 24).
Using R, the lower and upper endpoints of the 95% credible interval are given by:
> qbeta(.025,5,24)
[1] 0.06064291
> qbeta(.975,5,24)
[1] 0.3266527

16.16 With y = 4, n = 25, and a beta(1, 1) prior, the posterior distribution for p is beta(5, 22).
Using R, the lower and upper endpoints of the 95% credible interval are given by:
> qbeta(.025,5,22)
[1] 0.06554811
> qbeta(.975,5,22)
[1] 0.3486788

This is a wider interval than what was obtained in Ex. 16.15.
16.17 With y = 6 and a beta(10, 5) prior in the geometric setting of Ex. 16.9, the posterior distribution for p is beta(11, 10). Using R, the lower and upper endpoints of the 80% credible interval for p are given by:
> qbeta(.10,11,10)
[1] 0.3847514
> qbeta(.90,11,10)
[1] 0.6618291

16.18 With n = 15, $\sum_{i=1}^{n} y_i = 30.27$, and a gamma(2.3, 0.4) prior, the posterior distribution for θ is gamma(17.3, .030516). Using R, the lower and upper endpoints of the 80% credible interval for θ are given by
> qgamma(.10,shape=17.3,scale=.0305167)
[1] 0.3731982
> qgamma(.90,shape=17.3,scale=.0305167)
[1] 0.6957321

The 80% credible interval for θ is (.3732, .6957). To create an 80% credible interval for 1/θ, the endpoints of the previous interval can be inverted:
.3732 < θ < .6957
1/(.3732) > 1/θ > 1/(.6957)
Since 1/(.6957) = 1.4374 and 1/(.3732) = 2.6795, the 80% credible interval for 1/θ is (1.4374, 2.6795).
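
The inversion can also be done in one step in R with the same posterior parameters:

> sort(1/qgamma(c(.10,.90),shape=17.3,scale=.0305167))  # approximately (1.4374, 2.6795)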



16.19 With n = 25, $\sum_{i=1}^{n} y_i = 174$, and a gamma(2, 3) prior, the posterior distribution for λ is gamma(176, .0394739). Using R, the lower and upper endpoints of the 95% credible interval for λ are given by
> qgamma(.025,shape=176,scale=.0394739)
[1] 5.958895
> qgamma(.975,shape=176,scale=.0394739)
[1] 8.010663

16.20 With n = 8, u = .8579, and a gamma(5, 2) prior, the posterior distribution for v is
gamma(9, 1.0764842). Using R, the lower and upper endpoints of the 90% credible
interval for v are given by
> qgamma(.05,shape=9,scale=1.0764842)
[1] 5.054338
> qgamma(.95,shape=9,scale=1.0764842)
[1] 15.53867

The 90% credible interval for v is (5.054, 15.539). As in Ex. 16.18, the 90% credible interval for σ² = 1/v is found by inverting the endpoints of the interval for v, which gives (.0644, .1979).
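
As in Ex. 16.18, the inversion can be done in one step in R:

> sort(1/qgamma(c(.05,.95),shape=9,scale=1.0764842))  # approximately (.0644, .1979)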
16.21 From Ex. 16.15, the posterior distribution of p is beta(5, 24). Now, we can find
P*(p ∈ Ω0) = P*(p < .3) by (in R):
> pbeta(.3,5,24)
[1] 0.9525731

Therefore, P*(p ∈ Ωa) = P*(p ≥ .3) = 1 – .9525731 = .0474269. Since the probability
associated with H0 is much larger, our decision is to not reject H0.
16.22 From Ex. 16.16, the posterior distribution of p is beta(5, 22). We can find
P*(p ∈ Ω0) = P*(p < .3) by (in R):
> pbeta(.3,5,22)
[1] 0.9266975

Therefore, P*(p ∈ Ωa) = P*(p ≥ .3) = 1 – .9266975 = .0733025. Since the probability
associated with H0 is much larger, our decision is to not reject H0.
16.23 From Ex. 16.17, the posterior distribution of p is beta(11, 10). Thus,
P*(p ∈ Ω0) = P*(p < .4) is given by (in R):
> pbeta(.4,11,10)
[1] 0.1275212

Therefore, P*(p ∈ Ωa) = P*(p ≥ .4) = 1 – .1275212 = .8724788. Since the probability
associated with Ha is much larger, our decision is to reject H0.
16.24 From Ex. 16.18, the posterior distribution for θ is gamma(17.3, .0305). To test
H0: θ > .5 vs. Ha: θ ≤ .5,
we calculate P*(θ ∈ Ω0) = P*(θ > .5) as:



> 1 - pgamma(.5,shape=17.3,scale=.0305)
[1] 0.5561767

Therefore, P*(θ ∈ Ωa) = P*(θ ≤ .5) = 1 – .5561767 = .4438233. The probability
associated with H0 is larger (but only marginally so), so our decision is to not reject H0.
16.25 From Ex. 16.19, the posterior distribution for λ is gamma(176, .0395). Thus,
P*(λ ∈ Ω0) = P*(λ > 6) is found by
> 1 - pgamma(6,shape=176,scale=.0395)
[1] 0.9700498

Therefore, P*(λ ∈ Ωa) = P*(λ ≤ 6) = 1 – .9700498 = .0299502. Since the probability
associated with H0 is much larger, our decision is to not reject H0.
16.26 From Ex. 16.20, the posterior distribution for v is gamma(9, 1.0765). To test
H0: v < 10 vs. Ha: v ≥ 10,
we calculate P*(v ∈ Ω0) = P*(v < 10) as
> pgamma(10,shape=9,scale=1.0765)   # approximately 0.5818

Therefore, P*(v ∈ Ωa) = P*(v ≥ 10) ≈ 1 – .5818 = .4182. Since the probability
associated with H0 is larger, our decision is to not reject H0.


