
Stochastic Differential Equations, Sixth Edition
Solution of Exercise Problems
Yan Zeng
July 16, 2006
This is a solution manual for the SDE book by Øksendal, Stochastic Differential Equations, Sixth Edition.
It is complementary to the book's own solutions, and can be downloaded at www.math.fsu.edu/~zeng. If you
have any comments or find any typos/errors, please email me at yz44@cornell.edu.
This version omits the problems from the chapters on applications, namely, Chapters 6, 10, 11, and 12. I
hope I will find time at some point to work out these problems.
2.8. b)
Proof. On one hand,

$$E[e^{iuB_t}] = \sum_{k=0}^{\infty}\frac{i^k}{k!}E[B_t^k]u^k.$$

On the other hand, $E[e^{iuB_t}] = e^{-tu^2/2} = \sum_{k=0}^{\infty}\frac{1}{k!}\Big(-\frac{t}{2}\Big)^k u^{2k}$. Comparing the coefficients of $u^{2k}$ gives $\frac{(-1)^k}{(2k)!}E[B_t^{2k}] = \frac{1}{k!}\big(-\frac{t}{2}\big)^k$, so

$$E[B_t^{2k}] = \frac{(2k)!}{k!\cdot 2^k}\,t^k.$$
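As a quick sanity check (an addition of mine, not part of the original solution), the coefficient $(2k)!/(k!\,2^k)$ should agree with the odd double factorial $(2k-1)!!$, the classical formula for even Gaussian moments $E[B_t^{2k}] = (2k-1)!!\,t^k$. A minimal sketch:

```python
from math import factorial

def moment_coeff(k):
    # E[B_t^{2k}] = coeff * t^k, from the characteristic-function expansion above
    return factorial(2 * k) // (factorial(k) * 2 ** k)

def double_factorial_odd(k):
    # (2k-1)!! = 1 * 3 * 5 * ... * (2k-1), the usual Gaussian moment formula
    result = 1
    for m in range(1, 2 * k, 2):
        result *= m
    return result

for k in range(8):
    assert moment_coeff(k) == double_factorial_odd(k)

print([moment_coeff(k) for k in range(1, 5)])  # [1, 3, 15, 105]
```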

d)
Proof.

$$E^x[|B_t - B_s|^4] = \sum_{i=1}^{n}E^x[(B_t^{(i)} - B_s^{(i)})^4] + \sum_{i\ne j}E^x[(B_t^{(i)} - B_s^{(i)})^2(B_t^{(j)} - B_s^{(j)})^2]$$

$$= n\cdot\frac{4!}{2!\cdot 4}(t-s)^2 + n(n-1)(t-s)^2 = n(n+2)(t-s)^2.$$
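The constant $n(n+2)$ can be checked by pure moment bookkeeping: for an $n$-dimensional standard normal $Z$, $E|Z|^4 = \sum_i E[Z_i^4] + \sum_{i\ne j}E[Z_i^2 Z_j^2] = 3n + n(n-1)$. A short script (added here as a sanity check, not in the original):

```python
def fourth_moment(n):
    # E|Z|^4 for Z ~ N(0, I_n): n terms E[Z_i^4] = 3 plus n(n-1) terms E[Z_i^2 Z_j^2] = 1
    return n * 3 + n * (n - 1) * 1

for n in range(1, 10):
    assert fourth_moment(n) == n * (n + 2)
```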

2.11.
Proof. Prove that the increments are independent and stationary, with Gaussian distribution. Note that for
Gaussian random variables, uncorrelatedness is equivalent to independence.
2.15.
Proof. Since $B_t - B_s \perp \mathcal{F}_s := \sigma(B_u : u\le s)$, $U(B_t - B_s) \perp \mathcal{F}_s$. Note $U(B_t - B_s) \overset{d}{=} N(0, t-s)$.
3.2.



Proof. WLOG, we assume $t = 1$. Then

$$B_1^3 = \sum_{j=1}^{n}\big(B_{j/n}^3 - B_{(j-1)/n}^3\big) = \sum_{j=1}^{n}\big[(B_{j/n} - B_{(j-1)/n})^3 + 3B_{(j-1)/n}B_{j/n}(B_{j/n} - B_{(j-1)/n})\big]$$

$$= \sum_{j=1}^{n}(B_{j/n} - B_{(j-1)/n})^3 + \sum_{j=1}^{n}3B_{(j-1)/n}^2(B_{j/n} - B_{(j-1)/n}) + \sum_{j=1}^{n}3B_{(j-1)/n}(B_{j/n} - B_{(j-1)/n})^2 := I + II + III,$$

using the identity $b^3 - a^3 = (b-a)^3 + 3ab(b-a)$ and $3ab(b-a) = 3a^2(b-a) + 3a(b-a)^2$.

By Problem EP1-1 and the continuity of Brownian motion,

$$|I| \le \Big[\sum_{j=1}^{n}(B_{j/n} - B_{(j-1)/n})^2\Big]\max_{1\le j\le n}|B_{j/n} - B_{(j-1)/n}| \to 0 \quad\text{a.s.}$$

To argue $II \to 3\int_0^1 B_t^2\,dB_t$ as $n\to\infty$, set $B_t^{(n)} = \sum_{j=1}^n B_{(j-1)/n}1_{\{(j-1)/n < t\le j/n\}}$; it suffices to show $E[\int_0^1\big((B_t^{(n)})^2 - B_t^2\big)^2dt]\to0$. Indeed,

$$E\Big[\int_0^1\big(B_t^2 - (B_t^{(n)})^2\big)^2\,dt\Big] = \sum_{j=1}^{n}\int_{(j-1)/n}^{j/n}E\big[(B_t^2 - B_{(j-1)/n}^2)^2\big]\,dt.$$

We note $(B_t^2 - B_{(j-1)/n}^2)^2$ is equal to

$$(B_t - B_{(j-1)/n})^4 + 4(B_t - B_{(j-1)/n})^3 B_{(j-1)/n} + 4(B_t - B_{(j-1)/n})^2 B_{(j-1)/n}^2,$$

so $E[(B_t^2 - B_{(j-1)/n}^2)^2] = 3\big(t - \frac{j-1}{n}\big)^2 + 4\big(t - \frac{j-1}{n}\big)\frac{j-1}{n}$ (the middle term has zero expectation), and

$$E\Big[\int_0^1\big(B_t^2 - (B_t^{(n)})^2\big)^2dt\Big] = \sum_{j=1}^{n}\int_{(j-1)/n}^{j/n}\Big[3\Big(t-\frac{j-1}{n}\Big)^2 + 4\Big(t-\frac{j-1}{n}\Big)\frac{j-1}{n}\Big]dt = \sum_{j=1}^{n}\frac{2j-1}{n^3} \to 0.$$

To argue $III \to 3\int_0^1 B_t\,dt$ as $n\to\infty$, since $\sum_{j=1}^n B_{(j-1)/n}\big(\frac jn - \frac{j-1}{n}\big) \to \int_0^1 B_t\,dt$ a.s., it suffices to prove

$$\sum_{j=1}^{n}B_{(j-1)/n}\Big[(B_{j/n} - B_{(j-1)/n})^2 - \frac1n\Big] \to 0 \quad\text{a.s.}$$

By looking at a subsequence, we only need to prove the $L^2$-convergence. Indeed, since the cross terms vanish by independence of increments,

$$E\Big[\Big(\sum_{j=1}^{n}B_{(j-1)/n}\big[(B_{j/n} - B_{(j-1)/n})^2 - \tfrac1n\big]\Big)^2\Big] = \sum_{j=1}^{n}E\Big[B_{(j-1)/n}^2\big[(B_{j/n} - B_{(j-1)/n})^2 - \tfrac1n\big]^2\Big]$$

$$= \sum_{j=1}^{n}\frac{j-1}{n}\,E\Big[(B_{j/n} - B_{(j-1)/n})^4 - \frac2n(B_{j/n} - B_{(j-1)/n})^2 + \frac1{n^2}\Big] = \sum_{j=1}^{n}\frac{j-1}{n}\Big(\frac{3}{n^2} - \frac{2}{n^2} + \frac{1}{n^2}\Big) = \sum_{j=1}^{n}\frac{2(j-1)}{n^3} \to 0$$

as $n\to\infty$. This completes our proof.
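The decomposition $B_1^3 = I + II + III$ is an exact algebraic identity along every path, which makes it easy to check numerically. The sketch below (my addition; `n` and the seed are arbitrary choices) builds a discrete Brownian path and verifies that the three sums reconstruct $B_1^3$ to floating-point accuracy:

```python
import random
import math

random.seed(42)
n = 1000
# discrete Brownian path on [0, 1]: B[j] approximates B_{j/n}, j = 0..n
B = [0.0]
for _ in range(n):
    B.append(B[-1] + random.gauss(0.0, math.sqrt(1.0 / n)))

I = sum((B[j] - B[j - 1]) ** 3 for j in range(1, n + 1))
II = sum(3 * B[j - 1] ** 2 * (B[j] - B[j - 1]) for j in range(1, n + 1))
III = sum(3 * B[j - 1] * (B[j] - B[j - 1]) ** 2 for j in range(1, n + 1))

# exact telescoping identity b^3 - a^3 = (b-a)^3 + 3a^2(b-a) + 3a(b-a)^2, up to rounding
assert abs((I + II + III) - B[n] ** 3) < 1e-10
```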
3.9.
Proof. We first note that

$$\sum_j B_{\frac{t_j+t_{j+1}}{2}}(B_{t_{j+1}} - B_{t_j}) = \sum_j\Big[B_{\frac{t_j+t_{j+1}}{2}}\big(B_{t_{j+1}} - B_{\frac{t_j+t_{j+1}}{2}}\big) + B_{t_j}\big(B_{\frac{t_j+t_{j+1}}{2}} - B_{t_j}\big)\Big] + \sum_j\big(B_{\frac{t_j+t_{j+1}}{2}} - B_{t_j}\big)^2.$$

The first term converges in $L^2(P)$ to $\int_0^T B_t\,dB_t$. For the second term, we note

$$E\Big[\Big(\sum_j\big(B_{\frac{t_j+t_{j+1}}{2}} - B_{t_j}\big)^2 - \frac T2\Big)^2\Big] = E\Big[\Big(\sum_j\Big[\big(B_{\frac{t_j+t_{j+1}}{2}} - B_{t_j}\big)^2 - \frac{t_{j+1}-t_j}{2}\Big]\Big)^2\Big]$$

$$= \sum_{j,k}E\Big[\Big(\big(B_{\frac{t_j+t_{j+1}}{2}} - B_{t_j}\big)^2 - \frac{t_{j+1}-t_j}{2}\Big)\Big(\big(B_{\frac{t_k+t_{k+1}}{2}} - B_{t_k}\big)^2 - \frac{t_{k+1}-t_k}{2}\Big)\Big]$$

$$= \sum_j E\Big[\Big(B_{\frac{t_{j+1}-t_j}{2}}^2 - \frac{t_{j+1}-t_j}{2}\Big)^2\Big] = \sum_j 2\Big(\frac{t_{j+1}-t_j}{2}\Big)^2 \le \frac T2\max_{1\le j\le n}|t_{j+1}-t_j| \to 0,$$

since $E[(B_t^2 - t)^2] = E[B_t^4 - 2tB_t^2 + t^2] = 3(E[B_t^2])^2 - 2t^2 + t^2 = 2t^2$. So

$$\sum_j B_{\frac{t_j+t_{j+1}}{2}}(B_{t_{j+1}} - B_{t_j}) \to \int_0^T B_t\,dB_t + \frac T2 = \frac12B_T^2 \quad\text{in } L^2(P).$$
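The extra $T/2$ term can be seen concretely by comparing midpoint-evaluated Riemann sums against $B_T^2/2$ by simulation. This is a statistical illustration I am adding (seed, step count, and tolerance are arbitrary choices), not part of the original manual:

```python
import random
import math

random.seed(0)

def midpoint_sum(n, T=1.0):
    """Sum of B_{(t_j+t_{j+1})/2} (B_{t_{j+1}} - B_{t_j}) on a uniform grid of n steps."""
    dt = T / n
    B = 0.0
    total = 0.0
    for _ in range(n):
        # split each step in two so the midpoint value is available
        half1 = random.gauss(0.0, math.sqrt(dt / 2))
        half2 = random.gauss(0.0, math.sqrt(dt / 2))
        mid = B + half1
        total += mid * (half1 + half2)
        B = mid + half2
    return total, B

# L^2-type check over independent paths: midpoint sums should be close to B_T^2 / 2
errors = []
for _ in range(200):
    s, BT = midpoint_sum(512)
    errors.append((s - 0.5 * BT ** 2) ** 2)
mse = sum(errors) / len(errors)
assert mse < 0.05  # theory: the L^2 error vanishes as the mesh goes to 0
```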


3.10.
Proof. According to the result of Exercise 3.9, it suffices to show

$$E\Big[\Big|\sum_j f(\hat t_j,\omega)\Delta B_j - \sum_j f(t_j,\omega)\Delta B_j\Big|\Big] \to 0.$$

Indeed, note

$$E\Big[\Big|\sum_j f(\hat t_j,\omega)\Delta B_j - \sum_j f(t_j,\omega)\Delta B_j\Big|\Big] \le \sum_j E[|f(\hat t_j) - f(t_j)||\Delta B_j|] \le \sum_j\sqrt{E[|f(\hat t_j) - f(t_j)|^2]\,E[|\Delta B_j|^2]}$$

$$\le \sum_j\sqrt{K|\hat t_j - t_j|^{1+\epsilon}}\,|t_{j+1}-t_j|^{\frac12} \le \sqrt K\sum_j|t_{j+1}-t_j|^{1+\frac\epsilon2} \le \sqrt K\,T\max_{1\le j\le n}|t_{j+1}-t_j|^{\frac\epsilon2} \to 0,$$

since $|\hat t_j - t_j| \le t_{j+1} - t_j$.

3.11.
Proof. Assume $W$ is continuous. Then by the bounded convergence theorem, $\lim_{s\to t}E[(W_t^{(N)} - W_s^{(N)})^2] = 0$, where $W_t^{(N)} := (-N)\vee(W_t\wedge N)$ denotes truncation at level $N$. Since $W_s$ and $W_t$ are independent and identically distributed, so are $W_s^{(N)}$ and $W_t^{(N)}$. Hence

$$E[(W_t^{(N)} - W_s^{(N)})^2] = E[(W_t^{(N)})^2] - 2E[W_t^{(N)}]E[W_s^{(N)}] + E[(W_s^{(N)})^2] = 2\big(E[(W_t^{(N)})^2] - E[W_t^{(N)}]^2\big).$$

Since the RHS $= 2\mathrm{Var}(W_t^{(N)})$ is independent of $s$, we must have RHS $= 0$, i.e. $W_t^{(N)} = E[W_t^{(N)}]$ a.s. Let $N\to\infty$ and apply the dominated convergence theorem to $E[W_t^{(N)}]$; we get $W_t = E[W_t] = 0$. Therefore $W_\cdot \equiv 0$.
3.18.
Proof. If $t > s$, then

$$E\Big[\frac{M_t}{M_s}\Big|\mathcal{F}_s\Big] = E\big[e^{\sigma(B_t-B_s) - \frac12\sigma^2(t-s)}\big|\mathcal{F}_s\big] = \frac{E[e^{\sigma B_{t-s}}]}{e^{\frac12\sigma^2(t-s)}} = 1.$$

The second equality is due to the fact that $B_t - B_s$ is independent of $\mathcal{F}_s$.
4.4.
Proof. For part a), set $g(t,x) = e^x$ and use Theorem 4.12. For part b), it comes from the fundamental
property of the Itô integral: the Itô integral preserves the martingale property for integrands in $\mathcal{V}$.
Comments: The power of Itô's formula is that it gives martingales, which vanish under expectation.
4.5.



Proof.

$$B_t^k = \int_0^t kB_s^{k-1}\,dB_s + \frac{k(k-1)}{2}\int_0^t B_s^{k-2}\,ds.$$

Therefore,

$$\beta_k(t) = \frac{k(k-1)}{2}\int_0^t\beta_{k-2}(s)\,ds.$$

This gives $E[B_t^4]$ and $E[B_t^6]$. For part b), prove by induction.
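The recursion can be unwound mechanically: writing $\beta_k(t) = c_k t^{k/2}$ for even $k$, it gives $c_k = (k-1)c_{k-2}$, i.e. $c_k = (k-1)!!$. A small sketch (my addition) computing the coefficient from the recursion and checking $E[B_t^4] = 3t^2$, $E[B_t^6] = 15t^3$:

```python
from fractions import Fraction

def beta_coeff(k):
    """Coefficient c_k in beta_k(t) = E[B_t^k] = c_k * t^(k/2) (k even), via
    beta_k(t) = (k(k-1)/2) * integral_0^t beta_{k-2}(s) ds."""
    if k == 0:
        return Fraction(1)
    # integrating c * s^(k/2 - 1) from 0 to t gives c * t^(k/2) / (k/2)
    return Fraction(k * (k - 1), 2) * beta_coeff(k - 2) / Fraction(k, 2)

assert beta_coeff(2) == 1
assert beta_coeff(4) == 3    # E[B_t^4] = 3 t^2
assert beta_coeff(6) == 15   # E[B_t^6] = 15 t^3
assert beta_coeff(8) == 105
```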
4.6. (b)
Proof. Apply Theorem 4.12 with $g(t,x) = e^x$ and $X_t = ct + \sum_{j=1}^n\alpha_jB_j(t)$. Note $\sum_{j=1}^n\alpha_jB_j$ is a BM, up to a constant coefficient.
4.7. (a)
Proof. $v \equiv I_{n\times n}$.
(b)
Proof. Using the integration by parts formula (Exercise 4.3), we have

$$X_t^2 = X_0^2 + 2\int_0^t X_s\,dX_s + \int_0^t|v_s|^2\,ds = X_0^2 + 2\int_0^t X_sv_s\,dB_s + \int_0^t|v_s|^2\,ds.$$

So $M_t = X_0^2 + 2\int_0^t X_sv_s\,dB_s$. Let $C$ be a bound for $|v|$; then

$$E\Big[\int_0^t|X_sv_s|^2\,ds\Big] \le C^2E\Big[\int_0^t|X_s|^2\,ds\Big] = C^2\int_0^t E\Big[\Big(\int_0^s v_u\,dB_u\Big)^2\Big]ds = C^2\int_0^t E\Big[\int_0^s|v_u|^2\,du\Big]ds \le \frac{C^4t^2}{2}.$$

So $M_t$ is a martingale.
4.12.
Proof. Let $Y_t = \int_0^t u(s,\omega)\,ds$. Then $Y$ is a continuous $\{\mathcal{F}_t\}$-martingale with finite variation. On one hand,

$$\langle Y\rangle_t = \lim_{\Delta t_k\to0}\sum_{t_k\le t}|Y_{t_{k+1}} - Y_{t_k}|^2 \le \lim_{\Delta t_k\to0}(\text{total variation of }Y\text{ on }[0,t])\cdot\max_{t_k}|Y_{t_{k+1}} - Y_{t_k}| = 0.$$

On the other hand, the integration by parts formula yields

$$Y_t^2 = 2\int_0^t Y_s\,dY_s + \langle Y\rangle_t.$$

So $Y_t^2$ is a local martingale. If $(T_n)_n$ is a localizing sequence of stopping times, by Fatou's lemma,

$$E[Y_t^2] \le \liminf_n E[Y_{t\wedge T_n}^2] = E[Y_0^2] = 0.$$

So $Y_\cdot \equiv 0$. Taking derivatives, we conclude $u = 0$.
4.16. (a)
Proof. Use Jensen's inequality for conditional expectations.
(b)
Proof. (i) $Y = B_T^2 = T + 2\int_0^T B_s\,dB_s$. So $M_t = T + 2\int_0^t B_s\,dB_s$.
(ii) $B_T^3 = \int_0^T 3B_s^2\,dB_s + 3\int_0^T B_s\,ds = 3\int_0^T B_s^2\,dB_s + 3\big(B_TT - \int_0^T s\,dB_s\big)$. So

$$M_t = 3\int_0^t B_s^2\,dB_s + 3TB_t - 3\int_0^t s\,dB_s = \int_0^t 3\big(B_s^2 + (T-s)\big)\,dB_s.$$

(iii) $M_t = E[\exp(\sigma B_T)|\mathcal{F}_t] = E[\exp(\sigma B_T - \frac12\sigma^2T)|\mathcal{F}_t]\exp(\frac12\sigma^2T) = Z_t\exp(\frac12\sigma^2T)$, where $Z_t = \exp(\sigma B_t - \frac12\sigma^2t)$. Since $Z$ solves the SDE $dZ_t = Z_t\sigma\,dB_t$, we have

$$M_t = \Big(1 + \int_0^t Z_s\sigma\,dB_s\Big)\exp\Big(\frac12\sigma^2T\Big) = \exp\Big(\frac12\sigma^2T\Big) + \int_0^t\sigma\exp\Big(\sigma B_s + \frac12\sigma^2(T-s)\Big)\,dB_s.$$

5.1. (ii)
Proof. Set $f(t,x) = x/(1+t)$; then by Itô's formula, we have

$$dX_t = df(t,B_t) = -\frac{B_t}{(1+t)^2}\,dt + \frac{dB_t}{1+t} = -\frac{X_t}{1+t}\,dt + \frac{dB_t}{1+t}.$$

(iii)
Proof. By Itô's formula, $dX_t = \cos B_t\,dB_t - \frac12\sin B_t\,dt$. So $X_t = \int_0^t\cos B_s\,dB_s - \frac12\int_0^t X_s\,ds$. Let $\tau = \inf\{s > 0 : B_s \notin [-\frac\pi2,\frac\pi2]\}$. Then

$$X_{t\wedge\tau} = \int_0^{t\wedge\tau}\cos B_s\,dB_s - \frac12\int_0^{t\wedge\tau}X_s\,ds = \int_0^t\cos B_s1_{\{s\le\tau\}}\,dB_s - \frac12\int_0^{t\wedge\tau}X_s\,ds$$

$$= \int_0^t\sqrt{1-\sin^2B_s}\,1_{\{s\le\tau\}}\,dB_s - \frac12\int_0^{t\wedge\tau}X_s\,ds = \int_0^{t\wedge\tau}\sqrt{1-X_s^2}\,dB_s - \frac12\int_0^{t\wedge\tau}X_s\,ds,$$

where we used $\cos B_s \ge 0$ for $s\le\tau$. So for $t < \tau$, $X_t = \int_0^t\sqrt{1-X_s^2}\,dB_s - \frac12\int_0^t X_s\,ds$.

(iv)
Proof. $dX_t^1 = dt$ is obvious. Set $f(t,x) = e^tx$; then

$$dX_t^2 = df(t,B_t) = e^tB_t\,dt + e^t\,dB_t = X_t^2\,dt + e^t\,dB_t.$$

5.3.
Proof. Apply Itô's formula to $e^{-rt}X_t$.
5.5. (a)
Proof. $d(e^{-\mu t}X_t) = -\mu e^{-\mu t}X_t\,dt + e^{-\mu t}\,dX_t = \sigma e^{-\mu t}\,dB_t$. So $X_t = e^{\mu t}X_0 + \int_0^t\sigma e^{\mu(t-s)}\,dB_s$.
(b)


Proof. $E[X_t] = e^{\mu t}E[X_0]$ and

$$X_t^2 = e^{2\mu t}X_0^2 + \sigma^2e^{2\mu t}\Big(\int_0^te^{-\mu s}\,dB_s\Big)^2 + 2\sigma e^{2\mu t}X_0\int_0^te^{-\mu s}\,dB_s.$$

So, since $\int_0^te^{-\mu s}\,dB_s$ is a martingale vanishing at time 0,

$$E[X_t^2] = e^{2\mu t}E[X_0^2] + \sigma^2e^{2\mu t}\int_0^te^{-2\mu s}\,ds = e^{2\mu t}E[X_0^2] + \sigma^2e^{2\mu t}\,\frac{1-e^{-2\mu t}}{2\mu}.$$

So $\mathrm{Var}[X_t] = E[X_t^2] - (E[X_t])^2 = e^{2\mu t}\mathrm{Var}[X_0] + \sigma^2\,\frac{e^{2\mu t}-1}{2\mu}$.
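The closed-form variance can be cross-checked against the Itô isometry integral by simple numerical quadrature (a sketch I am adding; the values of $\mu$, $\sigma$, $t$ are arbitrary):

```python
import math

def var_quadrature(mu, sigma, t, n=200000):
    # Ito isometry (X_0 deterministic): Var[X_t] = sigma^2 e^{2 mu t} * int_0^t e^{-2 mu s} ds
    h = t / n
    # midpoint rule for int_0^t e^{-2 mu s} ds
    integral = sum(math.exp(-2 * mu * (i + 0.5) * h) for i in range(n)) * h
    return sigma ** 2 * math.exp(2 * mu * t) * integral

def var_closed_form(mu, sigma, t):
    return sigma ** 2 * (math.exp(2 * mu * t) - 1) / (2 * mu)

mu, sigma, t = 0.7, 1.3, 2.0
assert abs(var_quadrature(mu, sigma, t) - var_closed_form(mu, sigma, t)) < 1e-6
```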

5.6.
Proof. We find the integrating factor $F_t$ as follows. Suppose $F_t$ satisfies the SDE $dF_t = \theta_t\,dt + \gamma_t\,dB_t$. Then

$$d(F_tY_t) = F_t\,dY_t + Y_t\,dF_t + dY_t\,dF_t = F_t(r\,dt + \alpha Y_t\,dB_t) + Y_t(\theta_t\,dt + \gamma_t\,dB_t) + \alpha\gamma_tY_t\,dt$$

$$= (rF_t + \theta_tY_t + \alpha\gamma_tY_t)\,dt + (\alpha F_tY_t + \gamma_tY_t)\,dB_t. \tag{1}$$

Solving the equation system

$$\begin{cases}\theta_t + \alpha\gamma_t = 0\\ \alpha F_t + \gamma_t = 0,\end{cases}$$

we get $\gamma_t = -\alpha F_t$ and $\theta_t = \alpha^2F_t$. So $dF_t = \alpha^2F_t\,dt - \alpha F_t\,dB_t$. To find $F_t$, set $Z_t = e^{-\alpha^2t}F_t$; then

$$dZ_t = -\alpha^2e^{-\alpha^2t}F_t\,dt + e^{-\alpha^2t}\,dF_t = e^{-\alpha^2t}(-\alpha)F_t\,dB_t = -\alpha Z_t\,dB_t.$$

Hence $Z_t = Z_0\exp(-\alpha B_t - \alpha^2t/2)$. So

$$F_t = e^{\alpha^2t}F_0e^{-\alpha B_t - \frac12\alpha^2t} = F_0e^{-\alpha B_t + \frac12\alpha^2t}.$$

Choose $F_0 = 1$ and plug it back into equation (1); we have $d(F_tY_t) = rF_t\,dt$. So

$$Y_t = F_t^{-1}\Big(F_0Y_0 + r\int_0^tF_s\,ds\Big) = Y_0e^{\alpha B_t - \frac12\alpha^2t} + r\int_0^te^{\alpha(B_t-B_s) - \frac12\alpha^2(t-s)}\,ds.$$
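As a pathwise sanity check (my addition, with arbitrary parameters and seed), one can integrate $dY_t = r\,dt + \alpha Y_t\,dB_t$ by the Euler–Maruyama scheme on a fine grid and compare with the closed form above, discretizing the $ds$-integral on the same grid; the strong error is $O(\sqrt{\Delta t})$, so the tolerance is deliberately loose:

```python
import random
import math

random.seed(1)
r, alpha, Y0, T = 0.5, 0.2, 1.0, 1.0
n = 20000
dt = T / n

# one Brownian path
dB = [random.gauss(0.0, math.sqrt(dt)) for _ in range(n)]
B = [0.0]
for d in dB:
    B.append(B[-1] + d)

# Euler-Maruyama for dY = r dt + alpha * Y dB
Y = Y0
for i in range(n):
    Y += r * dt + alpha * Y * dB[i]

# closed form: Y_T = Y0 e^{alpha B_T - alpha^2 T/2} + r int_0^T e^{alpha(B_T-B_s) - alpha^2 (T-s)/2} ds
BT = B[-1]
integral = sum(
    math.exp(alpha * (BT - B[i]) - 0.5 * alpha ** 2 * (T - i * dt)) * dt
    for i in range(n)
)
Y_closed = Y0 * math.exp(alpha * BT - 0.5 * alpha ** 2 * T) + r * integral

assert abs(Y - Y_closed) < 0.05
```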

5.7. (a)
Proof. $d(e^tX_t) = e^t(X_t\,dt + dX_t) = e^t(m\,dt + \sigma\,dB_t)$. So

$$X_t = e^{-t}X_0 + m(1-e^{-t}) + \sigma e^{-t}\int_0^te^s\,dB_s.$$

(b)
Proof. $E[X_t] = e^{-t}E[X_0] + m(1-e^{-t})$ and

$$E[X_t^2] = E[(e^{-t}X_0 + m(1-e^{-t}))^2] + \sigma^2e^{-2t}E\Big[\int_0^te^{2s}\,ds\Big] = e^{-2t}E[X_0^2] + 2m(1-e^{-t})e^{-t}E[X_0] + m^2(1-e^{-t})^2 + \frac12\sigma^2(1-e^{-2t}).$$

Hence $\mathrm{Var}[X_t] = E[X_t^2] - (E[X_t])^2 = e^{-2t}\mathrm{Var}[X_0] + \frac12\sigma^2(1-e^{-2t})$.
5.9.
Proof. Let $b(t,x) = \log(1+x^2)$ and $\sigma(t,x) = 1_{\{x>0\}}x$; then

$$|b(t,x)| + |\sigma(t,x)| \le \log(1+x^2) + |x|.$$

Note $\log(1+x^2)/|x|$ is continuous on $\mathbb{R}\setminus\{0\}$ and has limit 0 as $x\to0$ and as $|x|\to\infty$, so it is bounded on $\mathbb{R}$. Therefore, there exists a constant $C$ such that

$$|b(t,x)| + |\sigma(t,x)| \le C(1+|x|).$$

Also, by the mean value theorem,

$$|b(t,x) - b(t,y)| + |\sigma(t,x) - \sigma(t,y)| \le \frac{2|\xi|}{1+\xi^2}|x-y| + |1_{\{x>0\}}x - 1_{\{y>0\}}y|$$

for some $\xi$ between $x$ and $y$. Since $\frac{2|\xi|}{1+\xi^2} \le 1$ and $|1_{\{x>0\}}x - 1_{\{y>0\}}y| \le |x-y|$,

$$|b(t,x) - b(t,y)| + |\sigma(t,x) - \sigma(t,y)| \le 2|x-y|.$$

The conditions in Theorem 5.2.1 are satisfied, and we have existence and uniqueness of a strong solution.
5.10.
Proof. $X_t = Z + \int_0^tb(s,X_s)\,ds + \int_0^t\sigma(s,X_s)\,dB_s$. Since Jensen's inequality implies $(a_1+\cdots+a_n)^p \le n^{p-1}(a_1^p+\cdots+a_n^p)$ ($p\ge1$, $a_1,\dots,a_n\ge0$), we have, for $0\le t\le T$ (using the Cauchy–Schwarz inequality on the $ds$-integral and the Itô isometry on the $dB_s$-integral),

$$E[|X_t|^2] \le 3\Big(E[|Z|^2] + E\Big[\Big(\int_0^tb(s,X_s)\,ds\Big)^2\Big] + E\Big[\Big(\int_0^t\sigma(s,X_s)\,dB_s\Big)^2\Big]\Big)$$

$$\le 3\Big(E[|Z|^2] + T\,E\Big[\int_0^t|b(s,X_s)|^2\,ds\Big] + E\Big[\int_0^t|\sigma(s,X_s)|^2\,ds\Big]\Big)$$

$$\le 3\Big(E[|Z|^2] + (T+1)C^2E\Big[\int_0^t(1+|X_s|)^2\,ds\Big]\Big) \le 3\Big(E[|Z|^2] + 2(T+1)C^2E\Big[\int_0^t(1+|X_s|^2)\,ds\Big]\Big)$$

$$\le 3E[|Z|^2] + 6(T+1)C^2T + 6(T+1)C^2\int_0^tE[|X_s|^2]\,ds = K_1 + K_2\int_0^tE[|X_s|^2]\,ds,$$

where $K_1 = 3E[|Z|^2] + 6(T+1)C^2T$ and $K_2 = 6(T+1)C^2$. By Gronwall's inequality, $E[|X_t|^2] \le K_1e^{K_2t}$.
5.11.
Proof. First, we check by the integration-by-parts formula that $Y_t = a(1-t) + bt + (1-t)\int_0^t\frac{dB_s}{1-s}$ satisfies

$$dY_t = \Big(-a + b - \int_0^t\frac{dB_s}{1-s}\Big)dt + (1-t)\frac{dB_t}{1-t} = \frac{b-Y_t}{1-t}\,dt + dB_t.$$

Set $X_t = (1-t)\int_0^t\frac{dB_s}{1-s}$; then $X_t$ is centered Gaussian, with variance

$$E[X_t^2] = (1-t)^2\int_0^t\frac{ds}{(1-s)^2} = (1-t) - (1-t)^2.$$

So $X_t$ converges in $L^2$ to 0 as $t\to1$. Since $X_t$ is continuous a.s. for $t\in[0,1)$, we conclude 0 is the unique
a.s. limit of $X_t$ as $t\to1$.
5.14. (i)
Proof.

$$dZ_t = d\big(u(B_1(t),B_2(t)) + iv(B_1(t),B_2(t))\big) = \nabla u\cdot(dB_1(t),dB_2(t)) + \frac12\Delta u\,dt + i\Big[\nabla v\cdot(dB_1(t),dB_2(t)) + \frac12\Delta v\,dt\Big]$$

$$= (\nabla u + i\nabla v)\cdot(dB_1(t),dB_2(t)) \qquad (u, v \text{ are harmonic})$$

$$= \frac{\partial u}{\partial x}(B(t))\,dB_1(t) - \frac{\partial v}{\partial x}(B(t))\,dB_2(t) + i\Big(\frac{\partial v}{\partial x}(B(t))\,dB_1(t) + \frac{\partial u}{\partial x}(B(t))\,dB_2(t)\Big)$$

(by the Cauchy–Riemann equations $\frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x}$, $\frac{\partial v}{\partial y} = \frac{\partial u}{\partial x}$)

$$= \Big(\frac{\partial u}{\partial x}(B(t)) + i\frac{\partial v}{\partial x}(B(t))\Big)dB_1(t) + i\Big(\frac{\partial u}{\partial x}(B(t)) + i\frac{\partial v}{\partial x}(B(t))\Big)dB_2(t) = F'(B(t))\,dB(t).$$

(ii)
Proof. By the result of (i), we have $de^{\alpha B(t)} = \alpha e^{\alpha B(t)}\,dB(t)$. So $Z_t = Z_0e^{\alpha B(t)}$ solves the complex SDE $dZ_t = \alpha Z_t\,dB(t)$.
5.15.
Proof. The deterministic analog of this SDE is a Bernoulli equation $\frac{dy_t}{dt} = rKy_t - ry_t^2$. The correct substitution is to multiply $-y_t^{-2}$ on both sides and set $z_t = y_t^{-1}$; then we'll have a linear equation $\frac{dz_t}{dt} = -rKz_t + r$. Similarly, we multiply $-X_t^{-2}$ on both sides of the SDE and set $Z_t = X_t^{-1}$. Then

$$-\frac{dX_t}{X_t^2} = -\frac{rK\,dt}{X_t} + r\,dt - \beta\frac{dB_t}{X_t}$$

and

$$dZ_t = -\frac{dX_t}{X_t^2} + \frac{dX_t\cdot dX_t}{X_t^3} = -rKZ_t\,dt + r\,dt - \beta Z_t\,dB_t + \frac{1}{X_t^3}\beta^2X_t^2\,dt = r\,dt - rKZ_t\,dt + \beta^2Z_t\,dt - \beta Z_t\,dB_t.$$

Define $Y_t = e^{(rK-\beta^2)t}Z_t$; then

$$dY_t = e^{(rK-\beta^2)t}\big(dZ_t + (rK-\beta^2)Z_t\,dt\big) = e^{(rK-\beta^2)t}(r\,dt - \beta Z_t\,dB_t) = re^{(rK-\beta^2)t}\,dt - \beta Y_t\,dB_t.$$

Now we imitate the solution of Exercise 5.6. Consider an integrating factor $N_t$, such that $dN_t = \theta_t\,dt + \gamma_t\,dB_t$ and

$$d(Y_tN_t) = N_t\,dY_t + Y_t\,dN_t + dN_t\cdot dY_t = N_tre^{(rK-\beta^2)t}\,dt - \beta N_tY_t\,dB_t + Y_t\theta_t\,dt + Y_t\gamma_t\,dB_t - \beta\gamma_tY_t\,dt.$$

Solving the equations

$$\begin{cases}\theta_t = \beta\gamma_t\\ \gamma_t = \beta N_t,\end{cases}$$

we get $dN_t = \beta^2N_t\,dt + \beta N_t\,dB_t$. So $N_t = N_0e^{\beta B_t + \frac12\beta^2t}$ and

$$d(Y_tN_t) = N_tre^{(rK-\beta^2)t}\,dt = N_0re^{(rK-\frac12\beta^2)t+\beta B_t}\,dt.$$

Choosing $N_0 = 1$, we have $N_tY_t = Y_0 + \int_0^tre^{(rK-\frac12\beta^2)s+\beta B_s}\,ds$ with $Y_0 = Z_0 = X_0^{-1}$. So

$$X_t = Z_t^{-1} = e^{(rK-\beta^2)t}Y_t^{-1} = \frac{e^{(rK-\beta^2)t}N_t}{Y_0 + \int_0^tre^{(rK-\frac12\beta^2)s+\beta B_s}\,ds} = \frac{e^{(rK-\frac12\beta^2)t+\beta B_t}}{x^{-1} + \int_0^tre^{(rK-\frac12\beta^2)s+\beta B_s}\,ds}.$$

5.15. (Another solution)
Proof. We can also use the method in Exercise 5.16. Here $f(t,x) = rKx - rx^2$ and $c(t) \equiv \beta$. So $F_t = e^{-\beta B_t + \frac12\beta^2t}$ and $Y_t = F_tX_t$ satisfies

$$dY_t = F_t(rKF_t^{-1}Y_t - rF_t^{-2}Y_t^2)\,dt = (rKY_t - rF_t^{-1}Y_t^2)\,dt.$$

Dividing by $-Y_t^2$ on both sides, we have

$$-\frac{dY_t}{Y_t^2} = \Big(-\frac{rK}{Y_t} + rF_t^{-1}\Big)dt.$$

So $dY_t^{-1} = -Y_t^{-2}\,dY_t = (-rKY_t^{-1} + rF_t^{-1})\,dt$, and

$$d(e^{rKt}Y_t^{-1}) = e^{rKt}(rKY_t^{-1}\,dt + dY_t^{-1}) = e^{rKt}rF_t^{-1}\,dt.$$

Hence $e^{rKt}Y_t^{-1} = Y_0^{-1} + r\int_0^te^{rKs}e^{\beta B_s - \frac12\beta^2s}\,ds$, and

$$X_t = F_t^{-1}Y_t = e^{\beta B_t - \frac12\beta^2t}\cdot\frac{e^{rKt}}{Y_0^{-1} + r\int_0^te^{\beta B_s + (rK-\frac12\beta^2)s}\,ds} = \frac{e^{(rK-\frac12\beta^2)t+\beta B_t}}{x^{-1} + r\int_0^te^{(rK-\frac12\beta^2)s+\beta B_s}\,ds}.$$

5.16. (a) and (b)
Proof. Suppose $F_t$ is a process satisfying the SDE $dF_t = \theta_t\,dt + \gamma_t\,dB_t$; then

$$d(F_tX_t) = F_t(f(t,X_t)\,dt + c(t)X_t\,dB_t) + X_t\theta_t\,dt + X_t\gamma_t\,dB_t + c(t)\gamma_tX_t\,dt$$

$$= (F_tf(t,X_t) + c(t)\gamma_tX_t + X_t\theta_t)\,dt + (c(t)F_tX_t + \gamma_tX_t)\,dB_t.$$

Solving the equations

$$\begin{cases}c(t)\gamma_t + \theta_t = 0\\ c(t)F_t + \gamma_t = 0,\end{cases}$$

we have $\gamma_t = -c(t)F_t$ and $\theta_t = c^2(t)F_t$. So $dF_t = c^2(t)F_t\,dt - c(t)F_t\,dB_t$. Hence

$$F_t = F_0e^{\frac12\int_0^tc^2(s)\,ds - \int_0^tc(s)\,dB_s}.$$

Choosing $F_0 = 1$, we get the desired integrating factor $F_t$, and $d(F_tX_t) = F_tf(t,X_t)\,dt$.


(c)
Proof. In this case, $f(t,x) = \frac1x$ and $c(t)\equiv\alpha$. So $F_t = e^{-\alpha B_t + \frac12\alpha^2t}$ and $Y_t = F_tX_t$ satisfies

$$dY_t = F_t\cdot\frac{F_t}{Y_t}\,dt = F_t^2Y_t^{-1}\,dt.$$

Since $dY_t^2 = 2Y_t\,dY_t + dY_t\cdot dY_t = 2F_t^2\,dt = 2e^{-2\alpha B_t+\alpha^2t}\,dt$, we have $Y_t^2 = Y_0^2 + 2\int_0^te^{-2\alpha B_s+\alpha^2s}\,ds$, where $Y_0 = F_0X_0 = X_0 = x$. So

$$X_t = e^{\alpha B_t - \frac12\alpha^2t}\sqrt{x^2 + 2\int_0^te^{-2\alpha B_s+\alpha^2s}\,ds}.$$

(d)
Proof. $f(t,x) = x^\gamma$ and $c(t)\equiv\alpha$. So $F_t = e^{-\alpha B_t + \frac12\alpha^2t}$ and $Y_t$ satisfies the SDE

$$dY_t = F_t(F_t^{-1}Y_t)^\gamma\,dt = F_t^{1-\gamma}Y_t^\gamma\,dt.$$

Noting $dY_t^{1-\gamma} = (1-\gamma)Y_t^{-\gamma}\,dY_t = (1-\gamma)F_t^{1-\gamma}\,dt$, we conclude $Y_t^{1-\gamma} = Y_0^{1-\gamma} + (1-\gamma)\int_0^tF_s^{1-\gamma}\,ds$ with $Y_0 = F_0X_0 = X_0 = x$. So

$$X_t = F_t^{-1}Y_t = e^{\alpha B_t - \frac12\alpha^2t}\Big(x^{1-\gamma} + (1-\gamma)\int_0^te^{-\alpha(1-\gamma)B_s + \frac{\alpha^2(1-\gamma)}{2}s}\,ds\Big)^{\frac{1}{1-\gamma}}.$$

5.17.
Proof. Assume $A > 0$ (the case $A = 0$ is immediate) and define $w(t) = \int_0^tv(s)\,ds$; then $w'(t) \le C + Aw(t)$ and

$$\frac{d}{dt}\big(e^{-At}w(t)\big) = e^{-At}(w'(t) - Aw(t)) \le Ce^{-At}.$$

So $e^{-At}w(t) - w(0) \le \frac CA(1-e^{-At})$, i.e. $w(t) \le \frac CA(e^{At}-1)$. So $v(t) \le C + Aw(t) \le C + A\cdot\frac CA(e^{At}-1) = Ce^{At}$.
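The comparison $v(t) \le Ce^{At}$ can be illustrated numerically (my addition; constants are arbitrary): take the extremal case where the hypothesis $v(t) \le C + A\int_0^t v\,ds$ holds with equality, solve it by forward Euler, and check the bound along the way. Forward Euler gives $v_n = C(1+hA)^n \le Ce^{Anh}$, so the assertion holds at every step:

```python
import math

C, A, T = 2.0, 1.5, 3.0
n = 100000
h = T / n

# forward Euler for the extremal case v(t) = C + A * int_0^t v(s) ds, i.e. v' = A v, v(0) = C
v = C
for i in range(1, n + 1):
    v += h * A * v
    t = i * h
    # Gronwall bound: v(t) <= C e^{A t}; (1 + hA)^i <= e^{A i h}, so this never fails
    assert v <= C * math.exp(A * t) + 1e-9
```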

5.18. (a)
Proof. Let $Y_t = \log X_t$; then

$$dY_t = \frac{dX_t}{X_t} - \frac{(dX_t)^2}{2X_t^2} = \kappa(\alpha - Y_t)\,dt + \sigma\,dB_t - \frac{\sigma^2X_t^2\,dt}{2X_t^2} = \Big(\kappa\alpha - \frac{\sigma^2}{2}\Big)dt - \kappa Y_t\,dt + \sigma\,dB_t.$$

So

$$d(e^{\kappa t}Y_t) = \kappa Y_te^{\kappa t}\,dt + e^{\kappa t}\,dY_t = e^{\kappa t}\Big[\Big(\kappa\alpha - \frac12\sigma^2\Big)dt + \sigma\,dB_t\Big]$$

and $e^{\kappa t}Y_t - Y_0 = (\kappa\alpha - \frac12\sigma^2)\frac{e^{\kappa t}-1}{\kappa} + \sigma\int_0^te^{\kappa s}\,dB_s$. Therefore

$$X_t = \exp\Big\{e^{-\kappa t}\log x + \Big(\alpha - \frac{\sigma^2}{2\kappa}\Big)(1-e^{-\kappa t}) + \sigma e^{-\kappa t}\int_0^te^{\kappa s}\,dB_s\Big\}.$$

(b)
Proof. $E[X_t] = \exp\{e^{-\kappa t}\log x + (\alpha - \frac{\sigma^2}{2\kappa})(1-e^{-\kappa t})\}\,E[\exp\{\sigma e^{-\kappa t}\int_0^te^{\kappa s}\,dB_s\}]$. Note $\int_0^te^{\kappa s}\,dB_s \sim N\big(0, \frac{e^{2\kappa t}-1}{2\kappa}\big)$, so

$$E\Big[\exp\Big\{\sigma e^{-\kappa t}\int_0^te^{\kappa s}\,dB_s\Big\}\Big] = \exp\Big(\frac12\sigma^2e^{-2\kappa t}\,\frac{e^{2\kappa t}-1}{2\kappa}\Big) = \exp\Big(\frac{\sigma^2(1-e^{-2\kappa t})}{4\kappa}\Big).$$


5.19.
Proof. We follow the hint. By Chebyshev's inequality and the Cauchy–Schwarz inequality,

$$P\Big[\int_0^T\big|b(s,Y_s^{(K)}) - b(s,Y_s^{(K-1)})\big|\,ds > 2^{-K-1}\Big] \le P\Big[\int_0^TD\big|Y_s^{(K)} - Y_s^{(K-1)}\big|\,ds > 2^{-K-1}\Big]$$

$$\le 2^{2K+2}E\Big[\Big(\int_0^TD\big|Y_s^{(K)} - Y_s^{(K-1)}\big|\,ds\Big)^2\Big] \le 2^{2K+2}D^2T\,E\Big[\int_0^T\big|Y_s^{(K)} - Y_s^{(K-1)}\big|^2\,ds\Big]$$

$$\le 2^{2K+2}D^2T\int_0^T\frac{A_2^Ks^K}{K!}\,ds = \frac{D^2T\,2^{2K+2}A_2^K}{(K+1)!}\,T^{K+1}.$$

Similarly, by the martingale inequality and the Itô isometry,

$$P\Big[\sup_{0\le t\le T}\Big|\int_0^t\big(\sigma(s,Y_s^{(K)}) - \sigma(s,Y_s^{(K-1)})\big)\,dB_s\Big| > 2^{-K-1}\Big] \le 2^{2K+2}E\Big[\Big(\int_0^T\big(\sigma(s,Y_s^{(K)}) - \sigma(s,Y_s^{(K-1)})\big)\,dB_s\Big)^2\Big]$$

$$= 2^{2K+2}E\Big[\int_0^T\big|\sigma(s,Y_s^{(K)}) - \sigma(s,Y_s^{(K-1)})\big|^2\,ds\Big] \le 2^{2K+2}D^2\int_0^T\frac{A_2^Kt^K}{K!}\,dt = \frac{2^{2K+2}D^2A_2^K}{(K+1)!}\,T^{K+1}.$$

So

$$P\Big[\sup_{0\le t\le T}\big|Y_t^{(K+1)} - Y_t^{(K)}\big| > 2^{-K}\Big] \le D^2T\,\frac{2^{2K+2}A_2^K}{(K+1)!}\,T^{K+1} + D^2\,\frac{2^{2K+2}A_2^K}{(K+1)!}\,T^{K+1} \le \frac{(A_3T)^{K+1}}{(K+1)!},$$

where $A_3 = 4(A_2+1)(D^2+1)(T+1)$.
7.2. Remark: When an Itô diffusion is explicitly given, it's usually straightforward to find its infinitesimal
generator, by Theorem 7.3.3. The converse is not so trivial, as we're faced with double difficulties: first, the
desired $n$-dimensional Itô diffusion $dX_t = b(X_t)dt + \sigma(X_t)dB_t$ involves an $m$-dimensional BM $B_t$, where $m$
is unknown a priori; second, even if $m$ can be determined, we only know $\sigma\sigma^T$, which is the product of an
$n\times m$ and an $m\times n$ matrix. In general, it's hard to find $\sigma$ from $\sigma\sigma^T$. This suggests maybe there's
more than one diffusion that has the given generator. Indeed, when restricted to $C_0^2(\mathbb{R}_+)$, BM, BM killed
at 0, and reflected BM all have the Laplacian operator as generator. What differentiates them is the domain of the
generators: the domain is part of the definition of a generator!

With the above theoretical background, it should be OK if we find more than one Itô diffusion process
with a given generator. A basic way to find an Itô diffusion with a given generator can be trial-and-error. To
tackle the first problem, we try $m = 1, m = 2, \dots$. To tackle the second problem, note $\sigma\sigma^T$ is symmetric, so
we can write $\sigma\sigma^T$ as $AMA^T$ where $M$ is the diagonalization of $\sigma\sigma^T$, and then set $\sigma = AM^{1/2}$. In general, to
deal directly with $\sigma\sigma^T$ instead of $\sigma$, we should use the martingale problem approach of Stroock and Varadhan.
See the preface of their classical book for details.
a)
Proof. $dX_t = dt + \sqrt2\,dB_t$.
b)
Proof.

$$d\begin{pmatrix}X_1(t)\\X_2(t)\end{pmatrix} = \begin{pmatrix}1\\cX_2(t)\end{pmatrix}dt + \begin{pmatrix}0\\\alpha X_2(t)\end{pmatrix}dB_t.$$

c)
Proof. $\sigma\sigma^T = \begin{pmatrix}1+x_1^2 & x_1\\ x_1 & 1\end{pmatrix}$. If

$$d\begin{pmatrix}X_1(t)\\X_2(t)\end{pmatrix} = \begin{pmatrix}2X_2(t)\\\log(1+X_1^2(t)+X_2^2(t))\end{pmatrix}dt + \begin{pmatrix}a\\b\end{pmatrix}dB_t,$$

then $\sigma\sigma^T$ has the form $\begin{pmatrix}a^2 & ab\\ ab & b^2\end{pmatrix}$, which is impossible since $x_1^2 \ne (1+x_1^2)\cdot1$. So we try a 2-dimensional BM as the
driving process. Linear algebra yields

$$\begin{pmatrix}1 & x_1\\0 & 1\end{pmatrix}\begin{pmatrix}1 & 0\\x_1 & 1\end{pmatrix} = \begin{pmatrix}1+x_1^2 & x_1\\x_1 & 1\end{pmatrix}.$$

So we can choose

$$dX_t = \begin{pmatrix}2X_2(t)\\\log(1+X_1^2(t)+X_2^2(t))\end{pmatrix}dt + \begin{pmatrix}1 & X_1(t)\\0 & 1\end{pmatrix}\begin{pmatrix}dB_1(t)\\dB_2(t)\end{pmatrix}.$$

7.3.
Proof. Set $\mathcal{F}_t^X = \sigma(X_s : s\le t)$ and $\mathcal{F}_t^B = \sigma(B_s : s\le t)$. Since $\sigma(X_t) = \sigma(B_t)$, we have, for any bounded Borel function $f(x)$,

$$E[f(X_{t+s})|\mathcal{F}_t^X] = E[f(xe^{c(t+s)+\alpha B_{t+s}})|\mathcal{F}_t^B] = E^{B_t}[f(xe^{c(t+s)+\alpha B_s})] \in \sigma(B_t) = \sigma(X_t).$$

So $E[f(X_{t+s})|\mathcal{F}_t^X] = E[f(X_{t+s})|X_t]$.
7.4. a)
Proof. Choose $b \in \mathbb{R}_+$ so that $0 < x < b$. Define $\tau_0 = \inf\{t>0 : B_t = 0\}$, $\tau_b = \inf\{t>0 : B_t = b\}$
and $\tau_{0b} = \tau_0\wedge\tau_b$. Clearly, $\lim_{b\to\infty}\tau_b = \infty$ a.s. by the continuity of Brownian motion. Consequently,
$\{\tau_0 < \tau_b\}\uparrow\{\tau_0 < \infty\}$ as $b\uparrow\infty$. Note $(B_t^2 - t)_{t\ge0}$ is a martingale; by Doob's optional stopping theorem, we
have $E^x[B_{t\wedge\tau_{0b}}^2] = E^x[t\wedge\tau_{0b}]$. Applying the bounded convergence theorem to the LHS and the monotone convergence
theorem to the RHS, we get $E^x[\tau_{0b}] = E^x[B_{\tau_{0b}}^2] < \infty$. In particular, $\tau_{0b} < \infty$ a.s. Moreover, by considering
the martingale $(B_t)_{t\ge0}$ and a similar argument, we have $E^x[B_{\tau_{0b}}] = E^x[B_0] = x$. This leads to the equations

$$\begin{cases}P^x(\tau_0<\tau_b)\cdot0 + P^x(\tau_0>\tau_b)\cdot b = x\\ P^x(\tau_0<\tau_b) + P^x(\tau_0>\tau_b) = 1.\end{cases}$$

Solving them gives $P^x(\tau_0<\tau_b) = 1 - \frac xb$. So $P^x(\tau_0<\infty) = \lim_{b\to\infty}P^x(\tau_0<\tau_b) = 1$.

b)
Proof. $E^x[\tau] = \lim_{b\to\infty}E^x[\tau_{0b}] = \lim_{b\to\infty}E^x[B_{\tau_{0b}}^2] = \lim_{b\to\infty}b^2\cdot\frac xb = \infty$.

Remark: (1) Another easy proof is based on the following result, which can be proved independently
and via elementary methods: let $W = (W_t)_{t\ge0}$ be a Wiener process, and $T$ be a stopping time such that
$E[T] < \infty$. Then $E[W_T] = 0$ and $E[W_T^2] = E[T]$ ([6]).
(2) The solution in the book is not quite right, since Dynkin's formula assumes $E^x[\tau_K] < \infty$, which needs
proof in this problem.
7.5.
Proof. The hint is detailed enough. But if we want to be really rigorous, note Theorem 7.4.1 (Dynkin's
formula) studies Itô diffusions, not Itô processes, to which standard semi-group theory (in particular,
the notion of generator) doesn't apply. So we start from scratch, and re-deduce Dynkin's formula for Itô
processes.

First of all, we note $b(t,x)$, $\sigma(t,x)$ are bounded in a bounded domain of $x$, uniformly in $t$. This suffices
to give us martingales, not just local martingales. Indeed, Itô's formula says

$$|X(t)|^2 = |X(0)|^2 + \sum_i\int_0^t2X_i(s)\,dX_i(s) + \sum_i\int_0^td\langle X_i\rangle(s)$$

$$= |X(0)|^2 + 2\sum_i\int_0^tX_i(s)b_i(s,X(s))\,ds + 2\sum_{i,j}\int_0^tX_i(s)\sigma_{ij}(s,X(s))\,dB_j(s) + \sum_{i,j}\int_0^t\sigma_{ij}^2(s,X_s)\,ds.$$

Let $\tau = t\wedge\tau_R$ where $\tau_R = \inf\{t>0 : |X_t|\ge R\}$. Then by the previous remark on the boundedness of $\sigma$ and $b$,
$\int_0^{t\wedge\tau_R}X_i(s)\sigma_{ij}(s,X(s))\,dB_j(s)$ is a martingale. Taking expectations, we get

$$E[|X(\tau)|^2] = E[|X(0)|^2] + 2\sum_iE\Big[\int_0^\tau X_i(s)b_i(s,X(s))\,ds\Big] + \sum_{i,j}E\Big[\int_0^\tau\sigma_{ij}^2(s,X(s))\,ds\Big]$$

$$\le E[|X(0)|^2] + 2C\sum_iE\Big[\int_0^t|X_i(s)|(1+|X(s)|)\,ds\Big] + \int_0^tC^2E[(1+|X(s)|)^2]\,ds.$$

Letting $R\to\infty$ and using Fatou's lemma, we have

$$E[|X(t)|^2] \le E[|X(0)|^2] + 2C\sum_iE\Big[\int_0^t|X_i(s)|(1+|X(s)|)\,ds\Big] + C^2\int_0^tE[(1+|X(s)|)^2]\,ds$$

$$\le E[|X(0)|^2] + K\int_0^t(1 + E[|X(s)|^2])\,ds,$$

for some $K$ depending on $C$ only. To apply Gronwall's inequality, note for $v(t) = 1 + E[|X(t)|^2]$ we have
$v(t) \le v(0) + K\int_0^tv(s)\,ds$. So $v(t) \le v(0)e^{Kt}$, which is the desired inequality.

Remark: Compared with Exercise 5.10, the power of this problem's method comes from the application of
Itô's formula, or more precisely, martingale theory, while Exercise 5.10 only resorts to Hölder's inequality.
7.7. a)
Proof. Let $U$ be an orthogonal matrix; then $\tilde B = U\cdot B$ is again a Brownian motion. For any Borel $G \subset \partial D$,

$$\mu_D^x(G) = P^x(B_{\tau_D}\in G) = P^x(U\cdot B_{\tau_D}\in U\cdot G) = P^x(B_{\tau_D}\in U\cdot G) = \mu_D^x(U\cdot G).$$

So $\mu_D^x$ is rotation invariant.


b)
Proof.

$$u(x) = E^x[\phi(B_{\tau_W})] = E^x[E^x[\phi(B_{\tau_W})|B_{\tau_D}]] = E^x[E^x[\phi(B_{\tau_W}\circ\theta_{\tau_D})|B_{\tau_D}]]$$

$$= E^x[E^{B_{\tau_D}}[\phi(B_{\tau_W})]] = E^x[u(B_{\tau_D})] = \int_{\partial D}u(y)\,\mu_D^x(dy) = \int_{\partial D}u(y)\,\sigma(dy).$$

c)
Proof. See, for example, Evans: Partial Differential Equations, page 26.
7.8. a)
Proof. $\{\tau_1\wedge\tau_2\le t\} = \{\tau_1\le t\}\cup\{\tau_2\le t\}\in\mathcal{N}_t$. And since $\{\tau_i\ge t\} = \{\tau_i<t\}^c\in\mathcal{N}_t$, $\{\tau_1\vee\tau_2\ge t\} = \{\tau_1\ge t\}\cup\{\tau_2\ge t\}\in\mathcal{N}_t$.
b)
Proof. $\{\tau<t\} = \cup_n\{\tau_n<t\}\in\mathcal{N}_t$.
c)
Proof. By b) and the hint, it suffices to show that for any open set $G$, $\tau_G = \inf\{t>0 : X_t\in G\}$ is an $\mathcal{M}_t$-stopping
time. This is Example 7.2.2.
7.9. a)
Proof. By Theorem 7.3.3, $A$ restricted to $C_0^2(\mathbb{R})$ is $rx\frac{d}{dx} + \frac{\alpha^2x^2}{2}\frac{d^2}{dx^2}$. For $f(x) = x^\gamma$, $Af$ can be calculated by
definition. Indeed, $X_t = xe^{(r-\frac{\alpha^2}{2})t+\alpha B_t}$, and $E^x[f(X_t)] = x^\gamma e^{(r-\frac{\alpha^2}{2}+\frac{\alpha^2}{2}\gamma)\gamma t}$. So

$$\lim_{t\downarrow0}\frac{E^x[f(X_t)] - f(x)}{t} = \Big(r\gamma + \frac{\alpha^2}{2}\gamma(\gamma-1)\Big)x^\gamma.$$

So $f\in\mathcal{D}_A$ and $Af(x) = (r\gamma + \frac{\alpha^2}{2}\gamma(\gamma-1))x^\gamma$.

b)
Proof. We choose $\rho$ such that $0 < \rho < x < R$, and $f_0\in C_0^2(\mathbb{R})$ such that $f_0 = f$ on $(\rho,R)$, where $f(x) = x^{\gamma_1}$.
Define $\tau_{(\rho,R)} = \inf\{t>0 : X_t\notin(\rho,R)\}$. Then by Dynkin's formula, and the fact $Af_0(x) = Af(x) =
\gamma_1x^{\gamma_1}(r + \frac{\alpha^2}{2}(\gamma_1-1)) = 0$ on $(\rho,R)$, we get

$$E^x[f_0(X_{\tau_{(\rho,R)}\wedge k})] = f_0(x).$$

The condition $r < \frac{\alpha^2}{2}$ implies $X_t\to0$ a.s. as $t\to\infty$. So $\tau_{(\rho,R)}<\infty$ a.s. Let $k\uparrow\infty$; by the bounded convergence
theorem and the fact $\tau_{(\rho,R)}<\infty$ a.s., we conclude

$$f_0(\rho)(1-p(\rho)) + f_0(R)p(\rho) = f_0(x),$$

where $p(\rho) = P^x\{X_t$ exits $(\rho,R)$ by hitting $R$ first$\}$. Then

$$p(\rho) = \frac{x^{\gamma_1}-\rho^{\gamma_1}}{R^{\gamma_1}-\rho^{\gamma_1}}.$$

Letting $\rho\downarrow0$, we get the desired result.

c)
Proof. We consider $\rho>0$ such that $\rho<x<R$; $\tau_{(\rho,R)}$ is the first exit time of $X$ from $(\rho,R)$. Choose
$f_0\in C_0^2(\mathbb{R})$ such that $f_0 = f$ on $(\rho,R)$, where $f(x) = \log x$. By Dynkin's formula and the fact $Af_0(x) =
Af(x) = r - \frac{\alpha^2}{2}$ for $x\in(\rho,R)$, we get

$$E^x[f_0(X_{\tau_{(\rho,R)}\wedge k})] = f_0(x) + \Big(r - \frac{\alpha^2}{2}\Big)E^x[\tau_{(\rho,R)}\wedge k].$$

Since $r > \frac{\alpha^2}{2}$, $X_t\to\infty$ a.s. as $t\to\infty$. So $\tau_{(\rho,R)}<\infty$ a.s. Letting $k\uparrow\infty$, we get

$$E^x[\tau_{(\rho,R)}] = \frac{f_0(R)p(\rho) + f_0(\rho)(1-p(\rho)) - f_0(x)}{r - \frac{\alpha^2}{2}},$$

where $p(\rho) = P^x(X_t$ exits $(\rho,R)$ by hitting $R$ first$)$. To get the desired formula, we only need to show
$\lim_{\rho\to0}p(\rho) = 1$ and $\lim_{\rho\to0}\log\rho\,(1-p(\rho)) = 0$. This is trivial to see once we note, by our previous calculation
in part b),

$$p(\rho) = \frac{x^{\gamma_1}-\rho^{\gamma_1}}{R^{\gamma_1}-\rho^{\gamma_1}}.$$
7.10. a)
Proof. $E^x[X_T|\mathcal{F}_t] = E^{X_t}[X_{T-t}]$. By Exercise 5.10 or 7.5, $\int_0^tX_s\,dB_s$ is a martingale. So $E^x[X_t] = x +
r\int_0^tE^x[X_s]\,ds$. Setting $v(t) = E^x[X_t]$, we get $v(t) = x + r\int_0^tv(s)\,ds$, or equivalently the initial value problem
$v'(t) = rv(t)$, $v(0) = x$. So $v(t) = xe^{rt}$. Hence $E^x[X_T|\mathcal{F}_t] = X_te^{r(T-t)}$.
b)
Proof. Since $M_t$ is a martingale, $E^x[X_T|\mathcal{F}_t] = xe^{rT}E^x[M_T|\mathcal{F}_t] = xe^{rT}M_t = X_te^{r(T-t)}$.
7.11.
Proof. By the change-of-variable formula, we have $\int_\tau^\infty f(X_t)\,dt = \int_0^\infty f(X_{\tau+t})\,dt = \int_0^\infty f(X_t)\circ\theta_\tau\,dt$. So by
Fubini's theorem and the strong Markov property,

$$E^x\Big[\int_\tau^\infty f(X_t)\,dt\Big] = E^x\Big[E^x\Big[\int_0^\infty f(X_t)\circ\theta_\tau\,dt\Big|\mathcal{F}_\tau\Big]\Big] = E^x\Big[E^{X_\tau}\Big[\int_0^\infty f(X_t)\,dt\Big]\Big] = E^x[g(X_\tau)].$$

7.12. a)
Proof. For any $t,s$ with $0\le s<t\le T$ and $\tau_K$, we have $E[Z_{t\wedge\tau_K}|\mathcal{F}_s] = Z_{s\wedge\tau_K}$. Let $K\to\infty$; then $Z_{s\wedge\tau_K}\to Z_s$
a.s. and $Z_{t\wedge\tau_K}\to Z_t$ a.s. Since $(Z_\tau)_{\tau\le T}$ is uniformly integrable, $Z_{s\wedge\tau_K}\to Z_s$ and $Z_{t\wedge\tau_K}\to Z_t$ in $L^1$ as well.
So $E[Z_t|\mathcal{F}_s] = \lim_{K\to\infty}E[Z_{t\wedge\tau_K}|\mathcal{F}_s] = \lim_{K\to\infty}Z_{s\wedge\tau_K} = Z_s$. Hence $(Z_t)_{t\le T}$ is a martingale.
b)
Proof. The given condition implies $(Z_\tau)_{\tau\le T}$ is uniformly integrable.
c)
Proof. Without loss of generality, we assume $Z\ge0$. Then by Fatou's lemma, for $t>s\ge0$,

$$E[Z_t|\mathcal{F}_s] \le \liminf_{k\to\infty}E[Z_{t\wedge\tau_k}|\mathcal{F}_s] = \lim_{k\to\infty}Z_{s\wedge\tau_k} = Z_s.$$


d)
Proof. Define $\tau_k = \inf\{t>0 : \int_0^t\phi^2(s,\omega)\,ds\ge k\}$; then

$$Z_{t\wedge\tau_k} = \int_0^{t\wedge\tau_k}\phi(s,\omega)\,dB_s = \int_0^t\phi(s,\omega)1_{\{s\le\tau_k\}}\,dB_s$$

is a martingale, since $E[\int_0^T\phi^2(s,\omega)1_{\{s\le\tau_k\}}\,ds] = E[\int_0^{T\wedge\tau_k}\phi^2(s,\omega)\,ds] \le k$.

7.13. a)
Proof. Take $f\in C_0^2(\mathbb{R}^2)$ so that $f(x) = \ln|x|$ on $\{x : \epsilon\le|x|\le R\}$. Then

$$df(B(t)) = \sum_{i=1}^2\frac{B_i(t)}{|B(t)|^2}\,dB_i(t) + \frac12\frac{B_2^2(t)-B_1^2(t)}{|B(t)|^4}\,dt + \frac12\frac{B_1^2(t)-B_2^2(t)}{|B(t)|^4}\,dt = \sum_{i=1}^2\frac{B_i(t)}{|B(t)|^2}\,dB_i(t) = \frac{B(t)\cdot dB(t)}{|B(t)|^2}.$$

Since $\frac{B(t)}{|B(t)|^2}1_{\{t\le\tau\}}\in\mathcal{V}(0,T)$, where $\tau = \tau_\epsilon\wedge\tau_R$ with $\tau_\epsilon = \inf\{t>0 : |B(t)|\le\epsilon\}$ and $\tau_R = \inf\{t>0 : |B(t)|\ge R\}$, we conclude $f(B(t\wedge\tau)) = \ln|B(t\wedge\tau)|$ is
a martingale. To show $\ln|B(t)|$ is a local martingale, it suffices to show $\tau\to\infty$ as $\epsilon\downarrow0$ and $R\uparrow\infty$. Indeed, by the optional stopping theorem,

$$\ln|x| = E^x[\ln|B(t\wedge\tau)|] \to P^x(\tau_\epsilon<\tau_R)\ln\epsilon + P^x(\tau_\epsilon>\tau_R)\ln R \quad (t\to\infty).$$

So $P^x(\tau_\epsilon<\tau_R) = \frac{\ln R - \ln|x|}{\ln R - \ln\epsilon}$. By the continuity of $B$, $\lim_{R\to\infty}\tau_R = \infty$. If
we define $\tau_0 = \inf\{t>0 : |B(t)| = 0\}$, then $\tau_0 = \lim_{\epsilon\downarrow0}\tau_\epsilon$. So $P^x(\tau_0<\infty) = \lim_{R\uparrow\infty}P^x(\tau_0<\tau_R) =
\lim_{R\uparrow\infty}\lim_{\epsilon\downarrow0}P^x(\tau_\epsilon<\tau_R) = 0$. This shows $\lim_{\epsilon\downarrow0}\tau_\epsilon = \tau_0 = \infty$ a.s.

b)
Proof. Similar to part a).
Remark: Note neither example is a martingale, as they don't have finite expectation.
7.14. a)
Proof. According to Theorem 7.3.3, for any $f\in C_0^2$,

$$Af(x) = \sum_i\frac{1}{h(x)}\frac{\partial h(x)}{\partial x_i}\frac{\partial f(x)}{\partial x_i} + \frac12\Delta f(x) = \frac{2\nabla h\cdot\nabla f + h\Delta f}{2h} = \frac{\Delta(hf)}{2h},$$

where the last equality is due to the harmonicity of $h$.
7.15.
Proof. If we assume formula (7.5.5), then (7.5.6) is straightforward from the Markov property. As another
solution, we derive (7.5.6) directly.

We define $M_t = E^x[F|\mathcal{F}_t]$ $(t\le T)$; then $M_t = E[F] + \int_0^t\phi(s)\,dB_s$. Set $f(z,u) = E^z[(B_u-K)^+]$; then
$M_t = E^x[(B_T-K)^+|\mathcal{F}_t] = E^{B_t}[(B_{T-t}-K)^+] = f(B_t, T-t)$. By Itô's formula,

$$dM_t = f_z(B_t,T-t)\,dB_t + f_u(B_t,T-t)(-dt) + \frac12f_{zz}(B_t,T-t)\,dt.$$

So $\phi(t,\omega) = f_z(B_t,T-t)$. Note by elementary calculus,

$$f(z,u) = \int_{-\infty}^\infty(z+x-K)^+\frac{e^{-x^2/2u}}{\sqrt{2\pi u}}\,dx = \sqrt u\,N'\Big(\frac{K-z}{\sqrt u}\Big) - (K-z) + (K-z)N\Big(\frac{K-z}{\sqrt u}\Big),$$

where $N(\cdot)$ is the distribution function of a standard normal random variable and $N'$ its density. So it's easy to see $f_z(z,u) =
1 - N(\frac{K-z}{\sqrt u})$. Hence

$$\phi(t,\omega) = 1 - N\Big(\frac{K-B_t}{\sqrt{T-t}}\Big) = \frac{1}{\sqrt{2\pi(T-t)}}\int_K^\infty e^{-\frac{(x-B_t)^2}{2(T-t)}}\,dx.$$
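The derivative computation $f_z(z,u) = 1 - N((K-z)/\sqrt u)$ can be checked numerically by finite differences on the closed form for $f$ (a sketch I am adding; the parameter values are arbitrary). $N$ is expressed through `math.erf`:

```python
import math

def N(x):
    # standard normal CDF
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def npdf(x):
    # standard normal density, i.e. N'
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def f(z, u, K):
    # f(z, u) = E^z[(B_u - K)^+] = sqrt(u) N'((K-z)/sqrt(u)) - (K-z) + (K-z) N((K-z)/sqrt(u))
    w = (K - z) / math.sqrt(u)
    return math.sqrt(u) * npdf(w) - (K - z) + (K - z) * N(w)

K, z, u = 1.0, 0.8, 0.5
h = 1e-6
fd = (f(z + h, u, K) - f(z - h, u, K)) / (2 * h)   # central difference in z
exact = 1.0 - N((K - z) / math.sqrt(u))
assert abs(fd - exact) < 1e-6
```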

7.17.
Proof. If $t\le\tau$, then $Y$ clearly satisfies the integral equation corresponding to (7.5.8), since

$$Y_t = X_t = X_0 + \int_0^t\frac13X_s^{\frac13}\,ds + \int_0^tX_s^{\frac23}\,dB_s = Y_0 + \int_0^t\frac13Y_s^{\frac13}\,ds + \int_0^tY_s^{\frac23}\,dB_s.$$

If $t>\tau$, then, since $Y_s \equiv 0$ on $(\tau, t]$,

$$Y_t = 0 = X_\tau = X_0 + \int_0^\tau\frac13X_s^{\frac13}\,ds + \int_0^\tau X_s^{\frac23}\,dB_s = Y_0 + \int_0^t\frac13Y_s^{\frac13}\,ds + \int_0^tY_s^{\frac23}\,dB_s.$$

So $Y$ is also a strong solution of (7.5.8).

If we write (7.5.8) in the form $dX_t = b(X_t)\,dt + \sigma(X_t)\,dB_t$, then $b(x) = \frac13x^{\frac13}$ and $\sigma(x) = x^{\frac23}$. Neither
of them satisfies the Lipschitz condition (5.2.2). So this does not conflict with Theorem 5.2.1.
7.18. a)
Proof. The line of reasoning is exactly what we have done for 7.9 b). Just replace $x^\gamma$ with a general function
$f(x)$ satisfying certain conditions.
b)
Proof. The characteristic operator $\mathcal{A} = \frac12\frac{d^2}{dx^2}$ and $f(x) = x$ are such that $\mathcal{A}f(x) = 0$. By formula (7.5.10),
we are done.
c)
Proof. $\mathcal{A} = \mu\frac{d}{dx} + \frac{\sigma^2}{2}\frac{d^2}{dx^2}$. So we can choose $f(x) = e^{-\frac{2\mu x}{\sigma^2}}$. Therefore

$$p = \frac{e^{-\frac{2\mu x}{\sigma^2}} - e^{-\frac{2\mu a}{\sigma^2}}}{e^{-\frac{2\mu b}{\sigma^2}} - e^{-\frac{2\mu a}{\sigma^2}}}.$$
7.19. a)
Proof. Following the hint, and by Doob's optional sampling theorem, $E^x[e^{-\sqrt{2\lambda}B_{t\wedge\tau}-\lambda(t\wedge\tau)}] = E^x[M_{t\wedge\tau}] =
E^x[M_0] = e^{-\sqrt{2\lambda}x}$. Let $t\uparrow\infty$ and apply the bounded convergence theorem; we get $E^x[e^{-\lambda\tau}] = e^{-\sqrt{2\lambda}x}$.
b)
Proof.

$$E^x[e^{-\lambda\tau}] = \int_0^\infty e^{-\lambda t}\,\frac{x\,e^{-\frac{x^2}{2t}}}{\sqrt{2\pi t^3}}\,dt.$$
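Parts a) and b) can be reconciled numerically: the Laplace transform of the first-passage density $x e^{-x^2/2t}/\sqrt{2\pi t^3}$ should equal $e^{-\sqrt{2\lambda}x}$. The sketch below (my addition) evaluates the integral with Simpson's rule after the substitution $t = e^s$ (the values of $\lambda$, $x$ and the truncation window are arbitrary choices):

```python
import math

def laplace_transform(lam, x, a=-16.0, b=8.0, n=4000):
    """int_0^inf e^{-lam t} x e^{-x^2/(2t)} / sqrt(2 pi t^3) dt, with t = e^s."""
    def integrand(s):
        t = math.exp(s)
        # extra factor t from dt = e^s ds
        return math.exp(-lam * t) * x * math.exp(-x * x / (2 * t)) / math.sqrt(2 * math.pi * t ** 3) * t

    h = (b - a) / n  # n even, composite Simpson's rule
    total = integrand(a) + integrand(b)
    for i in range(1, n):
        total += integrand(a + i * h) * (4 if i % 2 else 2)
    return total * h / 3

lam, x = 0.5, 1.0
assert abs(laplace_transform(lam, x) - math.exp(-math.sqrt(2 * lam) * x)) < 1e-4
```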

8.1. a)
Proof. $g(t,x) = E^x[\phi(B_t)]$, where $B$ is a Brownian motion.
b)
Proof. Note the equation to be solved has the form $(\alpha - A)u = \psi$ with $A = \frac12\Delta$, so we should apply Theorem
8.1.5. More precisely, since $\psi\in C_b(\mathbb{R}^n)$, by Theorem 8.1.5 b), we know $(\alpha - \frac12\Delta)R_\alpha\psi = \psi$, where $R_\alpha$ is the
$\alpha$-resolvent corresponding to Brownian motion. So $R_\alpha\psi(x) = E^x[\int_0^\infty e^{-\alpha t}\psi(B_t)\,dt]$ is a bounded solution of
the equation $(\alpha - \frac12\Delta)u = \psi$ in $\mathbb{R}^n$. To see the uniqueness, it suffices to show $(\alpha - \frac12\Delta)u = 0$ has only the zero
solution. Indeed, given such a $u$, we can find $u_n\in C_0^2(\mathbb{R}^n)$ such that $u_n = u$ in $B(0,n)$. Then $(\alpha - \frac12\Delta)u_n = 0$
in $B(0,n)$. Applying Theorem 8.1.5 a), $u_n = R_\alpha(\alpha - \frac12\Delta)u_n = 0$ in $B(0,n)$. So $u\equiv0$ in $B(0,n)$. Let $n\uparrow\infty$; we are
done.


8.2.
Proof. By Kolmogorov's backward equation (Theorem 8.1.1), it suffices to solve the SDE $dX_t = \alpha X_t\,dt +
\beta X_t\,dB_t$. This is the geometric Brownian motion $X_t = X_0e^{(\alpha-\frac{\beta^2}{2})t+\beta B_t}$. Then

$$u(t,x) = E^x[f(X_t)] = \int_{-\infty}^\infty f\big(xe^{(\alpha-\frac{\beta^2}{2})t+\beta y}\big)\frac{e^{-\frac{y^2}{2t}}}{\sqrt{2\pi t}}\,dy.$$
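For $f(x) = x$ the integral can be evaluated in closed form, $E^x[X_t] = xe^{\alpha t}$, which gives a quick numerical check of the representation (my addition; parameter values are arbitrary):

```python
import math

def u(t, x, alpha, beta, f, n=4000, width=10.0):
    """Simpson evaluation of int f(x e^{(alpha - beta^2/2) t + beta y}) e^{-y^2/2t}/sqrt(2 pi t) dy."""
    a, b = -width * math.sqrt(t), width * math.sqrt(t)
    h = (b - a) / n

    def g(y):
        X = x * math.exp((alpha - 0.5 * beta ** 2) * t + beta * y)
        return f(X) * math.exp(-y * y / (2 * t)) / math.sqrt(2 * math.pi * t)

    total = g(a) + g(b)
    for i in range(1, n):
        total += g(a + i * h) * (4 if i % 2 else 2)
    return total * h / 3

t, x, alpha, beta = 0.8, 2.0, 0.3, 0.4
# the Kolmogorov backward representation with f(x) = x must reproduce E^x[X_t] = x e^{alpha t}
assert abs(u(t, x, alpha, beta, lambda z: z) - x * math.exp(alpha * t)) < 1e-6
```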

8.3.
Proof. By (8.6.34) and Dynkin's formula, we have

$$E^x[f(X_t)] = \int_{\mathbb{R}^n}f(y)p_t(x,y)\,dy = f(x) + E^x\Big[\int_0^tAf(X_s)\,ds\Big] = f(x) + \int_0^tP_sAf(x)\,ds = f(x) + \int_0^t\int_{\mathbb{R}^n}p_s(x,y)A_yf(y)\,dy\,ds.$$

Differentiating w.r.t. $t$, we have

$$\int_{\mathbb{R}^n}f(y)\frac{\partial p_t(x,y)}{\partial t}\,dy = \int_{\mathbb{R}^n}p_t(x,y)A_yf(y)\,dy = \int_{\mathbb{R}^n}A_y^*p_t(x,y)f(y)\,dy,$$

where the second equality comes from integration by parts. Since $f$ is arbitrary, we must have $\frac{\partial p_t(x,y)}{\partial t} =
A_y^*p_t(x,y)$.

8.4.
Proof. The expected total length of time that $B_\cdot$ stays in $F$ is

$$T = E\Big[\int_0^\infty1_F(B_t)\,dt\Big] = \int_0^\infty\int_F(2\pi t)^{-\frac n2}e^{-\frac{|x|^2}{2t}}\,dx\,dt.$$

(Sufficiency) If $m(F) = 0$, then $\int_F(2\pi t)^{-\frac n2}e^{-\frac{|x|^2}{2t}}\,dx = 0$ for every $t>0$; hence $T = 0$.

(Necessity) If $T = 0$, then for a.e. $t$, $\int_Fe^{-\frac{|x|^2}{2t}}\,dx = 0$. For such a $t>0$, since $e^{-\frac{|x|^2}{2t}}>0$ everywhere in
$\mathbb{R}^n$, we must have $m(F) = 0$.

8.5.
Proof. Applying the Feynman–Kac formula, we have

$$u(t,x) = E^x\big[e^{\int_0^t\rho\,ds}f(B_t)\big] = e^{\rho t}(2\pi t)^{-\frac n2}\int_{\mathbb{R}^n}e^{-\frac{|x-y|^2}{2t}}f(y)\,dy.$$

8.6.
Proof. The major difficulty is to justify the use of the Feynman–Kac formula while $(x-K)^+\notin C_0^2$. For
the conditions under which we can indeed apply the Feynman–Kac formula to $(x-K)^+\notin C_0^2$, cf. the book of
Karatzas & Shreve, page 366.


8.7.
Proof. Let $\alpha_t = \inf\{s>0 : \beta_s>t\}$; then $X_{\alpha_t}$ is a Brownian motion. Since $\beta_\cdot$ is continuous and $\lim_{t\to\infty}\beta_t =
\infty$ a.s., by the law of the iterated logarithm for Brownian motion, we have

$$\limsup_{t\to\infty}\frac{X_{\alpha_{\beta_t}}}{\sqrt{2\beta_t\log\log\beta_t}} = 1, \text{ a.s.}$$

Assuming $\alpha_{\beta_t} = t$ (this is true when, for example, $\beta_\cdot$ is strictly increasing), we are done.
8.8.
Proof. Since $dN_t = (u(t) - E[u(t)|\mathcal{G}_t])\,dt + dB_t = dZ_t - E[u(t)|\mathcal{G}_t]\,dt$, $\mathcal{N}_t = \sigma(N_s : s\le t)\subset\mathcal{G}_t$. So
$E[u(t) - E[u(t)|\mathcal{G}_t]\,|\,\mathcal{N}_t] = 0$. By Corollary 8.4.5, $N$ is a Brownian motion.
8.9.
Proof. By Theorem 8.5.7, $\int_0^{\alpha_t}e^s\,dB_s = \int_0^te^{\alpha_s}\sqrt{\alpha_s'}\,d\tilde B_s$, where $\tilde B_t$ is a Brownian motion. With $e^{\alpha_t} =
\sqrt{1+\frac23t^3}$, i.e. $\alpha_t = \frac12\log(1+\frac23t^3)$ and $\alpha_t' = \frac{t^2}{1+\frac23t^3}$, we have $e^{\alpha_t}\sqrt{\alpha_t'} = t$.

8.10.
Proof. By Itô's formula, $dX_t = 2B_t\,dB_t + dt$. By Theorem 8.4.3, and $4B_t^2 = 4|X_t|$, we are done.
8.11. a)
2

Proof. Let Zt = exp{−Bt − t2 }, then it’s easy to see Z is a martingale. Define QT by dQT = ZT dP , then
QT is a probability measure on FT and QT ∼ P . By Girsanov’s theorem (Theorem 8.6.6), (Yt )t≥0 is a
Brownian motion under QT . Since Z is a martingale, dQ|Ft = ZT dP |Ft = Zt dP = dQt for any t ≤ T . This
allows us to define a measure Q on F∞ by setting Q|FT = QT , for all T > 0.
b)
Proof. By the law of the iterated logarithm, if $\hat B$ is a Brownian motion, then
$$\limsup_{t\to\infty} \frac{\hat B_t}{\sqrt{2t\log\log t}} = 1 \ \text{a.s.} \quad\text{and}\quad \liminf_{t\to\infty} \frac{\hat B_t}{\sqrt{2t\log\log t}} = -1, \ \text{a.s.}$$
So under $P$,
$$\limsup_{t\to\infty} Y_t = \limsup_{t\to\infty}\Big(\frac{B_t}{\sqrt{2t\log\log t}} + \frac{t}{\sqrt{2t\log\log t}}\Big)\sqrt{2t\log\log t} = \infty, \ \text{a.s.}$$
Similarly, $\liminf_{t\to\infty} Y_t = \infty$ a.s. Hence $P(\lim_{t\to\infty} Y_t = \infty) = 1$. Under $Q$, $Y$ is a Brownian motion, and the law of the iterated logarithm implies $\lim_{t\to\infty} Y_t$ does not exist. So $Q(\lim_{t\to\infty} Y_t = \infty) = 0$. This is not a contradiction, since Girsanov's theorem only requires $Q \sim P$ on $\mathcal{F}_T$ for each $T > 0$, but not necessarily on $\mathcal{F}_\infty$.
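The change of measure on a fixed $\mathcal{F}_T$ can also be seen numerically: reweighting samples of $Y_T = B_T + T$ by the Girsanov density $Z_T = e^{-B_T - T/2}$ recovers the $N(0,T)$ statistics of a Brownian motion under $Q$. A Monte Carlo sketch (sample size and $T$ are illustrative choices):

```python
import numpy as np

# Under P, Y_T = B_T + T. Reweighting by Z_T = exp(-B_T - T/2) (the Girsanov
# density) should give Y_T the N(0, T) statistics it has under Q.
rng = np.random.default_rng(1)
T, n = 1.0, 1_000_000
B_T = np.sqrt(T) * rng.standard_normal(n)
Y_T = B_T + T
Z_T = np.exp(-B_T - T / 2)

mean_Q = (Z_T * Y_T).mean()                 # estimates E_Q[Y_T] = 0
var_Q = (Z_T * Y_T**2).mean() - mean_Q**2   # estimates Var_Q(Y_T) = T
```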
8.12.
Proof. $dY_t = \beta\,dt + \theta\,dB_t$ where $\beta = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$ and $\theta = \begin{pmatrix} 1 & 3 \\ -1 & -2 \end{pmatrix}$. We solve the equation $\theta u = \beta$ and get $u = \begin{pmatrix} -3 \\ 1 \end{pmatrix}$. Put
$$M_t = \exp\Big\{-\int_0^t u\cdot dB_s - \frac{1}{2}\int_0^t |u|^2\,ds\Big\} = \exp\{3B_1(t) - B_2(t) - 5t\}$$
and $dQ = M_T\,dP$ on $\mathcal{F}_T$; then by Theorem 8.6.6, $dY_t = \theta\,d\tilde B_t$ with $\tilde B_t = \begin{pmatrix} -3t \\ t \end{pmatrix} + B(t)$ a Brownian motion w.r.t. $Q$.

8.13. a)



Proof. {Xtx ≥ M } ∈ Ft , so it suffices to show Q(Xtx ≥ M ) > 0 for any probability measure Q which is
equivalent to P on Ft . By Girsanov’s theorem, we can find such a Q so that Xt is a Brownian motion w.r.t.
Q. So Q(Xtx ≥ M ) > 0, which implies P (Xtx ≥ M ) > 0.
b)
Proof. Use the law of the iterated logarithm; the proof is similar to that of Exercise 8.11 b).
8.15. a)
Proof. We define a probability measure $Q$ by $dQ|_{\mathcal{F}_t} = M_t\,dP|_{\mathcal{F}_t}$, where
$$M_t = \exp\Big\{\int_0^t \alpha(B_s)dB_s - \frac{1}{2}\int_0^t \alpha^2(B_s)ds\Big\}.$$
Then by Girsanov's theorem, $\hat B_t = B_t - \int_0^t \alpha(B_s)ds$ is a Brownian motion under $Q$. So $B_t$ satisfies the SDE $dB_t = \alpha(B_t)dt + d\hat B_t$. By Theorem 8.1.4, the solution can be represented as
$$E_Q^x[f(B_t)] = E^x\Big[\exp\Big(\int_0^t \alpha(B_s)dB_s - \frac{1}{2}\int_0^t \alpha^2(B_s)ds\Big)f(B_t)\Big].$$
Remark: To see the advantage of this approach, note that the given PDE is like Kolmogorov's backward equation. So directly applying Theorem 8.1.1, we get the solution $E^x[f(X_t)]$, where $X$ solves the SDE $dX_t = \alpha(X_t)dt + dB_t$. However, the formula $E^x[f(X_t)]$ is not sufficiently explicit if $\alpha$ is non-trivial and the expression of $X$ is hard to obtain. Resorting to Girsanov's theorem makes the formula more explicit.
b)
Proof. With $\alpha = \nabla\gamma$, Itô's formula gives $\int_0^t \nabla\gamma(B_s)\cdot dB_s = \gamma(B_t) - \gamma(B_0) - \frac{1}{2}\int_0^t \Delta\gamma(B_s)ds$, so
$$e^{\int_0^t \alpha(B_s)\cdot dB_s - \frac{1}{2}\int_0^t |\alpha(B_s)|^2 ds} = e^{\gamma(B_t) - \gamma(B_0) - \frac{1}{2}\int_0^t (|\nabla\gamma|^2 + \Delta\gamma)(B_s)ds}.$$
So
$$u(t,x) = e^{-\gamma(x)} E^x\Big[e^{\gamma(B_t)} f(B_t)\, e^{-\frac{1}{2}\int_0^t (|\nabla\gamma|^2 + \Delta\gamma)(B_s)ds}\Big].$$

c)
Proof. By the Feynman-Kac formula and part b),
$$v(t,x) = E^x\Big[e^{\gamma(B_t)} f(B_t)\, e^{-\frac{1}{2}\int_0^t (|\nabla\gamma|^2 + \Delta\gamma)(B_s)ds}\Big] = e^{\gamma(x)} u(t,x).$$

8.16 a)
Proof. Let $L_t = -\int_0^t \sum_{i=1}^n \frac{\partial h}{\partial x_i}(X_s)\,dB_s^i$. Then $L$ is a square-integrable martingale. Furthermore, $\langle L\rangle_T = \int_0^T |\nabla h(X_s)|^2\,ds$ is bounded, since $h \in C_0^1(\mathbb{R}^n)$. By Novikov's condition, $M_t = \exp\{L_t - \frac{1}{2}\langle L\rangle_t\}$ is a martingale. We define $\bar P$ on $\mathcal{F}_T$ by $d\bar P = M_T\,dP$. Then $X$, which satisfies
$$dX_t = \nabla h(X_t)dt + dB_t,$$
is a BM under $\bar P$. So
$$E^x[f(X_t)] = \bar E^x[M_t^{-1} f(X_t)] = \bar E^x\Big[e^{\int_0^t \sum_{i=1}^n \frac{\partial h}{\partial x_i}(X_s)dX_s^i - \frac{1}{2}\int_0^t |\nabla h(X_s)|^2 ds} f(X_t)\Big] = E^x\Big[e^{\int_0^t \sum_{i=1}^n \frac{\partial h}{\partial x_i}(B_s)dB_s^i - \frac{1}{2}\int_0^t |\nabla h(B_s)|^2 ds} f(B_t)\Big].$$
Applying Itô's formula to $Z_t = h(B_t)$, we get
$$h(B_t) - h(B_0) = \int_0^t \sum_{i=1}^n \frac{\partial h}{\partial x_i}(B_s)\,dB_s^i + \frac{1}{2}\int_0^t \sum_{i=1}^n \frac{\partial^2 h}{\partial x_i^2}(B_s)\,ds.$$
So
$$E^x[f(X_t)] = E^x\big[e^{h(B_t)-h(B_0)}\, e^{-\int_0^t V(B_s)ds}\, f(B_t)\big].$$

b)
Proof. If $Y$ is the process obtained by killing $B_t$ at rate $V$, then it has transition operator
$$T_t^Y(g, x) = E^x\big[e^{-\int_0^t V(B_s)ds}\, g(B_t)\big].$$
So the equality in part a) can be written as
$$T_t^X(f, x) = e^{-h(x)}\, T_t^Y(f e^h, x).$$
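The identity in part a) can be sanity-checked in one dimension by Monte Carlo. A sketch with the illustrative (not from the exercise) choices $h = \sin$, so $V = \frac{1}{2}(\cos^2 - \sin)$, and $f = \tanh$; the Euler scheme and the left-point rule for $\int_0^t V(B_s)ds$ introduce a small discretization bias:

```python
import numpy as np

# Monte Carlo sketch of E^x[f(X_t)] = E^x[e^{h(B_t)-h(x)} e^{-int_0^t V(B_s)ds} f(B_t)]
# with h = sin, f = tanh (illustrative), V = (1/2)(h'^2 + h''), dX = h'(X)dt + dB.
rng = np.random.default_rng(3)
x, t, nsteps, npaths = 0.3, 1.0, 200, 100_000
dt = t / nsteps

X = np.full(npaths, x)      # Euler scheme for X
B = np.full(npaths, x)      # Brownian path started at x
intV = np.zeros(npaths)     # left-point rule for int_0^t V(B_s) ds
for _ in range(nsteps):
    dW = np.sqrt(dt) * rng.standard_normal(npaths)
    X += np.cos(X) * dt + dW                        # h'(X) = cos(X)
    intV += 0.5 * (np.cos(B)**2 - np.sin(B)) * dt   # V = (cos^2 - sin)/2
    B += dW                 # same driving noise couples the two estimators

lhs = np.tanh(X).mean()
rhs = (np.exp(np.sin(B) - np.sin(x) - intV) * np.tanh(B)).mean()
```

Driving both paths with the same noise keeps the two estimators highly correlated, so the agreement is visible even at moderate sample sizes.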

8.17.
Proof.
$$dY(t) = \begin{pmatrix} dY_1(t) \\ dY_2(t) \end{pmatrix} = \begin{pmatrix} \beta_1(t) \\ \beta_2(t) \end{pmatrix}dt + \begin{pmatrix} 1 & 2 & 3 \\ 1 & 2 & 2 \end{pmatrix}\begin{pmatrix} dB_1(t) \\ dB_2(t) \\ dB_3(t) \end{pmatrix}.$$
So equation (8.6.17) takes the form
$$\begin{pmatrix} 1 & 2 & 3 \\ 1 & 2 & 2 \end{pmatrix}\begin{pmatrix} u_1 \\ u_2 \\ u_3 \end{pmatrix} = \begin{pmatrix} \beta_1(t) \\ \beta_2(t) \end{pmatrix}.$$
The general solution is $u_1 = -2u_2 + \beta_1 - 3(\beta_1 - \beta_2) = -2u_2 - 2\beta_1 + 3\beta_2$ and $u_3 = \beta_1 - \beta_2$. Define $Q$ by (8.6.19); then there are infinitely many equivalent martingale measures $Q$, as $u_2$ varies.
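The underdetermined $2\times3$ system can be verified numerically; a sketch assuming the coefficient matrix read off above, with hypothetical values for $\beta_1, \beta_2$:

```python
import numpy as np

# The 2x3 system sigma u = beta is underdetermined: its solutions form a line
# parametrized by u2. sigma is the matrix as read off above; the beta values
# are hypothetical.
sigma = np.array([[1.0, 2.0, 3.0],
                  [1.0, 2.0, 2.0]])
beta = np.array([0.7, 0.2])

def u_of(u2):
    # General solution: u1 = -2*u2 - 2*b1 + 3*b2, u3 = b1 - b2
    b1, b2 = beta
    return np.array([-2.0 * u2 - 2.0 * b1 + 3.0 * b2, u2, b1 - b2])
```

Every member of the one-parameter family solves the system, which is what makes the equivalent martingale measure non-unique here.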
9.2. (i)
Proof. The book's solution is detailed enough. We only comment that for any bounded or positive $g \in \mathcal{B}(\mathbb{R}_+\times\mathbb{R})$,
$$E^{s,x}[g(X_t)] = E[g(s+t, B_t^x)],$$
where the left hand side is expectation under the measure induced by $X_t^{s,x}$ on $\mathbb{R}^2$, while the right hand side is expectation under the original given probability measure $P$.
Remark: The adding-one-dimension trick in the solution is quite typical and useful. Often in applications, the SDE of our interest may not be homogeneous and the coefficients are functions of both $X$ and $t$. However, to obtain the (strong) Markov property, it is necessary that the SDE is homogeneous. If we augment the original SDE with an additional equation $d\bar X_t = dt$ or $d\bar X_t = -dt$, then the SDE system is an $(n+1)$-dimensional SDE driven by an $m$-dimensional BM. The solution $Y_t^{s,x} = (\bar X_t, X_t)$ (with $\bar X_0 = s$ and $X_0 = x$) can be identified with a probability measure $P^{s,x}$ on $\mathbb{R}^{n+1}$, with $P^{s,x} = Y^{s,x}(P)$, where $Y^{s,x}(P)$ means the distribution of $Y^{s,x}$. With this perspective, we have $E^{s,x}[g(X_t)] = E[g(t+s, B_t^x)]$.
Abstractly speaking, the (strong) Markov property of an SDE solution can be formulated precisely as follows. Suppose we have a filtered probability space $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t\ge0}, P)$, on which an $m$-dimensional continuous semimartingale $Z$ is defined. Then we can consider an $n$-dimensional SDE driven by $Z$, $dX_t = f(t, X_t)dZ_t$. If $X^x$ is a solution with $X_0 = x$, the distribution $X^x(P)$ of $X^x$, denoted by $P^x$, induces a probability measure on $C(\mathbb{R}_+, \mathbb{R}^n)$. The (strong) Markov property then means the coordinate process defined on $C(\mathbb{R}_+, \mathbb{R}^n)$ is a (strong) Markov process under the family of measures $(P^x)_{x\in\mathbb{R}^n}$. Usually, we need the SDE $dX_t = f(t, X_t)dZ_t$ to be homogeneous, i.e. $f(t,x) = f(x)$, and the driving process $Z$ to be itself a Markov process. When $Z$ is a BM, we emphasize that it is a standard BM (cf. [8] Chapter IX, Definition 1.2).
9.5. a)
Proof. If $\frac{1}{2}\Delta u = -\lambda u$ in $D$, then by the integration by parts formula, we have
$$-\lambda\langle u, u\rangle = -\lambda\int_D u^2(x)\,dx = \frac{1}{2}\int_D u(x)\Delta u(x)\,dx = -\frac{1}{2}\int_D \nabla u(x)\cdot\nabla u(x)\,dx \le 0.$$
So $\lambda \ge 0$. Because $u$ is not identically zero, we must have $\lambda > 0$.
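A finite-difference illustration of a): discretizing $-\frac{1}{2}\Delta$ on $D = (0,\pi)$ with Dirichlet boundary conditions, all eigenvalues $\lambda$ come out strictly positive, the smallest near $\frac{1}{2}$ (corresponding to $u = \sin x$). The grid size is an arbitrary choice:

```python
import numpy as np

# Discretize -(1/2) d^2/dx^2 on (0, pi) with Dirichlet boundary conditions.
# The exercise says all eigenvalues lambda in (1/2)u'' = -lambda*u are positive;
# the smallest should approximate 1/2 (from u = sin x).
n = 400  # interior grid points (arbitrary choice)
h = np.pi / (n + 1)
lap = (np.diag(-2.0 * np.ones(n)) +
       np.diag(np.ones(n - 1), 1) +
       np.diag(np.ones(n - 1), -1)) / h**2
lambdas = -0.5 * np.linalg.eigvalsh(lap)   # eigenvalues of -(1/2)Δ
```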
b)
Proof. We follow the hint. Let $u$ be a solution of (9.3.31) with $\lambda = \rho$. Applying Dynkin's formula to the process $dY_t = (dt, dB_t)$ and the function $f(t,x) = e^{\rho t}u(x)$, we get
$$E^{(t,x)}[f(Y_{\tau\wedge n})] = f(t,x) + E^{(t,x)}\Big[\int_0^{\tau\wedge n} Lf(Y_s)\,ds\Big].$$
Since $Lf(t,x) = \rho e^{\rho t}u(x) + \frac{1}{2}e^{\rho t}\Delta u(x) = 0$, we have $E^{(t,x)}[e^{\rho(\tau\wedge n)}u(B_{\tau\wedge n})] = e^{\rho t}u(x)$. Let $t = 0$ and $n \uparrow \infty$; then we are done. Note $\forall\,\xi \in b\mathcal{F}_\infty$, $E^{(t,x)}[\xi] = E^x[\xi]$ (cf. (7.1.7)).
c)
Proof. This is straightforward from b).
9.6.
Proof. Suppose $f \in C_0^2(\mathbb{R}^n)$ and let $g(t,x) = e^{-\alpha t}f(x)$. If $\tau$ satisfies the condition $E^x[\tau] < \infty$, then by Dynkin's formula applied to $Y$ and $g$, we have
$$E^{(t,x)}[e^{-\alpha\tau}f(X_\tau)] = e^{-\alpha t}f(x) + E^{(t,x)}\Big[\int_0^\tau \Big(\frac{\partial}{\partial s} + A\Big)g(s, X_s)\,ds\Big].$$
That is,
$$E^x[e^{-\alpha\tau}f(X_\tau)] = e^{-\alpha t}f(x) + E^x\Big[\int_0^\tau e^{-\alpha s}(-\alpha + A)f(X_s)\,ds\Big].$$
Letting $t = 0$, we get
$$E^x[e^{-\alpha\tau}f(X_\tau)] = f(x) + E^x\Big[\int_0^\tau e^{-\alpha s}(A - \alpha)f(X_s)\,ds\Big].$$
If $\alpha > 0$, then for any stopping time $\tau$, we have
$$E^x[e^{-\alpha(\tau\wedge n)}f(X_{\tau\wedge n})] = f(x) + E^x\Big[\int_0^{\tau\wedge n} e^{-\alpha s}(A - \alpha)f(X_s)\,ds\Big].$$
Let $n \uparrow \infty$ and apply the dominated convergence theorem; then we are done.
9.7. a)
Proof. Without loss of generality, assume $y = 0$. First, we consider the case $x \neq 0$. Following the hint and noting that $\ln|x|$ is harmonic in $\mathbb{R}^2\setminus\{0\}$, we have $E^x[f(B_\tau)] = f(x)$, since $E^x[\tau] = \frac{1}{2}E^x[|B_\tau|^2] < \infty$. If we define $\tau_\rho = \inf\{t > 0 : |B_t| \le \rho\}$ and $\tau_R = \inf\{t > 0 : |B_t| \ge R\}$, then
$$P^x(\tau_\rho < \tau_R)\ln\rho + P^x(\tau_\rho > \tau_R)\ln R = \ln|x|, \qquad P^x(\tau_\rho < \tau_R) + P^x(\tau_\rho > \tau_R) = 1.$$
So $P^x(\tau_\rho < \tau_R) = \frac{\ln R - \ln|x|}{\ln R - \ln\rho}$. Hence
$$P^x(\tau_0 < \infty) = \lim_{R\to\infty}\lim_{\rho\to0} P^x(\tau_\rho < \tau_R) = \lim_{R\to\infty}\lim_{\rho\to0}\frac{\ln R - \ln|x|}{\ln R - \ln\rho} = 0.$$
For the case $x = 0$, we have
\begin{align*}
P^0(\exists\, t > 0,\ B_t = 0) &= P^0(\exists\, \epsilon > 0,\ \tau_0\circ\theta_\epsilon < \infty)\\
&= P^0\big(\cup_{\epsilon > 0,\,\epsilon\in\mathbb{Q}^+}\{\tau_0\circ\theta_\epsilon < \infty\}\big)\\
&= \lim_{\epsilon\to0} P^0(\tau_0\circ\theta_\epsilon < \infty)\\
&= \lim_{\epsilon\to0} E^0[P^{B_\epsilon}(\tau_0 < \infty)]\\
&= \lim_{\epsilon\to0}\int \frac{e^{-\frac{|z|^2}{2\epsilon}}}{2\pi\epsilon}\, P^z(\tau_0 < \infty)\,dz\\
&= 0.
\end{align*}

b)
Proof. $\tilde B_t = \begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix} B_t$ and $\begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix}$ is orthogonal, so $\tilde B$ is also a Brownian motion.

c)
Proof. $P^0(\tau_D = 0) = \lim_{\epsilon\to0} P^0(\tau_D \le \epsilon) \ge \lim_{\epsilon\to0} P^0(\exists\, t\in(0,\epsilon],\ B_t^{(1)} \ge 0,\ B_t^{(2)} = 0)$. Note that $P^0(\exists\, t\in(0,\epsilon],\ B_t^{(2)} = 0) = 1$ and
$$P^0(\exists\, t\in(0,\epsilon],\ B_t^{(2)} = 0) \le P^0(\exists\, t\in(0,\epsilon],\ B_t^{(1)} \ge 0,\ B_t^{(2)} = 0) + P^0(\exists\, t\in(0,\epsilon],\ B_t^{(1)} \le 0,\ B_t^{(2)} = 0).$$
Part a) implies $P^0(\exists\, t\in(0,\epsilon],\ B_t^{(1)} = 0,\ B_t^{(2)} = 0) = 0$, and part b) implies
$$P^0(\exists\, t\in(0,\epsilon],\ B_t^{(1)} \ge 0,\ B_t^{(2)} = 0) = P^0(\exists\, t\in(0,\epsilon],\ B_t^{(1)} \le 0,\ B_t^{(2)} = 0).$$
So $P^0(\exists\, t\in(0,\epsilon],\ B_t^{(1)} \ge 0,\ B_t^{(2)} = 0) \ge \frac{1}{2}$. Hence $P^0(\tau_D = 0) \ge \frac{1}{2}$. By Blumenthal's 0-1 law, $P^0(\tau_D = 0) = 1$, i.e. 0 is a regular boundary point.
d)
(2)

Proof. P 0 (τD = 0) ≤ P 0 (∃ t > 0, Bt = 0) ≤ P 0 (∃ t > 0, Bt
boundary point.

(3)

= Bt

= 0) = 0. So 0 is an irregular

9.9. a)
Proof. Assume $g$ has a local maximum at $x \in G$. Let $U \subset\subset G$ be an open set containing $x$; then $g(x) = E^x[g(X_{\tau_U})]$ and $g(x) \ge g(X_{\tau_U})$ on $\{\tau_U < \infty\}$. When $X$ is non-degenerate, $P^x(\tau_U < \infty) = 1$. So we must have $g(x) = g(X_{\tau_U})$ a.s. This implies $g$ is locally constant. Since $G$ is connected, $g$ is identically constant.
9.10.
Proof. Consider the diffusion process $Y$ that satisfies
$$dY_t = \begin{pmatrix} dt \\ dX_t \end{pmatrix} = \begin{pmatrix} dt \\ \alpha X_t\,dt + \beta X_t\,dB_t \end{pmatrix} = \begin{pmatrix} 1 \\ \alpha X_t \end{pmatrix}dt + \begin{pmatrix} 0 \\ \beta X_t \end{pmatrix}dB_t.$$
Let $\tau = \inf\{t > 0 : Y_t \notin (0,T)\times(0,\infty)\}$; then by Theorem 9.3.3,
\begin{align*}
f(t,x) &= E^{(t,x)}[e^{-\rho\tau}\phi(X_\tau)] + E^{(t,x)}\Big[\int_0^\tau K(X_s)e^{-\rho s}\,ds\Big]\\
&= E[e^{-\rho(T-t)}\phi(X_{T-t}^x)] + E\Big[\int_0^{T-t} K(X_s^x)e^{-\rho(s+t)}\,ds\Big],
\end{align*}
where $X_t^x = xe^{(\alpha - \frac{\beta^2}{2})t + \beta B_t}$. Then it is easy to calculate
$$f(t,x) = e^{-\rho(T-t)}E[\phi(X_{T-t}^x)] + \int_0^{T-t} e^{-\rho(s+t)}E[K(X_s^x)]\,ds.$$

9.11. a)
Proof. First assume $F$ is closed. Let $\{\phi_n\}_{n\ge1}$ be a sequence of bounded continuous functions defined on $\partial D$ such that $\phi_n \to 1_F$ boundedly. This is possible due to the Tietze extension theorem. Let $h_n(x) = E^x[\phi_n(B_\tau)]$. Then by Theorem 9.2.14, $h_n \in C(\bar D)$ and $\Delta h_n(x) = 0$ in $D$. So by the Poisson formula, for $z = re^{i\theta} \in D$,
$$h_n(z) = \frac{1}{2\pi}\int_0^{2\pi} P_r(t - \theta)h_n(e^{it})\,dt.$$
Let $n \to \infty$: $h_n(z) \to E^z[1_F(B_\tau)] = P^z(B_\tau \in F)$ by the bounded convergence theorem, and the RHS tends to $\frac{1}{2\pi}\int_0^{2\pi} P_r(t-\theta)1_F(e^{it})\,dt$ by the dominated convergence theorem. Hence
$$P^z(B_\tau \in F) = \frac{1}{2\pi}\int_0^{2\pi} P_r(t - \theta)1_F(e^{it})\,dt.$$
Then by the $\pi$-$\lambda$ theorem and the fact that the Borel $\sigma$-field is generated by closed sets, we conclude that this holds for any Borel subset $F$ of $\partial D$.
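A quick numerical sanity check that the formula defines a probability measure on $\partial D$: the standard Poisson kernel $P_r(s) = \frac{1-r^2}{1-2r\cos s + r^2}$ integrates to $2\pi$ over one period. The interior point $z = re^{i\theta}$ is an arbitrary choice:

```python
import numpy as np

# Check (1/2pi) * int_0^{2pi} P_r(t - theta) dt = 1 for the standard Poisson
# kernel P_r(s) = (1 - r^2)/(1 - 2 r cos s + r^2); r, theta are arbitrary.
r, theta = 0.6, 1.1
n = 200_000
ts = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
P = (1 - r**2) / (1 - 2 * r * np.cos(ts - theta) + r**2)
total = P.mean()  # rectangle rule for (1/2pi) * integral over one period
```

The rectangle rule is spectrally accurate here because the integrand is smooth and periodic.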
b)
Proof. Let $B$ be a BM starting at 0. By Example 8.5.9, $\phi(B_t)$ is, after a change of time scale $\alpha(t)$ and under the original probability measure $P$, a BM in the plane. $\forall F \in \mathcal{B}(\mathbb{R})$,
\begin{align*}
P(B \text{ exits } D \text{ from } \psi(F)) &= P(\phi(B) \text{ exits the upper half plane from } F)\\
&= P(\phi(B)_{\alpha(t)} \text{ exits the upper half plane from } F)\\
&= \text{probability that a BM starting at } i \text{ exits from } F\\
&= \mu(F).
\end{align*}
So by part a), $\mu(F) = \frac{1}{2\pi}\int_0^{2\pi} 1_{\psi(F)}(e^{it})\,dt = \frac{1}{2\pi}\int_0^{2\pi} 1_F(\phi(e^{it}))\,dt$. This implies
$$\int_{\mathbb{R}} f(\xi)\,d\mu(\xi) = \frac{1}{2\pi}\int_0^{2\pi} f(\phi(e^{it}))\,dt = \frac{1}{2\pi i}\oint_{\partial D}\frac{f(\phi(z))}{z}\,dz.$$

