# Independent And Stationary Sequences Of Random Variables - Chapter 19


The separate sections of this chapter are not related to one another, except in so far as they illustrate or extend the results of Chapter 18.

## § 1. The central limit theorem for homogeneous Markov chains

Consider a homogeneous Markov chain with a finite number of states (labelled $1, 2, \ldots, k$) and transition matrix $P = (p_{ij})$ (see, for instance, Chapter III of [47]). If $X_n$ is the state of the system at time $n$, we have the sequence of random variables

$$X_1, X_2, \ldots, X_n, \ldots \tag{19.1.1}$$

We denote by $p_{ij}^{(n)}$ the probability of moving from state $i$ to state $j$ in $n$ steps. If for some $s > 0$, $p_{ij}^{(s)} > 0$ for all $i, j$, then Markov's theorem [47] states that the limits

$$p_j = \lim_{n\to\infty} p_{ij}^{(n)}$$

exist for all $i$ and $j$ and do not depend on $i$, and that, for constants $C, \rho$ $(0 < \rho < 1)$,

$$\max_{i,j} \big| p_{ij}^{(n)} - p_j \big| \le C\rho^n. \tag{19.1.2}$$

The numbers $p_1, p_2, \ldots, p_k$ form a stationary probability distribution in the sense that, if $P(X_1 = j) = p_j$ for all $j$, then the variables $X_n$ form a stationary sequence. It then follows from (19.1.2) that $X_n$ is uniformly mixing, since, if

$$A = \{X_1 = i_1, X_2 = i_2, \ldots, X_r = i_r\}, \qquad B = \{X_{n+r} = i_{n+r}, \ldots, X_s = i_s\},$$

then

$$P(AB) = p_{i_1} p_{i_1 i_2} \cdots p_{i_{r-1} i_r}\, p_{i_r i_{n+r}}^{(n)}\, p_{i_{n+r} i_{n+r+1}} \cdots p_{i_{s-1} i_s} = P(A)\, p_{i_r i_{n+r}}^{(n)}\, p_{i_{n+r} i_{n+r+1}} \cdots p_{i_{s-1} i_s},$$

$$P(A)P(B) = P(A)\, p_{i_{n+r}}\, p_{i_{n+r} i_{n+r+1}} \cdots p_{i_{s-1} i_s},$$

so that

$$|P(AB) - P(A)P(B)| \le P(A)\,\big| p_{i_r i_{n+r}}^{(n)} - p_{i_{n+r}} \big| \le P(A)\, C\rho^n.$$

Let $f(\xi)$ be any real function defined on the states of the chain. Application of Theorem 18.5.2 shows that the central limit theorem applies to the sequence $f(X_j)$ whenever

$$\sigma^2 = E\{f(X_1) - Ef(X_1)\}^2 + 2\sum_{j=1}^{\infty} E\{f(X_{j+1}) - Ef(X_{j+1})\}\{f(X_1) - Ef(X_1)\} \ne 0.$$

If $\pi = (\pi_1, \pi_2, \ldots, \pi_k)$ is any other initial distribution, we denote the corresponding probability and expectation by $P_\pi$ and $E_\pi$.

**Theorem 19.1.1.** *Assume that (19.1.2) holds, and that $\sigma \ne 0$. Then, for any initial distribution $\pi$,*

$$\lim_{n\to\infty} P_\pi\left\{\sigma^{-1} n^{-1/2} \sum_{j=1}^n \{f(X_j) - Ef(X_j)\} < z\right\} = (2\pi)^{-1/2} \int_{-\infty}^z e^{-u^2/2}\, du.$$

*Proof.* The theorem is already proved for the case $\pi = (p_1, p_2, \ldots, p_k)$. Thus, denoting the normalised sum as usual by $Z_n$, and setting $r = [\log n]$,

$$|E_\pi e^{itZ_n} - e^{-t^2/2}| \le |e^{-t^2/2} - E e^{itZ_n}| + E_\pi\left|\exp\left\{\frac{it}{\sigma n^{1/2}} \sum_{j=1}^{r} [f(X_j) - Ef(X_j)]\right\} - 1\right| + E\left|\exp\left\{\frac{it}{\sigma n^{1/2}} \sum_{j=1}^{r} [f(X_j) - Ef(X_j)]\right\} - 1\right| + \left|(E_\pi - E)\exp\left\{\frac{it}{\sigma n^{1/2}} \sum_{j=r+1}^{n} [f(X_j) - Ef(X_j)]\right\}\right| + o(1).$$

The first term is $o(1)$ by the stationary case. The last term is bounded, using (19.1.2), by

$$\sum_{j_{r+1}, \ldots, j_n = 1}^{k} \Big| \sum_i \pi_i\, p_{i\, j_{r+1}}^{(r+1)} - p_{j_{r+1}} \Big|\, p_{j_{r+1} j_{r+2}} \cdots p_{j_{n-1} j_n},$$

so that altogether

$$|E_\pi e^{itZ_n} - e^{-t^2/2}| \le 2C\rho^r + \frac{2|t| \log n}{\sigma n^{1/2}} \max_i |f(i)| + o(1) \to 0 \qquad (n \to \infty). \tag{19.1.3}$$

The results cited above extend to Markov chains on an arbitrary state space, for the theory of which the reader is referred to Chapter V, § 5 of [31]. The transition matrix $(p_{ij})$ is replaced by the transition function $p(x, A)$, defined for all points $x$ of the state space $\mathfrak{X}$ and all elements $A$ of the $\sigma$-algebra $\mathfrak{B}_{\mathfrak{X}}$ of subsets of $\mathfrak{X}$. We choose an initial distribution $\pi(A)$, a probability measure on $(\mathfrak{X}, \mathfrak{B}_{\mathfrak{X}})$. Using the Kolmogorov extension theorem, we can find a sequence of random variables

$$X_1, X_2, \ldots, X_n, \ldots \tag{19.1.4}$$

with values in $\mathfrak{X}$, such that

$$P(X_1 \in A_1, X_2 \in A_2, \ldots, X_n \in A_n) = \int_{A_1} \pi(d\xi_1) \int_{A_2} p(\xi_1, d\xi_2) \cdots \int_{A_n} p(\xi_{n-1}, d\xi_n). \tag{19.1.5}$$

The $n$-step transition probabilities $p^{(n)}(\xi, A)$ are given by

$$p^{(1)}(\xi, A) = p(\xi, A), \qquad p^{(n)}(\xi, A) = \int_{\mathfrak{X}} p^{(n-1)}(\eta, A)\, p(\xi, d\eta) \quad (n \ge 2),$$

and

$$\pi_1(A) = \pi(A), \qquad \pi_n(A) = \int_{\mathfrak{X}} p^{(n-1)}(\xi, A)\, \pi(d\xi) \quad (n \ge 2). \tag{19.1.6}$$

Under reasonably weak conditions, there exists a stationary measure $p(A)$, such that

$$\sup_{\xi, A} \big| p^{(n)}(\xi, A) - p(A) \big| \le C\rho^n, \tag{19.1.7}$$

where $C, \rho$ are constants, $0 < \rho < 1$. This is true, for instance (see [23]), if (1) there is a finite measure $m$ on $\mathfrak{B}_{\mathfrak{X}}$, with $m(\mathfrak{X}) > 0$, an integer $\nu$ and a positive number $\varepsilon$ such that $p^{(\nu)}(\xi, A) \le 1 - \varepsilon$ whenever $m(A) \le \varepsilon$ (Doeblin's condition), and (2) there is only one ergodic set.
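For a concrete finite chain, the geometric convergence (19.1.2) towards the stationary distribution can be observed numerically. The following sketch uses an arbitrary illustrative $3 \times 3$ matrix with strictly positive entries (any such matrix would do); it is not taken from the text.

```python
import numpy as np

# An illustrative 3-state chain with all entries positive, so that
# Markov's theorem applies with s = 1.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

# Stationary distribution p = (p_1, ..., p_k): the left eigenvector of P
# for eigenvalue 1, normalised to sum to 1.
vals, vecs = np.linalg.eig(P.T)
p = np.real(vecs[:, np.argmax(np.real(vals))])
p /= p.sum()

def dist_to_stationary(n):
    """max_{i,j} |p_ij^(n) - p_j|, the quantity bounded by C rho^n in (19.1.2)."""
    Pn = np.linalg.matrix_power(P, n)
    return np.abs(Pn - p).max()

errs = [dist_to_stationary(n) for n in (1, 5, 10, 20)]
print(errs)  # decays geometrically, governed by the second-largest eigenvalue
```

The decay rate $\rho$ here is the modulus of the second-largest eigenvalue of $P$; the uniform mixing bound $\varphi(n) \le 2C\rho^n$ derived above inherits exactly this rate.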
If the measure $p$ is taken as the initial distribution, then the sequence (19.1.4) is stationary. Moreover, (19.1.7) implies the uniform mixing condition, since

$$\big| P(X_1 \in A_1, \ldots, X_k \in A_k,\ X_{n+k} \in A_{n+k}, \ldots, X_s \in A_s) - P(X_1 \in A_1, \ldots, X_k \in A_k)\, P(X_{n+k} \in A_{n+k}, \ldots, X_s \in A_s) \big|$$

$$\le P(X_1 \in A_1, \ldots, X_k \in A_k)\, \sup_{\xi_k}\left| \int_{A_{n+k}} \big[ p^{(n)}(\xi_k, d\xi_{n+k}) - p(d\xi_{n+k}) \big] \int_{A_{n+k+1}} p(\xi_{n+k}, d\xi_{n+k+1}) \cdots \int_{A_s} p(\xi_{s-1}, d\xi_s) \right|$$

$$\le 2C\rho^n\, P(X_1 \in A_1, \ldots, X_k \in A_k),$$

the factor 2 arising because (19.1.7) bounds the total variation of $p^{(n)}(\xi, \cdot) - p(\cdot)$ by $2C\rho^n$. Thus the mixing coefficient satisfies $\varphi(n) \le 2C\rho^n$. The reader will be able to show, conversely, that if (19.1.4) is uniformly mixing, then (19.1.7) holds. Consequently the uniform mixing coefficient of a Markov chain either does not tend to zero, or decreases exponentially fast.

**Theorem 19.1.2.** *Let (19.1.7) hold, and let $f(\xi)$ be a real-valued measurable function on $\mathfrak{X}$. If*

$$E|f(X_1)|^2 = \int_{\mathfrak{X}} f(\xi)^2\, p(d\xi) < \infty$$

*and if*

$$\sigma^2 = E\{f(X_1) - Ef(X_1)\}^2 + 2\sum_{j=1}^{\infty} E\{f(X_1) - Ef(X_1)\}\{f(X_{j+1}) - Ef(X_{j+1})\} \ne 0,$$

*then for any initial distribution $\pi(A)$,*

$$\lim_{n\to\infty} P_\pi\left\{\sigma^{-1} n^{-1/2} \sum_{j=1}^n [f(X_j) - Ef(X_j)] < z\right\} = (2\pi)^{-1/2} \int_{-\infty}^z e^{-u^2/2}\, du.$$

*Proof.* If the initial distribution $\pi(A)$ is the stationary distribution $p(A)$, the stationary sequence $f(X_j)$ is uniformly mixing, and the theorem is a special case of Theorem 18.5.2. For an arbitrary initial distribution we proceed exactly as in the proof of Theorem 19.1.1, estimating the difference $E_\pi - E$.

In a similar way we may use Theorem 18.6.1 to prove the following result.

**Theorem 19.1.3.** *Let (19.1.7) hold, and let $f = f(\xi_1, \xi_2, \ldots)$ be a real-valued function on the infinite product $\mathfrak{X} \times \mathfrak{X} \times \cdots$, measurable with respect to the product $\sigma$-algebra $\mathfrak{B}_{\mathfrak{X}} \times \mathfrak{B}_{\mathfrak{X}} \times \cdots$. Write $f_j = f(X_j, X_{j+1}, \ldots)$. If $E(f_1^2) < \infty$, and if*

$$\sigma^2 = E(f_1 - Ef_1)^2 + 2\sum_{j=1}^{\infty} E(f_1 - Ef_1)(f_{j+1} - Ef_{j+1}) \ne 0,$$

*then for any initial distribution $\pi$, as $n \to \infty$,*

$$P_\pi\left\{\sigma^{-1} n^{-1/2} \sum_{j=1}^n (f_j - Ef_j) < z\right\} \to (2\pi)^{-1/2} \int_{-\infty}^z e^{-u^2/2}\, du.$$

## § 2. m-dependent sequences

A sequence of random variables $\ldots, X_{-1}, X_0, X_1, X_2, \ldots$ is said to be *$m$-dependent* if the random vectors

$$(X_{a-p}, X_{a-p+1}, \ldots, X_a), \qquad (X_b, X_{b+1}, \ldots, X_{b+q})$$

are independent whenever $b - a > m$, or equivalently if the $\sigma$-algebras $\mathfrak{M}_{-\infty}^{a}$ and $\mathfrak{M}_{b}^{\infty}$ are independent when $b - a > m$. The latter form of the definition has an obvious extension to the case of a process $X(t)$ with continuous time parameter.

A simple method of constructing $m$-dependent sequences (see [25] for examples occurring in statistics) is as follows. Let $\ldots, \xi_{-1}, \xi_0, \xi_1, \ldots$ be independent variables, and $f(x_1, \ldots, x_m)$ a function of $m$ real variables. Then

$$X_j = f(\xi_j, \xi_{j+1}, \ldots, \xi_{j+m-1}) \tag{19.2.1}$$
defines an $m$-dependent sequence. The converse is false; there are $m$-dependent sequences not expressible in the form (19.2.1). Since Gaussian variables are independent if they are orthogonal, a stationary Gaussian process $X_t$ is $m$-dependent if and only if its autocovariance function $R_t$ is identically zero in $|t| > m$.

An $m$-dependent sequence is trivially uniformly mixing, with $\varphi(n) = 0$ for $n > m$. Hence the following result is a special case of Theorem 18.5.2.

**Theorem 19.2.1.** *Let $X_j$ be a stationary $m$-dependent sequence with $EX_0 = 0$ and $EX_0^2 < \infty$. Then*

$$\sigma^2 = EX_0^2 + 2\sum_{j=1}^{\infty} EX_0 X_j$$

*converges (the terms with $j > m$ vanishing), and if $\sigma \ne 0$,*

$$\lim_{n\to\infty} P\left\{\sigma^{-1} n^{-1/2} \sum_{j=1}^n X_j < z\right\} = \Phi(z).$$

## § 3. The distribution of values of sums of the form $\sum f(2^k x)$

Let $f(t)$ be a periodic function of the real argument $t$, with period 1, and consider the distribution of the values of the sum

$$S_n(t) = \sum_{k=1}^n f(2^k t). \tag{19.3.1}$$

Such sums are of considerable importance in the metric theory of numbers, and as such have been studied by a number of authors. An important special case is the function $f(t) = \{t\}$, the fractional part of $t$. The reason for discussing the problem here is that it is a special case of those discussed in § 18.6. Indeed, for $0 \le t < 1$,

$$S_n(t) = \sum_{k=1}^n f(\{2^k t\}) = \sum_{k=1}^n f(T^k t),$$

where $T$ is the mapping of $[0, 1)$ into itself defined by $T t = \{2t\}$.
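The identification of $Tt = \{2t\}$ with the shift on binary digits, used repeatedly below, can be seen concretely in a small sketch. The sample point $t = 5/7$ is an arbitrary illustrative choice; exact rational arithmetic avoids the floating-point drift of repeated doubling.

```python
from fractions import Fraction

def T(t):
    """The doubling map T t = {2t} on [0, 1)."""
    u = 2 * t
    return u - int(u)  # fractional part

def eps(t, k):
    """k-th binary digit eps_k(t) of t in [0, 1), k = 1, 2, ..."""
    return int(2 ** k * t) % 2

t = Fraction(5, 7)          # 5/7 = 0.101101..._2, a periodic point of T
orbit = [t]
for _ in range(6):
    orbit.append(T(orbit[-1]))

# The shift property eps_k(T t) = eps_{k+1}(t), which underlies the
# stationarity of the digit sequence:
shift_ok = all(eps(T(t), k) == eps(t, k + 1) for k in range(1, 20))
print(orbit, shift_ok)
```

Since $5/7$ has the purely periodic binary expansion $0.\overline{101}$, its $T$-orbit cycles through $5/7,\, 3/7,\, 6/7$.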
This transformation preserves Lebesgue measure $\lambda$; for $A \subset [0, 1)$,

$$\lambda(T^{-1} A) = \lambda(A). \tag{19.3.2}$$

(We leave this for the reader to verify.) We now study the probability space formed by the segment $[0, 1)$, with the Lebesgue measurable sets, and probability measure $\lambda$. Then equation (19.3.2) means that the sequence of random variables $f_k = f(2^k t)$ is stationary. We shall see that much more can in fact be said.

Any $t \in [0, 1)$ has an expansion of the form

$$t = \sum_{k=1}^{\infty} \varepsilon_k(t) \left(\tfrac{1}{2}\right)^k = [\varepsilon_1, \varepsilon_2, \ldots],$$

where $\varepsilon_k(t) = 0$ or 1. If we neglect the rationals (which have measure zero), the correspondence between $t$ and the sequence $\{\varepsilon_k\}$ is one-to-one, so that the function $f(t)$ may be written $f(t) = f(\varepsilon_1, \varepsilon_2, \ldots)$. It is clear that

$$\varepsilon_k(T t) = \varepsilon_k(\{2t\}) = \varepsilon_{k+1}(t),$$

so that $\varepsilon_1, \varepsilon_2, \ldots$ is a stationary sequence. Moreover, the $\varepsilon_k$ are independent, since

$$P(\varepsilon_1 = i_1, \varepsilon_2 = i_2, \ldots, \varepsilon_s = i_s) = \lambda\{t;\ \varepsilon_1(t) = i_1, \ldots, \varepsilon_s(t) = i_s\} = 2^{-s} = \prod_{k=1}^{s} \lambda\{t;\ \varepsilon_k(t) = i_k\} = \prod_{k=1}^{s} P(\varepsilon_k = i_k),$$

where $i_k = 0$ or 1. Consequently, the random variable $f = f(t) = f(\varepsilon_1, \varepsilon_2, \ldots)$ is measurable with respect to the $\sigma$-algebra generated by the $\varepsilon_j$, and

$$f_k = f(2^k t) = f(\varepsilon_k, \varepsilon_{k+1}, \ldots)$$

is obtained from the independent random variables $\varepsilon_j$ in the way discussed in § 18.6. Theorem 18.6.1 therefore gives sufficient conditions for the asymptotic normality of $S_n(t)$, i.e. for the limit

$$\lim_{n\to\infty} \lambda\left\{t;\ \frac{S_n(t) - E S_n(t)}{\sigma n^{1/2}} < z\right\} = \Phi(z)$$

to hold.

These conditions may be stated in a different, and more natural, form in the present application. We must first, of course, require that $f$ have a finite variance, i.e.

$$\int_0^1 f(t)^2\, dt < \infty.$$

Moreover, we need to compute

$$[f]_k = [f]_k(t) = E(f \mid \varepsilon_1, \varepsilon_2, \ldots, \varepsilon_k).$$

This is measurable with respect to $\{\varepsilon_1, \ldots, \varepsilon_k\}$, and must therefore be constant on each of the intervals

$$\Delta_{jk} = [(j-1)2^{-k},\ j 2^{-k}) \qquad (j = 1, 2, \ldots, 2^k).$$

The definition of conditional expectation (§ 1.1) gives

$$\int_{\Delta_{jk}} [f]_k(t)\, dt = \int_{\Delta_{jk}} f(t)\, dt,$$

so that

$$[f]_k(t) = 2^k \int_{\Delta_{jk}} f(u)\, du = 2^k \int_{(j-1)2^{-k}}^{j 2^{-k}} f(u)\, du \qquad \text{for } t \in \Delta_{jk}.$$

The special case of Theorem 18.6.1 can now be stated as follows.

**Theorem 19.3.1.** *Let $f(t)$ be a function in $L^2(0, 1)$ and with period 1, and let $\int_0^1 f(t)\, dt = 0$. If*

$$\sum_{k=1}^{\infty} \left( \int_0^1 |f(t) - [f]_k(t)|^2\, dt \right)^{1/2} < \infty, \tag{19.3.3}$$

*then*

$$\sigma^2 = \int_0^1 f(t)^2\, dt + 2\sum_{k=1}^{\infty} \int_0^1 f(t) f(2^k t)\, dt$$

*converges, and if $\sigma \ne 0$,*

$$\lim_{n\to\infty} \lambda\left\{t;\ \sigma^{-1} n^{-1/2} \sum_{k=1}^n f(2^k t) < z\right\} = \Phi(z).$$

**Remark 19.3.1.** The condition (19.3.3) will be satisfied if either

(1) $f(t)$ is a function of bounded variation, or

(2) $\displaystyle\int_0^1 |f(t) - f(t+h)|^2\, dt = O\big( |\log |h||^{-2-\varepsilon} \big)$ for some $\varepsilon > 0$.
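For a concrete bounded-variation example, the decay of $\|f - [f]_k\|$ can be checked numerically. The sketch below takes the centred sawtooth $f(t) = \{t\} - \tfrac12$ (an illustrative choice) and computes the dyadic averages $[f]_k$ on a grid; for this $f$ one can evaluate $\|f - [f]_k\| = 2^{-k}/\sqrt{12}$ exactly, so the series (19.3.3) converges geometrically.

```python
import numpy as np

# f(t) = {t} - 1/2: a centred, bounded-variation, mean-zero example.
N = 2 ** 16                      # grid resolution (a power of two)
t = (np.arange(N) + 0.5) / N     # midpoints of a uniform grid on [0, 1)
f = t - 0.5

def cond_exp(f, k):
    """[f]_k: average of f over each dyadic interval Delta_{jk} of length 2^-k."""
    blocks = f.reshape(2 ** k, -1)        # one row per Delta_{jk}
    means = blocks.mean(axis=1)
    return np.repeat(means, N // 2 ** k)

norms = []
for k in range(1, 9):
    g = f - cond_exp(f, k)
    norms.append(np.sqrt(np.mean(g ** 2)))  # L2(0,1) norm of f - [f]_k

print(norms)  # close to 2^-k / sqrt(12) for k = 1, ..., 8
```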
*Proof.* (1) If $\operatorname{Var}(f) < \infty$, then, denoting the $L^2$ norm by $\|\cdot\|$ and writing $V_{jk}$ for the oscillation of $f$ on $\Delta_{jk}$, we have, for $t \in \Delta_{jk}$,

$$|f(t) - [f]_k(t)| = \left| 2^k \int_{\Delta_{jk}} [f(t) - f(u)]\, du \right| \le V_{jk},$$

so that

$$\|f - [f]_k\|^2 = \sum_j \int_{\Delta_{jk}} \{f(t) - [f]_k(t)\}^2\, dt \le 2^{-k} \sum_j V_{jk}^2 \le 2^{-k} \Big( \sum_j V_{jk} \Big)^2 \le 2^{-k} (\operatorname{Var} f)^2,$$

and therefore

$$\sum_{k=1}^{\infty} \|f - [f]_k\| \le \operatorname{Var} f \sum_{k=1}^{\infty} 2^{-k/2} < \infty.$$

(2) As above, by the Cauchy–Schwarz inequality,

$$\|f - [f]_k\|^2 \le \sum_j 2^k \int_{\Delta_{jk}} dt \int_{\Delta_{jk}} [f(u) - f(t)]^2\, du.$$

For $t, u \in \Delta_{jk}$ write $h = t - (j-1)2^{-k} \in [0, 2^{-k})$; then

$$[f(u) - f(t)]^2 \le 2[f(u) - f(u+h)]^2 + 2[f(u+h) - f(t)]^2,$$

and, after a change of variable and use of the periodicity of $f$, each of the two resulting integrals reduces to one of the form $\int_0^1 [f(u) - f(u+h)]^2\, du$ with $0 \le h < 2^{-k}$. Substituting condition (2), we have

$$\|f - [f]_k\|^2 = O(k^{-2-\varepsilon}),$$

so that $\sum_{k=1}^{\infty} \|f - [f]_k\| < \infty$.

## § 4. Application to the metric theory of continued fractions

Each real number $t$ in the interval $(0, 1)$ has a unique continued fraction expansion of the form

$$t = \cfrac{1}{a_1(t) + \cfrac{1}{a_2(t) + \cdots}}, \tag{19.4.1}$$

where the $a_n(t)$ are natural numbers, and every sequence $(a_n(t))$ corresponds to some real number $t$. We write (19.4.1) symbolically in the form

$$t = [a_1(t), a_2(t), \ldots]. \tag{19.4.2}$$

The metric theory of continued fractions is concerned with the problem of computing the measures of sets of values of $t$ defined by conditions on the sequence $a_n(t)$. We shall see that, if a suitable measure is placed on the interval $(0, 1)$, then the sequence $a_n(t)$ can be made stationary, and that it then satisfies the uniform mixing condition. Many of the results of the theory then follow from the known properties of mixing sequences.

Define then a probability space with $(0, 1)$ as the set of elementary events, endowed with the $\sigma$-algebra of Lebesgue measurable sets. The appropriate probability measure turns out to be that defined by the equation

$$\mu(A) = (\log 2)^{-1} \int_A \frac{dt}{1+t}. \tag{19.4.3}$$

It is evident that the $a_n(t)$ are random variables, but the proof of the properties of the sequence $(a_n)$ is quite complicated, and we need some simple facts about continued fractions (which may be found in textbooks of number theory, or in [68]).

If $t \ge 0$ and $[t] = a_0$, we write $t = [a_0; a_1, a_2, \ldots]$, where $[a_1, a_2, \ldots]$ is the expansion of $\{t\} = t - a_0$. If

$$[a_0; a_1, \ldots, a_k] = a_0 + \cfrac{1}{a_1 + \cdots + \cfrac{1}{a_k}}$$

is the terminating continued fraction corresponding to that of $t$, we write $[a_0; a_1, \ldots, a_k] = p_k/q_k$. Then it is easy to prove by induction that, for all $k \ge 2$,

$$p_k = a_k p_{k-1} + p_{k-2}, \qquad q_k = a_k q_{k-1} + q_{k-2}, \tag{19.4.4}$$

and from this it follows that, for all $k \ge 2$,

$$q_k p_{k-1} - p_k q_{k-1} = (-1)^k. \tag{19.4.5}$$

If the continued fraction does not terminate, then

$$q_k \ge 2^{(k-1)/2}, \tag{19.4.6}$$

and either

$$\frac{p_k}{q_k} \le t \le \frac{p_k + p_{k-1}}{q_k + q_{k-1}} \tag{19.4.7}$$

or

$$\frac{p_k + p_{k-1}}{q_k + q_{k-1}} \le t \le \frac{p_k}{q_k}.$$

Let $t = [a_0; a_1, a_2, \ldots]$, and define $r_n$ (not necessarily integral) recursively by

$$t = a_0 + 1/r_1, \qquad r_n = a_n + 1/r_{n+1} \quad (n = 1, 2, \ldots).$$

Then $t = [a_0; a_1, a_2, \ldots, a_n, r_{n+1}]$ and $r_n = [a_n; a_{n+1}, a_{n+2}, \ldots]$. The number $r_n$ is called the remainder of order $n$ of the continued fraction expansion of $t$. From (19.4.4) it follows that, for all $k$ with $1 \le k \le n$,

$$[a_0; a_1, \ldots, a_n] = \frac{p_{k-1} r_k + p_{k-2}}{q_{k-1} r_k + q_{k-2}},$$

and thus, for all $n$,

$$t = \frac{p_{n-1} r_n + p_{n-2}}{q_{n-1} r_n + q_{n-2}}. \tag{19.4.8}$$
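The recurrences (19.4.4) and the determinant identity (19.4.5) are easy to verify mechanically. The sketch below expands an arbitrary illustrative rational, $t = 30/43$, and checks both; exact `Fraction` arithmetic keeps the partial quotients exact.

```python
from fractions import Fraction

def cf_digits(t, n):
    """First n partial quotients a_1, a_2, ... of t in (0, 1)."""
    digits = []
    for _ in range(n):
        r = 1 / t             # remainder of the expansion, r >= 1
        a = int(r)            # next partial quotient
        digits.append(a)
        t = r - a
        if t == 0:            # rational t: the expansion terminates
            break
    return digits

def convergents(digits):
    """Convergents p_k/q_k built from the recurrences (19.4.4)."""
    p, p_prev = 0, 1          # conventional seeds p_0 = 0, p_{-1} = 1
    q, q_prev = 1, 0          # and q_0 = 1, q_{-1} = 0
    out = []
    for a in digits:
        p, p_prev = a * p + p_prev, p
        q, q_prev = a * q + q_prev, q
        out.append((p, q))
    return out

digs = cf_digits(Fraction(30, 43), 10)
conv = convergents(digs)
# determinant identity (19.4.5): q_k p_{k-1} - p_k q_{k-1} = (-1)^k
dets = [conv[k][1] * conv[k - 1][0] - conv[k][0] * conv[k - 1][1]
        for k in range(1, len(conv))]
print(digs, conv, dets)
```

The final convergent reproduces $t$ itself, and the determinants alternate in sign exactly as (19.4.5) requires.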
We shall denote by $A_{k_1 \ldots k_s}^{i_1 \ldots i_s}$ the set of $t \in (0, 1)$ for which $a_{k_1}(t) = i_1, \ldots, a_{k_s}(t) = i_s$.

**Lemma 19.4.1.** *The set $A_{1 \ldots n}^{i_1 \ldots i_n}$ is the interval with end-points*

$$\frac{p_n}{q_n}, \qquad \frac{p_n + p_{n-1}}{q_n + q_{n-1}},$$

*where $p_n$ and $q_n$ are defined by $p_n/q_n = [i_1, i_2, \ldots, i_n]$.*

*Proof.* The set $A_{1 \ldots n}^{i_1 \ldots i_n}$ consists of the numbers of the form $t = [i_1, i_2, \ldots, i_n, r_{n+1}]$, where $1 \le r_{n+1} < \infty$. Thus

$$t = \frac{p_n r_{n+1} + p_{n-1}}{q_n r_{n+1} + q_{n-1}},$$

which varies on the interval with end-points $p_n/q_n$ and $(p_n + p_{n-1})/(q_n + q_{n-1})$ as $r_{n+1}$ varies on the interval $[1, \infty)$.

The fundamental result is then the following theorem.

**Theorem 19.4.1.** *With respect to the measure $\mu$, the sequence $a_n(t)$ is stationary, and satisfies the uniform mixing condition with*

$$\varphi(n) \le K e^{-\lambda \sqrt{n}},$$

*where $K$ and $\lambda$ are absolute constants.*

We note that $a_n$ cannot be stationary in the wide sense, since

$$E a_1(t) = \frac{1}{\log 2} \int_0^1 \frac{a_1(t)}{1+t}\, dt = \sum_{r=1}^{\infty} \frac{r}{\log 2} \int_{1/(r+1)}^{1/r} \frac{dt}{1+t} = \infty.$$
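Lemma 19.4.1 makes the Gauss measure (19.4.3) of the simplest cylinder sets explicit: $A_1^r$ is the interval $(1/(r+1),\ 1/r)$, so $\mu(a_1 = r) = \log_2\!\big(1 + \tfrac{1}{r(r+2)}\big)$. A quick numerical sketch of this closed form:

```python
import math

def gauss_measure(a, b):
    """mu((a, b)) = (log 2)^-1 * integral of dt/(1+t) over (a, b), as in (19.4.3)."""
    return (math.log1p(b) - math.log1p(a)) / math.log(2)

def p_digit(r):
    """mu(a_1 = r): the cylinder A_1^r is the interval (1/(r+1), 1/r)."""
    return gauss_measure(1 / (r + 1), 1 / r)

# Closed form log_2(1 + 1/(r(r+2))) agrees, and the probabilities sum to 1.
closed = [math.log2(1 + 1 / (r * (r + 2))) for r in range(1, 6)]
total = sum(p_digit(r) for r in range(1, 10 ** 5))
print(closed[:3], total)  # total is 1 up to the mass of (0, 1e-5)
```

In particular $\mu(a_1 = 1) = \log_2(4/3) \approx 0.415$; the divergence of $\sum r\,\mu(a_1 = r)$ is exactly the statement $E a_1 = \infty$ above.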
*Proof.* (1) To prove that $a_n(t)$ is stationary we have only to show that

$$\mu(A_{1 \ldots n}^{i_1 \ldots i_n}) = \mu(A_{(1+s) \ldots (n+s)}^{i_1 \ldots i_n}).$$

We first show that

$$\mu(A_{1 \ldots n}^{i_1 \ldots i_n}) = \sum_{k=1}^{\infty} \mu(A_{1\,2 \ldots (n+1)}^{k\, i_1 \ldots i_n}).$$

By Lemma 19.4.1, $A_{1 \ldots n}^{i_1 \ldots i_n}$ is an interval with end-points $p_n/q_n$ and $(p_n + p_{n-1})/(q_n + q_{n-1})$, where $p_n/q_n = [i_1, i_2, \ldots, i_n]$. For the sake of argument suppose that

$$\lambda_n = \frac{p_n}{q_n} < \frac{p_n + p_{n-1}}{q_n + q_{n-1}} = \Lambda_n.$$

Then $A_{1\,2 \ldots (n+1)}^{k\, i_1 \ldots i_n}$ is the interval with end-points $1/(k + \lambda_n)$ and $1/(k + \Lambda_n)$, so that

$$\sum_{k=1}^{\infty} \mu(A_{1\,2 \ldots (n+1)}^{k\, i_1 \ldots i_n}) = \frac{1}{\log 2} \sum_{k=1}^{\infty} \left[ \log \frac{k + 1 + \lambda_n}{k + \lambda_n} - \log \frac{k + 1 + \Lambda_n}{k + \Lambda_n} \right] = \frac{1}{\log 2} \log \frac{1 + \Lambda_n}{1 + \lambda_n} = \mu(A_{1 \ldots n}^{i_1 \ldots i_n}),$$

the sum telescoping. On the other hand, $A_{2 \ldots (n+1)}^{i_1 \ldots i_n}$ is the disjoint union over $k$ of the sets $A_{1\,2 \ldots (n+1)}^{k\, i_1 \ldots i_n}$, so that

$$\mu(A_{2 \ldots (n+1)}^{i_1 \ldots i_n}) = \sum_k \mu(A_{1\,2 \ldots (n+1)}^{k\, i_1 \ldots i_n}) = \mu(A_{1 \ldots n}^{i_1 \ldots i_n}),$$

and, iterating,

$$\mu(A_{(1+s) \ldots (n+s)}^{i_1 \ldots i_n}) = \mu(A_{1 \ldots n}^{i_1 \ldots i_n}). \tag{19.4.9}$$

Finally, the general case is obtained from (19.4.9) and the equation

$$A_{k_1 \ldots k_s}^{i_1 \ldots i_s} = \bigcup A_{k_1\, (k_1+1) \ldots k_s}^{i_1\, l_{k_1+1} \ldots\, i_s}, \tag{19.4.10}$$

the (disjoint) union being taken over all values of the intermediate indices $l_j$, and the stationarity is proved.

(2) To prove the uniform mixing condition, it is sufficient to prove that

$$\big| \mu(A_{1 \ldots k}^{i_1 \ldots i_k} \cap A_{(k+n) \ldots (k+n+s)}^{j_{k+n} \ldots j_{k+n+s}}) - \mu(A_{1 \ldots k}^{i_1 \ldots i_k})\, \mu(A_{(k+n) \ldots (k+n+s)}^{j_{k+n} \ldots j_{k+n+s}}) \big| \le K e^{-\lambda n^{1/2}}\, \mu(A_{1 \ldots k}^{i_1 \ldots i_k})\, \mu(A_{(k+n) \ldots (k+n+s)}^{j_{k+n} \ldots j_{k+n+s}}), \tag{19.4.11}$$

since any sets $A$, $B$ measurable with respect to $(a_1, \ldots, a_k)$ and $(a_{k+n}, \ldots, a_{k+n+s})$ respectively may be written as disjoint unions

$$A = \bigcup_l A_l, \qquad B = \bigcup_m B_m$$

of sets $A_l$, $B_m$ of the type occurring in (19.4.11), and (19.4.11) will imply that

$$|\mu(AB) - \mu(A)\mu(B)| \le \sum_{l,m} |\mu(A_l B_m) - \mu(A_l)\mu(B_m)| \le K e^{-\lambda n^{1/2}} \sum_{l,m} \mu(A_l)\mu(B_m) = K e^{-\lambda n^{1/2}} \mu(A)\mu(B).$$

To prove (19.4.11) we need a theorem of Kuz'min, whose proof may be found in [68].

**Theorem (Kuz'min).** *Let $(f_n(x);\ n = 0, 1, 2, \ldots)$ be a sequence of functions on $[0, 1]$ satisfying*

$$f_{n+1}(x) = \sum_{k=1}^{\infty} \frac{1}{(k+x)^2}\, f_n\!\left(\frac{1}{k+x}\right).$$

*If, for $0 \le x \le 1$,*

$$0 < m \le f_0(x) \le M, \qquad |f_0'(x)| \le M,$$

*then*

$$f_n(x) = \frac{a}{1+x} + \theta K e^{-\lambda n^{1/2}} \qquad (0 \le x \le 1,\ |\theta| < 1),$$

*where*

$$a = \frac{1}{\log 2} \int_0^1 f_0(z)\, dz,$$

*$\lambda$ is an absolute positive constant, and $K$ depends only on $M$ and $m$.*

We set

$$M_n(x) = \mu\{t;\ a_1(t) = i_1, \ldots, a_k(t) = i_k,\ z_{k+n}(t) < x\},$$

where $z_s(t)$ denotes the continued fraction

$$z_s(t) = [a_{s+1}(t), a_{s+2}(t), \ldots].$$

In order that $a_1(t) = i_1, \ldots, a_k(t) = i_k$, $z_{k+n}(t) < x$, it is necessary and sufficient that, for some integer $r$,

$$a_1(t) = i_1,\ \ldots,\ a_k(t) = i_k, \qquad \frac{1}{r+x} < z_{k+n-1}(t) \le \frac{1}{r},$$

since

$$z_s(t) = \frac{1}{a_{s+1}(t) + z_{s+1}(t)}.$$

Therefore

$$M_n(x) = \sum_{r=1}^{\infty} \left[ M_{n-1}\!\left(\frac{1}{r}\right) - M_{n-1}\!\left(\frac{1}{r+x}\right) \right]. \tag{19.4.12}$$

It is easy to check that this equation may be differentiated term-by-term, to give

$$M_n'(x) = \sum_{r=1}^{\infty} \frac{1}{(r+x)^2}\, M_{n-1}'\!\left(\frac{1}{r+x}\right) \tag{19.4.13}$$

for $n \ge 1$. We now introduce the functions

$$\chi_n(x) = M_n'(x) \big/ \mu(A_{1 \ldots k}^{i_1 \ldots i_k}) \qquad (n \ge 0),$$

which satisfy

$$\chi_n(x) = \sum_{r=1}^{\infty} \frac{1}{(r+x)^2}\, \chi_{n-1}\!\left(\frac{1}{r+x}\right).$$

If the conditions of Kuz'min's theorem are satisfied, we can therefore conclude that

$$\frac{M_n'(x)}{\mu(A_{1 \ldots k}^{i_1 \ldots i_k})} = \frac{1}{\log 2} \cdot \frac{1}{1+x} + \theta K e^{-\lambda n^{1/2}}, \qquad |\theta| < 1. \tag{19.4.14}$$

Now integrate this equation over the interval $(\alpha, \beta) = A_{1 \ldots (s+1)}^{j_{k+n} \ldots j_{k+n+s}}$, the interval on which, by Lemma 19.4.1, the first coefficients of the continued fraction of $t$ are $j_{k+n}, \ldots, j_{k+n+s}$. Because of the stationarity, the integral of the right-hand side is equal to

$$\mu(A_{(k+n) \ldots (k+n+s)}^{j_{k+n} \ldots j_{k+n+s}}) + \theta_1 K\, \mu(A_{(k+n) \ldots (k+n+s)}^{j_{k+n} \ldots j_{k+n+s}})\, e^{-\lambda n^{1/2}}, \qquad |\theta_1| \le 1.$$

To calculate the integral of the left-hand side, note that the difference $M_n(\beta) - M_n(\alpha)$ is the $\mu$-measure of $A_{1 \ldots k}^{i_1 \ldots i_k} \cap \{\alpha \le z_{k+n} \le \beta\}$. Thus

$$\int_{\alpha}^{\beta} M_n'(x)\, dx = M_n(\beta) - M_n(\alpha) = \mu(A_{1 \ldots k}^{i_1 \ldots i_k} \cap A_{(k+n) \ldots (k+n+s)}^{j_{k+n} \ldots j_{k+n+s}}).$$

Hence the integrated version of (19.4.14) is equivalent to (19.4.11).

It remains to prove the admissibility of Kuz'min's theorem, so that we have to show that

$$0 < c_1 \le \chi_0(x) \le c_2, \qquad |\chi_0'(x)| \le c_3,$$

where $c_1, c_2, c_3$ are constants. For $t \in A_{1 \ldots k}^{i_1 \ldots i_k}$,

$$t = \frac{p_k + z_k p_{k-1}}{q_k + z_k q_{k-1}},$$

where $p_k/q_k = [i_1, \ldots, i_k]$. Thus

$$I_k(x) = A_{1 \ldots k}^{i_1 \ldots i_k} \cap \{t;\ z_k(t) < x\}$$

is the interval with end-points

$$\frac{p_k}{q_k}, \qquad \frac{p_k + x p_{k-1}}{q_k + x q_{k-1}}.$$

Therefore

$$M_0(x) = \frac{1}{\log 2} \int_{I_k(x)} \frac{dt}{1+t}, \qquad \mu(A_{1 \ldots k}^{i_1 \ldots i_k}) = M_0(1) = \frac{1}{\log 2} \int_{I_k(1)} \frac{dt}{1+t}. \tag{19.4.16}$$

The second equation gives

$$\frac{1}{2 \log 2} \int_{I_k(1)} dt \le \mu(A_{1 \ldots k}^{i_1 \ldots i_k}) \le \frac{1}{\log 2} \int_{I_k(1)} dt, \tag{19.4.17}$$

$$\int_{I_k(1)} dt = \left| \frac{p_k}{q_k} - \frac{p_k + p_{k-1}}{q_k + q_{k-1}} \right| = \frac{1}{q_k (q_k + q_{k-1})}.$$

Differentiating (19.4.16), and using (19.4.5),

$$M_0'(x) = \big[ \log 2\, \{ (q_k + x q_{k-1})\, ((p_k + q_k) + (p_{k-1} + q_{k-1}) x) \} \big]^{-1}.$$

Hence from (19.4.17) and the inequality $q_{k-1} \le q_k$, we have

$$\tfrac{1}{8} \le \chi_0(x) \le 2, \qquad |\chi_0'(x)| \le 24.$$

The measure-preserving transformation $T$ associated with the stationary sequence $(a_j(t))$ has the form

$$T t = T[a_1(t), a_2(t), \ldots] = [a_2(t), a_3(t), \ldots],$$

or equivalently, $T t = \{t^{-1}\}$. Since the sequence is mixing, $T$ is metrically transitive, so that the ergodic theorem [47] has the following form.

**Theorem 19.4.2.** *Let $f(t)$ be an absolutely integrable function on $(0, 1)$. Then for all points $t$ of $(0, 1)$, except possibly for the elements of a set of measure zero,*

$$\lim_{n\to\infty} \frac{1}{n} \sum_{j=0}^{n-1} f(T^j t) = \frac{1}{\log 2} \int_0^1 \frac{f(t)}{1+t}\, dt.$$

**Corollary 19.4.1.** *Let $f(r)$ be a function of the integral variable $r$, and let $f(r) = O(r^{1-\delta})$, $\delta > 0$. Then for almost all $t \in (0, 1)$,*

$$\lim_{n\to\infty} \frac{1}{n} \sum_{k=1}^{n} f(a_k) = \sum_{r=1}^{\infty} f(r)\, \log\!\left(1 + \frac{1}{r(r+2)}\right) \Big/ \log 2.$$

To prove this, it suffices to note that $f(a_1(t))$ takes the value $f(r)$ on $((r+1)^{-1}, r^{-1})$. In particular, taking $f(r) = \log r$, we have, almost everywhere,

$$\lim_{n\to\infty} \frac{1}{n} \sum_{k=1}^{n} \log a_k = \sum_{r=1}^{\infty} \log r\, \log\!\left(1 + \frac{1}{r(r+2)}\right) \Big/ \log 2,$$

or equivalently,

$$\lim_{n\to\infty} (a_1 a_2 \cdots a_n)^{1/n} = \prod_{r=1}^{\infty} \left(1 + \frac{1}{r(r+2)}\right)^{\log r / \log 2}.$$
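The last product is Khinchin's constant, approximately $2.685$; its slow convergence is easy to see numerically. A sketch of the partial products:

```python
import math

def partial_product(N):
    """Partial product of prod_{r>=1} (1 + 1/(r(r+2)))^(log r / log 2),
    the almost-sure limit of the geometric mean (a_1 ... a_n)^(1/n)."""
    s = 0.0
    for r in range(2, N + 1):      # the r = 1 factor equals 1 (log 1 = 0)
        s += math.log(r) / math.log(2) * math.log1p(1 / (r * (r + 2)))
    return math.exp(s)

vals = [partial_product(N) for N in (10, 1000, 100000)]
print(vals)  # increases slowly towards Khinchin's constant 2.685...
```

The tail of the exponent sum behaves like $\log N / (N \log 2)$, which is why tens of thousands of factors are needed for three correct digits.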
Many similar results can be proved in the same way.

To formulate a central limit theorem for the sequence $f(T^j t)$, we have to be able to compute expressions of the form

$$[f]_k(t) = E(f \mid a_1, a_2, \ldots, a_k).$$

This is constant on $A_{1 \ldots k}^{i_1 \ldots i_k}$, and

$$\int_{A_{1 \ldots k}^{i_1 \ldots i_k}} [f]_k(t)\, \mu(dt) = \int_{A_{1 \ldots k}^{i_1 \ldots i_k}} f(t)\, \mu(dt),$$

so that, for $t \in A_{1 \ldots k}^{i_1 \ldots i_k}$,

$$[f]_k(t) = \mu(A_{1 \ldots k}^{i_1 \ldots i_k})^{-1} \int_{A_{1 \ldots k}^{i_1 \ldots i_k}} f(u)\, \mu(du).$$

**Theorem 19.4.3.** *Let the function $f(t)$ satisfy*

$$(1)\quad \int_0^1 f(t)^2\, dt < \infty, \qquad (2)\quad \sum_{k=1}^{\infty} \left( \int_0^1 |f(t) - [f]_k(t)|^2\, dt \right)^{1/2} < \infty, \tag{19.4.18}$$

*and write*

$$a = \int_0^1 f(t)\, \mu(dt) = \frac{1}{\log 2} \int_0^1 \frac{f(t)}{1+t}\, dt.$$

*Then*

$$\sigma^2 = \int_0^1 \{f(t) - a\}^2\, \mu(dt) + 2 \sum_{k=1}^{\infty} \int_0^1 \{f(t) - a\}\{f(T^k t) - a\}\, \mu(dt)$$

*converges, and if $\sigma \ne 0$,*

$$\lim_{n\to\infty} \lambda\left\{t;\ \sigma^{-1} n^{-1/2} \sum_{k=0}^{n-1} \{f(T^k t) - a\} < z \right\} = \Phi(z),$$

*where $\lambda$ denotes Lebesgue measure on $(0, 1)$.*

*Proof.* It follows at once from Theorems 18.6.1 and 19.4.1 that

$$\lim_{n\to\infty} \mu\left\{t;\ \sigma^{-1} n^{-1/2} \sum_{k=0}^{n-1} \{f(T^k t) - a\} < z \right\} = \Phi(z). \tag{19.4.19}$$

Hence we have only to show that $\mu$ can be replaced by Lebesgue measure $\lambda$.
**Lemma 19.4.2.** *If $B \subset (0, 1)$ belongs to the $\sigma$-algebra $\mathfrak{M}_{n+1}^{\infty}$ generated by the functions $a_j(t)$ $(j > n)$, then*

$$|\mu(B) - \lambda(B)| \le K_1 e^{-\lambda \sqrt{n}}, \tag{19.4.20}$$

*where $K_1$ is a constant.*

*Proof.* If in the second part of the proof of Theorem 19.4.1 we set

$$M_n(x) = \lambda\{t;\ a_1(t) = i,\ z_{n+1}(t) < x\}, \qquad \chi_n(x) = M_n'(x) / \lambda(A_1^i),$$

and carry through the same arguments, then instead of (19.4.11) we arrive at the inequality

$$\big| \lambda(A_1^i \cap A_{(1+n) \ldots (k+n+s)}^{j_{1+n} \ldots j_{k+n+s}}) - \lambda(A_1^i)\, \mu(A_{(1+n) \ldots (k+n+s)}^{j_{1+n} \ldots j_{k+n+s}}) \big| \le K e^{-\lambda n^{1/2}}\, \lambda(A_1^i)\, \lambda(A_{(1+n) \ldots (k+n+s)}^{j_{1+n} \ldots j_{k+n+s}}).$$

Summing over $1 \le i < \infty$, we find that

$$\big| \lambda(A_{(1+n) \ldots (k+n+s)}^{j_{1+n} \ldots j_{k+n+s}}) - \mu(A_{(1+n) \ldots (k+n+s)}^{j_{1+n} \ldots j_{k+n+s}}) \big| \le K e^{-\lambda n^{1/2}}\, \lambda(A_{(1+n) \ldots (k+n+s)}^{j_{1+n} \ldots j_{k+n+s}}).$$

Since any set $B \in \mathfrak{M}_{n+1}^{\infty}$ can be approximated by unions of sets of the form appearing in the last inequality, the lemma is proved.

As well as the probability space considered throughout this section, we construct another by replacing $\mu$ by $\lambda$, distinguishing the corresponding expectation operators as $E_\mu$ and $E_\lambda$, so that

$$E_\mu(f) = \int_0^1 f(t)\, \mu(dt), \qquad E_\lambda(f) = \int_0^1 f(t)\, dt.$$

If $f(t)$ is a bounded function measurable with respect to $\mathfrak{M}_{n+1}^{\infty}$, then Lemma 19.4.2 implies that

$$|E_\mu(f) - E_\lambda(f)| \le \sup_t |f(t)|\, K_1 e^{-\lambda n^{1/2}}. \tag{19.4.21}$$

In fact, it is sufficient to prove this inequality for $f$ of the form

$$f = \sum_j a_j I(B_j),$$

where the $B_j$ are disjoint sets in $\mathfrak{M}_{n+1}^{\infty}$, and for such $f$, (19.4.20) gives

$$|E_\mu(f) - E_\lambda(f)| \le \sup_j |a_j| \sum_j |\mu(B_j) - \lambda(B_j)| \le \sup_j |a_j|\, K_1 e^{-\lambda n^{1/2}} \sum_j \lambda(B_j) \le \sup_t |f(t)|\, K_1 e^{-\lambda n^{1/2}}.$$

Now write

$$Z_n = \sigma^{-1} n^{-1/2} \sum_{k=0}^{n-1} \{f(T^k t) - a\},$$

so that, by (19.4.19),

$$\lim_{n\to\infty} E_\mu(e^{itZ_n}) = e^{-t^2/2}.$$

Using (19.4.21), as $n \to \infty$, with $r = [\log n]$,

$$|E_\mu(e^{itZ_n}) - E_\lambda(e^{itZ_n})| \le E_\mu\left| 1 - \exp\left\{ \frac{it}{\sigma n^{1/2}} \sum_{k=0}^{r} [f(T^k t) - a] \right\} \right| + E_\lambda\left| 1 - \exp\left\{ \frac{it}{\sigma n^{1/2}} \sum_{k=0}^{r} [f(T^k t) - a] \right\} \right| + \left| (E_\mu - E_\lambda) \exp\left\{ \frac{it}{\sigma n^{1/2}} \sum_{k=r+1}^{n-1} [f(T^k t) - a] \right\} \right| = O\!\left(\frac{|t|\, r}{\sigma n^{1/2}}\right) + O(e^{-\lambda r^{1/2}}) = o(1),$$

the last term being estimated by (19.4.21), since the exponential there is a bounded function measurable with respect to $\mathfrak{M}_{r+1}^{\infty}$. Hence the theorem is proved.

**Remark 19.4.1.** Condition (19.4.18) is satisfied if either of the conditions of Remark 19.3.1 is satisfied. The proof is almost the same as before, using the fact that, for $t, u \in A_{1 \ldots k}^{i_1 \ldots i_k}$, by (19.4.5) and (19.4.6),

$$|t - u| \le \left| \frac{p_k}{q_k} - \frac{p_k + p_{k-1}}{q_k + q_{k-1}} \right| = \frac{1}{q_k (q_k + q_{k-1})} \le 2^{-(k-1)}.$$

## § 5. Example of a sequence not satisfying the central limit theorem

Let $\ldots, \xi_{-1}, \xi_0, \xi_1, \xi_2, \ldots$ be a sequence of independent normal variables with mean 0 and variance 1,
