Hindawi Publishing Corporation
EURASIP Journal on Advances in Signal Processing
Volume 2011, Article ID 373205, 12 pages
doi:10.1155/2011/373205

Research Article
Analysis of the Sign Regressor Least Mean Fourth Adaptive Algorithm

Mohammed Mujahid Ulla Faiz, Azzedine Zerguine (EURASIP Member), and Abdelmalek Zidouri
Electrical Engineering Department, King Fahd University of Petroleum and Minerals, Dhahran 31261, Saudi Arabia

Correspondence should be addressed to Azzedine Zerguine, azzedine@kfupm.edu.sa

Received 25 June 2010; Accepted 5 January 2011

Academic Editor: Stephen Marshall

Copyright © 2011 Mohammed Mujahid Ulla Faiz et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

A novel algorithm, called the signed regressor least mean fourth (SRLMF) adaptive algorithm, that reduces the computational cost and complexity while maintaining good performance is presented. Expressions are derived for the steady-state excess-mean-square error (EMSE) of the SRLMF algorithm in a stationary environment. A sufficient condition for the convergence in the mean of the SRLMF algorithm is derived. Also, expressions are obtained for the tracking EMSE of the SRLMF algorithm in a nonstationary environment, and consequently an optimum value of the step-size is obtained. Moreover, the weighted variance relation has been extended in order to derive expressions for the mean-square error (MSE) and the mean-square deviation (MSD) of the proposed algorithm during the transient phase. Computer simulations are carried out to corroborate the theoretical findings. It is shown that there is a good match between the theoretical and simulated results. It is also shown that the SRLMF algorithm has no performance degradation when compared with the least mean fourth (LMF) algorithm.
The results in this study emphasize the usefulness of this algorithm in applications requiring reduced implementation costs for which the LMF algorithm is too complex.

1. Introduction

Reduction in the complexity of the least mean square (LMS) algorithm has always received attention in the area of adaptive filtering [1–3]. This reduction is usually done by clipping either the estimation error or the input data, or both, to reduce the number of multiplications necessary at each algorithm iteration. The algorithm based on clipping of the estimation error is known as the sign error or, more commonly, the sign algorithm (SA) [4–8]; the algorithm based on clipping of the input data is known as the sign regressor algorithm (SRA) [9–12]; and the algorithm based on clipping of both the estimation error and the input data is known as the sign sign algorithm (SSA) [13, 14]. These algorithms result in a performance loss when compared with the conventional LMS algorithm [9, 10]. However, the significant reduction in computational cost and the simplified hardware implementation can justify this poorer performance in applications requiring reduced implementation costs [15, 16].

The behavior of the SRA algorithm depends on the input data. It is shown in [11] that for some inputs the LMS algorithm is stable while the SRA algorithm is unstable. This is a drawback of the SRA algorithm when compared with the SA algorithm, since the latter is more stable than the LMS algorithm [4, 16]. The SRA algorithm is always stable when the input data is Gaussian, as in the case of speech processing. Also, the performance of the SRA algorithm is superior to that of the SA algorithm for Gaussian input data. It is shown in [10] that the SRA algorithm is much faster than the SA algorithm in achieving the desired steady-state mean-square error for white Gaussian data. Theoretical studies of the SRA algorithm with correlated Gaussian data in both stationary and nonstationary environments are found in [12]. The convergence rate and the steady-state mean-square error of the SRA algorithm are only slightly inferior to those of the LMS algorithm for the same parameter setting. In [10], the convergence rate of the SRA algorithm is compared with that of the LMS algorithm to show that the SRA algorithm converges slower than the LMS algorithm by a factor of 2/π for the same steady-state mean-square error. It is shown in [17] that the SRA algorithm exhibits significantly higher robustness against impulse noise than the LMS algorithm.

The above-mentioned advantages motivate us to analyze the proposed sign regressor least mean fourth (SRLMF) adaptive algorithm. In this paper, the mean-square analysis, the convergence analysis, the tracking analysis, and the transient analysis of the SRLMF algorithm are carried out. The framework used in this work relies on energy conservation arguments [18]. Expressions are evaluated for the steady-state excess-mean-square error (EMSE) of the SRLMF algorithm in a stationary environment. A condition for the convergence of the mean behavior of the SRLMF algorithm is also derived. Also, expressions for the tracking EMSE in a nonstationary environment are presented. An optimum value of the step-size μ is also evaluated. Moreover, an extension of the weighted variance relation is provided in order to derive expressions for the mean-square error (MSE) and the mean-square deviation (MSD) of the proposed algorithm during the transient phase. The simulation results show that the SRLMF algorithm and the least mean fourth (LMF) algorithm [19] have a similar performance for the same steady-state EMSE, and that the theoretical and simulated results are in good agreement.

The paper is organized as follows. Following the Introduction, Section 2 develops the proposed algorithm, while the mean-square analysis of the proposed SRLMF algorithm is presented in Section 3. The convergence analysis of the proposed algorithm is presented in Section 4. Section 5 presents the tracking analysis of the proposed algorithm for random-walk channels and, as a by-product of this analysis, the optimum value of the step-size for these channels is derived. Section 6 presents thoroughly the transient analysis of the proposed algorithm. The computational load is detailed in Section 7. To investigate the performance of the proposed algorithm, several simulation results for different scenarios are presented in Section 8. Finally, some conclusions are given in Section 9.

2. Algorithm Development

The SRLMF algorithm is based on clipping of the regression vector u_i (a row vector). Consider the adaptive filter that updates its coefficients according to the following recursion [18]:

    w_i = w_{i-1} + μ H[u_i] u_i^* g[e_i],   i ≥ 0,   (1)

where w_i (a column vector) is the updated weight vector at time i, μ is the step-size, H[u_i] is some positive-definite Hermitian matrix-valued function of u_i, and g[e_i] denotes some function of the estimation error signal

    e_i = d_i − u_i w_{i-1},   (2)

where d_i is the desired signal. When the data is real-valued and g[e_i] = e_i^3, the general update form in (1) becomes

    w_i = w_{i-1} + μ H[u_i] u_i^T e_i^3,   i ≥ 0.   (3)

Now if

    H[u_i] = diag( 1/|u_{i1}|, 1/|u_{i2}|, ..., 1/|u_{iM}| ),   (4)

where M is the filter length, then the update form in (3) reduces to

    w_i = w_{i-1} + μ diag( 1/|u_{i1}|, ..., 1/|u_{iM}| ) u_i^T e_i^3
        = w_{i-1} + μ sign[u_i]^T e_i^3,   i ≥ 0.   (5)

The SRLMF algorithm update recursion in (5) can be regarded as a special case of the general update form in (3) for the matrix data nonlinearity that is implicitly defined by the following relation:

    sign[u_i]^T = H[u_i] u_i^T.   (6)

3. Mean-Square Analysis of the SRLMF Algorithm

We will assume that the data {d_i, u_i} satisfy the following conditions of the stationary data model [18, 20–24]:

(A.1) There exists an optimal weight vector w^o such that d_i = u_i w^o + v_i.
(A.2) The noise sequence v_i is independent and identically distributed (i.i.d.) with variance σ_v² = E[|v_i|²] and is independent of u_j for all i, j.
(A.3) The initial condition w_{-1} is independent of the zero-mean random variables {d_i, u_i, v_i}.
(A.4) The regressor covariance matrix is R = E[u_i^* u_i] > 0.

For any adaptive filter of the form (1), and for any data {d_i, u_i}, assuming filter operation in steady-state, the following variance relation holds [18]:

    μ E[ ‖u_i‖²_H g²[e_i] ] = 2 E[ e_{a,i} g[e_i] ],   as i → ∞,   (7)

where

    ‖u_i‖²_H = u_i H[u_i] u_i^T,   (8)

    e_i = e_{a,i} + v_i,   (9)

and e_{a,i} = u_i (w^o − w_{i-1}) is the a priori estimation error. Then g[e_i] becomes

    g[e_i] = e_i^3 = (e_{a,i} + v_i)( e_{a,i}² + v_i² + 2 e_{a,i} v_i ).   (10)

By using the fact that e_{a,i} and v_i are independent, we arrive at the following expression for the term E[e_{a,i} g[e_i]]:

    E[ e_{a,i} g[e_i] ] = 3 σ_v² E[e_{a,i}²] + E[e_{a,i}⁴].   (11)
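To make the update concrete, here is a minimal NumPy sketch of one SRLMF iteration for real-valued data, following (2) and (5); the function name, step-size, plant, and run length are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def srlmf_update(w, u, d, mu):
    """One SRLMF iteration: w_i = w_{i-1} + mu * sign(u_i)^T * e_i^3, eq. (5)."""
    e = d - u @ w                      # estimation error, eq. (2)
    return w + mu * np.sign(u) * e**3, e

# Small system-identification run: the weight-error norm should shrink.
rng = np.random.default_rng(0)
M, mu = 5, 0.05
w_o = 0.1 * rng.standard_normal(M)     # unknown (illustrative) plant
w = np.zeros(M)
for _ in range(3000):
    u = rng.standard_normal(M)         # white Gaussian regressor (row vector)
    d = u @ w_o + 1e-3 * rng.standard_normal()
    w, _ = srlmf_update(w, u, d, mu)
assert np.linalg.norm(w - w_o) < 0.5 * np.linalg.norm(w_o)
```

Compared with the LMF update μ u_i^T e_i^3, the only change is that the regressor is replaced by its sign pattern, trading multiplications for sign manipulations.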
Ignoring third- and higher-order terms of e_{a,i}, (11) becomes

    E[ e_{a,i} g[e_i] ] ≈ 3 σ_v² E[e_{a,i}²].   (12)

To evaluate the term E[‖u_i‖²_H g²[e_i]], we start by noting that

    g²[e_i] = e_{a,i}⁶ + 6 e_{a,i}⁵ v_i + 15 e_{a,i}⁴ v_i² + 20 e_{a,i}³ v_i³ + 15 e_{a,i}² v_i⁴ + 6 e_{a,i} v_i⁵ + v_i⁶.   (13)

If we multiply g²[e_i] by ‖u_i‖²_H from the left, use the fact that v_i is independent of both u_i and e_{a,i}, and again ignore third- and higher-order terms of e_{a,i}, we obtain

    E[ ‖u_i‖²_H g²[e_i] ] ≈ 6 E[ ‖u_i‖²_H e_{a,i} ] E[v_i⁵] + 15 E[ ‖u_i‖²_H e_{a,i}² ] ξ_v⁴ + E[ ‖u_i‖²_H ] ξ_v⁶,   (14)

where ξ_v⁴ = E[|v_i|⁴] and ξ_v⁶ = E[|v_i|⁶] denote the fourth- and sixth-order moments of v_i, respectively.

From Price's theorem [25] we have

    E[ x sign(y) ] = √(2/π) (1/σ_y) E[xy],   (15)

and therefore

    E[ ‖u_i‖²_H ] = E[ u_i sign[u_i]^T ] = √(2/(π σ_u²)) Tr(R).   (16)

Substituting (16) into (14), we get

    E[ ‖u_i‖²_H g²[e_i] ] ≈ 6 E[ ‖u_i‖²_H e_{a,i} ] E[v_i⁵] + 15 E[ ‖u_i‖²_H e_{a,i}² ] ξ_v⁴ + √(2/(π σ_u²)) Tr(R) ξ_v⁶.   (17)

Substituting (12) and (17) into (7), we get

    6 σ_v² E[e_{a,i}²] = μ ξ_v⁶ √(2/(π σ_u²)) Tr(R) + 15 μ ξ_v⁴ E[ ‖u_i‖²_H e_{a,i}² ] + 6 μ E[ ‖u_i‖²_H e_{a,i} ] E[v_i⁵].   (18)

In order to simplify (18) and arrive at an expression for the steady-state EMSE ζ = E[e_{a,i}²], we consider two cases.

(1) Sufficiently Small Step-Sizes. Small step-sizes lead to small values of E[e_{a,i}²] and e_{a,i} in steady-state. Therefore, for smaller values of μ, the last two terms in (18) can be ignored, and the steady-state EMSE is given by

    ζ = μ ξ_v⁶ √(2/(π σ_u²)) Tr(R) / (6 σ_v²).   (19)

(2) Separation Principle. For larger values of μ, we resort to the separation assumption, namely, that at steady-state ‖u_i‖²_H is independent of e_{a,i}. In this case, the last term in (18) is zero since e_{a,i} is zero mean, and the steady-state EMSE can be shown to be

    ζ = μ ξ_v⁶ √(2/(π σ_u²)) Tr(R) / ( 6 σ_v² − 15 μ ξ_v⁴ √(2/(π σ_u²)) Tr(R) ).   (20)
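The two steady-state EMSE expressions are straightforward to evaluate numerically. The following sketch assumes Gaussian noise (so that ξ_v⁴ = 3σ_v⁴ and ξ_v⁶ = 15σ_v⁶) and illustrative parameter values; it checks that the separation form (20) dominates the small-μ form (19), since its denominator is smaller.

```python
import numpy as np

def emse_small_mu(mu, su2, sv2, xi6, trR):
    """Steady-state EMSE for sufficiently small step-sizes, eq. (19)."""
    return mu * xi6 * np.sqrt(2.0 / (np.pi * su2)) * trR / (6.0 * sv2)

def emse_separation(mu, su2, sv2, xi4, xi6, trR):
    """Steady-state EMSE under the separation assumption, eq. (20)."""
    c = np.sqrt(2.0 / (np.pi * su2)) * trR
    return mu * xi6 * c / (6.0 * sv2 - 15.0 * mu * xi4 * c)

# Illustrative parameters; Gaussian noise moments xi4 = 3 sv2^2, xi6 = 15 sv2^3.
su2, sv2, M = 1.0, 1e-3, 5
xi4, xi6 = 3.0 * sv2**2, 15.0 * sv2**3
trR = M * su2
z19 = emse_small_mu(0.01, su2, sv2, xi6, trR)
z20 = emse_separation(0.01, su2, sv2, xi4, xi6, trR)
assert 0.0 < z19 <= z20   # (20) exceeds (19) due to its smaller denominator
```

As μ → 0 the two expressions coincide, which is consistent with the way the two cases were derived from (18).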
4. Convergence Analysis of the SRLMF Algorithm

The convergence analysis of the SRLMF algorithm is much more complicated than that of the LMS algorithm due to the existence of the higher-order estimation error signal in the coefficient update recursion. We thus make the following assumptions, along with (A.2), to make the analysis mathematically more tractable [19–24, 26]:

(A.5) d_i and u_i are zero-mean, wide-sense stationary, and jointly Gaussian random variables.
(A.6) The input pair {d_i, u_i} is independent of {d_j, u_j} for all i ≠ j.

Subtracting both sides of (5) from w^o, we get

    w̃_i = w̃_{i-1} − μ sign[u_i]^T e_i^3,   (21)

where w̃_i = w^o − w_i. Taking expectations of both sides of (21), we obtain

    E[w̃_i] = E[w̃_{i-1}] − μ E[ sign[u_i]^T e_i^3 ].   (22)

Using Price's theorem [25], we can conclude that

    E[ sign[u_i]^T e_i^3 ] = √(2/(π σ_u²)) E[ u_i^T e_i^3 ].   (23)

Substituting (23) into (22), we get

    E[w̃_i] = E[w̃_{i-1}] − μ √(2/(π σ_u²)) E[ u_i^T e_i^3 ].   (24)

The expectation E[u_i^T e_i^3] can be simplified using the fact that, for zero-mean and jointly Gaussian random variables x_1 and x_2,

    E[ x_1 x_2^3 ] = 3 E[x_1 x_2] E[x_2²].   (25)

Thus, using (25) in conjunction with (A.5), it follows that

    E[ u_i^T e_i^3 ] = E[ E[ u_i^T e_i^3 | w_{i-1} ] ]
                     = 3 E[ E[e_i² | w_{i-1}] E[u_i^T e_i | w_{i-1}] ]
                     = 3 E[ σ²_{e|w_{i-1}} E[u_i^T e_i | w_{i-1}] ],   (26)

where

    E[ σ²_{e|w_{i-1}} ] = σ_e² − var( E[e_i | w_{i-1}] ) = σ_e²,   (27)

and, from (9),

    E[ E[u_i^T e_i | w_{i-1}] ] = E[ E[ u_i^T (v_i + u_i w̃_{i-1}) | w_{i-1} ] ]
                                = E[ E[ u_i^T u_i w̃_{i-1} | w_{i-1} ] ]
                                = E[ u_i^T u_i w̃_{i-1} ]
                                = R E[w̃_{i-1}].   (28)

Substituting (27) and (28) into (26) yields

    E[ u_i^T e_i^3 ] = 3 σ_e² R E[w̃_{i-1}].   (29)

Substituting (29) into (24), we get

    E[w̃_i] = E[w̃_{i-1}] − 3 μ √(2/(π σ_u²)) σ_e² R E[w̃_{i-1}]
            = ( I − 3 μ √(2/(π σ_u²)) σ_e² R ) E[w̃_{i-1}].   (30)

Ultimately, it is easy to show that the mean behavior of the weight vector, that is E[w_i], converges to the optimal weight vector w^o if μ is bounded by

    0 < μ < √(2π σ_u²) / ( 3 σ_e² λ_max(R) ),   (31)

where λ_max(R) denotes the largest eigenvalue of R.
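The bound in (31) can be checked numerically: the mean recursion (30) is a linear iteration whose convergence is governed by the spectral radius of I − 3μ√(2/(πσ_u²))σ_e²R. A small sketch with an illustrative covariance matrix and error power:

```python
import numpy as np

# Mean weight-error recursion (30): E[w~_i] = (I - 3 mu c se2 R) E[w~_{i-1}],
# with c = sqrt(2/(pi su2)); it contracts iff mu satisfies the bound (31).
su2, se2 = 1.0, 0.1                     # illustrative input and error powers
R = np.diag([1.0, 0.5, 0.25])           # illustrative covariance matrix
lmax = np.max(np.linalg.eigvalsh(R))
c = np.sqrt(2.0 / (np.pi * su2))
mu_max = np.sqrt(2.0 * np.pi * su2) / (3.0 * se2 * lmax)   # eq. (31)

def spectral_radius(mu):
    A = np.eye(3) - 3.0 * mu * c * se2 * R
    return np.max(np.abs(np.linalg.eigvals(A)))

assert spectral_radius(0.5 * mu_max) < 1.0    # inside the bound: contracting
assert spectral_radius(1.1 * mu_max) >= 1.0   # outside the bound: not contracting
```

At μ equal to the bound, the eigenvalue associated with λ_max(R) reaches −1, which is exactly the edge of stability for the linear recursion.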
5. Tracking Analysis of the SRLMF Algorithm

In this section, the tracking analysis of the SRLMF algorithm is carried out in a nonstationary environment in which the optimal weight vector varies according to the random-walk model w_i^o = w_{i-1}^o + q_i, where q_i is an i.i.d. zero-mean sequence with covariance matrix Q = E[q_i q_i^*], independent of {u_j, v_j} for all i, j. In addition, we assume the following:

(A.9) The initial conditions {w_{-1}, w_{-1}^o} are independent of the zero-mean random variables {d_i, u_i, v_i, q_i}.

In this case, the following variance relation holds [18]:

    μ E[ ‖u_i‖²_H g²[e_i] ] + μ⁻¹ Tr(Q) = 2 E[ e_{a,i} g[e_i] ],   as i → ∞.   (32)

The tracking results can be obtained by inspection from the mean-square results, as there are only minor differences. Therefore, by substituting (12) and (17) into (32), we get

    6 σ_v² E[e_{a,i}²] = μ⁻¹ Tr(Q) + μ ξ_v⁶ √(2/(π σ_u²)) Tr(R) + 15 μ ξ_v⁴ E[ ‖u_i‖²_H e_{a,i}² ] + 6 μ E[ ‖u_i‖²_H e_{a,i} ] E[v_i⁵].   (33)

We again consider two cases for the evaluation of the tracking EMSE ζ of the SRLMF algorithm.

(1) Sufficiently Small Step-Sizes. In this case we get

    ζ = ( μ⁻¹ Tr(Q) + μ ξ_v⁶ √(2/(π σ_u²)) Tr(R) ) / (6 σ_v²).   (34)

An optimum value of the step-size of the SRLMF algorithm is obtained by minimizing (34) with respect to μ. Setting the derivative of ζ with respect to μ equal to zero gives

    μ_opt = √( Tr(Q) / ( √(2/(π σ_u²)) Tr(R) ξ_v⁶ ) ).   (35)

(2) Separation Principle. Similarly, as was done in the derivation of (20), we obtain

    ζ = ( μ⁻¹ Tr(Q) + μ ξ_v⁶ √(2/(π σ_u²)) Tr(R) ) / ( 6 σ_v² − 15 μ ξ_v⁴ √(2/(π σ_u²)) Tr(R) ).   (36)
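Equation (35) can be sanity-checked against (34): since (34) has the form Tr(Q)/μ + μ·const, its minimizer follows from setting the derivative to zero. The sketch below (illustrative parameter values; Gaussian noise assumed, so ξ_v⁶ = 15σ_v⁶) verifies that μ_opt does not do worse than nearby step-sizes.

```python
import numpy as np

def tracking_emse_small_mu(mu, trQ, trR, su2, sv2, xi6):
    """Small-step-size tracking EMSE, eq. (34)."""
    c = np.sqrt(2.0 / (np.pi * su2))
    return (trQ / mu + mu * xi6 * c * trR) / (6.0 * sv2)

def mu_opt(trQ, trR, su2, xi6):
    """Optimal step-size minimizing (34), eq. (35)."""
    c = np.sqrt(2.0 / (np.pi * su2))
    return np.sqrt(trQ / (c * trR * xi6))

# Illustrative scenario: Gaussian noise, so xi6 = 15 sv2^3.
su2, sv2, M = 1.0, 0.01, 5
trR, trQ = M * su2, M * 1e-9
xi6 = 15.0 * sv2**3
mo = mu_opt(trQ, trR, su2, xi6)
for m in (0.5 * mo, 2.0 * mo):
    assert tracking_emse_small_mu(mo, trQ, trR, su2, sv2, xi6) <= \
           tracking_emse_small_mu(m, trQ, trR, su2, sv2, xi6)
```

The trade-off is the usual one for tracking: too small a step-size cannot follow the random walk (the Tr(Q)/μ term), while too large a step-size amplifies the gradient noise (the term linear in μ).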
6. Transient Analysis of the SRLMF Algorithm

6.1. Weighted Energy-Conservation Relation

Theorem 1. For any adaptive filter of the form (1), any positive-definite Hermitian matrix Σ, and for any data {d_i, u_i}, it holds that [18]

    ‖u_i‖²_{HΣH} ‖w̃_i‖²_Σ + |e_{a,i}^Σ|² = ‖u_i‖²_{HΣH} ‖w̃_{i-1}‖²_Σ + |e_{p,i}^Σ|²,   (38)

where e_{a,i}^Σ = u_i H[u_i] Σ w̃_{i-1}, e_{p,i}^Σ = u_i H[u_i] Σ w̃_i, and ‖u_i‖²_{HΣH} = u_i ( H[u_i] Σ H[u_i] ) u_i^*.

Proof. Let us consider the adaptive filter updates of the generic form given in (1). Subtracting both sides of (1) from w^o, we get

    w̃_i = w̃_{i-1} − μ H[u_i] u_i^* g[e_i].   (39)

If we multiply both sides of (39) by u_i H[u_i] Σ from the left, we get

    e_{p,i}^Σ = e_{a,i}^Σ − μ ‖u_i‖²_{HΣH} g[e_i].   (40)

Two cases can be considered here.

Case 1 (‖u_i‖²_{HΣH} = 0). In this case, w̃_i = w̃_{i-1} and e_{a,i}^Σ = e_{p,i}^Σ, so that ‖w̃_i‖²_Σ = ‖w̃_{i-1}‖²_Σ and |e_{a,i}^Σ|² = |e_{p,i}^Σ|².

Case 2 (‖u_i‖²_{HΣH} ≠ 0). In this case, we use (40) to solve for g[e_i]:

    g[e_i] = ( e_{a,i}^Σ − e_{p,i}^Σ ) / ( μ ‖u_i‖²_{HΣH} ).   (41)

Substituting (41) into (39), we get

    w̃_i = w̃_{i-1} − ( H[u_i] u_i^* / ‖u_i‖²_{HΣH} ) ( e_{a,i}^Σ − e_{p,i}^Σ ).   (42)

Expression (42) can be rearranged as

    w̃_i + ( H[u_i] u_i^* / ‖u_i‖²_{HΣH} ) e_{a,i}^Σ = w̃_{i-1} + ( H[u_i] u_i^* / ‖u_i‖²_{HΣH} ) e_{p,i}^Σ.   (43)

Evaluating the Σ-weighted energies of both sides of (43) results in

    ‖w̃_i‖²_Σ + |e_{a,i}^Σ|² / ‖u_i‖²_{HΣH} = ‖w̃_{i-1}‖²_Σ + |e_{p,i}^Σ|² / ‖u_i‖²_{HΣH},   (44)

and multiplying both sides by ‖u_i‖²_{HΣH} yields the weighted energy-conservation relation (38). For real-valued data, (38) can also be written as

    ‖u_i‖²_{HΣH} ‖w̃_i‖²_Σ + (e_{a,i}^Σ)² = ‖u_i‖²_{HΣH} ‖w̃_{i-1}‖²_Σ + (e_{p,i}^Σ)².   (46)

6.2. Weighted Variance Relation. Here, the weighted variance relation presented in [18] is extended in order to derive expressions for the MSE and the MSD of the SRLMF algorithm during the transient phase.

Theorem 2. For any adaptive filter of the form (1), any positive-definite Hermitian matrix Σ, and for any data {d_i, u_i}, it holds that

    E‖w̃_i‖²_Σ = E‖w̃_{i-1}‖²_Σ + μ² E[ ‖u_i‖²_{HΣH} |g[e_i]|² ] − 2μ Re( E[ (e_{a,i}^Σ)^* g[e_i] ] ).   (47)

Similarly, for real-valued data, the above weighted variance relation becomes

    E‖w̃_i‖²_Σ = E‖w̃_{i-1}‖²_Σ + μ² E[ ‖u_i‖²_{HΣH} g²[e_i] ] − 2μ E[ e_{a,i}^Σ g[e_i] ].   (48)

Proof. Squaring both sides of (40), and omitting the argument of g for compactness of notation, we get

    |e_{p,i}^Σ|² = |e_{a,i}^Σ|² + μ² ‖u_i‖⁴_{HΣH} |g|² − μ e_{a,i}^Σ ‖u_i‖²_{HΣH} g^* − μ (e_{a,i}^Σ)^* ‖u_i‖²_{HΣH} g.   (50)

Substituting (50) into the energy-conservation relation (38), we get

    ‖u_i‖²_{HΣH} ‖w̃_i‖²_Σ = ‖u_i‖²_{HΣH} ‖w̃_{i-1}‖²_Σ + μ² ‖u_i‖⁴_{HΣH} |g|² − μ e_{a,i}^Σ ‖u_i‖²_{HΣH} g^* − μ (e_{a,i}^Σ)^* ‖u_i‖²_{HΣH} g.   (51)

Dividing both sides of (51) by ‖u_i‖²_{HΣH} (of course, here ‖u_i‖²_{HΣH} ≠ 0), we get

    ‖w̃_i‖²_Σ = ‖w̃_{i-1}‖²_Σ + μ² ‖u_i‖²_{HΣH} |g|² − μ e_{a,i}^Σ g^* − μ (e_{a,i}^Σ)^* g.   (52)

Taking expectations of both sides of (52), we obtain

    E‖w̃_i‖²_Σ = E‖w̃_{i-1}‖²_Σ + μ² E[ ‖u_i‖²_{HΣH} |g[e_i]|² ] − 2μ Re( E[ (e_{a,i}^Σ)^* g[e_i] ] ),   (54)

which is (47). For real-valued data, the weighted variance relation in (54) becomes

    E‖w̃_i‖²_Σ = E‖w̃_{i-1}‖²_Σ + μ² E[ ‖u_i‖²_{HΣH} g²[e_i] ] − 2μ E[ e_{a,i}^Σ g[e_i] ].   (55)
The transient analysis of the class of filters in (1) is more challenging due to the presence of the error nonlinearity. Nevertheless, by using some approximations, the analysis can be carried out to provide some useful insights about the performance of the SRLMF algorithm.

To start, the expectations E[‖u_i‖²_{HΣH} g²[e_i]] and E[e_{a,i}^Σ g[e_i]] are evaluated in the ensuing analysis in terms of the weighted norm of w̃_{i-1}. Since these expectations are mathematically involved, we rely on the following assumption in order to facilitate their evaluation [18]:

(A.10) The a priori estimation errors {e_{a,i}, e_{a,i}^Σ} are jointly circular Gaussian.

Evaluation of E[e_{a,i}^Σ g[e_i]]. From Price's theorem, if x and y are jointly Gaussian random variables that are independent of a third random variable z, then it holds that [25]

    E[ x g(y + z) ] = ( E[xy] / E[y²] ) E[ y g(y + z) ].   (56)

Applying this result to the term E[e_{a,i}^Σ g[e_i]], and using (9), we get

    E[ e_{a,i}^Σ g[e_i] ] = E[ e_{a,i}^Σ g(e_{a,i} + v_i) ] = E[ e_{a,i}^Σ e_{a,i} ] ( E[ e_{a,i} g[e_i] ] / E[e_{a,i}²] ).   (57)

In view of assumption (A.10), the expectation E[e_{a,i} g[e_i]] depends on e_{a,i} only through its second moment E[e_{a,i}²]. Therefore, we can define the following function of E[e_{a,i}²]:

    Z_1 = E[ e_{a,i} g[e_i] ] / E[e_{a,i}²].   (58)

For the SRLMF algorithm, g[e_i] = e_i^3, and therefore

    E[ e_{a,i} g[e_i] ] = E[ e_{a,i} (e_{a,i} + v_i)^3 ] = E[ e_{a,i}⁴ + 3 e_{a,i}³ v_i + 3 e_{a,i}² v_i² + e_{a,i} v_i³ ].   (59)

Now, since e_{a,i} and v_i are zero-mean Gaussian and independent random variables with variances E[e_{a,i}²] and σ_v², respectively, we obtain

    E[ e_{a,i} g[e_i] ] = E[e_{a,i}⁴] + 3 σ_v² E[e_{a,i}²].   (60)

By using the fact that for circular Gaussian e_{a,i} it holds that E[e_{a,i}⁴] = 3 E[e_{a,i}²]², we get

    E[ e_{a,i} g[e_i] ] = 3 E[e_{a,i}²]² + 3 σ_v² E[e_{a,i}²] = 3 E[e_{a,i}²] ( E[e_{a,i}²] + σ_v² ).   (61)

Substituting (61) into (58), we get

    Z_1 = 3 ( E[e_{a,i}²] + σ_v² ).   (62)

The expression for Z_1 is related to the desired term E[e_{a,i}^Σ g[e_i]] through the equality

    E[ e_{a,i}^Σ g[e_i] ] = Z_1 E[ e_{a,i}^Σ e_{a,i} ].   (63)

Evaluation of E[‖u_i‖²_{HΣH} g²[e_i]]. In order to facilitate the evaluation of the term E[‖u_i‖²_{HΣH} g²[e_i]], we use the separation principle, namely, we assume that the filter is long enough so that the following assumption holds [18]:

(A.11) ‖u_i‖²_{HΣH} is independent of e_i.

Therefore,

    E[ ‖u_i‖²_{HΣH} g²[e_i] ] = E[ ‖u_i‖²_{HΣH} ] E[ g²[e_i] ].   (64)

Since e_{a,i} is Gaussian and independent of the noise, the expectation E[g²[e_i]] depends on e_{a,i} through its second moment only. Therefore, we can define the following function of E[e_{a,i}²]:

    Z_2 = E[ g²[e_i] ].   (65)

For the SRLMF algorithm, g[e_i] = e_i^3. Since e_{a,i} and v_i are zero-mean Gaussian and independent random variables with variances E[e_{a,i}²] and σ_v², we have σ_e² = E[e_i²] = E[e_{a,i}²] + σ_v², and, moreover, from [18], E[e_i⁶] = 15 σ_e⁶. Thus,

    Z_2 = E[e_i⁶] = 15 σ_e⁶ = 15 ( E[e_{a,i}²] + σ_v² )³
        = 15 E[e_{a,i}²]³ + 45 σ_v² E[e_{a,i}²]² + 45 σ_v⁴ E[e_{a,i}²] + 15 σ_v⁶.   (66)

The expression for Z_2 is related to the desired term E[‖u_i‖²_{HΣH} g²[e_i]] through the equality

    E[ ‖u_i‖²_{HΣH} g²[e_i] ] = Z_2 E[ ‖u_i‖²_{HΣH} ] = Z_2 E[ ‖sign[u_i]‖²_Σ ],   (67)

since

    E[ ‖u_i‖²_{HΣH} ] = E[ u_i H[u_i] Σ H[u_i] u_i^T ] = E[ sign[u_i] Σ sign[u_i]^T ] = E[ ‖sign[u_i]‖²_Σ ].   (68)

Substituting (63) and (67) into (55), we get

    E‖w̃_i‖²_Σ = E‖w̃_{i-1}‖²_Σ + μ² Z_2 E[ ‖sign[u_i]‖²_Σ ] − 2μ Z_1 E[ e_{a,i}^Σ e_{a,i} ].   (69)

Independence Assumption. If we assume that the regressor sequence {u_i} is i.i.d., then

    E[ e_{a,i}^Σ e_{a,i} ] = E[ w̃_{i-1}^T Σ H[u_i] u_i^T u_i w̃_{i-1} ] = E[ ‖w̃_{i-1}‖²_{ΣH u_i^T u_i} ].   (70)
In this way, the terms {E[e_{a,i}^Σ e_{a,i}], Z_1, Z_2} all become functions of w̃_{i-1}. Evaluating the weighting matrix in (70) by means of Price's theorem gives E[Σ H[u_i] u_i^T u_i] = √(2/(π σ_u²)) Σ R, so that (69) becomes

    E‖w̃_i‖²_Σ = E‖w̃_{i-1}‖²_Σ + μ² Z_2 E[ ‖sign[u_i]‖²_Σ ] − √(8/(π σ_u²)) μ Z_1 E‖w̃_{i-1}‖²_{ΣR}.   (71)

We thus find that studying the transient behavior of the SRLMF algorithm has in effect been reduced to evaluating the functions Z_1 and Z_2 and studying the resulting variance relation (71). Let us now illustrate the application of the above results for white and correlated input data.

White Input Data. For white input data R is diagonal, say R = σ_u² I. Therefore, if we select Σ = I, the variance relation (71) becomes

    E‖w̃_i‖² = E‖w̃_{i-1}‖² + μ² Z_2 E[ ‖sign[u_i]‖² ] − √(8 σ_u²/π) μ Z_1 E‖w̃_{i-1}‖².   (72)

Now, since

    e_{a,i}² = w̃_{i-1}^T u_i^T u_i w̃_{i-1} = ‖w̃_{i-1}‖²_{u_i^T u_i},   (73)

we have E[e_{a,i}²] = σ_u² E‖w̃_{i-1}‖² for white input data. Substituting (73) into (66), we get

    Z_2 = 15 σ_u⁶ ( E‖w̃_{i-1}‖² )³ + 45 σ_v² σ_u⁴ ( E‖w̃_{i-1}‖² )² + 45 σ_v⁴ σ_u² E‖w̃_{i-1}‖² + 15 σ_v⁶.   (74)

Similarly, by substituting (73) into (62), we get

    Z_1 = 3 ( σ_u² E‖w̃_{i-1}‖² + σ_v² ).   (75)

Substituting (74) and (75) into (72), and using the fact that E[‖sign[u_i]‖²] = M, the recursion becomes

    E‖w̃_i‖² = E‖w̃_{i-1}‖² + 15 μ² M σ_u⁶ (E‖w̃_{i-1}‖²)³ + 45 μ² M σ_v² σ_u⁴ (E‖w̃_{i-1}‖²)²
               + 45 μ² M σ_v⁴ σ_u² E‖w̃_{i-1}‖² + 15 μ² M σ_v⁶
               − 6 √(2 σ_u²/π) μ σ_u² (E‖w̃_{i-1}‖²)² − 6 √(2 σ_u²/π) μ σ_v² E‖w̃_{i-1}‖²
             = f( E‖w̃_{i-1}‖² ) E‖w̃_{i-1}‖² + 15 μ² M σ_v⁶,   (77)

where

    f( E‖w̃_{i-1}‖² ) = 1 + 3μ ( 15 μ M σ_u² σ_v⁴ − 2 √(2 σ_u²/π) σ_v² )
                        + 3 μ σ_u² ( 15 μ M σ_u² σ_v² − 2 √(2 σ_u²/π) ) E‖w̃_{i-1}‖²
                        + 15 μ² M σ_u⁶ ( E‖w̃_{i-1}‖² )².   (78)

We see that the transient behavior of the SRLMF algorithm is described by a nonlinear recursion in E‖w̃_i‖², due to the presence of the factor E‖w̃_{i-1}‖² inside f.

Correlated Input Data. For uncorrelated data, the variance relation (72) shows that only unweighted norms of w̃_i and w̃_{i-1} appear on both sides of the equation. However, for correlated data, different weighting matrices appear on the two sides of (72). If Σ = I in (71), we get

    E‖w̃_i‖² = E‖w̃_{i-1}‖² + μ² Z_2 E[ ‖sign[u_i]‖² ] − √(8/(π σ_u²)) μ Z_1 E‖w̃_{i-1}‖²_R.   (79)
If Σ = R in (71), we get

    E‖w̃_i‖²_R = E‖w̃_{i-1}‖²_R + μ² Z_2 E[ ‖sign[u_i]‖²_R ] − √(8/(π σ_u²)) μ Z_1 E‖w̃_{i-1}‖²_{R²}.   (80)

Similarly, if Σ = R^{M−1} in (71), we get

    E‖w̃_i‖²_{R^{M−1}} = E‖w̃_{i-1}‖²_{R^{M−1}} + μ² Z_2 E[ ‖sign[u_i]‖²_{R^{M−1}} ] − √(8/(π σ_u²)) μ Z_1 E‖w̃_{i-1}‖²_{R^M}.   (81)

The term E‖w̃_i‖²_{R^M} can be inferred from the prior weighting factors

    { E‖w̃_i‖², E‖w̃_i‖²_R, E‖w̃_i‖²_{R²}, ..., E‖w̃_i‖²_{R^{M−1}} }   (82)

by expressing R^M as a linear combination of its lower-order powers using the Cayley–Hamilton theorem. Thus, let p(x) = det(xI − R) denote the characteristic polynomial of R, say

    p(x) = x^M + p_{M−1} x^{M−1} + p_{M−2} x^{M−2} + ··· + p_1 x + p_0.   (83)

Then we know that [18]

    R^M = − p_{M−1} R^{M−1} − p_{M−2} R^{M−2} − ··· − p_1 R − p_0 I.   (84)

Using this fact, we have

    E‖w̃_i‖²_{R^M} = − p_0 E‖w̃_i‖² − p_1 E‖w̃_i‖²_R − ··· − p_{M−1} E‖w̃_i‖²_{R^{M−1}}.   (85)

We can collect the above results into a compact vector notation by writing (79)–(81) as

    W_i = F W_{i−1} + μ² Z_2 Y,   (86)

where the M × 1 vectors {W_i, Y} are given by

    W_i = col{ E‖w̃_i‖², E‖w̃_i‖²_R, ..., E‖w̃_i‖²_{R^{M−1}} },
    Y   = col{ E[‖sign[u_i]‖²], E[‖sign[u_i]‖²_R], ..., E[‖sign[u_i]‖²_{R^{M−1}}] },   (87)

and, writing β = √(8/(π σ_u²)) μ Z_1 for compactness, the M × M coefficient matrix F is given by

    F = [ 1    −β     0    ···    0
          0     1    −β    ···    0
          ⋮                 ⋱     ⋮
          0     0    ···    1    −β
          β p_0    β p_1    ···    β p_{M−2}    1 + β p_{M−1} ].   (88)

As can be seen from (86), the transient behavior of the SRLMF algorithm is described by an M-dimensional state-space recursion, as opposed to the one-dimensional recursion (72) of the white input case.

We know that the mean-square error is defined as

    MSE = lim_{i→∞} E[ |e_i|² ],   (89)

and the excess mean-square error is defined as

    EMSE = lim_{i→∞} E[ |e_{a,i}|² ],   (90)

where

    E[ |e_{a,i}|² ] = E‖w̃_{i−1}‖²_R.   (91)
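For the white-input case, the scalar recursion (77)–(78) can be iterated directly to produce a theoretical MSD learning curve. The sketch below assumes Gaussian noise and uses illustrative parameter values throughout:

```python
import numpy as np

# Iterate the white-input transient recursion (77)-(78); all values illustrative.
su2, sv2, M, mu = 1.0, 1e-3, 5, 0.005
g = np.sqrt(2.0 * su2 / np.pi)

def f(E):
    """Step factor f(E[||w~_{i-1}||^2]) from eq. (78)."""
    return (1.0
            + 3.0 * mu * (15.0 * mu * M * su2 * sv2**2 - 2.0 * g * sv2)
            + 3.0 * mu * su2 * (15.0 * mu * M * su2 * sv2 - 2.0 * g) * E
            + 15.0 * mu**2 * M * su2**3 * E**2)

E = 0.05                                       # initial MSD, E||w~_{-1}||^2
curve = [E]
for _ in range(3000):
    E = f(E) * E + 15.0 * mu**2 * M * sv2**3   # one step of eq. (77)
    curve.append(E)

assert curve[-1] < curve[0]        # the theoretical MSD curve decays
assert min(curve) > 0.0            # and remains a valid (positive) energy
```

Plotting 10·log10 of `curve` gives the kind of theoretical MSD learning curve that is compared against simulation in Section 8.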
The evolution of E[|e_{a,i}|²] is described by the second entry of the state vector W_i in (86). The resulting learning curve of the filter is E[|e_i|²] = σ_v² + E[|e_{a,i}|²]. We know that the mean-square deviation is defined as

    MSD = lim_{i→∞} E‖w̃_i‖²,   (92)

and its evolution is described by the first entry of the state vector W_i in (86).

7. Computational Load

Finally, the computational complexity of the LMF and SRLMF algorithms is discussed in this section. Tables 1 and 2 detail the estimated computational load per iteration of the LMF and SRLMF algorithms for real- and complex-valued data, in terms of the number of real additions (+), real multiplications (×), and comparisons with zero (sign evaluations). We know that one complex multiplication requires four real multiplications and two real additions, while one complex addition requires two real additions. As can be seen from these two tables, the computational advantage of the SRLMF algorithm becomes more pronounced when the data is complex-valued. The case of fading channels in mobile communications is a good example where this scenario can bring a drastic improvement in the complexity of the SRLMF algorithm over the LMF algorithm.

Table 1: Computational load per iteration for the LMF and SRLMF algorithms when the data is real.

    Algorithm | +  | ×      | Sign
    LMF       | 2M | 2M + 3 | —
    SRLMF     | 2M | 2M + 2 | 1

Table 2: Computational load per iteration for the LMF and SRLMF algorithms when the data is complex.

    Algorithm | +      | ×      | Sign
    LMF       | 8M + 1 | 8M + 5 | —
    SRLMF     | 6M + 1 | 6M + 3 | 2

8. Simulation Results

First, the performance of the LMF and the SRLMF algorithms is investigated in an unknown system identification setup with w^o = [0.227 0.460 0.688 0.460 0.227]^T, as far as convergence, steady-state, and transient behaviors are concerned. Figure 1 depicts the convergence behavior of the two algorithms for a signal-to-noise ratio (SNR) of 10 dB in a uniform noise environment. This figure shows almost identical performance for the two algorithms; no deterioration has occurred to the SRLMF algorithm.

Figure 1: Comparison of the MSE learning curves of the LMF and SRLMF algorithms in a uniform noise environment with SNR = 10 dB.

Second, in order to validate the theoretical findings, extensive simulations are carried out for different scenarios. While Figures 2–4 are for the case of the steady-state EMSE of the SRLMF algorithm in a stationary environment, Figure 5 is for the case of the tracking EMSE in a nonstationary environment. In all of these figures the MSE is plotted versus the step-size μ with SNR = 30 dB.

In the case of Figure 2, the regressors, with shift structure, are generated by feeding a unit-variance white process into a tapped delay line. However, in Figure 3, the regressors, with shift structure, are generated by passing correlated data into a tapped delay line. Here, the correlated data are obtained by passing unit-variance i.i.d. Gaussian data through a first-order autoregressive model with transfer function √(1 − a²)/(1 − a z⁻¹) and a = 0.8. To further test the validity of the results, Gaussian regressors with an eigenvalue spread of five without a shift structure are used; this is depicted in Figure 4. As can be seen from these figures, the simulation results match the theoretical results ((19) and (20)) very well.

Figure 2: Theoretical and simulated MSE learning curves of the SRLMF algorithm using white Gaussian regressors with shift structure with SNR = 30 dB.
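The Figure 1 experiment can be sketched in a few lines. The code below runs the LMF and SRLMF updates side by side on the plant w^o given above, with uniform noise at 10 dB SNR; the step-size, iteration count, and random seed are illustrative choices, not values reported in the paper.

```python
import numpy as np

# System-identification sketch of the Figure 1 experiment: LMF vs. SRLMF,
# plant w_o = [0.227, 0.460, 0.688, 0.460, 0.227]^T, uniform noise, 10 dB SNR.
rng = np.random.default_rng(1)
w_o = np.array([0.227, 0.460, 0.688, 0.460, 0.227])
M, mu, n_iter = len(w_o), 0.005, 8000
sv2 = (w_o @ w_o) / 10.0            # noise power for 10 dB SNR
b = np.sqrt(3.0 * sv2)              # uniform noise on [-b, b] has variance sv2

w_lmf, w_sr = np.zeros(M), np.zeros(M)
err_lmf, err_sr = [], []
for _ in range(n_iter):
    u = rng.standard_normal(M)      # white unit-variance regressor
    d = u @ w_o + rng.uniform(-b, b)
    e = d - u @ w_lmf
    w_lmf += mu * u * e**3          # LMF update
    err_lmf.append(e**2)
    e = d - u @ w_sr
    w_sr += mu * np.sign(u) * e**3  # SRLMF update, eq. (5)
    err_sr.append(e**2)

mse_lmf = np.mean(err_lmf[-1000:])  # tail-averaged squared error
mse_sr = np.mean(err_sr[-1000:])
```

Plotting the running average of `err_lmf` and `err_sr` in dB should reproduce, qualitatively, the near-identical learning curves of Figure 1, with both filters settling near the noise floor σ_v².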
Figure 3: Theoretical and simulated MSE learning curves of the SRLMF algorithm using correlated Gaussian regressors with shift structure with SNR = 30 dB.

Figure 4: Theoretical and simulated MSE learning curves of the SRLMF algorithm using Gaussian regressors with an eigenvalue spread = 5 without shift structure with SNR = 30 dB.

Figure 5: Theoretical and simulated MSE learning curves of the SRLMF algorithm for a random-walk channel with SNR = 30 dB.

Third, to further validate the theoretical results in a tracking scenario, Figure 5 depicts this behavior. Here, the random-walk channel behaves according to

    w_i^o = w_{i−1}^o + q_i,   (93)

where q_i is a Gaussian sequence with zero mean and variance σ_q² = 10⁻⁹, and w_{−1}^o = w^o. As observed from Figure 5, the simulation results closely corroborate the theoretical results ((34) and (36)).

Finally, we examine the transient behavior of the SRLMF algorithm for the case of Gaussian data. Let us consider a real-valued regression sequence {u_i} with covariance matrix R whose eigenvalue spread we set at ρ = 5. Let the SNR be 50 dB and the step-size be fixed at μ = 0.01. The results in Figures 6 and 7 show the theoretical and simulated MSD and MSE learning curves of the SRLMF algorithm using white Gaussian regressors and Gaussian regressors with an eigenvalue spread equal to 5. The theoretical values are obtained by using the expression (86). As can be seen, there is an excellent match between the theoretical and simulated results.

Figure 6: Theoretical and simulated MSD (a) and MSE (b) learning curves of the SRLMF algorithm using white Gaussian regressors with SNR = 50 dB.
[Figure 7: Theoretical and simulated MSD (a) and MSE (b) learning curves of the SRLMF algorithm using Gaussian regressors with an eigenvalue spread = 5, SNR = 50 dB.]

9. Conclusions

A new adaptive algorithm, called the SRLMF algorithm, has been presented in this work. Expressions are derived for the steady-state EMSE in a stationary environment. A condition for the mean convergence is also found, and it turns out that the convergence of the SRLMF algorithm strongly depends on the choice of initial conditions. Also, expressions are obtained for the tracking EMSE in a nonstationary environment. An optimum value of the step-size μ is also evaluated. Moreover, an extension of the weighted variance relation is provided in order to derive expressions for the mean-square error (MSE) and the mean-square deviation (MSD) of the proposed algorithm during the transient phase. Monte Carlo simulations have shown that there is a good agreement between the theoretical and simulated results. The simulation results indicate that both the SRLMF algorithm and the LMF algorithm converge at the same rate, resulting in no performance loss. The analysis developed in this paper is believed to make practical contributions to the design of adaptive filters using the SRLMF algorithm instead of the LMF algorithm in pursuit of the reduction in computational cost and complexity whilst still maintaining good performance.

Acknowledgment

The authors acknowledge the support provided by King Fahd University of Petroleum and Minerals to carry out this work.
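To make the conclusions concrete, the sketch below runs the SRLMF recursion in a simple system-identification setting. It is a minimal illustration, not the paper's experiment: it assumes the standard sign-regressor LMF update w_{i+1} = w_i + μ sgn(u_i) e_i^3, uses white (uncorrelated) Gaussian regressors rather than the correlated regressors of Figure 7, and the filter taps, step-size, and noise level are illustrative values chosen here.

```python
import numpy as np

def srlmf_identify(w_true, mu, n_iter, noise_std=1e-3, seed=0):
    """Identify w_true with the SRLMF update w <- w + mu * sign(u) * e**3.

    Returns the final weight estimate and the MSD learning curve.
    """
    rng = np.random.default_rng(seed)
    M = len(w_true)
    w = np.zeros(M)                 # adaptive filter weights, zero-initialized
    msd = np.empty(n_iter)
    for i in range(n_iter):
        u = rng.standard_normal(M)                          # white Gaussian regressor
        d = u @ w_true + noise_std * rng.standard_normal()  # desired (plant) output
        e = d - u @ w                                       # a priori estimation error
        w = w + mu * np.sign(u) * e**3   # SRLMF: sign of regressor, cubed error
        msd[i] = np.sum((w_true - w) ** 2)                  # squared weight deviation
    return w, msd

# Hypothetical 5-tap channel used only for this demonstration.
w_true = np.array([0.227, 0.460, 0.688, 0.460, 0.227])
w_hat, msd = srlmf_identify(w_true, mu=0.01, n_iter=20000)
print(f"final MSD: {msd[-1]:.2e}")
```

Replacing `np.sign(u)` with `u` in the update line recovers the plain LMF recursion, which makes the computational saving visible: SRLMF trades the multiplication by each regressor entry for a sign operation, which is the reduction in cost the paper is after.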