EURASIP Journal on Applied Signal Processing 2005:8, 1229–1234
© 2005 Hindawi Publishing Corporation

Modified Clipped LMS Algorithm

Mojtaba Lotfizad
Department of Electrical Engineering, Tarbiat Modarres University, P.O. Box 14115-143, Tehran, Iran
Email: lotfizad@modares.ac.ir

Hadi Sadoghi Yazdi
Department of Electrical Engineering, Tarbiat Modarres University, P.O. Box 14115-143, Tehran, Iran
Email: sadoghi_y@yahoo.com

Received 30 May 2004; Revised 22 January 2005; Recommended for Publication by Mark Kahrs

A new algorithm is proposed for updating the weights of an adaptive filter. The proposed algorithm is a modification of an existing method, namely, the clipped LMS, and uses a three-level quantization (+1, 0, −1) scheme that involves the threshold clipping of the input signals in the filter weight update formula. Mathematical analysis shows the convergence of the filter weights to the optimum Wiener filter weights. Also, it can be proved that the proposed modified clipped LMS (MCLMS) algorithm has better tracking than the LMS algorithm. In addition, this algorithm has reduced computational complexity relative to the unmodified one. By using a suitable threshold, it is possible to increase the tracking capability of the MCLMS algorithm compared to the LMS algorithm, but this causes slower convergence. Computer simulations confirm the mathematical analysis presented.

Keywords and phrases: adaptive filter, LMS algorithm, clipped LMS algorithm, modified clipped LMS algorithm.

1. INTRODUCTION

Adaptive signal processing has been one of the fastest growing fields of research in recent years. It has attained its popularity due to a broad range of useful applications in such diverse areas as communications, radar, sonar, seismology, navigation and control systems, and biomedical electronics. The LMS adaptive filter is very popular due to its simplicity, but even simpler approaches are required for many real-time applications, and several different versions of the LMS algorithm have therefore been proposed in the literature [1, 2, 3, 4, 5, 6]. Reduction of the complexity of the LMS algorithm has received attention in the area of adaptive filters [5, 7, 8, 9]. The sign algorithm and the clipped-data algorithm fall into this category [2, 5, 8, 9, 10].

The tracking behavior of adaptive filtering algorithms is a fundamental issue in defining their performance in nonstationary operating environments. It has been established that adaptive algorithms that exhibit good convergence properties in stationary environments do not necessarily provide good tracking performance in a nonstationary environment, because the convergence behavior of an adaptive filter is a transient phenomenon, whereas the tracking behavior is a steady-state property [11, 12]. Much research has therefore been devoted to measuring the tracking performance of variants of the LMS algorithm from different viewpoints [10, 13, 14, 15].

For applications in which slow adaptation is acceptable, the clipped LMS (CLMS) algorithm has an edge over the others in terms of speed of processing [16], and a fast CLMS has been proposed to increase the speed of convergence [2]. Much of the effort in the aforementioned references is directed at reducing the computations of the LMS algorithm. The present work presents a modified version of the CLMS algorithm whose tracking is much better than that of the CLMS and LMS algorithms and which requires less computation as well.

The variants of LMS are discussed in Section 2. The proposed new algorithm, which is a modification of the aforementioned algorithm, appears in Section 3. Section 4 deals with the computation of the tracking performance of the proposed algorithm. Section 5 is concerned with computer simulation issues. The reduction of the computational complexity of the proposed algorithm is investigated in Section 6. The final section presents conclusions and summarises the main findings.

2. VARIANTS OF THE LMS ALGORITHM

The purpose of this section is to briefly introduce the main existing variants of the LMS algorithm. In order to clarify the background of the new algorithm, it is necessary to show how they are interrelated and how they have evolved. The LMS algorithm has been studied in [17, 18] as

W_{n+1} = W_n + \mu e_n X_n,    (1)

where

e_n = d_n - X_n^T W_n,    (2)

W_n = [w_n(1), w_n(2), \ldots, w_n(N)]^T is the weight vector of the estimator, X_n is the vector of the input data sequence, which is assumed to be a stationary random process, N is the number of filter taps, e_n is the estimation error, d_n is the desired response, and µ is the step size.
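For reference, the following NumPy sketch implements the plain LMS recursion (1)-(2) for a transversal filter. It is an illustration, not code from the paper; the function name, the sample indexing, and the zero initial weights are choices made for the example.

```python
import numpy as np

def lms(x, d, N, mu):
    """Plain LMS, eqs. (1)-(2): W_{n+1} = W_n + mu * e_n * X_n."""
    W = np.zeros(N)
    weights, errors = [], []
    for n in range(N - 1, len(x)):
        X_n = x[n - N + 1:n + 1][::-1]   # tap-input vector [x_n, x_{n-1}, ..., x_{n-N+1}]
        e_n = d[n] - X_n @ W             # estimation error, eq. (2)
        W = W + mu * e_n * X_n           # weight update, eq. (1)
        weights.append(W.copy())
        errors.append(e_n)
    return np.array(weights), np.array(errors)
```

The variants below keep this structure and change only how X_n and e_n enter the update.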
A simple change can be made to the LMS algorithm to obtain the CLMS algorithm [2, 16, 19]:

W_{n+1} = W_n + \mu e_n \tilde{X}_n,    (3)

where \tilde{X}_n is the clipped input signal vector, whose ith component is \tilde{x}_n(i) = \mathrm{sgn}[x_n(i)]. Other variations of the LMS algorithm that have been studied are the "sign" algorithm [20, 21]

W_{n+1} = W_n + \mu \tilde{e}_n X_n,    (4)

where

\tilde{e}_n = \mathrm{sgn}(e_n),    (5)

and the "zero-forcing" algorithm [22, 23]

W_{n+1} = W_n + \mu \tilde{e}_n \tilde{X}_n.    (6)

The CLMS algorithm involves clipping the input signal vector in the weight update formula (3). This quantization scheme is illustrated in Figure 1.

Figure 1: Quantization scheme for the clipped LMS algorithm (the two-level function sgn(x), taking the values +1 and −1).
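The three sign-based variants (3)-(6) differ only in where the sign operation is applied. The sketch below shows one weight update for each; the function and variant names are labels chosen for this illustration, not terminology from the paper beyond the algorithm names themselves.

```python
import numpy as np

def sign_variant_step(W, X_n, d_n, mu, variant):
    """One weight update for the sign-based LMS variants, eqs. (3)-(6)."""
    e_n = d_n - X_n @ W                            # error uses the unclipped input, eq. (2)
    if variant == "clms":                          # clipped LMS, eq. (3): clip the input
        return W + mu * e_n * np.sign(X_n)
    if variant == "sign":                          # sign algorithm, eqs. (4)-(5): clip the error
        return W + mu * np.sign(e_n) * X_n
    if variant == "zero_forcing":                  # zero-forcing algorithm, eq. (6): clip both
        return W + mu * np.sign(e_n) * np.sign(X_n)
    raise ValueError(f"unknown variant: {variant}")
```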
3. THE PROPOSED MODIFIED CLIPPED LMS ALGORITHM

Here we propose a new modification to the clipped LMS algorithm to further simplify the implementation of the LMS algorithm. Rather than representing the input signal X_n by a two-level signal as in (3), we quantize it into a three-level signal according to the quantization scheme shown in Figure 2.

Figure 2: Quantization scheme for the modified clipped LMS algorithm (the three-level function msgn(x, δ)).

Thus, the adaptation equation can be written as

W_{n+1} = W_n + \mu e_n \bar{X}_n,    (7)

where \bar{X}_n is the modified clipped input signal vector whose ith component is \bar{x}_n(i) = \mathrm{msgn}[x_n(i), \delta], and \mathrm{msgn}\{\cdot\} is the modified sign function defined as

\mathrm{msgn}(x_n(i), \delta) = \begin{cases} +1, & \delta \le x_n(i), \\ 0, & -\delta < x_n(i) < \delta, \\ -1, & x_n(i) \le -\delta. \end{cases}    (8)

It should be noted that the implementation of such an adaptive filter has potentially greater throughput, because whenever the tap input x_n(i) is smaller in magnitude than the specified threshold δ, \bar{x}_n(i) is equal to zero and no coefficient adaptation needs to be performed for the corresponding weight. This means that some of the time-consuming operations in the weight update formula (7) can be omitted, thereby reducing the computational load on the processor. Whether this potential can be realised depends on the processor architecture and on the application.
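A minimal sketch of the resulting MCLMS recursion (7)-(8) is given below; the function names and indexing mirror the LMS sketch in Section 2 and are assumptions made for illustration, not code from the paper.

```python
import numpy as np

def msgn(x, delta):
    """Modified sign function, eq. (8): +1 for x >= delta, -1 for x <= -delta, 0 otherwise."""
    return np.where(x >= delta, 1.0, np.where(x <= -delta, -1.0, 0.0))

def mclms(x, d, N, mu, delta):
    """Modified clipped LMS, eq. (7): W_{n+1} = W_n + mu * e_n * msgn(X_n, delta)."""
    W = np.zeros(N)
    weights = []
    for n in range(N - 1, len(x)):
        X_n = x[n - N + 1:n + 1][::-1]     # tap-input vector
        e_n = d[n] - X_n @ W               # the error still uses the unquantized input, eq. (2)
        q = msgn(X_n, delta)               # three-level quantization, eq. (8)
        active = q != 0                    # taps whose input lies inside (-delta, delta) are skipped
        W[active] += mu * e_n * q[active]  # weight update, eq. (7)
        weights.append(W.copy())
    return np.array(weights)
```

Only the components with a nonzero quantized input touch the weight vector, which is the source of the computational saving quantified in Section 6.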
Convergence of the mean of the weight vector for MCLMS is proved in the next subsection. It is shown that the mean of the weight vector of the modified clipped LMS algorithm converges to the optimum weight vector of the Wiener filter.

3.1. Derivation of the convergence of the MCLMS algorithm

We now prove that the statistical average of the weight vector converges in the limit to the optimum Wiener weight vector. Taking expectations on both sides of (7) yields

E\{W_{n+1}\} = E\{W_n\} + \mu E\{e_n \bar{X}_n\}.    (9)

Substituting (2) into (9) gives

E\{W_{n+1}\} = E\{W_n\} + \mu E\{d_n \bar{X}_n - \bar{X}_n X_n^T W_n\}.    (10)

Assuming, as in [17], lack of correlation between the weights and \bar{X}_n X_n^T, (10) gives

E\{W_{n+1}\} = E\{W_n\} + \mu \left( E\{d_n \bar{X}_n\} - E\{\bar{X}_n X_n^T\} E\{W_n\} \right).    (11)

Now, with regard to (A.1) in the appendix, we have

E\{W_{n+1}\} = E\{W_n\} + \mu \frac{\alpha}{\sigma_x} \left( P - R\, E\{W_n\} \right) = \left( I - \mu \frac{\alpha}{\sigma_x} R \right) E\{W_n\} + \mu \frac{\alpha}{\sigma_x} P,    (12)

where

\alpha = \sqrt{\frac{2}{\pi}} \exp\left( -\frac{\delta^2}{2\sigma_x^2} \right),    (13)

P = E\{d_n X_n\} is the cross-correlation vector between the desired response and the input, and R = E\{X_n X_n^T\} is the input autocorrelation matrix. The steady-state solution of (12) is E\{W_\infty\} = R^{-1} P, which is the optimum Wiener weight vector, provided that the recursion converges.

From (12), the mean weight error decays provided that |1 - \mu(\alpha/\sigma_x)\lambda_i| < 1 for every eigenvalue λ_i of R. If µ satisfies this relation for the largest eigenvalue λ_max, it is satisfied for all other eigenvalues as well. Thus, the convergence condition for MCLMS is

0 < \mu < \frac{2\sigma_x}{\alpha \lambda_{\max}}.

Since α decreases as the threshold δ grows, a larger δ reduces the effective adaptation gain µα/σ_x and therefore slows the convergence of the mean weights for a fixed step size.
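To make the trade-off concrete, the short sketch below evaluates α from (13) and the corresponding mean-convergence bound on µ for several thresholds. The unit input variance and the eigenvalue λ_max = 1 (a white input) are assumptions for illustration, not values from the paper.

```python
import numpy as np

sigma_x = 1.0     # assumed unit-variance input
lam_max = 1.0     # assumed largest eigenvalue of R (for a white input, lam_max = sigma_x**2)

for delta in (0.0, 0.1, 0.4, 0.7, 1.0):
    alpha = np.sqrt(2.0 / np.pi) * np.exp(-delta**2 / (2.0 * sigma_x**2))  # eq. (13)
    mu_bound = 2.0 * sigma_x / (alpha * lam_max)   # mean-convergence bound derived from (12)
    # A larger delta lowers alpha: the stability bound on mu loosens, but for a fixed mu the
    # effective gain mu * alpha / sigma_x shrinks, so the mean weights converge more slowly.
    print(f"delta = {delta:.1f}   alpha = {alpha:.3f}   0 < mu < {mu_bound:.2f}")
```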
4. TRACKING PERFORMANCE OF THE PROPOSED ALGORITHM

The denominator of the fraction in (23) is

E\{\nu_n^2\} = \sigma_\nu^2.    (26)

Hence, the MCLMS algorithm misadjustment can be written as

M_{\mathrm{MCLMS}} = \frac{1}{\sigma_\nu^2} \frac{\alpha}{\sigma_x} \mathrm{tr}\{R\Phi\}.    (27)

Comparing this misadjustment value with that of the LMS algorithm, M_{\mathrm{LMS}} = (1/\sigma_\nu^2)\,\mathrm{tr}\{R\Phi\}, the following relation is obtained:

M_{\mathrm{MCLMS}} = \frac{\alpha}{\sigma_x} M_{\mathrm{LMS}}.    (28)

The above relation shows that increasing the threshold δ so that α is less than σ_x gives rise to a decrease in the misadjustment error relative to LMS in tracking but, as follows from the convergence condition in Section 3.1, it makes the MCLMS slower to converge. In the next section, this issue is demonstrated for the identification of a filter and its tracking.

5. APPLICATION OF MCLMS IN THE IDENTIFICATION PROBLEM

In order to demonstrate the convergence behavior of the LMS, CLMS, and the new MCLMS algorithms, 100 runs of simulation experiments have been performed (with µ = 0.17 and δ = 0.7 for MCLMS, µ = 0.105 for LMS, and µ = 0.13 for CLMS, which were the best parameters for maximum speed of convergence). In all experiments carried out for the system identification, a stationary white noise sequence was used as the input and the system is a 7-tap FIR transversal filter whose parameters are arbitrarily chosen. The input data were normalized to have unit variance. The norm of the difference between the plant FIR weights and the adaptive filter weights generated by each algorithm was averaged over 100 independent simulation runs and plotted as a function of time, as depicted in Figure 3. The norm is calculated by

\mathrm{norm}(h, W) = \left( \sum_i \left( h_i - W_i \right)^2 \right)^{1/2}.    (29)

Figure 3: Weight estimation error (norm of the difference weight vector, in dB) versus iteration number for the LMS, CLMS, and MCLMS algorithms.

It can be seen that MCLMS converges much better than CLMS and is almost as good as LMS in terms of convergence speed. Of course, the MCLMS speed of convergence is reduced by increasing the threshold δ. In the above case, with a threshold of 0.7, the difference weight norm improves on that of CLMS by 12%, whereas the CLMS, in comparison with the LMS, has both lower convergence speed and higher weight error. Figure 4 shows the ratio of the tracking error norm of the weights of the proposed MCLMS algorithm to that of the optimum weights of LMS and CLMS. The existence of the second parameter δ in the MCLMS algorithm, in comparison with LMS and CLMS, accounts for the increase in its performance.

Figure 4: Relative reduction in error of MCLMS compared with CLMS and LMS, with a step size of 0.1 for MCLMS and different thresholds, and with the best step size for least tracking error for LMS and CLMS.
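A sketch in the spirit of this experiment is given below: a 7-tap FIR plant is identified from unit-variance white noise with the step sizes and threshold quoted above, and the weight-error norm (29) is averaged over independent runs. The plant coefficients, the random seed, and the noise-free desired signal are assumptions made for illustration; the paper states only that the plant taps are arbitrarily chosen.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n_samples, n_runs = 7, 1000, 100
h = rng.standard_normal(N)                 # assumed 7-tap FIR plant

def identify(quantize, mu):
    """Average weight-error norm (29) over independent runs for one input quantizer."""
    err = np.zeros(n_samples - N + 1)
    for _ in range(n_runs):
        x = rng.standard_normal(n_samples)         # stationary, unit-variance white input
        d = np.convolve(x, h)[:n_samples]          # plant output
        W = np.zeros(N)
        for i, n in enumerate(range(N - 1, n_samples)):
            X = x[n - N + 1:n + 1][::-1]
            W = W + mu * (d[n] - X @ W) * quantize(X)
            err[i] += np.linalg.norm(h - W)        # eq. (29)
    return err / n_runs

err_lms   = identify(lambda X: X,                                          mu=0.105)  # LMS, eq. (1)
err_clms  = identify(np.sign,                                              mu=0.13)   # CLMS, eq. (3)
err_mclms = identify(lambda X: np.where(np.abs(X) >= 0.7, np.sign(X), 0.), mu=0.17)   # MCLMS, eq. (7)
```

Plotting the three error curves against the iteration index would be expected to show the qualitative behavior reported in Figure 3 under these assumptions.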
6. REDUCTION OF COMPUTATIONAL COMPLEXITY OF THE MCLMS RELATIVE TO THE CLMS ALGORITHM

The proposed algorithm has less computational complexity than the CLMS algorithm. If we assume that the input signal has a Gaussian distribution with zero mean and standard deviation σ_x, then the probability that the signal falls in the interval [−δσ_x, δσ_x] is

P(-\delta\sigma_x < x < \delta\sigma_x) = \int_{-\delta\sigma_x}^{\delta\sigma_x} N(\mu_x, \sigma_x)\, dx,    (30)

where N(µ_x, σ_x) is the input probability density function. P(−δσ_x < x < δσ_x), in addition to being the probability of the signal occurring in the interval [−δσ_x, δσ_x], is also the computational reduction of MCLMS relative to CLMS. The reason is that the signal falls between the two thresholds with probability P(−δσ_x < x < δσ_x), and within this interval the proposed algorithm performs no weight update, since according to (8) msgn[x_n(i), δσ_x] is equal to zero. The computational reduction of MCLMS compared to the CLMS is shown in Table 1 for several different thresholds.

Table 1: Rate of reduction of computational complexity in the weight update for MCLMS in comparison to CLMS.

Threshold δ                                 | 0.1  | 0.4   | 0.7   | 1.0
Percentage of computational reduction (%)   | 7.97 | 31.09 | 51.61 | 68.27

It is interesting to note that, regarding (30) and Figure 4, for δ = 0.7 the computational complexity of the weight update formula can be reduced by about 52% without any noticeable change in the convergence behavior.
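For a zero-mean Gaussian input, (30) is the probability that |x| < δσ_x, which has the closed form erf(δ/√2) regardless of σ_x. The short check below, an illustrative sketch rather than code from the paper, matches the percentages of Table 1 to within rounding.

```python
from math import erf, sqrt

for delta in (0.1, 0.4, 0.7, 1.0):
    # P(-delta*sigma_x < x < delta*sigma_x) for x ~ N(0, sigma_x^2), i.e. the fraction of
    # tap inputs quantized to zero and hence the fraction of skipped weight updates, eq. (30).
    print(f"delta = {delta:.1f}:  {100 * erf(delta / sqrt(2)):.2f}% reduction")
```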
7. CONCLUSIONS AND SUGGESTIONS FOR FURTHER WORK

The proposed modified clipped LMS algorithm for discrete-time adaptive FIR filtering has been studied. The new algorithm was treated analytically, and its convergence rate and its tracking performance, from a misadjustment viewpoint, were derived. The advantages include a simple weight update formula, better convergence capability, and better tracking performance. Further work could apply the three-level clipping idea to the error signal instead of the input signal.

APPENDIX

Theorem 1. If two random variables u and v have Gaussian distributions N(0, σ_u) and N(0, σ_v), respectively, E{uv} = ρσ_uσ_v, and \bar{v} = msgn(v, δ), then

E\{u\bar{v}\} = \frac{\alpha}{\sigma_v} E\{uv\},    (A.1)

where \alpha = \sqrt{2/\pi} \exp(-\delta^2 / 2\sigma_v^2).

Proof. We define the random variable

z = \frac{u}{\sigma_u} - \frac{\rho}{\sigma_v} v.    (A.2)

Now we have

E\{zv\} = E\left\{ \left( \frac{u}{\sigma_u} - \frac{\rho}{\sigma_v} v \right) v \right\} = \frac{1}{\sigma_u} E\{uv\} - \frac{\rho}{\sigma_v} E\{v^2\}.    (A.3)

With regard to the assumption of the theorem,

E\{zv\} = \frac{\rho\sigma_u\sigma_v}{\sigma_u} - \frac{\rho}{\sigma_v}\sigma_v^2 = 0.    (A.4)

Therefore z and v are uncorrelated and, being jointly Gaussian, independent. Since \bar{v} is a function of v, z and \bar{v} are also independent, so E\{z\bar{v}\} = E\{z\} E\{\bar{v}\} = E\{z\} \times 0 = 0. Hence

E\{z\bar{v}\} = E\left\{ \left( \frac{u}{\sigma_u} - \frac{\rho}{\sigma_v} v \right) \bar{v} \right\} = 0    (A.5)

\Longrightarrow \frac{1}{\sigma_u} E\{u\bar{v}\} = \frac{\rho}{\sigma_v} E\{v\bar{v}\} \Longrightarrow E\{u\bar{v}\} = \frac{\rho\sigma_u}{\sigma_v} E\{v\bar{v}\}.    (A.6)

On the other hand,

v\bar{v} = v \times \mathrm{msgn}(v, \delta) = \begin{cases} |v|, & |v| > \delta, \\ 0, & |v| \le \delta. \end{cases}    (A.7)

The density function of v is Gaussian with distribution N(0, σ_v); hence

E\{v\bar{v}\} = \int_{-\infty}^{+\infty} v\, \mathrm{msgn}(v, \delta)\, \frac{1}{\sigma_v\sqrt{2\pi}} \exp\left( -\frac{v^2}{2\sigma_v^2} \right) dv = 2 \int_{\delta}^{+\infty} v\, \frac{1}{\sigma_v\sqrt{2\pi}} \exp\left( -\frac{v^2}{2\sigma_v^2} \right) dv    (A.8)

= \sigma_v \sqrt{\frac{2}{\pi}} \exp\left( -\frac{\delta^2}{2\sigma_v^2} \right).    (A.9)

Now, regarding (A.6) and (A.9), we have

E\{u\bar{v}\} = \frac{\rho\sigma_u}{\sigma_v} \sigma_v \sqrt{\frac{2}{\pi}} \exp\left( -\frac{\delta^2}{2\sigma_v^2} \right) = \frac{1}{\sigma_v} \sqrt{\frac{2}{\pi}} \exp\left( -\frac{\delta^2}{2\sigma_v^2} \right) \rho\sigma_u\sigma_v.    (A.10)

Finally, with regard to E{uv} = ρσ_uσ_v in (A.10), the theorem is proved.
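Theorem 1 is easy to check numerically. The following Monte Carlo sketch, an illustration rather than part of the paper, draws correlated zero-mean Gaussian pairs and compares a sample estimate of E{u · msgn(v, δ)} with (α/σ_v) E{uv}; the particular parameter values are assumptions chosen for the check.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma_u, sigma_v, rho, delta = 1.5, 2.0, 0.6, 0.7    # assumed parameters for the check
n = 2_000_000

# Jointly Gaussian (u, v) with E{uv} = rho * sigma_u * sigma_v.
g1, g2 = rng.standard_normal(n), rng.standard_normal(n)
u = sigma_u * g1
v = sigma_v * (rho * g1 + np.sqrt(1.0 - rho**2) * g2)

v_bar = np.where(v >= delta, 1.0, np.where(v <= -delta, -1.0, 0.0))   # msgn(v, delta), eq. (8)
alpha = np.sqrt(2.0 / np.pi) * np.exp(-delta**2 / (2.0 * sigma_v**2))

print(np.mean(u * v_bar))                            # sample estimate of E{u v_bar}
print(alpha / sigma_v * rho * sigma_u * sigma_v)     # (alpha / sigma_v) * E{uv}, eq. (A.1)
```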
ACKNOWLEDGMENT

The authors wish to thank an anonymous reviewer for his/her helpful comments.

REFERENCES

[1] E. Ferrara, "Fast implementations of LMS adaptive filters," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 28, pp. 474–475, 1980.
[2] L. Deivasigamani, "A fast clipped-data LMS algorithm," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 30, pp. 648–649, 1982.
[3] J. A. Chambers, O. Tanrikulu, and A. G. Constantinides, "Least mean mixed-norm adaptive filtering," Electronics Letters, vol. 30, no. 19, pp. 1574–1575, 1994.
[4] E. Walach and B. Widrow, "The least mean fourth (LMF) adaptive algorithm and its family," IEEE Transactions on Information Theory, vol. 30, no. 2, pp. 275–283, 1984.
[5] C. Kwong, "Dual sign algorithm for adaptive filtering," IEEE Transactions on Communications, vol. 34, no. 12, pp. 1272–1275, 1986.
[6] A. Ahlen, L. Lindbom, and M. Sternad, "Analysis of stability and performance of adaptation algorithms with time-invariant gains," IEEE Transactions on Signal Processing, vol. 52, no. 1, pp. 103–116, 2004.
[7] E. Eweda, "Analysis and design of a signed regressor LMS algorithm for stationary and nonstationary adaptive filtering with correlated Gaussian data," IEEE Transactions on Circuits and Systems, vol. 37, no. 11, pp. 1367–1374, 1990.
[8] W. A. Sethares, I. M. Y. Mareels, B. D. O. Anderson, C. R. Johnson Jr., and R. R. Bitmead, "Excitation conditions for signed regressor least mean squares adaptation," IEEE Transactions on Circuits and Systems, vol. 35, no. 6, pp. 613–624, 1988.
[9] V. Mathews and S. Cho, "Improved convergence analysis of stochastic gradient adaptive filters using the sign algorithm," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 35, pp. 450–454, 1987.
[10] E. Eweda, "Comparison of RLS, LMS, and sign algorithms for tracking randomly time-varying channels," IEEE Transactions on Signal Processing, vol. 42, no. 11, pp. 2937–2944, 1994.
[11] P. C. Wei, J. Han, J. R. Zeidler, and W. H. Ku, "Comparative tracking performance of the LMS and RLS algorithms for chirped narrowband signal recovery," IEEE Transactions on Signal Processing, vol. 50, no. 7, pp. 1602–1609, 2002.
[12] S. Haykin, Adaptive Filter Theory, Prentice-Hall, Englewood Cliffs, NJ, USA, 3rd edition, 1996.
[13] M. Hajivandi and W. A. Gardner, "Measures of tracking performance for the LMS algorithm," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 38, pp. 1953–1958, 1990.
[14] B. Farhang-Boroujeny and S. Gazor, "Performance of LMS-based adaptive filters in tracking a time-varying plant," IEEE Transactions on Signal Processing, vol. 44, no. 11, pp. 2868–2871, 1996.
[15] L. Lindbom, M. Sternad, and A. Ahlen, "Tracking of time-varying mobile radio channels. Part I: the Wiener LMS algorithm," IEEE Transactions on Communications, vol. 49, no. 12, pp. 2207–2217, 2001.
[16] M. White, I. Mack, G. Borsuk, D. Lampe, and F. Kub, "Charge-coupled device (CCD) adaptive discrete analog signal processing," IEEE Transactions on Communications, vol. 27, no. 2, pp. 390–405, 1979.
[17] B. Widrow and S. Stearns, Adaptive Signal Processing, Prentice-Hall, Englewood Cliffs, NJ, USA, 1985.
[18] B. Farhang-Boroujeny, Adaptive Filters: Theory and Applications, John Wiley & Sons, West Sussex, England, 1999.
[19] J. L. Moschner, Adaptive Filtering with Clipped Input Data, Ph.D. thesis, Stanford University, Stanford, Calif, USA, June 1970.
[20] N. Verhoeckx, H. van den Elzen, F. Snijders, and P. van Gerwen, "Digital echo cancellation for baseband data transmission," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 27, no. 6, pp. 768–781, 1979.
[21] T. Claasen and W. Mecklenbrauker, "Comparison of the convergence of two algorithms for adaptive FIR digital filters," IEEE Transactions on Circuits and Systems, vol. 28, no. 6, pp. 510–518, 1981.
[22] R. W. Lucky, "Automatic equalization for digital communication," Bell System Technical Journal, vol. 44, no. 4, pp. 547–588, April 1965.
[23] B.-E. Jun, D.-J. Park, and Y.-W. Kim, "Convergence analysis of sign-sign LMS algorithm for adaptive filters with correlated Gaussian data," in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '95), vol. 2, pp. 1380–1383, Detroit, Mich, USA, May 1995.

Mojtaba Lotfizad was born in Tehran, Iran, in 1955. He received the B.S. degree in electrical engineering from Amir Kabir University, Iran, in 1980, and the M.S. and Ph.D. degrees from the University of Wales, UK, in 1985 and 1988, respectively. He then joined the Engineering Faculty of Tarbiat Modarres University, Iran. He has also been a consultant to several industrial and government organizations. His current research interests are signal processing, adaptive filtering, speech processing, and specialized processors.

Hadi Sadoghi Yazdi was born in Sabzevar, Iran, in 1971. He received the B.S. degree in electrical engineering from Ferdosi Mashad University, Iran, in 1994, and the M.S. degree in electrical engineering from Tarbiat Modarres University, Tehran, in 1996. Since September 2000, he has been pursuing the Ph.D. degree in electrical engineering at Tarbiat Modarres University, Iran. His research interests include adaptive filtering and image and video processing.