Hindawi Publishing Corporation
EURASIP Journal on Advances in Signal Processing
Volume 2008, Article ID 147407, 10 pages
doi:10.1155/2008/147407

Research Article
Downsampling Non-Uniformly Sampled Data

Frida Eng and Fredrik Gustafsson
Department of Electrical Engineering, Linköpings Universitet, 58183 Linköping, Sweden

Correspondence should be addressed to Fredrik Gustafsson, fredrik@isy.liu.se

Received 14 February 2007; Accepted 17 July 2007

Recommended by T.-H. Li

Decimating a uniformly sampled signal a factor D involves low-pass antialias filtering with normalized cutoff frequency 1/D followed by picking out every Dth sample. Alternatively, decimation can be done in the frequency domain using the fast Fourier transform (FFT) algorithm, after zero-padding the signal and truncating the FFT. We outline three approaches to decimate non-uniformly sampled signals, which are all based on interpolation. The interpolation is done in different domains, and the inter-sample behavior does not need to be known. The first one interpolates the signal to a uniform sampling, after which standard decimation can be applied. The second one interpolates a continuous-time convolution integral, which implements the antialias filter, after which every Dth sample can be picked out. The third, frequency-domain, approach computes an approximate Fourier transform, after which truncation and IFFT give the desired result. Simulations indicate that the second approach is particularly useful. A thorough analysis is therefore performed for this case, using the assumption that the non-uniformly distributed sampling instants are generated by a stochastic process.

Copyright © 2008 F. Eng and F. Gustafsson. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
1. INTRODUCTION

Downsampling is here considered for a non-uniformly sampled signal. Non-uniform sampling appears in many applications, and the cause of the non-uniform sampling can be classified into one of the following two categories.

Event-based sampling

The sampling is determined by a nuisance event process. One typical example is data traffic in the Internet, where packet arrivals determine the sampling times and the queue length is the signal to be analyzed. Financial data, where the stock market valuations are determined by each transaction, is another example.

Uniform sampling in a secondary domain

Some angular speed sensors give a pulse each time the shaft has passed a certain angle, so the sampling times depend on the angular speed. Also, biological signals such as ECGs are naturally sampled in the time domain, but preferably analyzed in another domain (the heart-rate domain).

A number of other applications and relevant references can be found in, for example, [1]. It should be obvious from the examples above that for most applications, the original non-uniformly sampled signal is sampled much too fast, and that oscillation modes and interesting frequency modes are found at quite low frequencies compared to the inverse mean sampling interval. The problem at hand is stated as follows.

Problem 1. The following is given:

(a) a sequence of non-uniform sampling times, t_m, m = 1, ..., M;
(b) corresponding signal samples, u(t_m);
(c) a filter impulse response, h(t); and
(d) a resampling frequency, 1/T.

Also, the desired intersampling time, T, is much larger than the original mean intersampling time,

    E[t_m − t_{m−1}] = μ_T ≈ t_M/M = T_u.    (1)
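The setup in Problem 1 can be sketched numerically. The sampling-noise distribution, the test signal, and the grid spacing below are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

# Sketch of the setup in Problem 1 under additive random sampling.
rng = np.random.default_rng(0)

M = 2000
tau = rng.uniform(0.1, 0.3, size=M)    # sampling noise tau_m (illustrative)
t = np.cumsum(tau)                     # non-uniform times t_m = t_{m-1} + tau_m
u = np.sin(2 * np.pi * 0.05 * t)       # example samples u(t_m)

T = 4.0                                # desired intersampling time, T >> E[tau_m]
N = int(t[-1] // T)                    # number of output points, N = floor(t_M / T)
t_out = T * np.arange(1, N + 1)        # uniform target grid nT, n = 1, ..., N

mean_dt = t[-1] / M                    # sample estimate of mu_T in eq. (1)
```

The downsampling factor here is T/mean_dt, roughly 20, so the target grid is much coarser than the raw data, as the problem formulation requires.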
Let ⌊x⌋ denote the largest integer smaller than or equal to x. Find

    ẑ(nT),  n = 1, ..., N,  N = ⌊t_M/T⌋ = ⌊M/D⌋,    (2)

such that ẑ(nT) approximates z(nT), where

    z(t) = (h ∗ u)(t) = ∫ h(t − τ) u(τ) dτ    (3)

is given by convolution of the continuous-time filter h(t) and signal u(t).

For the case of uniform sampling, t_m = mT_u, two well-known solutions exist; see, for example, [2].

(a) First, if T/T_u = D is an integer, then (i) u(mT_u) is filtered giving z(mT_u), and (ii) ẑ(nT) = z(nDT_u) gives the decimated signal.

(b) Further, if T/T_u = R/S is a rational number, then a frequency-domain method is known. It is based on (i) zero-padding u(mT_u) to length RM, (ii) computing the discrete Fourier transform (DFT), (iii) truncating the DFT a factor S, and finally computing the inverse DFT (IDFT), where the (I)FFT algorithm is used for the (I)DFT calculation.

Conversion between arbitrary sampling rates has also been discussed in many contexts. The issues with efficient implementation of the algorithms are investigated in [3–6], and some of the results are beneficial also for the non-uniform case.

Resampling and reconstruction are closely connected, since a reconstructed signal can be used to sample at desired time points. The task of reconstruction is well investigated for different setups of non-uniform sampling. A number of iterative solutions have been proposed, for example, [1, 7, 8]; several more are discussed in [9]. These algorithms are not well suited for real-time implementations and are based on different assumptions on the sampling times, t_m, such as bounds on the maximum separation or deviation from the nominal value mT_u.

Russell [9] also investigates both uniform and non-uniform resampling thoroughly. Russell argues against the iterative solutions, since they are based on analysis with ideal filters, and no guarantees can be given for approximate solutions. A noniterative approach is given, which assumes periodic time grids, that is, the non-uniformity is repeated. Another overview of techniques for non-uniform sampling is given in [10], where, for example, Ferreira [11] studies the special case of recovery of missing data and Lacaze [12] reconstructs stationary processes.

Reconstruction of functions with a convolutional approach was done by [13], and later also by [14]. The sampling is done via basis functions, and reduces to the regular case if delta functions are used. These works are based on sampling sets that fulfill the non-uniform sampling theorem given in [15].

Reconstruction has long been an interesting topic in image processing, especially in medical imaging; see, for example, [16], where, in particular, problems with motion artifacts are addressed. Arbitrary sampling distributions are allowed, and the reconstruction is done through resampling to a uniform grid. The missing pixel problem is given attention in [9, 17]. In [18], approximation of a function of bounded variation with a band-limited function is considered, and the approximation error is derived. Pointwise reconstruction is investigated in [19], and these results will be used in Section 5.

Here, we neither put any constraints on the non-uniform sampling times, nor assumptions on the signal's function class. Instead, we take a more application-oriented approach, and aim at good, implementable resampling procedures. We will consider three different methods for converting from non-uniform to uniform sampling. The first and third algorithms are rather trivial modifications of the time- and frequency-domain methods for uniformly sampled data, respectively, while the second one is a new, truly non-uniform algorithm. We will compare the performance of these three. In all three cases, different kinds of interpolation are possible, but we will focus on zero-order hold (nearest neighbor) and first-order hold (linear interpolation). Of course, which interpolation is best depends on the signal and in particular on its inter-sample behavior. Though we prefer to talk about decimation, we want to point out that the theory holds for any type of filter h(t).

A major contribution in this work is a detailed analysis of the algorithms, where we assume additive random sampling (ARS),

    t_m = t_{m−1} + τ_m,    (4)

where τ_m is stochastic additive sampling noise given by the known probability density function p_τ(t). The theoretical results show that the downsampled signal is unbiased under fairly general conditions, and present an equivalent filter that generates z̃(t) = (h̃ ∗ u)(t), where h̃ depends on the designed filter h and the characteristic function of the stochastic distribution.

The paper is organized as follows. The algorithms are described in Section 2. The convolutional interpolation gives promising results in the simulations in Section 3, and the last sections are dedicated to this algorithm. In Section 4, theoretic analysis of both finite-time and asymptotic performance is done. The section also includes illustrative examples of the theory. Section 5 investigates an application example and issues with choosing the filter h(t), while Section 6 concludes the paper.

2. INTERPOLATION ALGORITHMS

Time-domain interpolation can be used with subsequent filtering. Since LP-filtering is desired, we also propose two other methods that include the filter action directly. The main idea is to perform the interpolation at different levels; the problem was stated in Problem 1.
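For reference, the integer-factor decimation of the uniform case (a) can be sketched as follows. The length-D moving average stands in for a proper antialias filter with normalized cutoff 1/D, and the test signal is an illustrative choice, not the paper's:

```python
import numpy as np

# Case (a): integer-factor decimation of a uniformly sampled signal:
# (i) low-pass filter, (ii) keep every D-th sample.
def decimate_uniform(u, D):
    h = np.ones(D) / D                  # crude low-pass stand-in for the antialias filter
    z = np.convolve(u, h, mode="same")  # filtering step (i)
    return z[::D]                       # decimation step (ii)

Tu = 0.1
t = Tu * np.arange(1000)
u = np.sin(2 * np.pi * 0.2 * t)         # slow sine, below the decimated Nyquist rate
z = decimate_uniform(u, D=5)
```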
Algorithm 1 (Time-domain interpolation). For Problem 1, with T_u = t_M/M, compute

    (1) t_m^j = arg min_{t_m < jT_u} |jT_u − t_m|,
    (2) û(jT_u) = u(t_m^j),
    (3) ẑ(kT) = Σ_{j=1}^{M} h_d(kT − jT_u) û(jT_u),

where h_d(t) is a discrete-time realization of the impulse response h(t).

Algorithm 2 (Convolution interpolation). For Problem 1, compute

    (1) ẑ(kT) = Σ_{m=1}^{M} τ_m h(kT − t_m) u(t_m).

2.1. Interpolation in the time domain

It is well described in the literature how to interpolate a signal or function in, for instance, the following cases.

(i) The signal is band-limited, in which case the sinc interpolation kernel gives a reconstruction with no error [20].
(ii) The signal has vanishing derivatives of order n + 1 and higher, in which case spline interpolation of order n is optimal [21].
(iii) The signal has a bounded second-order derivative, in which case the Epanechnikov kernel is the optimal interpolation kernel [19].

The computational burden in the first case is a limiting factor in applications, and for the other two examples, the interpolation is not exact. We consider a simple spline interpolation, followed by filtering and decimation as in Algorithm 1. This is a slight modification of the known solution in the uniform case that was mentioned in Section 1.

Algorithm 1 is optimal only in the unrealistic case where the underlying signal u(t) is piecewise constant between the samples. The error will depend on the relation between the original and the wanted sampling; the larger the ratio M/N, the smaller the error. If one assumes a band-limited signal, where all energy of the Fourier transform U(f) is restricted to f < 0.5N/t_M, then a perfect reconstruction would be possible, after which any type of filtering and sampling can be performed without error. However, this is not a feasible solution in practice, and the band-limited assumption is not satisfied for real signals when the sensor is affected by additive noise.

Remark 1. Algorithm 1 finds û(jT_u) by zero-order hold interpolation, where of course linear interpolation or higher-order splines could be used. However, simulations not included here showed that this choice does not significantly affect the performance.

2.2. Interpolation in the convolution integral

Filtering of the continuous-time signal, u, yields

    z(kT) = ∫ h(kT − τ) u(τ) dτ,    (5)

and using Riemann integration, we get Algorithm 2. The algorithm will be exact if the integrand, h(kT − τ)u(τ), is constant between the sampling points, t_m, for all kT. As stated before, the error, when this is not the case, decreases when the ratio M/N increases.

This algorithm can be further analyzed using the inverse Fourier transform and the results in [22], which will be done in Section 4.1.

Remark 2. Higher-order interpolations of (5) were studied in [23] without finding any benefits.

When the filter h(t) is causal, the summation is only taken over m such that t_m < kT, and thus Algorithm 2 is ready for online use.

2.3. Interpolation in the frequency domain

LP-filtering is given by a multiplication in the frequency domain, and we can form the approximate Fourier transform (AFT), [22], given by Riemann integration of the Fourier transform, to get Algorithm 3. This is also a known approach in the uniform sampling case, where the DFT is used in each step. The AFT is formed for 2N frequencies to avoid circular convolution; this corresponds to zero-padding for uniform sampling. Then the inverse DFT gives the estimate.

Remark 3. The AFT used in Algorithm 3 is based on Riemann integration of the Fourier transform of u(t), and would be exact whenever u(t)e^{−i2πft} is constant between sampling times, which of course is rarely the case. As for the two previous algorithms, the approximation is less grave for large enough M/N. This paper does not include an investigation of error bounds.

More investigations of the AFT were done in [22].

2.4. Complexity

In applications, implementation complexity is often an issue. We calculate the number of operations, N_op, in terms of additions (a), multiplications (m), and exponentials (e). As stated before, we have M measurements at non-uniform times, and want the signal value at N time points, equally spaced with T.

(i) Step (3) in Algorithm 1 is a linear filter, with one addition and one multiplication in each term,

    N_op^1 = (1m + 1a)MN.    (6)
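Algorithm 2 can be sketched directly from its single step. The sampling distribution and test signal below are illustrative choices; h is the second-order Butterworth impulse response (14) used later in the simulations:

```python
import numpy as np

# Algorithm 2 (convolution interpolation): Riemann approximation of the
# convolution integral (5) evaluated on the non-uniform grid,
#   z_hat(kT) = sum_m tau_m * h(kT - t_m) * u(t_m).
def alg2(t, u, h, T, N):
    tau = np.diff(t, prepend=0.0)              # tau_m = t_m - t_{m-1}
    kT = T * np.arange(1, N + 1)
    # outer difference kT - t_m has shape (N, M); sum over m
    return (h(kT[:, None] - t[None, :]) * (tau * u)[None, :]).sum(axis=1)

T = 4.0

def h_bw(x):
    # causal impulse response of the second-order Butterworth filter (14)
    a = np.pi / (T * np.sqrt(2))
    xp = np.maximum(x, 0.0)                    # avoid overflow for negative arguments
    return np.where(x > 0, np.sqrt(2) * (np.pi / T) * np.exp(-a * xp) * np.sin(a * xp), 0.0)

rng = np.random.default_rng(1)
t = np.cumsum(rng.uniform(0.1, 0.3, size=3000))  # illustrative ARS times
u = np.sin(2 * np.pi * 0.05 * t)
N = int(t[-1] // T)
z_hat = alg2(t, u, h_bw, T, N)
```

Because the filter has unit static gain, feeding a constant signal through the Riemann sum should return approximately that constant, which is a quick sanity check on the weights τ_m.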
Algorithm 3 (Frequency-domain interpolation). For Problem 1, compute

    (1) f_n = n/(2NT), n = 0, ..., 2N − 1,
    (2) Û(f_n) = Σ_{m=1}^{M} τ_m u(t_m) e^{−i2π f_n t_m}, n = 0, ..., N,
    (3) Ẑ(f_n) = H(f_n) Û(f_n) and Ẑ(f_{2N−n}) = Ẑ̄(f_n), n = 0, ..., N,
    (4) ẑ(kT) = (1/(2NT)) Σ_{n=0}^{2N−1} Ẑ(f_n) e^{i2πkT f_n}, k = 0, ..., N − 1.

Here, Z̄ is the complex conjugate of Z.

Computing the convolution in step (3) of Algorithm 1 in the frequency domain would require on the order of M log_2(M) operations.

(ii) Algorithm 2 is similar to Algorithm 1,

    N_op^2 = (2m + 1a)MN,    (7)

where the extra multiplication comes from the factor τ_m.

(iii) Algorithm 3 performs an AFT in step (2), frequency-domain filtering in step (3), and an IDFT in step (4),

    N_op^3 = (2m + 1e + 1a)2M(N + 1) + (1m)(N + 1) + (1e + 1m + 1a)2N².    (8)

Using the IFFT algorithm in step (4) would give N log_2(2N) instead, but the major part is still MN.

All three algorithms are thus of the order MN, though Algorithms 1 and 2 have smaller constants. Studying work on efficient implementation, for example, [9], performance improvements could be made also here, mainly for Algorithms 1 and 2, where the setup is similar. Taking the length of the filter h(t) into account can significantly improve the implementation speed. If the impulse response is short, the number of terms in the sums in Algorithms 1 and 2 will be reduced, as well as the number of extra frequencies needed in Algorithm 3.

3. NUMERIC EVALUATION

We will use the following example to test the performance of these algorithms. The signal consists of three frequencies that are drawn randomly for each test run.

Example 1. A signal with three frequencies, f_j, drawn from a rectangular distribution, Re, is simulated:

    s(t) = sin(2π f_1 t − 1) + sin(2π f_2 t − 1) + sin(2π f_3 t),    (9)
    f_j ∈ Re(0.01, 1/(2T)), j = 1, 2, 3.    (10)

The desired uniform sampling is given by the intersampling time T = 4 seconds. The non-uniform sampling is defined by

    t_m = t_{m−1} + τ_m,    (11)
    τ_m ∈ Re(t_l, t_h),    (12)

and the limits t_l and t_h are varied. In the simulation, N is set to 64, and the number of non-uniform samples is chosen so that t_M > NT is assured. This is not in exact correspondence with the problem formulation, but assures that the results for different τ_m-distributions are comparable.

The samples are corrupted by additive measurement noise,

    u(t_m) = s(t_m) + e(t_m),    (13)

where e(t_m) ∈ N(0, σ²), σ² = 0.1. The filter is a second-order LP-filter of Butterworth type with cutoff frequency 1/(2T), that is,

    h(t) = √2 (π/T) e^{−(π/(T√2)) t} sin((π/(T√2)) t),  t > 0,    (14)

    H(s) = (π/T)² / (s² + √2 (π/T) s + (π/T)²).    (15)

This setup is used for 500 different realizations of f_j, τ_m, and e(t_m).

We will test four different rectangular distributions (12):

    τ_m ∈ Re(0.1, 0.3), μ_T = 0.2, σ_T = 0.06,    (16a)
    τ_m ∈ Re(0.3, 0.5), μ_T = 0.4, σ_T = 0.06,    (16b)
    τ_m ∈ Re(0.4, 0.6), μ_T = 0.5, σ_T = 0.06,    (16c)
    τ_m ∈ Re(0.2, 0.6), μ_T = 0.4, σ_T = 0.12,    (16d)

and the mean values, μ_T, and standard deviations, σ_T, are shown for reference. For every run, we use the algorithms presented in the previous section and compare their results to the exact, continuous-time result,

    z(kT) = ∫ h(kT − τ) s(τ) dτ.    (17)

We calculate the root mean square error, RMSE,

    λ = √( (1/N) Σ_k (ẑ(kT) − z(kT))² ).    (18)

The algorithms are ordered according to lowest RMSE, (18), and Table 1 presents the result. The number of first, second, and third positions for each algorithm during the 500 runs is also presented. Figure 1 presents one example of the result, though the algorithms are hard to separate by visual inspection. A number of conclusions can be drawn from the previous example.
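The steps of Algorithm 3 can be sketched as follows. The ideal low-pass response used for H(f) and the signal and distribution parameters are illustrative assumptions, not the paper's Butterworth filter:

```python
import numpy as np

# Algorithm 3 (frequency-domain interpolation): AFT at 2N frequencies,
# multiplication by the filter response, then an inverse DFT.
def alg3(t, u, H, T, N):
    tau = np.diff(t, prepend=0.0)
    fn = np.arange(2 * N) / (2 * N * T)                   # f_n = n/(2NT), step (1)
    U = np.array([np.sum(tau * u * np.exp(-2j * np.pi * f * t))
                  for f in fn[: N + 1]])                  # AFT of u, step (2)
    Z = np.zeros(2 * N, dtype=complex)
    Z[: N + 1] = H(fn[: N + 1]) * U                       # filtering, step (3)
    Z[N + 1:] = np.conj(Z[1:N][::-1])                     # conjugate symmetry
    z = np.array([np.sum(Z * np.exp(2j * np.pi * k * T * fn))
                  for k in range(N)])                     # inverse DFT, step (4)
    return z.real / (2 * N * T)

T, N = 4.0, 64
rng = np.random.default_rng(2)
t = np.cumsum(rng.uniform(0.1, 0.3, size=1350))           # t_M slightly above NT
u = np.sin(2 * np.pi * 0.05 * t)

H_ideal = lambda f: (np.abs(f) <= 1 / (2 * T)).astype(float)
z_hat = alg3(t, u, H_ideal, T, N)
```

The loop form is the plain O(MN) computation discussed in Section 2.4; an FFT would speed up step (4) but not the AFT itself.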
Table 1: RMSE values, λ in (18), for estimation of z(kT) in Example 1. The number of runs where the respective algorithm finished 1st, 2nd, and 3rd is also shown.

                        E[λ]    Std(λ)   1st   2nd   3rd
Setup in (16a)  Alg. 1  0.281   0.012     98   258   144
                Alg. 2  0.278   0.012    254   195    51
                Alg. 3  0.311   0.061    148    47   305
Setup in (16b)  Alg. 1  0.338   0.017      9   134   357
                Alg. 2  0.325   0.015    175   277    48
                Alg. 3  0.330   0.038    316    89    95
Setup in (16c)  Alg. 1  0.360   0.018      6    82   412
                Alg. 2  0.342   0.015    144   329    27
                Alg. 3  0.341   0.032    350    89    61
Setup in (16d)  Alg. 1  0.337   0.015     59   133   308
                Alg. 2  0.331   0.015    117   285    98
                Alg. 3  0.329   0.031    324    82    94

Figure 1: The result for the three algorithms, in Example 1, and a certain realization of (16c). The dots represent u(t_m), and z(kT) is shown as a line, while the estimates ẑ(kT) are marked with a ∗ (Alg. 1), ◦ (Alg. 2), and + (Alg. 3), respectively.

(i) Comparing a given algorithm for different non-uniform sampling time pdfs, Table 1 shows that p_τ(t), in (16a)–(16d), has a clear effect on the performance.

(ii) Comparing the algorithms for a given sampling time distribution shows that the lowest mean RMSE is no guarantee of best performance in all runs. Algorithm 2 has the lowest E[λ] for setup (16a), but still performs worst in 10% of the cases, and for (16d), Algorithm 3 is number 3 in 20% of the runs, while it has the lowest mean RMSE.

(iii) Usually, Algorithm 3 has the lowest RMSE (1st position), but the spread is more than twice as large (standard deviation of λ) compared to the other two algorithms.

(iv) Algorithms 1 and 2 have similar RMSE statistics, though, of the two, Algorithm 2 performs slightly better in the mean, in all four tested cases.

In this test, we find that Algorithm 3 is most often number one, but Algorithm 2 is almost as good and more stable in its performance. It seems that the realization of the frequencies, f_j, is not as crucial for the performance of Algorithm 2. As stated before, the performance also depends on the downsampling factor for all the algorithms.

The algorithms are comparable in performance and complexity. In the following, we focus on Algorithm 2, because of its nice analytical properties, its online compatibility, and, of course, its slightly better performance results.

4. THEORETIC ANALYSIS

Given the results for Algorithm 2 in the previous section, we will continue with a theoretic discussion of its behavior. We consider both finite-time and asymptotic results. A small note is made on similar results for Algorithm 3.

4.1. Analysis of Algorithm 2

Here, we study the a priori stochastic properties of the estimate, ẑ(kT), given by Algorithm 2. For the analytical calculations, we assume that the convolution is symmetric, and get

    ẑ(kT) = Σ_{m=1}^{M} τ_m h(t_m) u(kT − t_m)
          = Σ_{m=1}^{M} τ_m ∫ H(η) e^{i2πη t_m} dη ∫ U(ψ) e^{i2πψ(kT − t_m)} dψ
          = ∫∫ H(η) U(ψ) e^{i2πψkT} Σ_{m=1}^{M} τ_m e^{−i2π(ψ−η) t_m} dψ dη
          = ∫∫ H(η) U(ψ) e^{i2πψkT} W(ψ − η; t_1^M) dψ dη    (19)

with

    W(f; t_1^M) = Σ_{m=1}^{M} τ_m e^{−i2π f t_m}.    (20)

Let

    φ_τ(f) = E[e^{−i2πfτ}] = ∫ e^{−i2πfτ} p_τ(τ) dτ = F{p_τ(t)}    (21)

denote the characteristic function of the sampling noise τ. Here, F is the Fourier transform operator. Then, [22, Theorem 2] gives

    E[W(f)] = −(1/(2πi)) (dφ_τ(f)/df) · (1 − φ_τ(f)^M)/(1 − φ_τ(f)),    (22)
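The closed form (22) can be checked by Monte Carlo simulation against the definition (20). The distribution Re(0.1, 0.3), the frequency f, and the run count below are arbitrary illustrative choices:

```python
import numpy as np

# Monte Carlo check of eq. (22) for E[W(f)] under additive random sampling
# with tau_m ~ Re(a, b).
a, b, M, f0 = 0.1, 0.3, 50, 0.3

def phi(f):
    # characteristic function of Re(a, b): E[exp(-i 2 pi f tau)], eq. (21)
    return (np.exp(-2j * np.pi * f * a) - np.exp(-2j * np.pi * f * b)) / (
        2j * np.pi * f * (b - a))

eps = 1e-6
dphi = (phi(f0 + eps) - phi(f0 - eps)) / (2 * eps)        # numerical d(phi)/df
E_W_theory = -dphi / (2j * np.pi) * (1 - phi(f0) ** M) / (1 - phi(f0))

rng = np.random.default_rng(3)
R = 40000
tau = rng.uniform(a, b, size=(R, M))
t = np.cumsum(tau, axis=1)                                # R realizations of t_1, ..., t_M
W = np.sum(tau * np.exp(-2j * np.pi * f0 * t), axis=1)    # eq. (20), per realization
E_W_mc = W.mean()
```

The agreement rests on t_m being a sum of i.i.d. τ_k, so that E[τ_m e^{−i2πf t_m}] factors into −φ_τ'(f)/(2πi) times φ_τ(f)^{m−1}, and the geometric sum over m gives the ratio in (22).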
where also an expression for the covariance, Cov(W(f)), is given. The expressions are given by straightforward calculations using the fact that the sampling noise sequences τ_m are independent stochastic variables and t_m = Σ_{k=1}^{m} τ_k in (20).

These known properties of W(f) make it possible to find E[ẑ(kT)] and Var(ẑ(kT)) for any given characteristic function, φ_τ(f), of the sampling noise, τ_k. The following lemma will be useful.

Lemma 1 (see [22, Lemma 1]). Assume that the continuous-time function h(t) with FT H(f) fulfills the following conditions.

(1) h(t) and H(f) belong to the Schwartz class, S (h ∈ S ⇔ t^k h^{(l)}(t) is bounded, that is, h^{(l)}(t) = O(|t|^{−k}), for all k, l ≥ 0).
(2) The sum g_M(t) = Σ_{m=1}^{M} p_m(t) obeys

    lim_{M→∞} ∫ g_M(t) h(t) dt = (1/μ_T) ∫ h(t) dt = (1/μ_T) H(0),    (23)

for this h(t).
(3) The initial value is zero, h(0) = 0.

Then, it holds that

    lim_{M→∞} ∫ (1 − φ_τ(f)^M)/(1 − φ_τ(f)) H(f) df = (1/μ_T) H(0).    (24)

Proof. The proof is conducted using distributions from functional analysis, and we refer to [22] for details.

Let us study the conditions on h(t) and H(f) given in Lemma 1 a bit more. The restrictions from the Schwartz class could affect the usability of the lemma. However, all smooth functions with compact support (and their Fourier transforms) are in S, which should suffice for most cases. It is not intuitively clear how hard (23) is. Note that, for any ARS case with a continuous sampling noise distribution, p_m(t) is approximately Gaussian for higher m, and we can confirm that, for a large enough fixed t,

    g_M(t) = Σ_{m=1}^{M} (1/(σ_T √(2πm))) e^{−(t − mμ_T)²/(2mσ_T²)} → 1/μ_T,  M → ∞,    (25)

with μ_T and σ_T being the mean and the standard deviation of the sampling noise τ, respectively. The integral in (23) can then serve as some kind of mean value approximation, and the edges of g_M(t) will not be crucial. Also, condition (3) further restricts the behavior of h(t) for small t, which will make condition (2) easier to fulfill.

Theorem 1. The estimate given by Algorithm 2 can be written as

    ẑ(kT) = (h̃ ∗ u)(kT),    (26a)

where h̃(t) is given by

    h̃(t) = F^{−1}{(H ∗ W)(f)}(t),    (26b)

with W(f) as in (20). Furthermore, if the filter h(t) and signal u(t) belong to the Schwartz class, S [24], then

    E[ẑ(kT)] → z(kT), M → ∞, if Σ_{m=1}^{M} p_m(t) → 1/μ_T,    (26c)
    E[ẑ(kT)] = z(kT), ∀M, if Σ_{m=1}^{M} p_m(t) = 1/μ_T,    (26d)

with μ_T = E[τ_m], and p_m(t) the pdf of the time t_m.

Proof. First of all, (5) gives

    z(kT) = ∫ H(ψ) U(ψ) e^{i2πψkT} dψ,    (27a)

and from (19), we get

    ẑ(kT) = ∫ U(ψ) e^{i2πψkT} ∫ H(η) W(ψ − η) dη dψ,    (27b)

which implies that we can identify H̃(f) = ∫ H(η) W(f − η) dη as the filter operation on the continuous-time signal u(t), and (26a) follows.

From Lemma 1 and (22), we get

    ∫ E[W(f)] y(f) df = ∫ E[τ e^{−i2πfτ}] (1 − φ_τ(f)^M)/(1 − φ_τ(f)) y(f) df → y(0)    (28)

for any function y(f) fulfilling the properties of Lemma 1. This gives

    E[ẑ(kT)] = ∫∫ H(η) U(ψ) e^{i2πψkT} E[W(ψ − η)] dψ dη → ∫ H(ψ) U(ψ) e^{i2πψkT} dψ = z(kT),    (29)

when H(f) and U(f) behave as requested, and (26c) follows. Using the same technique as in the proof of Lemma 1, (26d) also follows.

From the investigations in [22], it is clear that H̃(f), in (27b), is the AFT of the sequence h(t_m) (cf. the AFT of u(t_m) in step (2) of Algorithm 3).

Requiring that both h(t) and u(t) be in the Schwartz class is not, as indicated before, a major restriction. Though, some thought needs to be given to each specific case before applying the theorem.

Algorithm 3 can be investigated analogously.

Theorem 2. The estimate given by Algorithm 3 can be written as

    ẑ(kT) = (h̃ ∗ u)(kT),    (30a)
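The unbiasedness condition Σ_m p_m(t) → 1/μ_T in (26c) can be illustrated numerically by estimating g_M(t) from simulated sampling times; distribution (16a) and the grid below are illustrative choices:

```python
import numpy as np

# Numeric illustration of (25)/(26c): g_M(t) = sum_m p_m(t) flattens out
# at 1/mu_T away from the edges. The pdfs p_m are estimated by
# histogramming many realizations of the sampling times t_m.
rng = np.random.default_rng(4)
a, b, M, R = 0.1, 0.3, 200, 4000
mu_T = (a + b) / 2                                       # mu_T = 0.2 for Re(0.1, 0.3)

t = np.cumsum(rng.uniform(a, b, size=(R, M)), axis=1)    # R realizations of t_1..t_M
edges = np.linspace(0.0, M * mu_T, 401)
counts, _ = np.histogram(t, bins=edges)                  # histogram over all t_m
g_M = counts / (R * np.diff(edges))                      # estimate of sum_m p_m(t)

interior = g_M[100:300]                                  # away from t = 0 and t = t_M
```

Away from the edges, the estimated g_M hovers around 1/μ_T = 5, in line with the Gaussian-mixture argument in (25).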
where h̃(t) is given by the inverse Fourier transform of

    H̃(f) = (1/(2NT)) [ Σ_{n=0}^{N−1} H(f_n) e^{−i2π(f − f_n)kT} W(f_n − f) + Σ_{n=N}^{2N−1} H(−f_n) e^{−i2π(f − f_n)kT} W(−f_n − f) ],    (30b)

and W(f) was given in (20).

Proof. First, observe that real signals u(t) and h(t) give U(f) = Ū(−f) and H(f) = H̄(−f), respectively; the rest is completely analogous to the proof of Theorem 1, with one of the integrals replaced by the corresponding sum.

Numerically, it is possible to confirm that the requirements on p_m(t), in (26c), are true for additive random sampling, since p_m(t) then converges to a Gaussian distribution with μ = mμ_T and σ² = mσ_T². A smooth filter with compact support is noncausal, but has a finite impulse response (see, e.g., the optimal filter discussed in Section 5). A noncausal filter is desired in theory, but often not possible in practice. The performance of ẑ_BW(kT) compared to the optimal noncausal filter in Table 2 is thus encouraging.

4.2. Illustration of Theorem 1

Theorem 1 shows that the originally designed filter H(f) is effectively replaced by another linear filter H̃(f) when using Algorithm 2. Since H̃(f) only depends on H(f) and the realization of the sampling times t_m, we here study H̃(f) to exclude the effects of the signal, u(t), on the estimate.

First, we illustrate (26b) by showing the influence of the sampling times, or, the distribution of τ_m, on E[H̃]. We use the four different sampling noise distributions in (16a)–(16d), with the original filter h(t) from (14) and T = 4 seconds. Figure 2 shows the different filters H̃(f) when the sampling noise distribution is changed. We conclude that both the mean and the variance affect |E[H̃(f)]|, and that it seems possible to mitigate the static gain offset from H(f) by multiplying ẑ(kT) with a constant depending on the filter h(t) and the sampling time distribution p_τ(t).

Figure 2: The true H(f) (thick line) compared to |E[H̃(f)]| given by (14) and Theorem 1, for the different sampling noise distributions in (16a)–(16d), and M = 250.

Second, we show that E[H̃] → H when M increases, for a smooth filter with compact support, (26c). Here, we use

    h(t) = (1/(4d_f)) cos²( (π/2) (t − d_f T)/(d_f T) ),  |t − d_f T| < d_f T,    (31)

where d_f is the width of the filter. The sampling noise distribution is given by (16b). Figure 3 shows an example of the sampled filter h(t_m).

Figure 3: An example of the filter h(t_m) given by (31), (16b), and T = 4.

To produce a smooth causal filter, the time shift d_f T is used. This in turn introduces a delay of d_f T in the resampling procedure. We choose the scale factor d_f = 8 for a better view of the convergence (higher d_f gives slower convergence). The width of the filter covers approximately 2d_f T/μ_T = 160 non-uniform samples, and more than 140 of them are needed for a close fit also at higher frequencies. The convergence of the magnitude of the filter is clearly shown in Figure 4.
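The compact-support filter (31) can be sketched and checked numerically. With T = 4 and d_f = 8 as in the illustration, its integral (static gain) works out to T/4 = 1, and h(0) = 0 as condition (3) of Lemma 1 requires:

```python
import numpy as np

# The smooth compact-support filter (31): a cos^2 bump shifted by d_f*T
# to make it causal.
def h_cos2(t, T=4.0, d_f=8):
    x = t - d_f * T
    inside = np.abs(x) < d_f * T
    return np.where(inside,
                    np.cos(np.pi * x / (2 * d_f * T)) ** 2 / (4 * d_f),
                    0.0)

T, d_f, mu_T = 4.0, 8, 0.4                    # distribution (16b) has mu_T = 0.4
tt = np.linspace(0.0, 2 * d_f * T, 200001)
gain = np.trapz(h_cos2(tt, T, d_f), tt)       # numeric static gain, = T/4 here
n_cover = 2 * d_f * T / mu_T                  # non-uniform samples under the support
```

The count n_cover reproduces the roughly 160 samples covered by the filter support that the text mentions.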
Figure 4: The true filter, H(f) (thick line), compared to |E[H̃(f)]| for increasing values of M (M = 100, 120, 140, 150, 160), when h(t) is given by (31).

5. APPLICATION EXAMPLE

As a motivating example, consider the ubiquitous wheel speed signals in vehicles that are instrumental for all driver assistance systems and other driver information systems. The wheel speed sensor considered here gives L = 48 pulses per revolution, and each pulse interval can be converted to a wheel speed. With a wheel radius of 1/π, one gets 24ν pulses per second. For instance, driving at ν = 25 m/s (90 km/h) gives an average sampling rate of 600 Hz. This illustrates the need for speed-adaptive downsampling.

Example 2. Data from a wheel speed sensor, like the one discussed above, have been collected. An estimate of the angular speed, ω(t),

    ω(t_m) = 2π / (L(t_m − t_{m−1})),    (32)

can be computed at every sample time. The average intersampling time, t_M/M, is 2.3 milliseconds for the whole sampling set. The set is shown in Figure 5. We also indicate the time interval where the following calculations are performed. A sampling time T = 0.1 second gives a signal suitable for driver assistance systems.

Figure 5: The data from a wheel speed sensor of a car. The data in the gray area is chosen for further evaluation. It includes more than 600 000 measurements.

We begin with a discussion on finding the underlying signal in an offline setting, and then continue with a test of different online estimates of the wheel speed.

5.1. The optimal nonparametric estimate

For the data set in Example 2, there is no true reference signal, but in an offline test like this, we can use computationally expensive methods to compute the best estimate. For this application, we can assume that the measurements are given by

    u(t_m) = s(t_m) + e(t_m)    (33)

with

(i) independent measurement noise, e(t_m), with variance σ², and
(ii) bounded second derivative of the underlying noise-free function s(t), that is,

    |s^{(2)}(t)| < C,    (34)

which in the car application means limited acceleration changes.

Under these conditions, the work in [19] helps with optimally estimating z(kT). When estimating a function value z(kT) from a sequence u(t_m) at times t_m, a local weighted linear approximation is investigated. The underlying function is approximated locally with a linear function

    m(t) = θ_1 + (t − kT)θ_2,    (35)

and m̂(kT) = θ̂_1 is then found from the minimization

    θ̂ = arg min_θ Σ_{m=1}^{M} (u(t_m) − m(t_m))² K_B(t_m − kT),    (36)

where K_B(t) is a kernel with bandwidth B, that is, K_B(t) = 0 for |t| > B. The Epanechnikov kernel,

    K_B(t) = (1 − (t/B)²)_+,    (37)

is the optimal choice for interior points, t_1 + B < kT < t_M − B, both in minimizing the MSE and the error variance. Here, the subscript + means taking the positive part. This corresponds to a noncausal filter for Algorithm 2.

This gives the optimal estimate ẑ_opt(kT) = θ̂_1, using the noncausal filter given by (35)–(37) with B = B_opt from [19],

    B_opt = ( 15σ² / (C² MT/μ_T) )^{1/5}.    (38)

In order to find B_opt, the values of σ², C, and μ_T were roughly estimated from data in each interval [kT − T/2, kT + T/2], and a mean value of the resulting bandwidths was used for B_opt. The result from the optimal filter, ẑ_opt(kT), is shown compared to the original data in Figure 6, and it follows a smooth line nicely.

For end points, that is, a causal filter, the error variance is still minimized by said kernel, (37), restricted to t ∈ [−B, 0],
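The local linear estimate (35)-(36) with the Epanechnikov kernel (37) can be sketched as a weighted least-squares line fit. The test signal below is an illustrative straight line plus noise, not the wheel-speed data; for a linear s(t), the fit should recover s(kT) closely:

```python
import numpy as np

# Local linear fit (35)-(36) with the Epanechnikov kernel (37):
# z_hat(kT) = theta_1 of the weighted least-squares line around kT.
def local_linear(t, u, kT, B):
    w = np.clip(1.0 - ((t - kT) / B) ** 2, 0.0, None)         # K_B(t - kT), eq. (37)
    keep = w > 0                                              # kernel support only
    X = np.column_stack([np.ones(keep.sum()), t[keep] - kT])  # m(t) = th1 + (t - kT) th2
    sw = np.sqrt(w[keep])
    theta, *_ = np.linalg.lstsq(X * sw[:, None], u[keep] * sw, rcond=None)
    return theta[0]                                           # estimate of z(kT)

rng = np.random.default_rng(5)
t = np.cumsum(rng.uniform(0.001, 0.004, size=4000))           # ~2.5 ms mean spacing
s = 2.0 + 0.5 * t                                             # noise-free trend
u = s + rng.normal(0.0, 0.05, size=t.shape)

z_hat = local_linear(t, u, kT=5.0, B=1.0)                     # interior point estimate
```

Multiplying both the regressors and the observations by the square root of the kernel weights turns the weighted criterion (36) into an ordinary least-squares problem.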
[Figure: angular speed ω (rad/s) versus time t (s)]
Figure 6: The cloud of data points, u(tm) (black), from Example 2, and the optimal estimates, zopt(kT) (gray). Only part of the shaded interval in Figure 5 is shown.

[Figure: angular speed ω (rad/s) versus time t (s)]
Figure 7: A comparison of three different estimates for the data in Example 2: optimal zopt(kT) (thick line), causal "optimal" zE(kT) (thin line), and causal Butterworth zBW(kT) (gray line). Only a part of the shaded interval in Figure 5 is shown.

Table 2: RMSE from the optimal estimate, E[|zopt(kT) − z∗(kT)|²], in Example 2.

  Causal "optimal" zE(kT):  0.0793
  Butterworth zBW(kT):      0.0542
  Local mean zm(kT):        0.0964
  Nearest neighbor zn(kT):  0.3875

Fan and Gijbels still recommend always using the Epanechnikov kernel, because of both performance and implementation issues. [19] does not include a result for the optimal bandwidth in this case. In our test we need a causal filter and then choose B = 2Bopt in order to include the same number of points as in the noncausal estimate.

5.2. Online estimation

The investigation in the previous section gives a reference value to compare the online estimates to. Now, we test four different estimates:

(i) zE(kT): the causal filter given by (35), (36), the kernel (37) for −B < t ≤ 0, and B = 2Bopt;
(ii) zBW(kT): a causal Butterworth filter, h(t), in Algorithm 2; the Butterworth filter is of order 2 with cutoff frequency 1/(2T) = 5 Hz, as defined in (14);
(iii) zm(kT): the mean of u(tm) for tm ∈ [kT − T/2, kT + T/2];
(iv) zn(kT): a nearest neighbor estimate;

and compare them to the optimal zopt(kT). The last two estimates are included in order to show whether the more clever estimates give significant improvements. Figure 7 shows the first two estimates, zE(kT) and zBW(kT), compared to the optimal zopt(kT).

Table 2 shows the root mean square errors compared to the optimal estimate, zopt(kT), calculated over the interval indicated in Figure 5. From this, it is clear that the causal "optimal" filter, giving zE(kT), needs tuning of the bandwidth, B, since the literature gave no result for the optimal choice of B in this case. Both the filtered estimates, zE(kT) and zBW(kT), are significantly better than the simple mean, zm(kT). The Butterworth filter performs very well, and is also much less computationally complex than using the Epanechnikov kernel. It is promising that the estimate from Algorithm 2, zBW(kT), is close to zopt(kT), and it encourages future investigations.

6. CONCLUSIONS

This work investigated three different algorithms for downsampling non-uniformly sampled signals, each using interpolation on a different level. Two algorithms are based on existing techniques for uniform sampling, with interpolation in the time and frequency domains, while the third alternative is truly non-uniform, where interpolation is made in the convolution integral. The results in the paper indicate that this third alternative is preferable in more than one way.

Numerical experiments presented the root mean square error, RMSE, for the three algorithms; convolution interpolation has the lowest mean RMSE together with frequency-domain interpolation. It also has the lowest standard deviation of the RMSE together with time-domain interpolation.

Theoretical analysis showed that the algorithm gives asymptotically unbiased estimates for noncausal filters. It was also possible to show how the actual filter implemented by the algorithm is given by a convolution in the frequency domain of the original filter with a window depending only on the sampling times.

In a final example with empirical data, the algorithm gave significant improvement compared to the simple local mean estimate and was close to the optimal nonparametric estimate that was computed beforehand.
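As a concluding illustration of the convolution-interpolation idea favored here, the sketch below approximates the continuous-time convolution z(kT) = ∫ h(kT − τ) u(τ) dτ directly by a Riemann sum over the non-uniform sample times, using the closed-form impulse response of a causal second-order Butterworth low-pass filter. This is only a minimal sketch in the spirit of Algorithm 2, not the paper's exact implementation: equation (14) is not reproduced in this excerpt, and the function names and the left-difference quadrature weights are our own choices.

```python
import numpy as np

def butter2_impulse(t, wc):
    """Impulse response of a causal 2nd-order Butterworth low-pass with
    cutoff wc rad/s and unit DC gain; zero for t < 0."""
    t = np.asarray(t, dtype=float)
    h = np.sqrt(2.0) * wc * np.exp(-wc * t / np.sqrt(2.0)) \
        * np.sin(wc * t / np.sqrt(2.0))
    return np.where(t >= 0.0, h, 0.0)

def downsample_conv(t_m, u_m, t_out, wc):
    """Approximate z(t_k) = int h(t_k - tau) u(tau) dtau by a Riemann sum
    over the non-uniform samples (t_m, u_m), with weights t_m - t_{m-1};
    no intermediate uniform resampling is needed."""
    t_m = np.asarray(t_m, dtype=float)
    u_m = np.asarray(u_m, dtype=float)
    dt = np.diff(t_m, prepend=t_m[0])  # first weight is zero
    return np.array([np.sum(butter2_impulse(tk - t_m, wc) * u_m * dt)
                     for tk in np.atleast_1d(t_out)])
```

Since the filter has unit DC gain, feeding in a constant signal should return approximately that constant at any output time preceded by enough sample history.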
Thus, the results are encouraging for further investigations, such as approximation error analysis and a search for optimality conditions.

ACKNOWLEDGMENTS

The authors wish to thank NIRA Dynamics AB for providing the wheel speed data, and Jacob Roll for interesting discussions on optimal filtering. Part of this work was presented at EUSIPCO07.

REFERENCES

[1] A. Aldroubi and K. Gröchenig, "Nonuniform sampling and reconstruction in shift-invariant spaces," SIAM Review, vol. 43, no. 4, pp. 585–620, 2001.
[2] S. K. Mitra, Digital Signal Processing: A Computer-Based Approach, McGraw-Hill, New York, NY, USA, 1998.
[3] T. A. Ramstad, "Digital methods for conversion between arbitrary sampling frequencies," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 32, no. 3, pp. 577–591, 1984.
[4] A. I. Russell and P. E. Beckmann, "Efficient arbitrary sampling rate conversion with recursive calculation of coefficients," IEEE Transactions on Signal Processing, vol. 50, no. 4, pp. 854–865, 2002.
[5] A. I. Russell, "Efficient rational sampling rate alteration using IIR filters," IEEE Signal Processing Letters, vol. 7, no. 1, pp. 6–7, 2000.
[6] T. Saramäki and T. Ritoniemi, "An efficient approach for conversion between arbitrary sampling frequencies," in Proceedings of the IEEE International Symposium on Circuits and Systems (ISCAS '96), vol. 2, pp. 285–288, Atlanta, Ga, USA, May 1996.
[7] F. J. Beutler, "Error-free recovery of signals from irregularly spaced samples," SIAM Review, vol. 8, no. 3, pp. 328–335, 1966.
[8] F. Marvasti, M. Analoui, and M. Gamshadzahi, "Recovery of signals from nonuniform samples using iterative methods," IEEE Transactions on Signal Processing, vol. 39, no. 4, pp. 872–878, 1991.
[9] A. I. Russell, "Regular and irregular signal resampling," Ph.D. dissertation, Massachusetts Institute of Technology, Cambridge, Mass, USA, 2002.
[10] F. Marvasti, Ed., Nonuniform Sampling: Theory and Practice, Kluwer Academic Publishers, Boston, Mass, USA, 2001.
[11] P. J. Ferreira, "Iterative and noniterative recovery of missing samples for 1-D band-limited signals," in Sampling: Theory and Practice, F. Marvasti, Ed., chapter 5, pp. 235–282, Kluwer Academic Publishers, Boston, Mass, USA, 2001.
[12] B. Lacaze, "Reconstruction of stationary processes sampled at random times," in Nonuniform Sampling: Theory and Practice, F. Marvasti, Ed., chapter 8, pp. 361–390, Kluwer Academic Publishers, Boston, Mass, USA, 2001.
[13] H. G. Feichtinger and K. Gröchenig, "Theory and practice of irregular sampling," in Wavelets: Mathematics and Applications, J. J. Benedetto and M. W. Frazier, Eds., pp. 305–363, CRC Press, Boca Raton, Fla, USA, 1994.
[14] Y. C. Eldar, "Sampling with arbitrary sampling and reconstruction spaces and oblique dual frame vectors," Journal of Fourier Analysis and Applications, vol. 9, no. 1, pp. 77–96, 2003.
[15] K. Yao and J. B. Thomas, "On some stability and interpolatory properties of nonuniform sampling expansions," IEEE Transactions on Circuits and Systems, vol. 14, no. 4, pp. 404–408, 1967.
[16] M. Bourgeois, F. Wajer, D. van Ormondt, and F. Graveron-Demilly, "Reconstruction of MRI images from non-uniform sampling and its application to intrascan motion correction in functional MRI," in Modern Sampling Theory, J. J. Benedetto and P. J. Ferreira, Eds., chapter 16, pp. 343–363, Birkhäuser, Boston, Mass, USA, 2001.
[17] S. R. Dey, A. I. Russell, and A. V. Oppenheim, "Precompensation for anticipated erasures in LTI interpolation systems," IEEE Transactions on Signal Processing, vol. 54, no. 1, pp. 325–335, 2006.
[18] P. J. Ferreira, "Nonuniform sampling of nonbandlimited signals," IEEE Signal Processing Letters, vol. 2, no. 5, pp. 89–91, 1995.
[19] J. Fan and I. Gijbels, Local Polynomial Modelling and Its Applications, Chapman & Hall, London, UK, 1996.
[20] A. Papoulis, Signal Analysis, McGraw-Hill, New York, NY, USA, 1977.
[21] M. Unser, "Splines: a perfect fit for signal and image processing," IEEE Signal Processing Magazine, vol. 16, no. 6, pp. 22–38, 1999.
[22] F. Eng, F. Gustafsson, and F. Gunnarsson, "Frequency domain analysis of signals with stochastic sampling times," to appear in IEEE Transactions on Signal Processing.
[23] F. Gunnarsson and F. Gustafsson, "Frequency analysis using non-uniform sampling with application to active queue management," in Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '04), vol. 2, pp. 581–584, Montreal, Canada, May 2004.
[24] C. Gasquet and P. Witkomski, Fourier Analysis and Applications, Springer, New York, NY, USA, 1999.