A Variable Step Size for Acoustic Echo Cancellation Using a Normalized Subband Adaptive Filter

Numerous variable step size normalized least mean square (VSS-NLMS) algorithms have been derived to address the conflicting goals of fast convergence rate and low mean square error. Here we examine ways to control the step size. A normalized subband adaptive filter (NSAF) algorithm can use either a fixed or a variable step size; a fixed step size must be chosen as a trade-off between the steady-state error and the convergence rate. A variable step size for the NSAF is derived by minimizing the mean-square deviation between the optimal weight vector and the estimated weight vector at each instant of time, and is expressed in terms of the error variance. We then verify whether the different algorithms are capable of tracking in stationary and non-stationary environments. The results show good tracking ability and low misalignment of the algorithm in system identification. Key parameters considered are tracking, steady-state error, misalignment, environment, and step size.


I. INTRODUCTION
Adaptive filters have been used in many applications such as system identification, channel equalization, channel estimation, echo cancellation, and active noise control [6], [15]. Linear and nonlinear filtering techniques have received increasing attention in the recent adaptive signal processing literature [6], and numerous researchers have contributed to developments in these fields [6]. The least mean square (LMS) and normalized least mean square (NLMS) algorithms are the most widely used adaptive filter algorithms because they are easy to implement and have low computational complexity [15]. The stability of these algorithms is governed by a step size parameter. It is well known that the choice of this parameter, within the stability conditions, reflects a tradeoff between fast convergence and good tracking ability on the one hand and low misadjustment on the other. Although the NLMS algorithm has achieved a certain degree of success, it converges slowly with colored input signals [15]. To solve this problem of the NLMS, a normalized subband adaptive filter (NSAF) algorithm was introduced [15]. The NSAF algorithm divides the input signal into multiple subbands in order to whiten the input signal in each subband. As with the LMS-type algorithms, the performance of the NSAF algorithm depends on the step size [15]. A fixed step size reflects a trade-off between a high convergence rate and low misalignment [6]. To overcome this drawback, several variable step size NSAF algorithms have been proposed [9], [12]. The step size of the variable step size matrix NSAF (VSSM-NSAF) is designed under the assumption that the power of the error signal is equal to the measurement noise variance [9]. The VSSM-NSAF algorithm is capable of tracking in non-stationary environments; however, it has high misalignment in the steady state [15]. The step size of an NSAF with a variable step size (NSAF-VSS) is derived by minimizing the mean-square deviation (MSD) [12].
The NSAF-VSS algorithm exhibits good performance in terms of misalignment. However, because this algorithm is designed for a stationary system, it cannot track changes in the coefficient vector [15].
The rest of the paper is organized as follows. Section II provides an overview of the relevant parameters, Section III discusses the specific application, Section IV covers different filters, and Section V covers different adaptive algorithms, followed by the results and conclusion.

II. PARAMETERS
The convergence rate, tracking, misadjustment, mean square error, and tap length are the parameters we discuss in this paper. The number of tap coefficients of a linear filter is an important parameter that affects the performance of a minimum mean square error (MMSE) adaptive filter [6]. The MMSE is a monotonic non-increasing function of the tap length, but the decrease in MMSE due to a tap-length increase is not always significant, and a longer filter not only increases complexity but also introduces adaptation noise. We therefore use an optimum tap length that balances steady-state performance and complexity. In a multiuser echo cancellation system, the length of the echo may keep changing as users enter or leave the system, yet in most designs the tap length is fixed [6].

III. ACOUSTIC ECHO CANCELLATION
Noise is a major problem for all telecommunication systems in which a non-negligible coupling exists between the loudspeaker and the microphone, and it makes communication difficult. Early systems relied on disciplined speakers, using half-duplex (one-way) approaches, and generated echoes. To cancel these echoes we identify the impulse response due to the coupling [9]; acoustic echo cancellers are therefore used in teleconferencing systems to reduce undesired echoes [6]. Multichannel acoustic echo cancellers have become essential with the spread of multichannel systems that offer more realistic speaker localization [6].
Several adaptive filter algorithms have been proposed for noise cancellation. The adaptive filter essentially minimizes the mean-squared error between a noisy signal and a reference signal. In adaptive filters, the convergence rate decreases as the number of taps increases, especially if the reference signal spectrum has a large dynamic range [9]. Echo cancellation requires a method for adjusting the learning rate when noise or interference is present in the signal. Most echo cancellation algorithms detect double-talk conditions, and such detectors have long been recognized as an essential component of two-way voice communication systems for reducing the annoying effects of network and acoustic echoes [6].
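As a concrete illustration of this adaptive loop, the sketch below builds a minimal acoustic echo canceller on the NLMS update discussed in Section V. The function name, filter length, step size, and the simulated echo path are all illustrative choices, not part of any cited algorithm.

```python
import numpy as np

def nlms_echo_canceller(x, d, num_taps=32, mu=0.5, eps=1e-8):
    """Adapt an FIR filter w so that y(n) = w . x(n) tracks the echo in d."""
    w = np.zeros(num_taps)
    e = np.zeros(len(x))
    for n in range(num_taps - 1, len(x)):
        x_n = x[n - num_taps + 1:n + 1][::-1]       # regressor: x(n), x(n-1), ...
        y = w @ x_n                                  # echo estimate
        e[n] = d[n] - y                              # residual sent to the far end
        w += (mu / (eps + x_n @ x_n)) * e[n] * x_n   # normalized update
    return w, e

# Simulated far-end signal and loudspeaker-microphone echo path (illustrative).
rng = np.random.default_rng(0)
echo_path = np.zeros(32)
echo_path[:3] = [0.5, -0.3, 0.2]
far_end = rng.standard_normal(5000)
mic = np.convolve(far_end, echo_path)[:5000]         # microphone picks up the echo
w_hat, residual = nlms_echo_canceller(far_end, mic)
```

In this noiseless simulation the residual echo decays toward zero and `w_hat` converges to the simulated echo path, which is exactly the system-identification view of echo cancellation described above.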

IV. FILTERS
Filters are of two types: linear and nonlinear. There are many applications in which nonlinear distortions have to be identified and compensated by adaptive signal processing [1], [2]. FIR and IIR filters are both linear, whereas the Volterra filter is nonlinear. Using these filters, most researchers work on time- or frequency-domain implementations, while a few work on statistical methods for higher-order spectral analysis [6].

Wiener Filter Theory
It is a statistical analysis method in which the environment is known prior to processing. The theory clearly defines an optimum filter. The mean square error (MSE), ξ, is defined as the expectation of the squared error. Wiener filter theory is characterized by a performance surface [1], [2].
May 25, 2013

Performance Surface
The vertical axis represents the mean square error and the two horizontal axes represent the values of two filter coefficients. The quadratic error function, or performance surface, can be used to determine the optimum weight vector Wopt (the Wiener filter coefficients). With a quadratic performance function there is only one global optimum; no local minima exist. The shape of the function would be hyper-paraboloidal if there were more than two weights. When the weight vector (filter coefficients) is at the optimum Wopt, the mean square error is at its minimum. The filter given by Wopt is the Wiener filter. Wopt has to be calculated repeatedly for non-stationary signals, which can be computationally intensive. An iterative solution to the Wiener–Hopf equation is the steepest descent algorithm [2].
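The Wiener solution described above can also be computed directly from sample estimates of the autocorrelation matrix R and cross-correlation vector p. The sketch below is illustrative (the function name and the estimator details are our own choices, assuming a noiseless FIR system):

```python
import numpy as np

def wiener_filter(x, d, num_taps):
    """Solve the Wiener-Hopf equation w_opt = R^{-1} p from sample estimates."""
    N = len(x)
    # Stack time-reversed regressors as rows: [x(n), x(n-1), ..., x(n-M+1)].
    X = np.array([x[n - num_taps + 1:n + 1][::-1] for n in range(num_taps - 1, N)])
    R = X.T @ X / len(X)                     # estimated autocorrelation matrix
    p = X.T @ d[num_taps - 1:N] / len(X)     # estimated cross-correlation vector
    return np.linalg.solve(R, p)

rng = np.random.default_rng(0)
h = np.array([0.5, -0.3, 0.2, 0.1])          # unknown system (illustrative)
x = rng.standard_normal(2000)
d = np.convolve(x, h)[:2000]                 # desired signal: output of h
w_opt = wiener_filter(x, h_taps := 4) if False else wiener_filter(x, d, 4)
```

With white input and no measurement noise, `w_opt` recovers `h` exactly; the steepest descent and LMS algorithms below reach the same minimum iteratively, without forming or inverting R.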

Adaptive Filter
The adaptive filter is commonly based on the steepest descent algorithm. The characteristics of systems are often either unknown or time-varying, which is undesirable in many cases. Therefore, processing approaches should adapt to the unknown characteristics in order to extract valid information in a changing scenario. For that reason, adaptive algorithms should be simple, computationally efficient, implementable on existing hardware platforms (such as digital signal processors or configurable logic blocks), and cost-effective for commercial use. Any real-time processing has to be adaptive, and some common applications are: compression and coding; active control of noise (inside aircraft cabins and automobiles, industrial noise) and vibration; and communication applications such as channel equalization, acoustic and line echo cancellation, adaptive antenna arrays, and adaptive processing of biomedical signals [1], [2].
Adaptive filters have the ability to adjust their own parameters automatically, and their design requires little or no a priori knowledge of signal or noise characteristics [17]. The design of such filters is the domain of optimal filtering, which originated with the Wiener filter and was extended by the work on the Kalman filter.

Volterra Filter
It can deal with a general class of nonlinear systems, but its output is still linear with respect to the various higher-order kernels, or impulse responses [6]. Because the number of Volterra coefficients increases geometrically as the delays and orders increase, truncated Volterra models have been widely applied and have become very popular [6].

V. ADAPTIVE ALGORITHMS
The most popular adaptive filter algorithms are the least mean square (LMS) algorithm and the normalized least mean square (NLMS) algorithm [1]. Their popularity is due to their simplicity and robust performance [2]. The stability of these algorithms is governed by a step size parameter [1].

Least Mean Square (LMS) Algorithm
The number of tap coefficients of a linear filter is an important parameter in determining the performance of a minimum mean square error (MMSE) adaptive filter. By adjusting the tap length we can improve the convergence rate. For a fixed tap length the segmented filter (SF) is used, and for a variable tap length gradient descent (GD) is used [2].
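The LMS update itself is a single line: the weight vector moves along the instantaneous gradient estimate. The following minimal sketch (function name and parameter values are our own choices) shows one iteration and its use for system identification:

```python
import numpy as np

def lms_step(w, x_n, d_n, mu):
    """One LMS iteration: w <- w + mu * e(n) * x(n), with e(n) = d(n) - w.x(n)."""
    e = d_n - w @ x_n
    return w + mu * e * x_n, e

# Identify a short FIR system with a small fixed step size (illustrative setup).
rng = np.random.default_rng(0)
h = np.array([0.5, -0.3, 0.2, 0.1])
x = rng.standard_normal(5000)
d = np.convolve(x, h)[:5000]
w = np.zeros(4)
for n in range(3, 5000):
    x_n = x[n - 3:n + 1][::-1]        # regressor: x(n), x(n-1), x(n-2), x(n-3)
    w, e = lms_step(w, x_n, d[n], mu=0.01)
```

The fixed step size `mu` here embodies the trade-off discussed throughout this paper: a larger value converges faster but is less stable and leaves more misadjustment.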

Normalized Least Mean Square (NLMS) Algorithm
NLMS is a widely used algorithm because of its simplicity and robust performance. The stability of the basic NLMS is controlled by a step size. This parameter also governs the rate of convergence, the tracking ability, and the amount of steady-state excess mean-square error (MSE). Variable step size schemes aim to solve the conflicting objectives of fast convergence and low excess MSE. With a fixed step size, NLMS achieves a certain degree of success but converges slowly with colored input signals [1], [2].
In the standard LMS algorithm, if x(n) is large, the update suffers from gradient noise amplification [2]. The normalized LMS algorithm seeks to avoid gradient noise amplification [2]: its effective step size is time-varying, μ(n), and is optimized to minimize the error.
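The normalization point can be seen in a small experiment: with a large-amplitude input, fixed-step LMS can diverge, while NLMS, whose effective step is divided by the input energy, stays stable. The setup below (function name, step sizes, and signal scale) is our own illustrative choice.

```python
import numpy as np

def compare_lms_nlms(scale=10.0, num_taps=4, n_samples=2000):
    """Fixed-step LMS vs NLMS on a large-amplitude input signal."""
    rng = np.random.default_rng(1)
    h = np.array([0.5, -0.3, 0.2, 0.1])
    x = scale * rng.standard_normal(n_samples)      # high-energy input
    d = np.convolve(x, h)[:n_samples]
    w_lms = np.zeros(num_taps)
    w_nlms = np.zeros(num_taps)
    mu_lms, mu_nlms, eps = 0.05, 0.5, 1e-8
    with np.errstate(over='ignore', invalid='ignore'):  # LMS is expected to blow up
        for n in range(num_taps - 1, n_samples):
            x_n = x[n - num_taps + 1:n + 1][::-1]
            w_lms = w_lms + mu_lms * (d[n] - w_lms @ x_n) * x_n
            w_nlms = w_nlms + (mu_nlms / (eps + x_n @ x_n)) * (d[n] - w_nlms @ x_n) * x_n
    return w_lms, w_nlms
```

Here `mu_lms` violates the LMS stability bound once the input power is scaled up, whereas the NLMS step is automatically rescaled by `x_n @ x_n`, so the same code converges regardless of the input amplitude.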

Variable Step Size (VSS-NLMS) Algorithm
This algorithm is used to recover the near-end signal from the error signal of the adaptive filter. In this algorithm, a large step size value is used for a fast convergence rate and good tracking, while a small value gives low misadjustment and good robustness. The VSS-LMS employs a larger step size when the estimation error is large, and vice versa. Aboulnasr pointed out that the advantageous performance of this VSS-LMS and several other variable step-size LMS algorithms is usually obtained only in a high signal-to-noise environment. The motivation is that a large MSE increases the step size, while large system noise decreases it, and vice versa [1], [2].
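The "larger step for larger error" idea can be sketched as follows. The specific recursion below (a forgetting factor on the step, driven by the squared error, with clipping bounds) is one classical style of VSS rule and is our own illustrative choice, not necessarily the exact rule of the cited papers; all parameter values are assumptions.

```python
import numpy as np

def vss_nlms(x, d, num_taps=8, alpha=0.97, gamma=0.01,
             mu_min=0.01, mu_max=1.0, eps=1e-8):
    """NLMS whose step size grows with the squared error and decays otherwise."""
    w = np.zeros(num_taps)
    mu = mu_max
    e_hist = np.zeros(len(x))
    for n in range(num_taps - 1, len(x)):
        x_n = x[n - num_taps + 1:n + 1][::-1]
        e = d[n] - w @ x_n
        w += (mu / (eps + x_n @ x_n)) * e * x_n
        # Large error -> step pushed toward mu_max; small error -> decays to mu_min.
        mu = np.clip(alpha * mu + gamma * e * e, mu_min, mu_max)
        e_hist[n] = e
    return w, e_hist

rng = np.random.default_rng(0)
h = np.array([0.5, -0.3, 0.2, 0.1, 0.0, 0.0, 0.0, 0.0])
x = rng.standard_normal(4000)
d = np.convolve(x, h)[:4000]
w, e_hist = vss_nlms(x, d)
```

Early on, the large error keeps the step near `mu_max` for fast convergence; as the error shrinks, the step decays toward `mu_min`, trading speed for low misadjustment, exactly the behavior described above.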

Nonparametric VSS-NLMS (NPVSS-NLMS)
This algorithm improves on the VSS-NLMS algorithm. It was developed to recover the system noise from the error of the adaptive filter. There are two scenarios for this algorithm: one with only background noise, and one with background noise plus near-end speech (the double-talk scenario). This approach to controlling the step size provides fast convergence, good tracking, and low misadjustment [4]. The algorithm works with fewer assumptions than the other algorithms, and it always works for stationary systems [1], [2].
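The nonparametric idea can be sketched as a step-size rule that vanishes once the error power reaches the known noise power, so adaptation stops at the noise floor. This is a simplified sketch in the spirit of Benesty's rule; the exact published formula may differ in details such as regularization, and the function name and values are assumptions.

```python
import numpy as np

def npvss_step(sigma_e, sigma_v, x_norm_sq, delta=1e-8):
    """Nonparametric step size: shrink toward zero as the error power
    sigma_e approaches the (known) system noise power sigma_v."""
    if sigma_e > sigma_v:
        return (1.0 / (delta + x_norm_sq)) * (1.0 - sigma_v / sigma_e)
    return 0.0  # error is at the noise floor: stop adapting
```

For example, with error level 1.0, noise level 0.1, and input energy 10, the rule gives a step of about 0.09; once the error level drops to the noise level, the step becomes exactly zero, which is what makes the algorithm nearly tuning-free.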

Normalized Subband Adaptive Filter (NSAF)
This approach is used to accelerate convergence. It shows good performance with colored input signals, with computational complexity close to that of the NLMS algorithm [9]. The idea of the NSAF is to use the subband signals, normalized by their respective subband input variances, to update the tap weights of a fullband adaptive filter. With a fixed step size, it requires a tradeoff between a fast convergence rate and low misadjustment [9]. This problem is addressed by the set-membership variant, which still uses the fixed step size concept [9]. The NSAF algorithm divides the input signal into multiple subbands in order to whiten the input signal in each subband.
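The fullband weight update described above can be sketched as follows. The analysis filter bank and decimation are omitted here: the subband regressors and desired samples are assumed to be precomputed, and the function name and step size are our own choices.

```python
import numpy as np

def nsaf_update(w, subband_regressors, subband_desired, mu=0.5, eps=1e-8):
    """One NSAF iteration: each subband error, normalized by its own subband
    input power, contributes to a single fullband weight update."""
    update = np.zeros_like(w)
    for x_i, d_i in zip(subband_regressors, subband_desired):
        e_i = d_i - w @ x_i                        # a priori subband error
        update += (e_i / (eps + x_i @ x_i)) * x_i  # per-band normalization
    return w + mu * update

# With a single "band", the update degenerates to NLMS (sanity check).
rng = np.random.default_rng(0)
h = np.array([0.5, -0.3, 0.2, 0.1])
x = rng.standard_normal(3000)
d = np.convolve(x, h)[:3000]
w = np.zeros(4)
for n in range(3, 3000):
    x_n = x[n - 3:n + 1][::-1]
    w = nsaf_update(w, [x_n], [d[n]])
```

The per-band normalization is the key: each subband sees a nearly white signal at its own power level, which is why the NSAF converges faster than NLMS on colored inputs.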

Variable Step Size Normalized Subband Adaptive Filter (VSS-NSAF)
One variant is capable of tracking in a non-stationary environment but has high misalignment in the steady state. In another, the step size is derived by minimizing the mean square deviation (MSD) between the optimal weight vector and the estimated weight vector at each instant of time [15]; this variant exhibits good performance in terms of misalignment [15], but because it is designed for a stationary system, it cannot track changes in the coefficient vector [15].

Variable Step Size Matrix NSAF (VSSM-NSAF)
This algorithm is based on the measurement noise variance. It is capable of tracking in a non-stationary environment but has high misalignment in the steady state [9]. It recovers the powers of the subband system noises from those of the subband error signals of the adaptive filter, to further improve the performance of the NSAF; the powers of the subband system noises therefore need not be known in advance. VSSM-NSAF can obtain better performance than NSAF and SM-NSAF [9].

Set Membership NSAF (SM-NSAF)
The power of the system noise must be known. Even when it is assumed known, simulations show that the convergence performance is sensitive to the selection of the error bound; the desired performance must therefore be obtained by many trials [9].

Variable Step Size Affine Projection Algorithm (VSS-APA)
The VSS-APA is a generalization of the VSS-NLMS. It does not need to know the power of the system noise, and it addresses both problems of fast convergence rate and low misadjustment. It employs the norm of the filter coefficient error vector as a criterion for the optimal variable step size [12]. Its computational complexity is higher than that of the NLMS algorithm and increases with the projection order. Newer variable step size affine projection algorithms adjust their step sizes according to the square of a time-averaged estimate of the autocorrelation of a priori and a posteriori errors [12].
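The core affine projection update (here with a fixed step size, to show the structure the variable-step variants build on) can be sketched as follows; the function name, projection order, and regularization value are our own illustrative choices.

```python
import numpy as np

def apa_update(w, X, d_vec, mu=0.5, delta=1e-6):
    """One affine projection step over the last P regressors (rows of X)."""
    e = d_vec - X @ w                                        # a priori error vector
    g = np.linalg.solve(X @ X.T + delta * np.eye(len(d_vec)), e)
    return w + mu * X.T @ g                                  # joint update over P constraints

# Identify a 4-tap system with projection order P = 2 (illustrative setup).
rng = np.random.default_rng(0)
h = np.array([0.5, -0.3, 0.2, 0.1])
x = rng.standard_normal(2000)
d = np.convolve(x, h)[:2000]
w = np.zeros(4)
for n in range(4, 2000):
    X = np.array([x[n - 3:n + 1][::-1], x[n - 4:n][::-1]])   # last P regressors
    w = apa_update(w, X, np.array([d[n], d[n - 1]]))
```

The P-by-P linear solve in each step is the source of the extra complexity noted above: with P = 1 the update reduces to NLMS, and the cost grows with the projection order.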

Segmented Filter (SF) and Gradient Descent (GD)
These algorithms are used to find the optimum tap length that best balances complexity and steady-state performance; they are called structure adaptation algorithms. The tap length, or number of tap coefficients, of a linear filter is an important parameter that affects the performance of a minimum mean square error (MMSE) adaptive filter [18]. Here the length can be adjusted. In the segmented filter (SF) algorithm, the filter is partitioned into several segments, and the tap length is adjusted by adding or removing one segment according to the difference between the output error levels of the last two segments. The gradient descent (GD) algorithm instead adapts a variable tap length directly [18].

VI. RESULTS
Aiming to solve the conflicting objectives of fast convergence and low excess MSE associated with the conventional NLMS, a number of variable step-size NLMS (VSS-NLMS) algorithms are presented in the table. The VSS-LMS employs a larger step size when the estimation error is large, and vice versa. Aboulnasr pointed out that the advantageous performance of this VSS-LMS and several other variable step-size LMS algorithms is usually obtained only in a high signal-to-noise environment. More recently, Shin, Sayed, and Song developed a variable step-size affine projection algorithm, which employs the norm of the filter coefficient error vector as a criterion for the optimal variable step size. Another type of variable step-size algorithm is the regularized NLMS. Mandic derived a generalized normalized gradient descent (GNGD) algorithm, which adapts the regularization parameter by a gradient update. While most variable step-size algorithms need to tune several parameters for good performance, Benesty introduced a relatively tuning-free nonparametric VSS-NLMS (NPVSS) algorithm. This paper presents a comparison between different variable step size VSS-NLMS algorithms, which employ the MSE and the estimated system noise power to control the step-size update. The motivation is that a large MSE increases the step size, while large system noise decreases it, and vice versa. The VSS-NLMS is easy to implement and gives very good performance. Extensive simulations show that the steady-state behavior predicted by the analysis is in very good agreement with the experimental results.

[Table: comparison of the variable step size algorithms in terms of convergence speed, step size, and misadjustment.]

VII. CONCLUSION
In this paper we discussed the parameters addressed by different variable step size (VSS-NLMS) algorithms. Each algorithm uses a different method for echo cancellation and improves system performance. Across these algorithms, the authors worked on different parameters and tried to improve their results.