THE COOPER UNION FOR THE ADVANCEMENT OF SCIENCE AND ART
ALBERT NERKEN SCHOOL OF ENGINEERING

Adjustable Subband Allocation
Algorithm for Critically Sampled
Subband Adaptive Filters by Adam Shabti Charles

A thesis submitted in partial fulfillment of the requirements for the degree of
Master of Engineering

May 6, 2009

Advisor
Dr. Fred L. Fontaine

THE COOPER UNION FOR THE ADVANCEMENT OF SCIENCE AND
ART
ALBERT NERKEN SCHOOL OF ENGINEERING

This thesis was prepared under the direction of the Candidate’s Thesis Advisor and has received approval. It was submitted to the Dean of the School of Engineering and the full Faculty, and was approved as partial fulfillment of the requirements for the degree of Master of Engineering.

Dr. Eleanor Baum
Dean, School of Engineering

Dr. Fred L. Fontaine
Candidate’s Thesis Advisor

Acknowledgments
I would like to thank my advisor, Dr. Fred Fontaine, for his guidance and patience throughout this process. Without his teachings I would not be where I am today. I would also like to thank the rest of the faculty, as well as my friends and peers at The
Cooper Union Albert Nerken School of Engineering. A special thanks is due to David
Nummey, Deian Stefan, Ashwin Kirpalani, Stefan Münzel and Matthew Epstein, all of whom gave their time to listen patiently to my ideas and help me improve this thesis into what it is today. I would also like to thank Dr. Jack Lowenthal for keeping me motivated with his interest in my studies and projects. Lastly, I would like to thank my family, especially my parents, Dr. Richard and Shulamit Charles, my uncle Louis
Charles, and my sister Aya for their constant support throughout my life.


Abstract
Subband adaptive filters utilize subband decompositions to reduce the length of the adaptive filters, and thus the number of computations needed when adapting very large filters. Smaller bands have been shown to greatly reduce the computational complexity, but at the cost of performance: both the convergence rate and the misadjustment of the adaptive structure suffer due to the decomposition.
Tridiagonal transfer functions as well as oversampling have been proposed to reduce these effects [6, 21]. More recently, non-uniform subband decompositions have been proposed in order to cover the cross-terms and reduce the convergence time [20].
The issue then arises that the optimal subband decomposition is often not known a-priori. This paper proposes a method of adapting the subband decomposition for non-uniform adaptive filters in order to reduce the misadjustment and convergence time when modeling non-stationary processes. A QMF-based tree structure, along with an adaptation algorithm, was designed and implemented in MATLAB. The algorithm was able to correctly adapt to the changes in the non-stationary unknown transfer function. Both the convergence rate and the misadjustment were improved with minimal excess computation.


Contents

1 Introduction
   1.1 Motivation
   1.2 Problem Statement
   1.3 Previous Work
   1.4 Proposed Solution

2 Adaptive Filtering
   2.1 Adaptive Filter Theory
   2.2 Wiener Filters and LMS Filtering
   2.3 Kalman Filters and RLS Filtering
   2.4 Evaluation of Adaptive Filter Performance
   2.5 Variations of LMS and RLS Filters

3 Multirate Filter Banks
   3.1 Basics of Multirate Filter Banks
   3.2 The Perfect Reconstruction (PR) Condition
   3.3 Tree Structured Filter Banks

4 Subband Adaptive Filtering
   4.1 Background
   4.2 Uniform Subband Adaptive Filtering
   4.3 Non-Uniform Subband Adaptive Filtering
   4.4 Properties of Subband Adaptive Filters

5 Adjustable Subband Adaptive Filtering
   5.1 Motivation for Adjustable Subband Filter Banks
   5.2 Previous Work
   5.3 Proposed Algorithm

6 Experimental Results
   6.1 Construction of a PR Filter Bank
   6.2 Consolidation of Tree Branches
   6.3 Performance of Uniform Subband Adaptive Filters
   6.4 Performance of Non-Uniform Subband Adaptive Filters
   6.5 Performance of the Adjustable Subband Algorithm

7 Conclusions and Further Work

A MATLAB CODE
   A.1 Table of Functions
   A.2 Standard LMS Algorithm
   A.3 Standard RLS Algorithm
   A.4 Normalized LMS Algorithm
   A.5 AFF-RLS-ATNA Algorithm
   A.6 Noise Generating Function
      A.6.1 Non-Stationary Filter Evaluation
   A.7 Fullband Testing Code
   A.8 Zero-Fill Interpolation Code
   A.9 Length Adjusting Code
   A.10 Effective Filter Evaluation Code
   A.11 Two Band PR Filter Generating Code
   A.12 Uniform Subband Filter Generating Code
   A.13 Uniform Subband LMS Algorithm
   A.14 Uniform Subband RLS Algorithm
   A.15 Adjustable Non-Uniform Subband LMS Algorithm
   A.16 Adjustable Non-Uniform Subband Update Algorithm
   A.17 Test Filter Construction Code
   A.18 Subband Algorithm Testing Code

Bibliography
List of Figures

1.1 Echo Cancellation Problem
2.1 General Adaptive Filter as a System Identifier
2.2 Error Curve Plot Comparison of Adaptive Algorithms
2.3 Error Curve Plot Comparison of Adaptive Algorithms with Shot Noise
3.1 Decimation and Interpolation Frequency Transformations
3.2 M Channel Uniform Filter Bank
3.3 Two Channel Filter Bank
3.4 Two Tier Tree Structure Analysis Bank With Subband Filters
3.5 Two Tier Tree Structure Synthesis Bank
4.1 Diagonal Subband Filter Structure
4.2 Tridiagonal Subband Filter Structure
4.3 Non-Uniform Diagonal Subband Filter Structure
5.1 Subband Merging by Replacement
5.2 Thresholding Scheme
6.1 Magnitude Response of High Pass and Low Pass Filters
6.2 Effective Filter Comparison
6.3 Comparison of Fullband LMS Filter to 4 Band Subband LMS
6.4 Comparison of Fullband RLS Filter to 16 Band Subband RLS
6.5 Convergence of Non-Uniform Subband Filters
6.6 PSD Estimates for Stop-Band Test
6.7 Error Signals for Stop-Band Test
6.8 PSD Estimates for Band-Pass to Low-Pass Test
6.9 Error Signals for Band-Pass to Low-Pass Test
6.10 Steady State Subband Allocation for a Stop-Band Filter
6.11 Steady State Subband Allocation for a Low-Pass Filter
6.12 Error Signals for Low-Pass Filter Test
6.13 Comparison of Subband Filters from a HPF
6.14 Comparison of Subband Filters from a HPF to a LPF
6.15 Comparison of Adjustable Non-Uniform Subband LMS to the Time-Invariant Non-Uniform Subband LMS

List of Tables

4.1 Number of Computations for Various Adaptive Algorithms
4.2 Number of Computations for Tridiagonal Subband Adaptive Algorithms
4.3 Number of Computations for Non-Uniform Subband Adaptive Algorithms
6.1 QMF Perfect Reconstruction Filter Parameters
6.2 QMF Perfect Reconstruction Filter Coefficients
6.3 Test Filters for Adaptive Algorithms
6.4 Noise Effect on Subband Filtering MSE
6.5 Noise Effect on Subband Filtering Convergence Rate
6.6 Comparison of Computation Complexities for Noise Tests
A.1 List of MATLAB Functions

Chapter 1

Introduction

1.1 Motivation

In many signal processing applications, it is necessary to use a filter which is optimal with respect to given criteria. Often, this optimal filter cannot be known prior to the filtering operation, and thus the coefficients cannot be hard-coded into the algorithm. This has led to a class of filters known as adaptive filters, which can adjust their coefficients to achieve the desired optimal behavior of the filter.
A prime example is the problem of echo cancellation in telephony [4, 6, 14]. In this scenario, illustrated in Figure 1.1, a user at terminal A sends a signal a[n] to user B, who is simultaneously responding with a signal b[n], which for simple analysis is uncorrelated with a[n]. If the transmitter at B is in close proximity to the receiver, the transformed signal from A may mix with the signal b[n] prior to transmission at B. Thus the signal received at A, y[n], has the z-transform:

Y(z) = G_AB(z)(B(z) + G_BA(z)A(z))    (1.1)

where G_AB(z) is the transfer function for the path from B to A, G_BA(z) is the transfer function from A to B, and B(z) and A(z) are the z-transforms of b[n] and a[n], respectively. In order to clean the received signal at A, the part of the signal that depends on the echo of a[n] must be estimated and eliminated. If the behavior of the transfer function applied to a[n] is known, then a filter H(z) = G_AB(z)G_BA(z), as in Figure 1.1, can be placed to accomplish this. This filter will remove the undesired part of the received signal, and all that will be left is E(z) = G_AB(z)B(z).
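This cancellation can be checked numerically. The short NumPy sketch below is purely illustrative (the thesis's own code is MATLAB, and the echo-path taps here are arbitrary made-up choices): with known FIR models for the two paths, subtracting the output of H(z) = G_AB(z)G_BA(z) driven by a[n] leaves only the far-end component G_AB(z)B(z) at receiver A.

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary short FIR models for the two echo paths (illustrative only).
g_ba = np.array([1.0, 0.5, 0.25])   # path A -> B
g_ab = np.array([0.8, -0.3])        # path B -> A

a = rng.standard_normal(1000)       # signal sent from terminal A
b = rng.standard_normal(1000)       # signal sent from terminal B

# Signal received at A: Y(z) = G_AB(z)(B(z) + G_BA(z)A(z)), Eq. (1.1)
y = np.convolve(g_ab, b + np.convolve(g_ba, a)[:len(a)])[:len(a)]

# Ideal echo canceller H(z) = G_AB(z) G_BA(z) applied to a[n]
h = np.convolve(g_ab, g_ba)
echo_estimate = np.convolve(h, a)[:len(a)]

# After cancellation only the far-end component G_AB(z) B(z) remains
e = y - echo_estimate
residual = e - np.convolve(g_ab, b)[:len(b)]
print(np.max(np.abs(residual)))     # numerically zero
```

The adaptive-filtering problem discussed next is exactly the situation where h is not known in advance and must be estimated online.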

Figure 1.1: Echo Cancellation Problem
Usually, though, the transfer function G_BA(z)G_AB(z) is unknown and difficult to estimate. Instead, the echo cancellation filter H(z) is initialized at some value and then adapted in time to accomplish its task. The adaptation algorithm aims to change the tap weights of H(z) in order to best cancel out the part of y[n] that is dependent on a[n].
For this particular task, linear prediction and the orthogonality principle are employed. Instead of attempting to explicitly define the dependence of y[n] on each of the inputs, the problem is reformulated as finding the best estimate of y[n] given only past samples of a[n]. At each time step, y[n] is a linear sum of past a[n] and b[n] and is consequently in span{a[i], b[i]}, i ≤ n. The predictor, on the other hand, is a linear combination of past a[n] and therefore can only calculate what is in span{a[i]}, i ≤ n. Thus, by the orthogonality principle, the error exists in the space orthogonal to the prediction space, in this case span{b[i]}, i ≤ n.
Generally this method takes into account all past a[i] and b[i], indicating an infinite impulse response (IIR). Although the transfer function from a[n] to y[n] may have an IIR response, all the poles are assumed to be inside the unit circle. Thus all impulse responses are of the form |p|^n e^{jnθ}, with |p| ≤ 1, where θ = tan^{-1}(Im(p)/Re(p)) is the angle of p. Therefore, it is not unreasonable to say that y[n] can be approximated with a finite number of past a[n] and b[n], putting it in span{a[n], a[n−1], ..., a[n−M1], b[n], b[n−1], ..., b[n−M2]} ∪ V. Here, V represents the noise space, which is orthogonal to the signal spaces spanned by the M1 past a[n] values and the M2 past b[n] values. In terms of the impulse response, given an arbitrarily small number ε, there is a finite time after which the impulse response associated with every pole drops below ε. In fact this time can be calculated as n = ln(ε/g_H) / ln(max_i |p_i|), where g_H is the gain for the mode corresponding to the pole with the maximum magnitude. Then, given noise in the system with variance ε, after time n the level of the signal is below the noise floor, resulting in a negative signal-to-noise ratio (SNR). Thus the projection of y[n] onto the signal space of {a[n], a[n−1], ..., a[n−M1]}, estimated by the adaptive filter H(z), can be realized as a finite impulse response (FIR) filter rather than an IIR filter.
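The decay-time formula can be checked with a concrete mode (pole, gain, and tolerance below are arbitrary illustrative values, not taken from the thesis):

```python
import numpy as np

# A stable pole and a tolerance eps: find the time n at which the mode
# g_H * |p|**n decays below eps, i.e. n = ln(eps / g_H) / ln(|p|).
p, g_H, eps = 0.9, 1.0, 1e-3
n_decay = np.log(eps / g_H) / np.log(abs(p))
print(int(np.ceil(n_decay)))        # 66 samples for these values

# Verify directly on the (monotonically decaying) envelope of the mode
n = np.arange(200)
h = g_H * abs(p) ** n
assert np.all(h[int(np.ceil(n_decay)):] < eps)
```

Truncating the model at roughly this lag is what justifies the FIR approximation of the echo path.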
Modeling H(z) as an FIR filter is advantageous for adaptive filtering because of the relative ease of retaining stability [7]. In adaptive filters the filter coefficients can take on any values, and as such the zeros (and possibly poles) of the filter are free to move about the z-plane. In algorithms that only adapt the zeros of a filter, the condition forcing the magnitude of the poles to continuously satisfy |p| < 1 is no longer required. Instability, however, may still occur in an all-zeros model due to the adaptation feedback algorithm, especially in the case where significant quantization is present. The FIR case provides a very simple model for the case where only the zeros are adapted. This is the reason that the two most widely used adaptive filter algorithms, the least mean squares (LMS) and recursive least squares (RLS), both use an FIR structure [7].
The LMS algorithm is designed to recursively approximate the Wiener filter (explained in Section 2.2), building on the corresponding Wiener-Hopf equations and the concept of steepest descent. Essentially, the error signal, along with a prescribed step size, determines the amount by which the tap weights change in the direction specified by the data vector u[n]. This brings the tap weights closer to the optimal Wiener filter coefficients.
The actual tap weights might never reach the optimal weights, however, due to the resolution determined by the step size. Instead the tap weight values will oscillate about the Wiener filter coefficients. The RLS algorithm is instead based on the least squares curve fitting problem, utilizing the matrix inversion lemma to recursively compute the solution.
Many variations of these two base adaptive algorithms have been designed [7].
These variations resolve some of the inadequacies of the standard algorithms under certain conditions. For example, the adaptive step size least mean squares (ASS-LMS) attempts to eliminate long-term large oscillations about a local minimum, which occur when the step size is too large. A similar variation is the adaptive forgetting factor recursive least squares (AFF-RLS) algorithm. A more specific example is the recursive least squares adaptive threshold nonlinear algorithm (RLS-ATNA), which smooths out impulsive noise by introducing a nonlinear denominator dependent on the error [10, 11].


One of the larger classes of variations on these adaptive algorithms is subband methods [21, 22]. In the subband approach, an M-channel analysis filter bank is employed to break the desired signal into M subband signals. The adaptive algorithm is then applied to each subband independently, with the final output retrieved by filtering the M subband outputs through a synthesis bank. Using this approach, very large adaptive filters with tap weights numbering in the hundreds can be adapted with only a fraction of the computational complexity. For example, the effective number of computations of an LMS filter can be reduced from 2M to 2L + M using a two band filter structure, where L is the length of the analysis and synthesis bank filters. This type of structure has been derived for both the LMS and the RLS adaptive algorithms [6, 23].
This method, though, does not account for either the cross-terms between bands or the slower convergence due to running the subsystems at slower rates. It has been shown by Petraglia et al. [2, 21] that the overlap of the analysis filter banks causes energy in the overlap regions to not be fully represented in the subbands. Thus the signal space that is spanned by the input after the analysis bank is a proper subset of the space spanned by the input prior to the analysis bank. The estimation of the output may then be lacking the part of the projection that would have landed in that space, causing the error of the adaptive filter to rise. In addition, the slower rates at which the subsystems run cause a decrease in convergence rate approximately proportional to the decimation factor. These phenomena have both been experimentally verified [2, 6, 21–23].
Several methods have been proposed to deal with these issues, including oversampling, tridiagonal transfer functions, and non-uniform subband decompositions [19, 20]. Oversampling all of the subbands in the analysis bank causes certain regions of the spectrum to be seen by multiple subbands, eliminating the need for extra filters. This solution expands each of the subbands, increasing their update rates and thus their convergence rates. This method, however, entails unnecessary sampling, and therefore unnecessary computations in subsequent calculations.
Tridiagonal and non-uniform subband methods have been proposed to utilize critical sampling. The tridiagonal approach involves artificially including cross-terms, which depend on the subband filter bank coefficients and the coefficients of the two neighboring analysis bank filters. For instance, the cross-term in the kth band from the transition between the kth and (k+1)th subband filters would be H_k(z)H_{k+1}(z)G_{k+1}(z). Here H_i(z) is the ith analysis bank filter, and G_i(z) is the ith subband filter. A symmetric calculation would account for the lower cross-term from the transition between the (k−1)th and kth subbands: H_{k−1}(z)H_k(z)G_{k−1}(z). This process adds extra bands, and although they are not updated separately, since they depend on the neighboring filters, they require more computation to filter and decimate separately. In addition, they do not have the freedom to adapt based on their own errors, since these cross-terms are directly related to the main band filters.
The non-uniform subband decomposition accounts for this by simply expanding bands of interest, automatically including all the cross-terms within each such band. The trade-off is that the larger bands have more tap weights to adjust than the smaller bands, increasing the computations required to update them. Also, the bandwidths have to be decided ahead of time, requiring a-priori knowledge of the desired input-output characteristics. In many cases this is not feasible (e.g. if the system is time-varying). In the case of time-varying systems, a method would be needed to change the subband decomposition depending on the power in each band at the input and output.

1.2 Problem Statement

This paper addresses the task of reallocating the bandwidth of non-uniform subband structures on the fly in order to follow the input-output characteristics of the unknown system. With a-priori knowledge of the system, the problem becomes trivial, since the bandwidths can be allocated optimally beforehand. In the case of time-varying or completely unknown systems, the bandwidths cannot be pre-allocated and therefore need to be adjusted as information is obtained about the system.
This problem can be broken up into two main parts. The first is to design an appropriate subband filter structure. This structure must be able to change subband decompositions quickly and efficiently while maintaining the perfect reconstruction condition. Initialization of the structure may or may not be important depending on the adaptability of the algorithm itself. There has been much work in this area, and optimal structures, as well as structures that increase the ease of implementation have been proposed [17, 28]. While the optimal structures are very generalized, the structures that ease implementation are usually based on tree structured filter banks.
The second task is to devise an algorithm by which the bandwidths are reallocated while the adaptive filter is running. Some algorithms for making these reallocation decisions have been proposed, but they deal with the oversampled case [15–17]. An algorithm that can retain the critically sampled property of the subband decomposition as it adjusts the subband widths would be beneficial in such adaptive systems.


1.3 Previous Work

There have been some proposals for such a subband reallocation algorithm by
McCloud and Etter in [15, 16]. The algorithm outlined in [15] deals with oversampled systems, and focuses on using the subband widths to isolate transition regions. This has been proven by Griesbach in [8] to decrease the minimum attainable steady state error, J(∞). Using smaller subbands around transition regions has been shown to lead to better results when compared to uniform subband decompositions [8]. Their design, however, still tends toward subband widths larger than the minimum allowable width in regions where there is no signal, where using smaller widths would save more computations [8].
The algorithm closest to the one proposed here is given by McCloud and Etter [16], but it depends on the error power to change the subband decomposition. This dependence can cause the algorithm to suffer from the same unnecessary adaptations as the adaptive algorithms themselves. For instance, under burst noise conditions, not only are the adaptive filters changing due to this extraneous error, but the entire structure is being changed because of it as well. In addition, this algorithm does not address the initialization of the new subband filters. Thus, after every shift in subband structure, the filter has to reconverge to the optimal tap weights over a longer period of time.

1.4 Proposed Solution

This paper proposes an algorithm for allocating subbands for use in critically sampled systems, with a focus on signal strength rather than the transition bands used by McCloud and Etter [16]. Based on the estimates of the input and output power spectra, the widths of the subbands are changed in order to better suit the unknown desired transfer function. Consequently, in bands with low signal power, the bands are kept small in order to continue saving computations.
Tree structured analysis and synthesis banks are used in order to ease the transitioning between subband widths. In addition, to avoid unnecessary setbacks in the convergence, the new subband filters are initialized with an effective transfer function that approximates the total response of the old branch.
This paper is organized as follows: Chapter 2 deals with the underlying theory of adaptive filters. Chapter 3 reviews multirate filter bank theory and the perfect reconstruction criterion. Chapter 4 applies the subband technique to adaptive filters as demonstrated in [20] and Chapter 5 proposes an algorithm to adjust the subband widths and reinitialize the newly formed subband adaptive structure. Chapter 6 shows the results of testing the algorithm under various conditions against the non-adjustable subband filter, and finally Chapter 7 states the conclusions of the experimentation and details additional methods to expand and optimize the proposed algorithm.


Chapter 2

Adaptive Filtering

2.1 Adaptive Filter Theory

Adaptive filtering is a powerful tool with many applications, from beam-forming to system identification [7]. The basic concept of adaptive filtering is to have a filter in which the coefficients are not constant in time, and instead vary with respect to a feedback quantity. The feedback algorithm serves to change the weights with respect to this quantity in order to minimize a cost function. As an example, one of the most common applications for adaptive filters is system identification, as shown in Figure
2.1. In system identification, the noisy output of an unknown filter, d[n] + v[n] is compared to the output of the adaptive filter. The weights of the adaptive filter are then changed with respect to the difference, e[n], between the two outputs in such a way as to reduce that error in the next iteration. In this case, the cost function is usually a function of the error, such as the mean square error (MSE) or the sum of the squares.
Figure 2.1: General Adaptive Filter as a System Identifier

In order to simplify the adaptive filtering problem, certain assumptions are made. The most basic assumptions are that the filter is linear, discrete time, and has an FIR structure. The reason that the filter is chosen to have a finite impulse response is that FIR filters with constant coefficients have no poles and are therefore automatically stable. When the feedback algorithm changing the filter coefficients is connected, however, even the FIR filter may become unstable. The discrete time assumption allows for the greater flexibility provided by the programmable nature of the algorithms.
Under these assumptions, there exists a variety of adaptive algorithms to adapt the filter coefficients in order to converge on some optimal set of tap weights. Two of the most widely used and studied are the LMS and RLS algorithms.

2.2 Wiener Filters and LMS Filtering

Given the assumptions of linearity, discrete time, and FIR structure, there exists an optimal set of filter tap weights in the least mean squared sense for a stationary input [7]. The least mean squares sense means that the cost function is given by:

J[n] = E[ |e[n]|^2 ]    (2.1)

As the name suggests, this is the expected value of the magnitude squared of the error signal. Such a filter is called the Wiener filter. The Wiener-Hopf equation:

w_opt = R^{-1} P    (2.2)

states that the optimal set of tap weights for a filter of length M, w_opt, is equal to the inverse of the autocorrelation matrix of the past M inputs, u[n]:

R = E[ u[n] u^H[n] ]    (2.3)

multiplied by the cross-correlation vector of u[n] with the output, d[n]:

P = E[ u[n] d^*[n] ]    (2.4)

We can observe that the solution depends on a-priori knowledge of the input and output signal statistics. Since this is often not the case, adaptive methods that approximate the Wiener filter based on sampled data have been developed. The most common and simplest one is the LMS algorithm.
The LMS algorithm takes the sampled data up to time n and makes the approximations R ≈ u[n]u^H[n] and P ≈ u[n]d^*[n]. Using these approximations in conjunction with the steepest descent method (which states that the optimal change in w is in the direction of −∇J) to estimate the next step results in:

e[n] = d[n] − w^H[n] u[n]    (2.5)

w[n+1] = w[n] + µ u[n] e^*[n]    (2.6)

If w[n] has length K, these update equations require an effective total of 2K multiplications to evaluate the new tap weights: K multiplications for the w^H[n]u[n] operation and K + 1 more for the µu[n]e^*[n] operation. In this paper, single multiplications such as µe^*[n] will be ignored, as they are dominated by the terms that depend on the filter length (O(K) or higher).
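Equations (2.5) and (2.6) translate almost line-for-line into code. The thesis implements them in MATLAB (Appendix A.2); the NumPy sketch below is an illustrative real-valued equivalent for the system-identification setup of Figure 2.1, where the unknown FIR system and step size are arbitrary demo choices:

```python
import numpy as np

def lms(u, d, K, mu):
    """Standard LMS for real signals: returns tap weights and error history.

    u  : input signal
    d  : desired signal (noisy output of the unknown system)
    K  : adaptive filter length
    mu : step size
    """
    w = np.zeros(K)
    e = np.zeros(len(u))
    for n in range(K, len(u)):
        u_vec = u[n - K + 1:n + 1][::-1]    # most recent K input samples
        e[n] = d[n] - w @ u_vec             # a-priori error, Eq. (2.5)
        w = w + mu * u_vec * e[n]           # tap-weight update, Eq. (2.6)
    return w, e

rng = np.random.default_rng(1)
w_true = np.array([0.6, -0.4, 0.2, 0.1])    # arbitrary unknown system
u = rng.standard_normal(5000)
d = np.convolve(w_true, u)[:len(u)] + 0.01 * rng.standard_normal(len(u))

w_hat, e = lms(u, d, K=4, mu=0.05)
print(np.round(w_hat, 2))                   # close to w_true
```

Note the O(K) work per iteration: one dot product for the error and one scaled vector addition for the update, matching the 2K multiplication count above.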

2.3 Kalman Filters and RLS Filtering

The RLS filter is based on the least squares (LS) curve fitting problem [7]. In the LS problem, a curve is fit to a given set of data points by minimizing the sum of the squares of the errors from all points to the curve. The RLS cost function to minimize is:

J(e[n]) = Σ_{i=0}^{n} β[i] e^2[i]    (2.7)

where β[i] is a weight factor that determines the importance of each point in the least squares algorithm and e[i] = d[i] − w^H[i]u[i] is the error of the ith iteration. In the RLS algorithm, β[i] = λ^{n−i} for 0 ≤ λ ≤ 1. The parameter λ is called the forgetting factor and gives exponentially less credence to past error values. Larger values of λ make the algorithm less adaptable to quick changes, since past errors are considered important for longer periods of time. The matrix inversion lemma is then used to calculate the solution to this minimization recursively. An alternate way to view the RLS filtering problem is as a special case of the Kalman filter. The Kalman filter deals with a dynamical system consisting of a non-observable internal state vector, x[n], and an observable noisy measurement, y[n]. The Kalman filter characterizes the system by finding the estimate of the hidden state vector with the minimum mean squared error (MMSE). The formulation of the system is:

x[n+1] = F[n+1, n] x[n] + ν_1[n]    (2.8)

y[n] = C[n] x[n] + ν_2[n]    (2.9)

Equation (2.8) is referred to as the process equation and updates the unknown state x[n]. The measurement equation (2.9) produces the observed quantity y[n], known as the measurement. Here F[n+1, n] is referred to as the state transition matrix from n to n+1, C[n] is the measurement matrix at time n, and ν_1[n], ν_2[n] are independent additive white stochastic processes with covariance matrices Q_1[n] and Q_2[n], respectively. The equations that define the Kalman filter are:

π[n] = K[n, n−1] C^H[n]    (2.10)

G_f[n] = F[n+1, n] π[n] (C[n] π[n] + Q_2[n])^{−1}    (2.11)

α[n] = y[n] − C[n] x̂[n|n−1]    (2.12)

x̂[n+1|n] = F[n+1, n] x̂[n|n−1] + G_f[n] α[n]    (2.13)

K[n] = [I − F[n, n+1] G_f[n] C[n]] K[n, n−1]    (2.14)

K[n+1, n] = F[n+1, n] K[n] F^H[n+1, n] + Q_1[n]    (2.15)

The basic idea of this algorithm is to whiten the measurement y[n] to form α[n] (equation (2.12)). Independent (white) segments of y[n] are then pieced together to form the state estimate x̂[n+1|n] (equation (2.13)) based on the Kalman gain G_f[n] (equation (2.11)). The Kalman gain calculated in equation (2.11) can be thought of as a ratio of covariances and is used to decide how much credence will be given to the new independent piece of information (contained in the latest measurement). Equations (2.14) and (2.15) are used to update the estimate of the covariance matrix of x̂[n+1|n]. These updates are called the Riccati equations.
The RLS algorithm can then be formulated as a special case of the Kalman filter where:

F[n] = λ^{−1/2}    (2.16)

C[n] = u^H[n]    (2.17)

ν_1[n] = 0    (2.18)

ν_2[n] = ν[n]    (2.19)

Here λ is the forgetting factor and ν[n] is white Gaussian noise. The process and measurement equations then become:

x[n+1] = λ^{−1/2} x[n]    (2.20)

y[n] = u^H[n] x[n] + ν[n]    (2.21)

The transformation of the Kalman filtering quantities is then as follows:

x[n] → λ^{−n/2} w[0]
y[n] → λ^{−n/2} d[n]
ν_2[n] → λ^{−n/2} e_0^*[n]
x̂[n+1|n] → λ^{−(n+1)/2} w[n]
K[n] → λ^{−1} P[n]
G_f[n] → λ^{−1/2} g_f[n]
α[n] → λ^{−n/2} η[n]

The quantity P[n] above is the inverse of the input correlation matrix:

R[n] = Σ_{i=0}^{n} λ^{n−i} u[i]u^H[i]   (2.22)

The Kalman filtering equations then yield the RLS algorithm:

gf[n] = P[n − 1]u[n] / (λ + u^H[n]P[n − 1]u[n])   (2.23)
η[n] = d[n] − w^H[n − 1]u[n]   (2.24)
w[n] = w[n − 1] + gf[n]η*[n]   (2.25)
P[n] = λ^{−1} (I − gf[n]u^H[n]) P[n − 1]   (2.26)

We note that the total computational complexity of the RLS update equations is 3K² + 2K multiplications per iteration. This comes from the three matrix-vector multiplications (one in equation (2.23) and two in equation (2.26)) and two dot products (one in each of equations (2.23) and (2.24)). Although the RLS algorithm has a large increase in computational complexity over the LMS algorithm per iteration, the convergence time is substantially reduced.

2.4 Evaluation of Adaptive Filter Performance

With so many adaptive filter algorithms, it is necessary to have methods to compare the performance of the filters. The major attributes of an adaptive filter that can be compared are the misadjustment and the convergence rate.
As an adaptive filter converges to the optimal filter, it will eventually oscillate about a steady state value, w(∞). The oscillation occurs because of the finite resolution of the step size: the filter cannot converge to the exact optimal filter. Instead, the path along the error performance surface attempts to reach that value as in steepest descent, but keeps overshooting. The misadjustment is a measure of how close this steady state value is to the theoretical optimal value wopt through the corresponding minimum cost function value Jmin. Assuming J[∞] = lim_{n→∞} J[n] exists, the misadjustment M is defined as:

M = J[∞] / Jmin   (2.27)

This equation is the ratio of the cost function evaluated at w(∞) and at wopt. As Jmin is the absolute minimum value that can be attained, the misadjustment is always greater than or equal to one. The smaller M is, the better the steady state performance of the filter.
An alternate way to understand the steady state error is to look at the mean squared error (MSE). The MSE is calculated by running the algorithm a number of times and averaging the results. The result is an approximation of J(e[n]). Since, for a given system and cost function, Jmin is constant, the misadjustment of various algorithms can be compared by looking at the MSE after more iterations than the convergence time.
Although the steady state performance of an adaptive filter is important, the filter may take a long time to converge to steady state. The convergence rate gives a measure of how fast the adaptive filter approaches the steady state conditions in the mean.
This quantity is usually measured as the average change in the MSE versus time. This quantity becomes important when dealing with real time applications. In such cases, it may be advantageous to use more complex algorithms that have faster convergence, such as the RLS-based algorithms.
Often, these two parameters are difficult to quantify, leading to visual methods of comparing these characteristics for competing algorithms. Error curves are one such visual aid, allowing both the convergence rate and the misadjustment to be viewed simultaneously. The error curve plots either the magnitude of the error, d[n] − w^H[n]u[n], the magnitude of the error squared, or the cost function over time, either linearly or in decibels (dB). The choice of scale depends on how easily the features of interest can be seen. The convergence rate can be found by observing the slope near the beginning of the curve, and the misadjustment can be calculated by observing the height of the curve at large n. To observe the misadjustment, decibel scale plots are usually optimal, since small changes in the misadjustment are easier to see. For convergence, fast convergence is easier to distinguish on a decibel scale, while slow convergence is easier to see on a linear scale.
The error curves can be viewed as the output of a single trial, or as an average of a series of runs. Single runs provide a more accurate representation of the steady state oscillations, while the averaging method eliminates these oscillations to better display the convergence rate and the steady state error.
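The averaging procedure described above can be sketched as follows. The LMS system-identification scenario, the step size, the noise level, and the trial count are all illustrative assumptions:

```python
import numpy as np

def lms_error_curve(h, mu, n_iter, rng):
    """Run one LMS identification trial and return |e[n]| over time."""
    K = len(h)
    w = np.zeros(K)
    x = rng.standard_normal(n_iter + K)
    err = np.empty(n_iter)
    for n in range(n_iter):
        u = x[n:n + K][::-1]                        # most recent K input samples
        d = h @ u + 0.01 * rng.standard_normal()    # desired output plus noise
        e = d - w @ u
        w += mu * u * e                             # LMS tap-weight update
        err[n] = abs(e)
    return err

# Averaging |e[n]| over independent trials smooths out the steady state
# oscillations, exposing the convergence rate and the steady state error.
rng = np.random.default_rng(1)
h = np.array([1.0, 0.5, -0.25])
curves = [lms_error_curve(h, mu=0.05, n_iter=400, rng=rng) for _ in range(100)]
avg_curve = np.mean(curves, axis=0)
```

Plotting avg_curve on a linear or decibel scale then gives the error curve comparison described in the text.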
An example plot is shown in Figure 2.2. This figure shows the plots of the error magnitude, averaged over 100 trials, of four filter types: LMS, RLS, Normalized LMS (NLMS) and RLS Adaptive Threshold Nonlinear Algorithm (RLS-ATNA). The plot shows that the RLS based algorithms have a much higher convergence rate and lower misadjustment than the LMS based algorithms. This increase in performance comes at the cost of a much higher computational complexity.

18

Figure 2.2: Error Curve Plot Comparison of Adaptive Algorithms

2.5 Variations of LMS and RLS Filters

Many derivatives of LMS and RLS, the two main adaptive algorithms, have been established. Each of these has been tailored to special situations in order to increase performance. Some of the algorithms are meant to deal with special noise conditions or other environmental conditions (e.g. non-stationarity), while others are designed to reduce the number of computations needed to run the algorithm.
Usually the variations used to increase performance are not used independently, but in conjunction with one another. This allows for an increase in performance in multiple areas. The issue then is that most algorithms that increase performance in one aspect or another require significantly more computations per iteration. Thus, although most variations can be merged together, the detriment in computational complexity may outweigh the gain in performance. The variations used to save computations, however, may lead to poorer performance than even the standard algorithms.
Thus there is a trade-off between the computational complexity of an algorithm and its performance under various conditions.
One of the most basic changes to the LMS and RLS algorithms is to make the parameters, such as the step-size, time-varying [3,24]. In the case of the LMS algorithm, this leads to the Adaptive Step Size LMS (ASS-LMS) algorithm, and in the RLS case, this translates into the Adaptive Forgetting Factor RLS (AFF-RLS) algorithm [11,13].
The ASS-LMS algorithm changes the step size µ into a function of time, µ[n]. A very basic example is µ[n] = µ^n for |µ| < 1, in which case the step size dies out exponentially with time. The idea here is that as the adaption algorithm converges on the optimal tap weights, the step size decreases in order to lower the misadjustment. The AFF-RLS algorithm considers a similar adaption of the forgetting factor as λ = λ[n].
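A minimal sketch of the ASS-LMS idea with such a geometrically decaying step size follows; the specific decay schedule, function name, and identification scenario are assumptions for illustration only:

```python
import numpy as np

def ass_lms(x, d, K, mu0=0.5, decay=0.995):
    """LMS with a decaying step size mu[n] = mu0 * decay**n, a simple
    instance of the adaptive-step-size idea described above."""
    w = np.zeros(K)
    for n in range(K, len(x)):
        u = x[n:n - K:-1]                     # most recent K input samples
        e = d[n] - w @ u                      # instantaneous error
        w += (mu0 * decay ** (n - K)) * u * e # step size shrinks over time
    return w
```

Early iterations take large steps for fast initial convergence, while later iterations take small steps to reduce the misadjustment.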
Another common variation, the Normalized LMS (NLMS) algorithm, normalizes the µu[n]e*[n] term by the total energy in the input vector. This effectively deals with cases where the input signals are large. When the vector u[n] has large components, the tap weights undergo large changes proportional to u[n]. In order to prevent this, the input vector is instead treated as a unit vector by dividing by the norm squared |u[n]|² = u^H[n]u[n]. This makes the step size and the error signal more dominant. An issue arises, however, when the norm is very small. In this case the tap weight change is again very large, because of the division by a small number. To account for this, a second parameter, δ, is introduced. The normalization factor is then modified to [δ + |u[n]|²]^{−1}, where δ is a small number, but large enough to prevent the tap weights from increasing too dramatically.
In RLS and LMS filtering, the adaptation of the tap weight filters is proportional to e[n]. Thus large errors correspond to large changes in the adaptive filter; this problem is similar to that addressed in the NLMS algorithm with the vector u[n].


For constant background noise, this does not cause too much of a problem, since the variance of this additive noise is usually small compared to the signal. For shot noise, however, the variance is relatively large, resulting in a very large error regardless of the difference between the adaptive tap weights and the optimal tap weights. Shot noise is defined as a Poisson process, where the events are characterized as additive independent random numbers following a Gaussian distribution with a much larger variance than the background noise (σ²_shot >> σ²_ν). Thus for regular RLS and LMS filters, this causes the tap weights to change dramatically, and the filter needs to re-converge on the optimal filter weights.
For the RLS filter type, a derivative has been proposed to combat this effect.
The RLS-ATNA filter makes use of a nonlinear function of the error, f (e[n]) to limit the amount by which the tap weights can change at any given iteration. Again the comparison is made with the NLMS algorithm, where a similar term is introduced.
The difference is that the RLS-ATNA has higher adaptability within the nonlinear section of the algorithm. For example, a closely related algorithm proposed by Koike in [11] uses f(e[n]) = (x + y|e[n]|^k)^{−1}, where both x and y can also be functions of n.
Figure 2.3 shows the superior performance of the NLMS and RLS-ATNA algorithms to the regular LMS and RLS algorithms under shot noise.
Since most of the applications of adaptive filtering are real-time, the time it takes to compute each iteration of the algorithm is important. This is directly related to the computational complexity, or the number of addition and multiplication operations needed for every iteration. Since it is generally accepted that multiplication operations in very large scale integration (VLSI) digital signal processing (DSP) systems take an order of magnitude greater time to compute than addition operations, it is the number of multiplications that usually dominates the processing time. This is because, in the simplest sense, a multiplication unit is a set of adders [9].

Figure 2.3: Error Curve Plot Comparison of Adaptive Algorithms with Shot Noise
Based on this, methods have been devised to reduce the complexity of adaptive filtering [7]. One of the main ways of accomplishing this is through batch processing. In batch processing, each iteration is not executed as soon as the data is collected; instead the data is buffered. When the desired amount of data is obtained, fast matrix operations are used instead of vector operations, allowing for faster calculations. For example, fast convolution techniques can be used over a range of inputs instead of filtering directly [18]. Batch processing is one of the few derivatives that focus on saving computation time rather than increasing performance.
These batch processing techniques are intrinsically tied to frequency domain processing. In the fast convolution case, this is manifested in performing Fast Fourier Transforms (FFT) in order to reduce an N² process to an N log(N) process. This connection to frequency domain calculations leads to the formation of subband adaptive filtering techniques that will be discussed in detail in Chapter 4.


Chapter 3

Multirate Filter Banks

3.1 Basics of Multirate Filter Banks

Multirate filter bank theory deals with methods of designing and implementing multiple input-multiple output (MIMO) systems, with subsystems possibly running at different rates. One specific application uses filter bank theory results to calculate the outputs of a long FIR filter using M filters in parallel [25]. This application makes use of the decimation and interpolation operations to change the data rates in such a way that the filtering operation is performed quickly while maintaining the desired output. The interpolation and decimation operations increase and decrease, respectively, the rate of the incoming signal. The decimation operation takes every M th sample of the incoming signal, and drops the rest, effectively resampling the incoming signal at (1/M )th the initial rate. Thus if the initial signal had a time series h[n] =
{h[0], h[1], h[2], . . .}, the decimated signal would be h[M n] = {h[0], h[M ], h[2M ], . . .}.


The z-transform expression is given by:

Z[h[Mk]] = (1/M) Σ_{m=0}^{M−1} H(z^{1/M} e^{−2jmπ/M})   (3.1)
where the 2mπ/M terms for m ≠ 0 come from the Mth roots of unity of the complex number z. The frequency domain expression can then be found by using z = e^{jω}, yielding:

(↓M)H(ω) = (1/M) Σ_{m=0}^{M−1} H((ω − 2mπ)/M)   (3.2)
In equations (3.1) and (3.2) the m ≠ 0 terms are referred to as the aliasing terms.
Interpolation, on the other hand, inserts M − 1 zeros between any two consecutive samples of the incoming signal, simulating a sampling at M times the initial sampling rate. The time domain signal is then hup[n] = h[k] if n = kM and zero otherwise. The z-transform is then:

Z[hup[n]] = H(z^M)   (3.3)

In the frequency domain this becomes:

(↑M)H(ω) = H(Mω)   (3.4)

Equation (3.4) indicates M replications of the spectrum centered at 2πm/M. The spectra at the m ≠ 0 locations are called imaging terms. It is important to note that the periodic nature of the frequency response of the discrete time Fourier transform (DTFT) needs to be taken into account. This implies that when resampling the signals, either aliasing or imaging can occur. For decimation, the effect is an aliasing of other copies into the (−π, π) range. For interpolation, the contraction brings in duplicates of the frequency response to populate the rest of the (−π, π) range. Figure 3.1 displays this concept graphically. Thus to achieve adequate separation of specific frequency bands, both anti-aliasing and anti-imaging filters need to be implemented.
The cutoff frequencies for these bands depend on the decimation factor being used, since that decimation factor will dictate which spectral bands will be aliased or imaged.
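The two resampling operations can be sketched directly in numpy (the function names are assumptions). Decimation keeps every Mth sample; zero interpolation inserts M − 1 zeros, which replicates the spectrum M times as in equation (3.4):

```python
import numpy as np

def decimate(x, M):
    """Keep every Mth sample: h[n] -> h[Mn]."""
    return x[::M]

def zero_interpolate(x, M):
    """Insert M-1 zeros between samples; the DFT of the result is the
    original DFT replicated M times (the imaging terms)."""
    y = np.zeros(len(x) * M)
    y[::M] = x
    return y
```

The imaging effect is easy to confirm: the DFT of the zero-stuffed signal is exactly the original DFT tiled M times, which is why an anti-imaging filter is needed after interpolation.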

Figure 3.1: Decimation and Interpolation Frequency Transformations
Figure 3.2 shows the full structure of a filter bank complete with anti-aliasing and anti-imaging filters. In Figure 3.2, Hk (z) represents the anti-aliasing filter for the k th band, Fk (z) represents the anti-imaging filter, Gk (z) is the k th subband filter, and
M is the resampling factor. The Hk filters comprise what is called the analysis filter bank, while the Fk filters comprise the synthesis filter bank.
In general, not every band has to have the same decimation factor. Non-uniform subband filters utilize different decimation factors to isolate bands of different widths.
This leads to the concept of undersampling, oversampling and critical sampling.

Figure 3.2: M Channel Uniform Filter Bank

Oversampling is when the number of samples exiting the analysis bank is greater than the number of samples entering. This corresponds to the sum of the inverse decimation factors over all the bands being greater than one. Undersampling is when fewer samples leave the analysis bank than enter (the sum of the inverse decimation factors is less than one), and critical sampling is when exactly the same number of samples leave as enter (the sum equals one). In terms of information retention, critical sampling is the ideal case, since the information entering the system can always be perfectly represented by the exiting samples with no redundancy.
Here we consider the insertion of adaptive subband filters, Gi (z), between the analysis and synthesis banks, as shown in Figure 3.2. The analysis bank, Hi (z), and synthesis bank, Fi (z), are then adjusted with the critical sampling constraint to achieve a dynamical subband decomposition. The adaptive subband filters Gi (z) are adapted by algorithms similar to the LMS and RLS algorithms in order to achieve the underlying goals of the system.
The aliasing and imaging can also be used to the advantage of a system designer through what are called the Noble Identities. It can be shown that feeding the output


of a filter H(z M ) into a decimation operation (↓ M ) is identical to first decimating by
M , and then filtering by H(z). Similarly, zero-interpolating by M , and then filtering by H(z M ) is equivalent to first filtering by H(z) and then interpolating by M .
This leads to one of the simplest and most effective ways to use such a filter bank in the FIR case: polyphase decomposition. Polyphase decomposition breaks an FIR filter into its M polyphase components, or components which only contain samples of the form Mn + k for fixed k (0 ≤ k ≤ M − 1). In terms of the z-transform, this is represented as:

H(z) = Σ_{i=0}^{M−1} z^{−i} Ei(z^M)   (3.5)

where:

Ei(z) = Σ_{l=0}^{N} a_{lM+i} z^{−l}   (3.6)

Using this decomposition, any FIR filter can be made to fit in the structure of Figure 3.2. In this specific structure (called a polyphase filter bank), the analysis and synthesis banks simply consist of delays, while the Noble identities are used to push the Ei(z^M) terms through the decimation operation. Thus instead of one NM-length filter with NM multiplications per time-step, M filters of length N are used.
This requires only N computations per time-step due to the resampling. This concept will be important when the filter bank is adaptive, and adapting the center filters separately will further reduce the number of necessary computations.
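The polyphase identity above can be checked numerically. The following sketch (function name and test signal are illustrative assumptions) computes a filtered-and-decimated output through M low-rate branches e_i[l] = h[lM + i], and should agree with direct filtering followed by decimation:

```python
import numpy as np

def polyphase_decimate(h, x, M):
    """Compute (h * x) decimated by M through M polyphase branches, each
    running at 1/M of the input rate (the Noble identity in action)."""
    branches = []
    for i in range(M):
        e_i = h[i::M]                                   # polyphase component E_i
        if i == 0:
            x_i = x[::M]                                # x_0[m] = x[Mm]
        else:
            x_i = np.concatenate(([0.0], x[M - i::M]))  # x_i[m] = x[Mm - i]
        branches.append(np.convolve(e_i, x_i))
    n = max(len(b) for b in branches)
    y = np.zeros(n)
    for b in branches:                                  # sum the branch outputs
        y[:len(b)] += b
    return y
```

Each branch convolves a length-N polyphase component with a decimated input, so the total work per output sample is N multiplications instead of NM.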
A special case of filter banks is the Quadrature Mirror Filter (QMF), or a filter bank with only two channels where H0 (z) = H1 (−z) = F0 (z) and F1 (z) = −H1 (z).
In this case the only filter required to be designed is a low-pass filter, H0 (z). The


QMF structure is popular because of the ease of satisfying certain conditions, as will be discussed in section 3.2. More general subband decompositions can also be constructed using embedded QMF filters. This will be discussed further in section
3.3.

3.2 The Perfect Reconstruction (PR) Condition

When a signal is broken up into its subband components, it is often desired to have the analysis and synthesis filter banks have a minimal effect on the signal. This leads to the idea of the perfect reconstruction (PR) condition. In essence, the PR condition states that: a) the aliased terms usually found in the output due to the multi-rate operations are all fully canceled out and b) the remaining total transfer function acting on the signal is reduced to a constant gain and a delay.
The analysis bank transfer function H(z) is defined by Y(z) = H(z)X(z), where X(z) is the z-transform of the input signal and Y(z) is the M×1 vector consisting of the outputs which feed into the decimation blocks. The synthesis filter bank matrix F(z) is defined by Y(z) = F(z)X(z), where X(z) is the M×1 vector consisting of the outputs of the interpolation blocks and Y(z) is the output of the filter bank. By defining Ei,k to be the kth polyphase component of the ith analysis filter, and Ri,k to be the ith polyphase component of the kth synthesis filter, the analysis filter matrix


H(z), and the synthesis filter matrix F(z) can be reformulated as follows:
H(z) = [ H0(z) H1(z) . . . HM−1(z) ]^T = E(z^M) [ z^0 z^{−1} . . . z^{1−M} ]^T   (3.7)

F(z) = [ F0(z) F1(z) . . . FM−1(z) ] = [ z^{1−M} z^{2−M} . . . z^0 ] R(z^M)   (3.8)

where E(z) is the analysis polyphase matrix, with (i, k) entry Ei,k(z), and R(z) is the synthesis polyphase matrix, with (i, k) entry Ri,k(z). The PR condition can then be formulated as:

E(z)R(z) = c z^{−∆} I   (3.9)

where c and ∆ are constants [25]. For the special case of two band filter banks, as shown in Figure 3.3, the PR condition on the analysis and synthesis filter banks is


given by:

H0(−z)F0(z) + H1(−z)F1(z) = 0   (3.10)
H0(z)F0(z) + H1(z)F1(z) = C   (3.11)

for some constant C. Equation (3.10) cancels the aliased term, while equation (3.11) ensures a distortionless overall response. In the QMF case, these constraints reduce to constraints on H0(z), the low-pass filter, only:
H̃0(z)H0(z) − H̃0(−z)H0(−z) = 0   (3.12)
H̃0(z)H0(z) + H̃0(−z)H0(−z) = C   (3.13)

Figure 3.3: Two Channel Filter Bank
In general, the PR condition for FIR analysis and synthesis filter banks is difficult to design for. An alternative to achieve approximate PR is the near perfect reconstruction (NPR) condition. The NPR condition is a weakened version of the PR condition, and states that the constraints are not met precisely, but instead to within some tolerance. Thus, although a small aliased term exists, its magnitude is below some acceptable threshold. This allows iterative algorithms to use established FIR filter design methods to minimize the aliased terms to within an acceptable range [1].
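The PR condition can also be verified numerically for a concrete two-channel bank. The sketch below uses the Haar pair, an illustrative orthogonal choice rather than a filter designed in this thesis, and checks that the analysis/synthesis cascade of Figure 3.3 returns a delayed copy of the input:

```python
import numpy as np

def analysis_synthesis(x, h0, h1, f0, f1):
    """Two-channel bank: analyze (filter, keep even-index samples), then
    synthesize (zero-stuff, filter, sum), as in Figure 3.3."""
    v0 = np.convolve(x, h0)[::2]                # subband 0, decimated by 2
    v1 = np.convolve(x, h1)[::2]                # subband 1, decimated by 2
    u0 = np.zeros(2 * len(v0)); u0[::2] = v0    # zero interpolation
    u1 = np.zeros(2 * len(v1)); u1[::2] = v1
    y0 = np.convolve(u0, f0)
    y1 = np.convolve(u1, f1)
    n = max(len(y0), len(y1))
    y = np.zeros(n)
    y[:len(y0)] += y0
    y[:len(y1)] += y1
    return y

# Haar bank: a pair known to satisfy PR with unit gain and a one-sample delay.
s = 1 / np.sqrt(2)
h0 = np.array([s, s]); h1 = np.array([s, -s])
f0 = np.array([s, s]); f1 = np.array([-s, s])
```

For this pair the aliased terms cancel exactly and the overall response is a pure delay, so the reconstructed signal equals the input shifted by one sample.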


3.3 Tree Structured Filter Banks

The tree structured filter bank is a structure that allows filter banks with large numbers of subbands to be constructed from smaller filter banks with fewer subbands. These structures can result in either uniform or non-uniform subband decompositions. For example, by implementing a structure such as that shown in Figures 3.4 and 3.5, a non-uniform subband decomposition of three bands can be obtained.
This is because the upper subband, obtained by high-pass filtering and then decimating by two, is then split again by a second embedded QMF filter bank. The result is one half-band subband and two quarter-band subbands.
This process of embedding filter banks within filter banks can be used to obtain a wide variety of decompositions. For example, to obtain a six channel, uniform filter bank, two three channel filter banks can be embedded in each subband of a QMF filter bank. The effective decimation factor for each resulting channel is then the product of all decimation operations leading up to that channel.
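The two-tier construction can be sketched with an embedded Haar QMF stage (an illustrative choice): splitting once and then splitting the upper band again yields one half-band and two quarter-band channels, and critical sampling is preserved since 1/2 + 1/4 + 1/4 = 1.

```python
import numpy as np

def haar_split(x):
    """One QMF stage: Haar low/high analysis followed by decimation by 2,
    written directly on non-overlapping sample pairs."""
    s = 1 / np.sqrt(2)
    lo = s * (x[0::2] + x[1::2])
    hi = s * (x[0::2] - x[1::2])
    return lo, hi

# Two-tier tree as in Figure 3.4: split, then split the upper band again.
x = np.random.default_rng(2).standard_normal(64)
lo, hi = haar_split(x)          # half-band channel (effective factor 2)
hi_lo, hi_hi = haar_split(hi)   # two quarter-band channels (effective factor 4)
```

Because each stage is critically sampled and orthonormal, the three channels together carry exactly as many samples, and exactly as much energy, as the input.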
Figure 3.4: Two Tier Tree Structure Analysis Bank With Subband Filters
One of the main reasons to use tree structured filter banks is the ease of designing for the PR condition. It can be shown that as long as each filter bank satisfies the
PR condition, the overall resulting structure also satisfies the PR condition [25].

Figure 3.5: Two Tier Tree Structure Synthesis Bank

In addition, non-uniform subband decompositions satisfying the PR condition can be realized with less effort; usually, methods for creating arbitrary M channel PR filter banks rely on cosine modulation, and result in uniform decompositions [12]. Being able to transform small uniform decomposition filter banks into large non-uniform decompositions greatly relieves the computational burden of designing and realizing the desired system.


Chapter 4

Subband Adaptive Filtering

4.1 Background
Subband adaptive filtering is a well-developed concept that uses the resampling operations of a filter bank to lower the number of tap weights per adaptive filter. For example, in echo cancellation problems, the signal is restricted to either the bandwidth of human hearing or the bandwidth of human speech. The filter therefore does not need to take into account the part of the spectrum outside of this area, since a pre-determined band-pass filter can easily cancel out any noise in that region. Appropriate filter banks can thus isolate this band, and a filter H(z) can be adapted to operate solely on that band.
For illustration, consider a system with a sampling frequency of 80KHz where the signal of interest is in the audible range of 20-20000Hz. If an LMS-type algorithm of length M is used, the corresponding number of multiplications per iteration would be
2M [7]. Consider then, instead, if low- and high-pass filters of length L were used to decompose the signal in QMF form. Each of the resulting subband filters would be of length M/2. The total number of multiplications per iteration is then L per analysis

and synthesis filter, a total of 4L, and M per subband filter, a total of 2M . This total is then divided by two since the system is now half-rate, for a total of 2L + M multiplications. Thus as long as L < M/2, computational savings can be realized.
For the case of very large FIR adaptive filters, these savings are quite substantial.
In this example, additional savings can be achieved if the high-pass filter branch is completely ignored. This is usually acceptable since all of the relevant information is in the lower half of the spectrum.
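The arithmetic above can be sketched as a small calculation (the function names are illustrative):

```python
def lms_fullband_mults(M):
    """Multiplications per iteration for a length-M fullband LMS filter."""
    return 2 * M

def qmf_subband_mults(M, L):
    """Per-iteration cost of the QMF scheme described above: 4L for the
    analysis and synthesis filters plus 2M for the two half-length subband
    filters, divided by two because the subband system runs at half rate."""
    return (4 * L + 2 * M) // 2       # = 2L + M
```

The subband cost 2L + M beats the fullband cost 2M exactly when L < M/2, matching the condition stated above.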

4.2 Uniform Subband Adaptive Filtering

In uniform subband adaptive filtering, the subband decomposition is done in such a way that all the resulting subbands are of equal width. Therefore, for an M band filter bank, each of the subband filters has length N/M, where N is the length of an equivalent fullband filter. These subband filters are updated with respect to the subband error signals ei[n] = di[n] − d̂i[n]. Here di[n] is the desired output of the ith subband and d̂i[n] is the output of the adaptive filter in the same band.
In subband adaptive filtering, both the input and desired output are passed through an analysis filter in order to ensure that all related systems are operating at the same rate. In general, the analysis filter for the input does not have to be identical to that of the desired output, since the adaption occurs post-decimation. As long as the analysis filter for the desired output and the synthesis filter satisfy the perfect reconstruction property, all that has to match are the rates [28].
Methods have been proposed to use this freedom of design to optimize the analysis filter for the input with respect to the MSE criterion, but here only the case where the two analysis banks are equivalent is considered [28]. This case was chosen since the subband adjustment algorithm presented is dependent on the input-output characteristics of the subbands of the unknown filter. Thus the power in each spectral region of the output must be compared to the power of the same spectral region of the input.
A variety of uniform subband algorithms have been proposed, based on both the LMS and the RLS concepts [5, 19, 21, 23, 26]. Originally, oversampled systems were proposed in order to compensate for any information loss due to slight deviations from the PR condition. Recently, though, critically sampled systems have been of more interest, as they allow the minimum number of samples to be processed without loss of information. For critically sampled systems, uniform band filters have been proposed for both the LMS and RLS algorithms. However, only LMS algorithms have been applied to non-uniform subband structures [20].
In critically sampled systems, there are two main disadvantages to using subband adaptive filtering. The first is an inherent increase in convergence time due to the adaptation taking place at a slower rate. The second is an increase in the MSE due to the lack of accounting for the aliased terms in the bands. This effect can be seen by relating the response of the unknown filter through the analysis bank, Dideal(z), to the response of the analysis bank through the adaptive filters, D̂(z). Here Dideal(z) is an M×1 vector whose ith entry is the ith subband component of the ideal response, and D̂(z) is the M×1 vector containing the estimate of Dideal(z) (the values directly before the synthesis filter). Define H(z) as the M×M matrix with entries Hi,k(z) = Hi(ze^{−j(k−1)2π/M}), where Hi(z) is the ith analysis band filter; G(z) as the M×M subband transfer function matrix; X(z) as the M×M diagonal matrix with entries Xk,k(z) = X(ze^{−j(k−1)2π/M}), where X(z) is the unknown filter to be estimated; and U(z) as the M×1 vector with entries Ui(z) = U(ze^{−j(i−1)2π/M}), where U(z) is the z-transform of the input signal. The condition that these responses should be equal is given by [6] as:
H(z^{1/M})X(z^{1/M})U(z^{1/M}) = G(z)H(z^{1/M})U(z^{1/M})   (4.1)

In equation (4.1), H(z^{1/M})X(z^{1/M})U(z^{1/M}) is the ideal response Dideal(z) and takes into account the aliasing present due to the decimation operation. Similarly, G(z)H(z^{1/M})U(z^{1/M}) is the estimate of Dideal(z), D̂(z), also including aliasing effects. By applying the transformation z → z^M, the relationship H(z)X(z) = G(z^M)H(z) follows directly. Using the PR condition, F(z) = z^L H^{−1}(z) can be used to invert H(z) to obtain equation (4.2). G(z) can be expressed by [6]:

G(z^M) = H(z)X(z)F(z)   (4.2)

which is equivalent, element-wise, to:

Gi,k(z) = Σ_{l=1}^{M} Hi(ze^{2πj(l−1)/M}) X(ze^{2πj(l−1)/M}) Fl(ze^{2πj(k−1)/M})   (4.3)

This shows that, in general, cross-terms between all subbands are required in order to perfectly model the unknown filter X(z). More specifically, these terms depend on the products Hi(ze^{2πj(l−1)/M}) Fl(ze^{2πj(k−1)/M}). In some special cases, such as ideal rectangular filters with no overlaps, these cross-terms are zero and cancel out. As an example, consider the two band case with Fi(z) = Hi(z). Equation (4.2) can then be expressed as [6]:



G(z²) =
  [ H0²(z)X(z) + H1²(z)X(−z)      H0(z)H1(z)(X(z) + X(−z)) ]
  [ H0(z)H1(z)(X(z) + X(−z))      H1²(z)X(z) + H0²(z)X(−z) ]   (4.4)

Here the off diagonal elements illustrate the dependency on the product of the filters
H0 (z)H1 (z).
For the LMS subband adaptive structure, the cost function is modified from that previously presented in section 2.2 to be the mean of the sum of the squares of the errors of all subbands:

J(e[n]) = E[ Σ_{i=1}^{M} |ei[n]|² ]   (4.5)

The error in each subband is defined as the output of the subband filter in that channel subtracted from the corresponding desired signal.
In the diagonal case, the subband filter structure is such that only one filter,
Gi,i (z), connects the output of the ith analysis bank channel to the ith synthesis bank channel, as shown in Figure 4.1. The transfer function matrix then has the form:


G(z) = diag( G0(z), G1(z), . . . , GM−1(z) )   (4.6)

In this case the error ei [n] is only dependent on the corresponding filter Gi,i (z).
Thus, by taking the gradient with respect to those filter weights, all the other error terms are eliminated. The resulting update equations are then the same as the equations in section 2.2 for each channel, independently.

Figure 4.1: Diagonal Subband Filter Structure

The number of multiplications needed per iteration is calculated to be 3L + 2K/M. This is derived from the 3LM multiplications needed to calculate the output of the synthesis and two analysis banks, and the 2KM/M multiplications it takes to update M LMS filters each of length K/M. When the factor of 1/M is accounted for (since the filter is run at (1/M)th the total rate), the total savings in computations is 2K(M − 1)/M − 3L multiplies. In the diagonal structure there is a lack of the cross-terms shown in equations
(4.2) and (4.3) that allow for exact modeling of the system under general conditions.
Therefore an alternate filter structure has been proposed in which the transfer function matrix is not diagonal but tridiagonal [6], [19]. As was shown in equation (4.2), the cross-terms that are required to adapt to an arbitrary unknown filter X(ω) depend on the products H_i(z)F_j(z). The tridiagonal structure is motivated by the assumption that such products are zero for |i − j| ≥ 2. Thus all terms aside from G_{l,l}(z), G_{l,l−1}(z) and G_{l,l+1}(z) in equation (4.3) become zero. The cross-terms are then introduced as shown in Figure 4.2; the corresponding transfer function matrix is:

G(z) = \begin{bmatrix}
G_0(z) & G_1(z) & 0      & \cdots & 0          & 0 \\
G_0(z) & G_1(z) & G_2(z) & \cdots & 0          & 0 \\
0      & G_1(z) & G_2(z) & \cdots & 0          & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots     & \vdots \\
0      & 0      & 0      & \cdots & G_{M-2}(z) & G_{M-1}(z) \\
0      & 0      & 0      & \cdots & G_{M-2}(z) & G_{M-1}(z)
\end{bmatrix}    (4.7)

This structure uses a larger analysis bank, with M channels filtered by H_i^2(z) and M − 1 channels representing the transition regions, filtered by H_i(z)H_j(z). The adaptive filters are then defined by G_{i,j}(z) = G_{2i−j}(z) along the three main diagonals. The cross-terms with the analysis filters H_i(z)H_j(z) are motivated by equation (4.3) in the special case where F_i(z) = H_i(z). The resulting number of computations is higher because, in addition to the filtering operations performed in the diagonal design, each analysis filter for the input is twice as long and extra filtering operations are needed for the off-diagonal terms.
When minimizing the cost function in the tridiagonal structure, not all the error terms cancel out. This leads to a very large increase in the number of computations for the update of the subband filter tap weights. The update equations:

E_k[n] = d_k[n] − X_{k,k}^H[n] G_k[n] − X_{k−1,k}^H[n] G_{k−1}[n] − X_{k,k+1}^H[n] G_{k+1}[n]    (4.8)

G_k[n+1] = G_k[n] + μ_k \left( X_{k,k}[n] E_k[n] + X_{k−1,k}[n] E_{k−1}[n] + X_{k,k+1}[n] E_{k+1}[n] \right)    (4.9)

reflect this fact, as they are no longer simply a function of the data vector associated with that particular subband [19]. In equations (4.8) and (4.9), X_{k,l}^H is the input u_i filtered by the cascaded filter H_k(z)H_l(z). Thus k = l gives the main subbands, while k ≠ l gives the cross-terms introduced into the system. A full derivation of the tridiagonal update equations for the case of real inputs and filter coefficients can be found in [19].
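The error and update recursions (4.8)–(4.9) can be sketched as follows (a minimal NumPy illustration for the real-valued case, not the thesis's implementation; the array layout and function name are assumptions):

```python
import numpy as np

def tridiag_lms_update(G, X, d, mu):
    """One tridiagonal subband LMS iteration, transcribing (4.8)-(4.9).

    G  : (M, K) tap-weight vectors G_k[n]
    X  : (M, M, K); X[k, l] is the data vector filtered by H_k(z)H_l(z),
         used only for |k - l| <= 1 (the tridiagonal cross-terms)
    d  : (M,) desired subband samples d_k[n]
    mu : (M,) per-band step sizes
    """
    M = G.shape[0]
    E = np.zeros(M)
    for k in range(M):
        # Equation (4.8): own-band term minus the two neighboring cross-terms.
        E[k] = d[k] - X[k, k] @ G[k]
        if k > 0:
            E[k] -= X[k - 1, k] @ G[k - 1]
        if k < M - 1:
            E[k] -= X[k, k + 1] @ G[k + 1]
    G_new = G.copy()
    for k in range(M):
        # Equation (4.9): the gradient collects the band's own error and the
        # errors of both neighbors, tripling the per-band update cost.
        upd = X[k, k] * E[k]
        if k > 0:
            upd = upd + X[k - 1, k] * E[k - 1]
        if k < M - 1:
            upd = upd + X[k, k + 1] * E[k + 1]
        G_new[k] = G[k] + mu[k] * upd
    return G_new, E
```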
In terms of the computational complexity, the filtering in the analysis and synthesis banks increases to LM + 2L(M − 1) computations per M iterations. The LM term comes from the extra filtering by the H_i^2(z) in the analysis bank for the input, and the 2L(M − 1) term comes from the M − 1 cross-term filters of length 2L. The biggest increase, though, comes from the tripling of the computations needed to update the LMS filters. In the diagonal LMS subband structure, each branch entailed 2K computations; in the tridiagonal case, accounting for the cross-terms adds two more terms to each equation, tripling the computations to 6K. The total computational complexity is then (3LM + 6K)/M.
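The complexity figures above can be collected in one small helper (an illustrative sketch; the function name is an assumption):

```python
def multiplies_per_input(K, L, M, structure):
    """Approximate multiplications per fullband input sample for the LMS
    structures discussed above (K: fullband filter length, L: analysis and
    synthesis prototype length, M: number of subbands)."""
    if structure == "fullband":
        return 2 * K
    if structure == "diagonal":
        return (3 * L * M + 2 * K) / M      # Table 4.1
    if structure == "tridiagonal":
        return (3 * L * M + 6 * K) / M      # cross-terms triple the update cost
    raise ValueError(structure)
```

For example, with K = 1024, L = 32 and M = 8, the diagonal structure needs 352 multiplies per sample against 2048 for the fullband filter; the difference is exactly the 2K(M − 1)/M − 3L savings derived earlier.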
The same two topologies that were applied to the LMS subband filtering have been applied to the RLS algorithm. In the case of the subband RLS algorithm, the cost function to minimize is the sum of the least squares errors:
J(e[n]) = \sum_{i=0}^{M-1} \sum_{l=0}^{n} \lambda_i^{n-l} |e_i[l]|^2    (4.10)

This is similar to solving M simultaneous least squares problems.
For the diagonal case, as with the LMS algorithm, the error in the ith band depends only on G_i(z). Therefore the differentiation retains solely that term, and the algorithm is identical to that of section 2.3, applied to each subband independently.
In the tridiagonal case the error again depends on the cross-terms from both neighboring bands. The full derivation of this algorithm by Alves and Petraglia can be found in [23], and results in the following update equations:

k[n] = P[n − 1] χ[n] \left( λI + χ^H[n] P[n − 1] χ[n] \right)^{−1}    (4.11)

P[n] = λ^{−1} \left( I − k[n] χ^H[n] \right) P[n − 1]    (4.12)

ρ[n] = λ ρ[n − 1] + χ[n] d[n]    (4.13)

G[n] = P[n] ρ[n]    (4.14)
where χ[n] is a tridiagonal matrix consisting of the data; specifically, χ_{i,k}[n] = u_{i,k}[n] for |i − k| ≤ 1 and zero otherwise. The total number of computations required for the tridiagonal subband RLS filter is given by [23] and is shown in Table 4.2.
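One iteration of the recursion (4.11)–(4.14) can be sketched as follows (a NumPy illustration under assumed dimensions, not the code of [23]; a practical implementation would exploit the tridiagonal structure of χ[n] instead of dense algebra):

```python
import numpy as np

def tridiag_rls_step(P, rho, chi, d, lam):
    """One iteration of the subband RLS recursion (4.11)-(4.14).

    chi : (T, B) data block chi[n] (tridiagonal in the subband case)
    d   : (B,) desired-output block
    lam : forgetting factor lambda
    """
    B = chi.shape[1]
    # (4.11): gain matrix
    k = P @ chi @ np.linalg.inv(lam * np.eye(B) + chi.conj().T @ P @ chi)
    # (4.12): inverse-correlation matrix update
    P_new = (np.eye(P.shape[0]) - k @ chi.conj().T) @ P / lam
    # (4.13): cross-correlation accumulator
    rho_new = lam * rho + chi @ d
    # (4.14): tap weights solved directly
    G = P_new @ rho_new
    return P_new, rho_new, k, G
```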
Algorithm    Fullband        Diagonal Uniform Subband
LMS          2K              (3LM + 2K)/M
RLS          2K^2 + 5K       (3LM^2 + 3K^2 + 2KM)/M^2

Table 4.1: Number of Computations for Various Adaptive Algorithms
Algorithm    Number of Computations
LMS          (3LM + 6K)/M
RLS          K(3M + M^3 + 3M^2 + 2) + M^2 + 6LM − LM

Table 4.2: Number of Computations for Tridiagonal Subband Adaptive Algorithms

4.3  Non-Uniform Subband Adaptive Filtering

In the non-uniform subband case, the decimation factors are unequal, as shown in Figure 4.3. This means that different adaptive filters run at different sampling rates and over different spectral regions, so the subband filters themselves are of different lengths. As in the uniform subband case, both diagonal and tridiagonal forms have been proposed for this subband decomposition. In the diagonal case, taking the gradient with respect to the ith filter eliminates all terms except those involving G_{i,i}(z).
In the tridiagonal case, however, the structure becomes more complicated than in the uniform case. This is because branches running at different rates are added together. Thus delays have to be placed within the cross-term connections. Further details can be found in [20].
The motivation for the non-uniform subband decomposition is the ability of a large subband to account for all internal cross-terms. Specifically, if one subband spans the spectral region which a uniform subband decomposition would require multiple subbands to span, the cross-terms between all those subbands would be accounted for.
The cross-terms between any two non-uniform subbands still need to be accounted for, hence the derivation of the tridiagonal non-uniform subband filters [20]. If, however, the filtered signal U (z)Hi (z) contains no power, no aliased terms can exist and the cross-terms disappear. Thus cross-terms are not necessary where there is no signal power, an idea that will be made use of when designing the subband allocation algorithm in section 5.3.
With respect to the number of computations per iteration, the difference in the rates at which the filters run causes two different increases. First, the branches with smaller decimation factors have larger adaptive filter lengths, because those branches cover larger spectral regions. Second, for every M iterations, the largest adaptive filters (those in the branches with the smallest decimation factors) are updated most often. The resulting computational complexities for such algorithms are detailed in Table 4.3.
Algorithm    Number of Computations
LMS          \sum_{i=1}^{N} \left( 3L_i/M_i + 2K_i/M_i^2 \right)
RLS          \sum_{i=1}^{N} \left( 3L_i/M_i + 3K_i^2/M_i^3 + 2K_i/M_i^2 \right)

Table 4.3: Number of Computations for Non-Uniform Subband Adaptive Algorithms

4.4  Properties of Subband Adaptive Filters

Regarding performance, subband adaptive filters suffer from both a longer convergence time and a higher misadjustment. The reason for the slower convergence is the lower rates at which the subsystems run. If, for a given signal-to-noise ratio and unknown system, the fullband LMS algorithm takes, say, one hundred iterations to converge, running the system at half rate means that the entire system converges in approximately two hundred iterations. This has been demonstrated for both the RLS and LMS cases [21], [23]. The higher misadjustment has been attributed to the lack of cross-terms between bands [22], [17]. Although the cross-terms introduced in the tridiagonal filter bank structure seek to mitigate this, the decrease in the steady-state error does not justify the extra computations required [22].
The non-uniform subband decomposition can achieve better performance than the uniform decomposition if the correct decomposition is chosen. In order to compensate for the cross-terms where necessary, larger subbands can be used over a spectral region. These larger subbands have the added benefit of increasing the convergence rate, since those bands run at higher rates. In addition, small subbands can be chosen over areas where there is no signal, thereby decreasing the overall computational complexity. Taking full advantage of these benefits, though, requires knowledge of the signal/system pair, since the subbands must be set up prior to any computations. To utilize the non-uniform subband decomposition effectively for an arbitrary system, it is necessary to be able to change the subband widths to suit the input-output characteristics of the unknown filter.
Figure 4.2: Tridiagonal Subband Filter Structure


Figure 4.3: Non-Uniform Diagonal Subband Filter Structure


Chapter 5

Adjustable Subband Adaptive Filtering

5.1  Motivation for Adjustable Subband Filter Banks

Both uniform and non-uniform subband adaptive filtering reduce the total number of computations needed to adapt the filter at each time increment. The issue with each is that they are highly dependent on the unknown system's transfer function properties. Non-uniform subband designs are more desirable because they take all the cross-terms within an entire band into account, while simultaneously reducing the number of computations in the spectral regions of less interest. The problem that arises, then, is that the input-output characteristics are not always known a priori. Therefore, it would be advantageous to have an algorithm which can determine these characteristics and adjust the filter bank accordingly, merging bands where needed to account for the cross-terms between those spectral bands.


5.2  Previous Work

Previous work in the area of adjustable subbands for use in non-uniform subband adaptive filters has mostly focused on the use of oversampled systems. An algorithm proposed by Grieshbach and Etter sought to adjust the subband decomposition in order to minimize the mean squared error [8]; the focus of the algorithm was to isolate transition bands of the unknown filter to reduce the misadjustment.
Following this work, a structure to ease the implementation of such algorithms was proposed by Oakman and Naylor [17]. This structure utilizes a tree bank similar to the one proposed here, with a similar initialization procedure. The difference, however, is that the structure they propose keeps unused branches, simply moving out the filters past the QMF branches. This means that the initial analysis bank remains the same throughout the duration of computations, while the synthesis bank is the one that is adapted. The method used is to combine the two bands that need to be merged by moving up the synthesis filters associated with those bands. Due to the PR condition imposed on each QMF branch, the reduced branch effectively becomes a delay. The filter placed at the output of the combined branch is then initialized as:

G_{01}(z) = F_0(z)C_0(z^2) + F_1(z)C_1(z^2)    (5.1)

The result is a framework for adjusting subbands which is efficient in the sense that minimal operations must be performed to change the subband widths [17]. The disadvantage of this structure is that the data is still run through the analysis and synthesis filters of the merged bands, resulting in unnecessary computations. It would be advantageous to be able to change the structure as proposed, but with greater care taken to avoid extraneous computations.

5.3  Proposed Algorithm

The algorithm proposed here utilizes a structure similar to that proposed by Oakman and Naylor. QMF tree-structured filter banks allow for easier design of PR filter banks, as well as ease of merging in the adaptive analysis and synthesis banks. Instead of merging bands by moving up the synthesis banks, when two bands are to be merged the entire branch is replaced by an effective filter. This requires a different initialization process than that proposed in [17].
In addition, a decision algorithm is proposed to determine when bands should be merged or split. This decision algorithm is based on the ratio of the power spectral densities of the input and the desired response. Thus any dependence on the error signals, and therefore on the adaptive filter tap weights, is removed. The power spectra are estimated using the Welch method [27], in which windowed FFT averages over time are used to approximate the power spectra.

Subband Decomposition
In choosing the structure for the analysis and synthesis banks, a QMF-based tree topology was chosen. The tree topology allows for both easy adjustment of the subband bandwidths and the determination of a two-band PR filter pair. For this system the only perfect reconstruction filters that need to be designed are one pair of high-pass and low-pass filters. These filters are used in every embedded QMF bank, producing a maximum of 2^Y subbands, where Y is the number of levels in the analysis or synthesis banks.
The structure is set up with the embedded QMF banks able to calculate the maximal number of coefficients. The adaptive filters then connect the nodes of the analysis bank with the corresponding nodes in the synthesis bank. When merging two bands, the strategy is to disconnect the entire embedded QMF filter bank and to connect in its place a single adaptive filter, as shown in Figure 5.1. In this way any all-pass substructures perform no multiplications, thereby saving computations.

Figure 5.1: Subband Merging by Replacement
The strategy for splitting one subband into two smaller ones of half the original spectral width is to disconnect that adaptive filter and to connect the next-level embedded QMF filter bank with the associated half-length adaptive filters. This splits the subband in two, reducing the computations needed to update that spectral band.

Initialization of Adjusted Filters
When merging a QMF branch, the resulting transfer function needs to be determined from the input-output characteristics of the branch. The output of a QMF branch, including aliasing, is given by:

Y(ω) = \frac{1}{2} \Big( F_0(ω) \left[ G_0^M(2ω) + G_0^M(2ω − π) \right] \left[ H_0(ω)U(ω) + H_0(ω − π)U(ω − π) \right]
       + F_1(ω) \left[ G_1^M(2ω) + G_1^M(2ω − π) \right] \left[ H_1(ω)U(ω) + H_1(ω − π)U(ω − π) \right] \Big)    (5.2)

Since F_0, F_1, H_0 and H_1 abide by the PR condition, as previously mentioned in section 3.2, the aliased terms drop out, allowing for an equivalent transfer function G_0^{2M}(ω) = Y(ω)/U(ω) given by:

G_0^{2M}(ω) = \frac{1}{2} \Big( F_0(ω) \left[ G_0^M(2ω) + G_0^M(2ω − π) \right] H_0(ω)
       + F_1(ω) \left[ G_1^M(2ω) + G_1^M(2ω − π) \right] H_1(ω) \Big)    (5.3)

In order to approximate the resulting equivalent transfer function, the impulse responses of G_0^M, G_1^M, F_0, F_1, H_0 and H_1 are calculated. This raises the question of the relative sizes of the filters. G_0^{2M} is twice the length of G_0^M and G_1^M, each of which has length L. F_0, F_1, H_0 and H_1 have length K, which is not necessarily the same as the length of G_0^M and G_1^M, or a multiple thereof. This discrepancy is compensated for by using F_{0,eff}, F_{1,eff}, H_{0,eff} and H_{1,eff}, as calculated by the series of interpolations and decimations:

H_{0,eff} = ↓_{K′} \left[ Z(ω) ↑_{L′} (H_0) \right]    (5.4)

The interpolation by L′ and decimation by K′ serve to change the length of the FFTs calculated for the H(z)'s and F(z)'s to the length of the FFT of G_0^{2M}(z). The values of L′ and K′ are, respectively:

L′ = \frac{2L}{GCF(L, K)}    (5.5)

K′ = \frac{K}{GCF(L, K)}    (5.6)

where GCF(L, K) is the greatest common factor of L and K. The low-pass filter Z(ω) serves to smooth the zero-filled interpolation.
Since F_0, F_1, H_0, H_1, K and L are all known prior to the start of the algorithm, all the needed approximations to the analysis and synthesis filters of length 2^i K for 1 ≤ i ≤ log_2(M) can be pre-computed. Thus no time needs to be spent calculating this information while the algorithm is running.
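The resampling of equations (5.4)–(5.6) can be sketched as follows (a minimal NumPy illustration mirroring interp_zeros and smartinterp in the appendix; the moving-average stand-in for the smoothing filter Z(ω) and the function names are assumptions):

```python
import numpy as np
from math import gcd

def zero_fill(h, amt):
    """Insert (amt - 1) zeros between samples (cf. interp_zeros, Appendix A.8)."""
    out = np.zeros(amt * len(h) - amt + 1)
    out[::amt] = h
    return out

def effective_filter(h0, K, L):
    """Resample a length-K analysis/synthesis filter onto the grid of the
    merged filter, following equation (5.4): zero-fill interpolate by
    L' = 2L/GCF(L, K), smooth, then keep every K'-th sample, where
    K' = K/GCF(L, K)."""
    g = gcd(L, K)
    Lp = (2 * L) // g          # equation (5.5)
    Kp = K // g                # equation (5.6)
    up = zero_fill(h0, Lp)
    # Crude low-pass stand-in for Z(omega): moving average over Lp samples.
    smooth = np.convolve(up, np.ones(Lp), mode="same")
    return smooth[::Kp]
```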
When decomposing a channel into two smaller subbands, the tap weights are initialized to zeros and no extraneous calculations need to be performed. This is because the proposed algorithm splits a subband only when there is minimal power in that spectral region, so the total contribution of that band to the total output is minimal.
The only other data that needs to be initialized is the data stored in the delays of the subband filters. This can be handled by storing the past 2^i K inputs for the filters at the ith level. Saving data in this fashion does not take any multiplications, and the data can be recalled easily into the filter, preventing the need to re-converge after the high errors that would result from starting with zeros or other data.

Power Spectral Density Estimation
In order to adjust the widths of the subbands, the spectral locations of the signal power need to be determined. This requires knowledge of the power spectral density (PSD), defined as the Fourier transform of the autocorrelation function of a wide sense stationary (WSS) process. By definition, the autocorrelation of a WSS process depends only on the distance in time between any two samples.
For an arbitrary process, wide sense stationarity can be assumed over short periods of time, thus allowing for estimates of the PSD. Methods to estimate the PSD over a time span usually center on the use of periodograms, the squared magnitude of the discrete Fourier transform (DFT). One such method, using the squared magnitudes of FFT values over a moving window, was proposed by Welch in [27]. Welch's PSD estimate P̂[k] is given by:
P̂[k] = \frac{L}{UK} \sum_{j=1}^{K} |A_j[k]|^2    (5.7)

where A_j[k] is the FFT of the jth windowed time frame:

A_j[k] = \sum_{n=0}^{L-1} x_j[n] w[n] e^{−j2πkn/L}    (5.8)

and U is given by:

U = \frac{1}{L} \sum_{n=0}^{L-1} w^2[n]    (5.9)

In equation (5.7) there are two degrees of freedom: the windowing function and the number of FFT frames to average. In the PSD estimate used in the subband adjustment algorithm, the number of terms to average is left to the user as an input. Thus for more stable conditions longer averages can be taken, while for less stable conditions the number of averaged terms may be decreased to allow quicker responses to changing conditions. The windowing function was chosen to be a simple rectangular window, avoiding both the extraneous computations of the dot product between the input and the window and the spectral spreading due to convolution with the window's frequency response.
For the FFT operations, the number of points used was set to twice the maximum number of subbands allowed, a quantity fixed at the start of the algorithm. This number was chosen because the desired resolution was exactly one point per minimum-size subband.
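The rectangular-window Welch estimate of equations (5.7)–(5.9) can be sketched as follows (an illustrative NumPy transcription; the contiguous frame segmentation and function name are assumptions):

```python
import numpy as np

def welch_psd(x, L, navg):
    """Welch PSD estimate of equations (5.7)-(5.9) with a rectangular
    window (w[n] = 1, so U = 1 and no windowing multiplies are needed).
    L is the FFT length per frame (2M in the text); navg frames are
    averaged."""
    frames = np.reshape(x[:navg * L], (navg, L))   # x_j[n], contiguous frames
    A = np.fft.fft(frames, n=L, axis=1)            # A_j[k], equation (5.8)
    U = 1.0                                        # equation (5.9) with w[n] = 1
    return (L / (U * navg)) * np.sum(np.abs(A) ** 2, axis=0)   # equation (5.7)
```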

Decision Algorithm
The proposed decision algorithm was designed to change the subband decomposition so as to create large subbands in spectral regions where the signal is present, and small subbands in spectral regions where the signal is minimal. The large subbands in the signal region minimize the convergence time and account for cross-terms. Smaller subbands in the low-power regions allow savings in the number of computations, as described in section 4.3.
The algorithm makes the decisions on which subbands to merge and which subbands to decompose based on the ratio of the power spectra of the input and desired output. In this way the subband decomposition is dependent only on the input-output characteristics of the potentially changing unknown filter, and not on the error signals. This removes any dependence of the algorithm on the current tap weights of the adaptive filters. The decision itself is based on comparing this ratio against set threshold values γd and γu .
In this implementation, no windowing function was used, so there are no associated computations. The PSD estimates can then be calculated with a total of 4M^2 log(2M) computations by taking the FFTs of the length-2M blocks of input and desired output data, and squaring the results. Blocks of length 2M were used in order to have the correct spectral resolution of bands of width π/M. The result is then averaged with P past calculations, which are stored in memory, to produce the PSD estimate. The algorithm then calculates the ratio of the power spectral estimates.
This is done in order to place more weight on the bands where the output power is much greater than the input power. In order to avoid artifacts of small values in the PSD estimates, regularization terms δ_in and δ_out are introduced in the PSD ratio formula:

PSD_r[k] = \frac{PSD_{out}[k] + δ_{out}}{PSD_{in}[k] + δ_{in}}    (5.10)

In particular, δin prevents the ratio from becoming excessively large for small input power. Although δout is not necessary for this purpose, it serves to guide the choice of the lower threshold γd . Once the algorithm calculates the ratio, the thresholding is applied as described in the pseudocode below.
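The regularized ratio test of equation (5.10) against the thresholds γ_u and γ_d can be sketched per frequency bin as follows (a simplified illustration; the three-way labeling and parameter names are assumptions, not the thesis's bookkeeping):

```python
import numpy as np

def band_decisions(psd_in, psd_out, gamma_u, gamma_d, delta_in=1e-6, delta_out=1e-6):
    """Threshold the regularized PSD ratio of equation (5.10).

    Per frequency bin, a ratio above gamma_u marks a region where a wider
    band is wanted ('merge'); below gamma_d, narrower bands ('split');
    otherwise the current decomposition is kept."""
    ratio = (psd_out + delta_out) / (psd_in + delta_in)   # equation (5.10)
    return ["merge" if r > gamma_u else ("split" if r < gamma_d else "keep")
            for r in ratio]
```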


for index goes from 1 to current number of subbands
    current band power multiplier

% w is the tap weight vector ; a is the input ; A is the threshold parameter ;
% lambda is the forgetting factor ; P is the estimated inverse covar matrix ;
% k is the kalman gain ;
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%% Error Checking
if size(u_vector, 1) == 1
    u_vector = u_vector.';
end
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%% Initialization and Preliminary Calculations
last_time = min([length(u_vector), length(d_ideal)]);
% Initialize vectors for efficient for-loops
u_vector = [zeros(length(w_start) - 1, 1); u_vector];
Xi = zeros(last_time, 1);
k = zeros(last_time + 1, length(w_start));
w = zeros(last_time + 1, length(w_start));
d_hat = zeros(last_time, 1);
P = eye(length(w_start))/delta;
f_ATNA = zeros(last_time + 1, length(w_start));
A_ATNA = 100;
lambda_c = 1 - lambda;
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%% Run algorithm
for index = 2:last_time
    lambda = 1 - lambda_c;
    d_hat(index-1) = w(index-1, :)*u_vector(index:index+length(w_start)-1);
    Xi(index-1) = d_ideal(index) - w(index-1, :)*u_vector(index:index+length(w_start)-1);
    Pi = P*u_vector(index:index+length(w_start)-1);
    k(index, :) = Pi./(lambda + u_vector(index:index+length(w_start)-1)'*Pi);
    % Difference is here: instead of Kalman*error: Kalman*f(error, A, m)
    f_ATNA(index-1) = Xi(index-1)/(1 + (abs(Xi(index-1))/A_ATNA)^non_lin_factor);
    w(index, :) = w(index-1, :) + k(index, :)*conj(f_ATNA(index-1));
    A_ATNA = (1 - leak_factor)*A_ATNA + leak_factor*ATNA_multiplier*Xi(index-1);
    P = lambda^(-1)*[eye(length(w_start)) - (k(index, :).')*(u_vector(index:index+length(w_start)-1)')]*P;
    if index >= 3
        lambda_c = lambda_c + (lambda_adapt*f_ATNA(index-1)*f_ATNA(index-2)* ...
            (u_vector(index:index+length(w_start)-1)')*(k(index, :).'))/lambda_c;
    elseif index == 1 || index == 2
        lambda_c = lambda_c;
    else
        error('Index <= 0');
    end
end
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%% Optional Plotting
if plot_opt == 1
    figure(figno);
    plot(1:length(d_ideal), [d_ideal, d_hat, Xi]);
    title('System Identification of an FIR Filter');
    legend('Desired', 'Output', 'Error');
    xlabel('Time Index');
    ylabel('Signal Value');
end
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

A.6  Noise Generating Function

function [total_noise] = AFinput_noise(noise_len, gauss_stats, pois_lambda, impulse_stats, opts)
% Adam Charles
% 11/3/2009
% Creates Noise for Adaptive filtering
% Background noise + Impulse noise.
% Impulse noise is a poisson process with gaussian magnitudes
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%% Decipher Inputs:
if opts(1) == 1
    gauss_mean = gauss_stats(1);
    gauss_var = gauss_stats(2);
else
    gauss_mean = 0;
    gauss_var = 0;
end
if opts(2) == 1
    impulse_mean = impulse_stats(1);
    impulse_var = impulse_stats(2);
else
    impulse_mean = 0;
    impulse_var = 0;
end
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%% Generate Noise:
% Background Noise
background_noise = gauss_var*randn(noise_len, 1) + gauss_mean;
% Impulse Noise
impulse_times = cumsum(poissrnd(pois_lambda, noise_len, 1));
impulse_times2 = impulse_times(find(impulse_times <= noise_len));
impulse_vals = impulse_var*randn(length(impulse_times2), 1) + impulse_mean;
impulse_noise = zeros(noise_len, 1);
impulse_noise(impulse_times2) = impulse_vals;

total_noise = background_noise + impulse_noise;
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

A.6.1  Non-Stationary Filter Evaluation

function [output] = non_stat_filt(b_in, a_in, data)
% non_stat_filt filters the data through a non-stationary filter
% b_in has a row of numerator coefficients for each time-step
% a_in has a row of denominator coefficients for each time-step
% data is the data to be processed
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%% Error Checking
[a_i, a_j] = size(a_in);
[b_i, b_j] = size(b_in);
[data_i, data_j] = size(data);
if a_i == 1
    a_in = repmat(a_in, data_i, 1);
elseif a_i ~= data_i && a_i ~= 1
    error('Number of rows of numerator coefficients must equal number of rows of denominator coefficients');
end
if b_i == 1
    b_in = repmat(b_in, data_i, 1);
elseif b_i ~= data_i && b_i ~= 1
    error('Number of rows of numerator coefficients must equal number of rows of input data');
end
if data_j ~= 1
    error('Data input must be a column vector');
end
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%% Run Algorithm
% buffer inputs and outputs
d_eff = [zeros(b_j - 1, 1); data];
out_eff = zeros(a_j + data_i, 1);
for index = 1:data_i
    A0 = fliplr(a_in(index, 2:end))*out_eff((index+1):(index+a_j-1), 1);
    B0 = fliplr(b_in(index, :))*d_eff(index:(index+b_j-1), 1);
    out_eff(a_j + index) = (B0 - A0)./(a_in(index, 1));
end
output = out_eff(a_j+1:end);
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

A.7  Fullband Testing Code

%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% RLS testing code:
% Clean Slate:
close all
clear
clc
% Options:
run_tests = [1 1 1 1];  % Standard LMS, Standard RLS, NLMS, AFF-RLS-ATNA
allow_noise = 1;
figno = 1;
stationary_filt = 0;
t_last = 1500;
num_trials = 100;
err_AFF_ATNA = 0;
LMS_error = 0;
NLMS_error = 0;
e_i = 0;
[pulse_noise] = AFinput_noise(t_last, [0, 0.001], 700, [1, 25], [0, 1]);
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%% Test Inputs/Outputs
for index = 1:num_trials
    % Filter Selection
    A_test = [linspace(7, 7.1, t_last); linspace(5, 5.1, t_last); ...
        linspace(2, 1.9, t_last); linspace(1, 0.9, t_last)].';
    B_test = [linspace(9, 11, t_last); linspace(2, -2, t_last); ...
        linspace(1, 5, t_last); linspace(3, -2, t_last)].';
    % A_test = ([linspace(7, 7.1, t_last); linspace(5, 5.1, t_last); ...
    %     linspace(2, 1.9, t_last); linspace(1, 0.9, t_last)] + ...
    %     0*[zeros(1, floor(t_last/2)), 8*ones(1, ceil(t_last/2)); ...
    %     zeros(1, floor(t_last/2)), 8*ones(1, ceil(t_last/2)); ...
    %     zeros(1, floor(t_last/2)), 8*ones(1, ceil(t_last/2)); ...
    %     zeros(1, floor(t_last/2)), 8*ones(1, ceil(t_last/2))]).';
    % B_test = [linspace(9, 11, t_last); linspace(2, -2, t_last); ...
    %     linspace(1, 5, t_last); linspace(3, -2, t_last)].';
    % B_test = [10 -6 -5 9 -8 2 2 1 -3 1 0 -7 4 -2 3 4 2 9 1 2 -6 5 3 -5 3 2 -6 -8 0 9 -2 4];
    u_in = 4*randn(t_last, 1);
    % u_in = sin(2*pi*[1:1000]);
    d_out = non_stat_filt(B_test, A_test, u_in);
    % fdtool(B_test, A_test)
    FIR_filt_size = 100;
    % Freq response of ideal filter
    W_vec = linspace(0, pi, 10000);
    % if stationary_filt == 1
    Htest = freqz(B_test(end, :), A_test(end, :), W_vec);
    % end
    %
    %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

    %% Noise in the System
    d_var = 0.01;
    d_mean = 0;
    [d_noise] = AFinput_noise(t_last, [0, 0.001], 700, [1, 30], [1, 0]);  % colored noise
    if allow_noise == 1
        d_out = d_out + d_noise + pulse_noise;
    end
    %
    %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

    %% Standard LMS
    if run_tests(1) == 1
        LMS_step_size = 0.0003;                 % Step size
        w_start_LMS = zeros(1, FIR_filt_size);  % Start Vector
        figno = figno + 1;                      % Increment figure number
        plot_opt_LMS = 0;                       % Plot
        [d_hat, LMS_error_temp, w_mat_LMS] = standardLMS(u_in, d_out, ...
            w_start_LMS, LMS_step_size, figno, plot_opt_LMS);
        LMS_error = LMS_error + abs(LMS_error_temp)/num_trials;
        clear LMS_error_temp
    end
    %
    %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

    %% Normal RLS
    if run_tests(2) == 1
        w_start_RLS = zeros(1, FIR_filt_size);  % start vector
        figno = figno + 1;                      % Next figure
        lambda = 0.9;                           % Lambda
        delta = 0.1;
        [y_i, e_i_temp, w_i] = standardRLS(u_in, d_out, w_start_RLS, ...
            lambda, delta, figno, 0);
        e_i = e_i + abs(e_i_temp)/num_trials;
        clear e_i_temp
    end
    %
    %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

    %% NLMS
    if run_tests(3) == 1
        NLMS_step_size = 0.3;                   % Step size
        w_start_NLMS = zeros(1, FIR_filt_size); % Start Vector
        figno = figno + 1;                      % Increment figure number
        plot_opt_NLMS = 0;                      % Plot
        reg_fact = 0.01;                        % Regularization Factor
        [d_hat_NLMS, NLMS_error_temp, w_mat_NLMS] = NLMS_alg(u_in, d_out, ...
            w_start_NLMS, NLMS_step_size, reg_fact, figno, plot_opt_NLMS);
        NLMS_error = NLMS_error + abs(NLMS_error_temp)/num_trials;
        clear NLMS_error_temp
    end
    %
    %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

    %% Koike ATNA w/ AFF
    if run_tests(4) == 1
        % start at w_init = zeros
        w_start_koike = zeros(1, FIR_filt_size);
        % reasonable forgetting factor
        forget_fact = 0.99;
        % Start Var
        delta = 0.1;
        % leakage, multiplier, nonlinear order
        A_leak_AFFATNA = 2e-8;
        A_mult_AFFATNA = 2.5e-10;
        nonlin_order_AFFATNA = 32;
        adapt_coeff = 1e-10;
        [d_AFF_ATNA, err_AFF_ATNA_temp, w_AFF_ATNA] = AFF_RLS_ATNA(u_in, d_out, ...
            w_start_koike, forget_fact, adapt_coeff, delta, nonlin_order_AFFATNA, ...
            A_leak_AFFATNA, A_mult_AFFATNA, figno, 0);
        err_AFF_ATNA = err_AFF_ATNA + abs(err_AFF_ATNA_temp)/num_trials;
        clear err_AFF_ATNA_temp
    end
end
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%% Error Plots
err_fig = 6;
figure(err_fig);
if run_tests(1) == 1
    figure(err_fig), hold on;
    plot(LMS_error, 'b');
end
if run_tests(2) == 1
    figure(err_fig), hold on;
    plot(e_i, '--r');
end
if run_tests(3) == 1
    figure(err_fig), hold on;
    plot(NLMS_error, ':k');
end
if run_tests(4) == 1
    figure(err_fig), hold on;
    plot(err_AFF_ATNA, '-.b');
    legend('LMS', 'RLS', 'NLMS', 'AFF-RLS-ATNA')
    xlabel('Index n', 'FontSize', 12, 'FontName', 'Times')
    ylabel('Absolute error |e[n]|', 'FontSize', 12, 'FontName', 'Times')
end
figure(err_fig), hold off;
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

A.8  Zero-Fill Interpolation Code

function output = interp_zeros(input, ammt, dim)
% Adam Charles
% 3/15/2009
%
% Interpolates with zero fills along the dimension 'dim'; the number of
% zeros inserted between samples is given by (ammt-1).
% This function was written since MATLAB's interp() function does not
% zero-fill, but instead low-pass filters.

if ammt == 0
    output = input;
else
    [M, N] = size(input);
    if dim == 1
        output = zeros(ammt*M - ammt + 1, N);
        output(1:ammt:end, :) = input;
    elseif dim == 2
        output = zeros(M, ammt*N - ammt + 1);
        output(:, 1:ammt:end) = input;
    else
        error('Not a valid dim!');
    end
end
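The zero-fill operation above has a compact NumPy equivalent using strided assignment; this is an illustrative re-implementation of the same 1-D behavior (the function name is reused only for clarity, and the example vector is an assumption):

```python
import numpy as np

def interp_zeros(x, ammt):
    """Insert (ammt - 1) zeros between consecutive samples of a 1-D array,
    matching the MATLAB routine's output length ammt*len(x) - ammt + 1."""
    x = np.asarray(x)
    if ammt == 0:
        return x
    out = np.zeros(ammt * len(x) - ammt + 1, dtype=x.dtype)
    out[::ammt] = x   # place the original samples every ammt-th slot
    return out

y = interp_zeros(np.array([1.0, 2.0, 3.0]), 3)
# y = [1, 0, 0, 2, 0, 0, 3]
```

In the frequency domain this zero-stuffing compresses and images the spectrum, which is exactly why the tree filter bank code below composes zero-stuffed prototypes.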

A.9  Length Adjusting Code

function [h_eff] = smartinterp(h0, K)
% Adam Charles
% 3/8/2009
%
% [h_eff] = smartinterp(h0, K) calculates h_eff in the following manner:
%
% h_eff = (decimate by M/gcd(M,K))(LPF)(interpolate by K/gcd(M,K))(h0)
%
% Where M = length(h0). h_eff is essentially h0 estimated with K points,
% rather than M points.
%
gcd_num = gcd(K, length(h0));
dec_factor = length(h0)/gcd_num;
interp_factor = K/gcd_num;
h_temp1 = interp(h0, interp_factor);
h_eff = h_temp1(1:dec_factor:end);
% Plots were for testing purposes.
% plot(h0, 'r')
% figure
% plot(h_eff, 'b')
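The routine above resamples a length-M response onto K points via a rational interpolate/decimate pair. The same idea can be sketched in NumPy; note this sketch substitutes linear interpolation for MATLAB's low-pass `interp()`, so it is only an approximation of the routine, and the function name and test vector are assumptions:

```python
import numpy as np

def resample_to_K(h0, K):
    """Approximate a length-M response h0 on K evenly spaced points.
    Linear interpolation stands in for MATLAB's low-pass interp()."""
    M = len(h0)
    # Evaluate h0 at K positions spanning its original support [0, M-1]
    return np.interp(np.linspace(0, M - 1, K), np.arange(M), h0)

h = np.array([0.0, 1.0, 0.0, -1.0])
h8 = resample_to_K(h, 8)   # same shape sampled on 8 points
```

The endpoints are preserved exactly, while interior points are estimated between the original samples.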

A.10  Effective Filter Evaluation Code

function [g0_eff] = eff_filt(g0, g1, h0, h1, f0, f1)
% Adam Charles
% 3/8/2009
% Effective Filter Calculations
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%% Error Checking
if length(g0) ~= length(g1)
    error('g0 and g1 must have equal lengths!')
end
if (length(h0) ~= length(h1)) || (length(h0) ~= length(f0)) || ...
        (length(h0) ~= length(f1)) || (length(h1) ~= length(f0)) || ...
        (length(h1) ~= length(f1)) || (length(f0) ~= length(f1))
    error('h0, h1, f0, and f1 must have equal lengths!')
end
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%% Calculations
% Get special lengths:
K = length(g0);
% Make all filters the desired 2*K length:
h0_eff = smartinterp(fftshift(abs(fft(h0))), 2*K);
h1_eff = smartinterp(fftshift(abs(fft(h1))), 2*K);
f0_eff = smartinterp(fftshift(abs(fft(f0))), 2*K);
f1_eff = smartinterp(fftshift(abs(fft(f1))), 2*K);
% Take into account imaged parts:
g0f = fftshift(abs(fft(g0)));
g1f = fftshift(abs(fft(g1)));
g0_mid = [g0f((floor(K/2)+1):end), g0f, g0f(1:floor(K/2))];
g1_mid = [g1f((floor(K/2)+1):end), g1f, g1f(1:floor(K/2))];
% Get aliased parts taken into account:
h0_mid = h0_eff + [h0_eff(K:end), h0_eff(1:K-1)];
h1_mid = h1_eff + [h1_eff(K:end), h1_eff(1:K-1)];
% Put together final effective filter:
g0_eff = 0.5*real(ifft(fftshift(f0_eff.*g0_mid.*h0_mid + f1_eff.*g1_mid.*h1_mid)));
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
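The `h0_mid`/`h1_mid` step above folds each length-2K response with a half-spectrum circular shift of itself, which is the spectrum seen at the output of a decimate-by-2 branch. A small NumPy illustration of that fold (the function name and the ideal half-band test response are assumptions, and `np.roll` by exactly K is a slight idealization of the MATLAB indexing):

```python
import numpy as np

def alias_fold(H):
    """Spectrum after decimation by 2 on a length-2K frequency grid:
    H(w) + H(w + pi), i.e. H added to its half-length circular shift."""
    K = len(H) // 2
    return H + np.roll(H, K)

H = np.zeros(8)
H[:4] = 1.0                # ideal half-band filter: passband on half the grid
folded = alias_fold(H)     # folds to an all-pass (flat) response
```

An ideal half-band filter folding to a flat response is the boundary case: any passband energy beyond half the grid would overlap itself and appear as aliasing.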

A.11  Two Band PR Filter Generating Code

function [flp, fhp] = PRfiltbankFIR(opt)
% PR Filter Generators
%
% equivals:   [3/100, 0.1, 20]
% kaiservals: [0.45, 0.585, 0.415, 0.55, 0.1, 25]

% Basic LP/HP filters
flpspec.fp = 1/2 - 5/100;    % pass-band frequency as calculated above
flpspec.fst = 1/2 + 8.5/100; % stop-band frequency above pass-band
flpspec.Ap = 0.1;            % worst pass-band attenuation
flpspec.Ast = 25;            % best stop-band attenuation
% generate actual filter from specs
flpspecs = fdesign.lowpass('fp,fst,Ap,Ast', ...
    flpspec.fp, flpspec.fst, flpspec.Ap, flpspec.Ast);
flp = design(flpspecs, 'kaiserwin');
fhpspec.fp = 1/2 - 8.5/100;  % pass-band frequency as calculated above
fhpspec.fst = 1/2 + 5/100;   % stop-band frequency above pass-band
fhpspecs = fdesign.highpass('fst,fp,Ast,Ap', ...
    fhpspec.fp, fhpspec.fst, flpspec.Ast, flpspec.Ap);
fhp = design(fhpspecs, 'kaiserwin');
if strcmp(opt, 'o')
    w_1 = linspace(0, pi, 1000);
    H1 = freqz(flp, w_1);
    H2 = freqz(fhp, w_1);
    plot(w_1, abs(H1).^2 + abs(H2).^2);
    fvtool(flp, fhp)
end

A.12
Uniform Subband Filter Generating Code

function [filtbank] = filt_bank_gen(M, band_type, dir, plot_opt)
% Tree Structured Filter Bank.
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%% Error Checking
if strcmp(band_type, 'uniform')
    % Uniform Filter Bank can only have M = 2^L outputs
    if 2^floor(log2(M)) ~= M
        error('M must be a power of 2 for a uniform tree filter bank!');
    end
end
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%% Base Filters
[flp, fhp] = PRfiltbankFIR('x');   % Get predesigned filters
h_lp = [flp.Numerator, 0];         % Extract LP filter coefficients
h_hp = fhp.Numerator;              % Extract HP filter coefficients
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%% Decimation Filter Bank:
if strcmp(band_type, 'uniform')
    % Uniform Bands
    M_2 = log2(M);
    filt_val = 0;
    for index = 1:M
        % disp(sprintf('Filter H%d:', index-1))
        clear h_temp;
        h_temp = 1;
        filt_path = fliplr(dec2binvec(filt_val, M_2));
        for index2 = 1:M_2
            if filt_path(index2) == 0
                % disp(sprintf('Layer %d is LPF with decimation %d', index2, index2-1))
                h_temp = conv(h_temp, interp_zeros(h_lp, 2^(index2-1), 2));
            elseif filt_path(index2) == 1
                % disp(sprintf('Layer %d is HPF with decimation %d', index2, index2-1))
                h_temp = conv(h_temp, interp_zeros(h_hp, 2^(index2-1), 2));
            end
        end
        filt_val = filt_val + 1;
        eval(sprintf('filtbank.H%d = h_temp;', index-1));
    end
elseif strcmp(band_type, 'log')
    % Logarithmically sized bands
    for index = 1:M
        clear h_temp;
        if index == 1
            h_temp = h_hp;
        elseif index == 2
            h_temp = conv(h_lp, interp_zeros(h_hp, index, 2));
        elseif index > 2 && index < M
            h_temp = h_lp;
            for index2 = 2:index-1
                h_temp = conv(h_temp, interp_zeros(h_lp, 2^(index2-1), 2));
            end
            h_temp = conv(h_temp, interp_zeros(h_hp, 2^(index-1), 2));
        elseif index == M
            h_temp = 1;
            for index2 = 1:index-1
                h_temp = conv(h_temp, interp_zeros(h_lp, 2^(index2-1), 2));
            end
        end
        eval(sprintf('filtbank.H%d = h_temp;', index-1));
    end
end
if strcmp(dir, 's')
    % Leave Filter Bank Alone
end
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%% Synthesis Filter
if strcmp(dir, 's')
    % F(z) = z^(-N) H_para(z)
    for index = 1:M
        eval(sprintf('filtbank.H%d = fliplr(conj(filtbank.H%d));', index-1, index-1));
    end
end
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%% Plotting Options
if plot_opt
    plot_string = 'fvtool(filtbank.H0, 1';
    w_1 = linspace(0, pi, 1000);
    H_sum = abs(freqz(filtbank.H0, 1, w_1)).^2;
    for index = 2:M
        plot_string = sprintf('%s, filtbank.H%d, 1', plot_string, index-1);
        eval(sprintf('H_sum = H_sum + abs(freqz(filtbank.H%d, 1, w_1)).^2;', index-1));
    end
    plot(w_1, H_sum);
    plot_string = sprintf('%s);', plot_string);
    disp(plot_string)
    eval(plot_string);
end
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
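Each leaf filter of the tree above is the convolution of zero-stuffed prototypes: by the noble identity, a filter H(z) placed after a decimator by 2^i is equivalent to H(z^(2^i)) placed before it. A NumPy sketch of building the four leaf responses of a depth-2 uniform tree; the Haar prototypes here are toy stand-ins (the thesis uses Kaiser-window designs), and all names are illustrative:

```python
import numpy as np

def upsample_filter(h, L):
    """Zero-stuff h by L: coefficients of h(z^L)."""
    out = np.zeros(L * (len(h) - 1) + 1)
    out[::L] = h
    return out

h_lp = np.array([1.0, 1.0]) / np.sqrt(2)   # Haar low-pass (stand-in prototype)
h_hp = np.array([1.0, -1.0]) / np.sqrt(2)  # Haar high-pass

# Depth-2 uniform tree: each leaf follows a binary path (first layer first)
leaves = []
for path in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    h = np.array([1.0])
    for layer, bit in enumerate(path):
        proto = h_lp if bit == 0 else h_hp
        # Layer i prototype is zero-stuffed by 2^i (noble identity)
        h = np.convolve(h, upsample_filter(proto, 2 ** layer))
    leaves.append(h)
```

With orthonormal prototypes each composite leaf filter also has unit energy, which is one quick sanity check on a tree-bank construction.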

A.13  Uniform Subband LMS Algorithm

function [output, output_error] = subbandLMSsimple2(u_in, d_ideal, ...
    step_size, w_start, num_bands, band_type)
% Adam Charles
% 2/26/2009
% Subband LMS Filtering.
%
% Filters through a filter bank with M = num_bands, then performs LMS
% filtering on each subband separately. Those results are then put through
% the synthesis filter to get the final output y(n).
%
% For the matrices used, each row indicates the subband - 1 (row 1 is
% subband 0), and the column represents the iteration. The updates are
% essentially round robin, as they start at H_0, work up to H_{M-1}, then
% go back to H_0.
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%% Error Checking
if strcmp(band_type, 'uniform')
    % Uniform Filter Bank can only have M = 2^L outputs
    if 2^floor(log2(num_bands)) ~= num_bands
        error('M must be a power of 2 for a uniform tree filter bank!');
    end
end
% Make sure u_in is a vector
if size(u_in, 1) ~= 1 && size(u_in, 2) ~= 1
    error('u_in must be a vector!');
elseif size(u_in, 1) == 1 && size(u_in, 2) ~= 1
    u_in = u_in.';
end
% Make sure d_ideal is a vector
if size(d_ideal, 1) ~= 1 && size(d_ideal, 2) ~= 1
    error('d_ideal must be a vector!');
elseif size(d_ideal, 1) == 1 && size(d_ideal, 2) ~= 1
    d_ideal = d_ideal.';
end
% Check size of w_start
if size(w_start, 1) ~= num_bands && size(w_start, 2) ~= num_bands
    error('w_start must be a vector!');
elseif size(w_start, 1) ~= num_bands && size(w_start, 2) == num_bands
    w_start = w_start.';   % w's as row vectors
elseif isscalar(w_start)
    error('One tap is useless!')
end
if ~isscalar(step_size)
    error('step_size must be a scalar!');
end
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%% Prior Calculations and Initializations
% Initialize Tap Weight Matrix
w_mat = w_start;
% Calculate number of 'time steps', i.e. # of times to update all filter banks
num_update_iters = floor(min(length(u_in), length(d_ideal))/num_bands);
% Pad input vector
u_in = [zeros(num_bands*size(w_start, 2) - 1, 1); u_in];
d_hat = zeros(num_update_iters, num_bands);
LMS_error = zeros(num_bands, num_update_iters);
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%% Run Inputs and Outputs through Filter Banks:
[filtbank_dec] = filt_bank_gen(num_bands, 'uniform', 'f', 0);
[filtbank_syn] = filt_bank_gen(num_bands, 'uniform', 's', 0);
% fvtool(filtbank_dec.H0, 1, filtbank_dec.H1, 1)
% fvtool(filtbank_syn.H0, 1, filtbank_syn.H1, 1)
for index = 1:num_bands
    % Filter through the kth filter bank filter
    eval(sprintf('u_filt1.H%d = filter(filtbank_dec.H%d, 1, u_in);', index-1, index-1));
    eval(sprintf('d_filt1.H%d = filter(filtbank_dec.H%d, 1, d_ideal);', index-1, index-1));
    % Decimate by M
    eval(sprintf('u_dec.H%d = u_filt1.H%d(1:num_bands:end);', index-1, index-1));
    eval(sprintf('d_dec.H%d = d_filt1.H%d(1:num_bands:end);', index-1, index-1));
end
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%% LMS Filtering
for index = 1:num_update_iters
    for index2 = 1:num_bands
        eval(sprintf('d_hat(index2, index) = (w_mat(index2, :))*(u_dec.H%d(index:index+size(w_mat, 2)-1));', index2-1));
        eval(sprintf('LMS_error(index2, index) = d_dec.H%d(index) - d_hat(index2, index);', index2-1));
        eval(sprintf('w_mat(index2, :) = w_mat(index2, :) + (step_size)*(u_dec.H%d(index:index+size(w_mat, 2)-1).'')*conj(LMS_error(index2, index));', index2-1));
    end
end
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%% Filter/Interpolate Outputs:
d_out_sum = 0;
d_ideal_sum = 0;
for index = 1:num_bands
    % Interpolate by M: use function
    eval(sprintf('d_syn.H%d = interp_zeros(d_hat(%d, :), num_bands, 2);', index-1, index));
    eval(sprintf('d_ideal_syn.H%d = interp_zeros(d_dec.H%d, num_bands, 1);', index-1, index-1));
    % Filter through the kth synthesis bank filter
    eval(sprintf('d_filt2.H%d = filter(filtbank_syn.H%d, 1, d_syn.H%d);', index-1, index-1, index-1));
    eval(sprintf('d_ideal_filt1.H%d = filter(filtbank_syn.H%d, 1, d_ideal_syn.H%d);', index-1, index-1, index-1));
    eval(sprintf('d_out_sum = d_out_sum + d_filt2.H%d;', index-1));
    eval(sprintf('d_ideal_sum = d_ideal_sum + d_ideal_filt1.H%d;', index-1));
end
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%% Specify Outputs:
output.out = d_out_sum;
output.ideal = d_ideal_sum;
% size(d_ideal_sum), (1:length(d_out_sum))
% size(d_out_sum)
output_error.full = d_ideal_sum.' - d_out_sum;
output_error.subbands = LMS_error;
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
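Stripped of the `eval`/struct bookkeeping, the core of the routine above is M independent LMS recursions, one per decimated band. A compact NumPy sketch of just that inner loop (the band signals are assumed already analyzed and decimated, and the toy two-band plant is an assumption, not the thesis's test setup):

```python
import numpy as np

def subband_lms(u_bands, d_bands, n_taps, mu):
    """u_bands, d_bands: (M, N) arrays of decimated band signals.
    Runs one independent LMS filter per band; returns weights and errors."""
    M, N = u_bands.shape
    W = np.zeros((M, n_taps))
    E = np.zeros((M, N))
    for k in range(M):                       # one adaptive filter per band
        x_pad = np.concatenate([np.zeros(n_taps - 1), u_bands[k]])
        for n in range(N):
            x = x_pad[n:n + n_taps][::-1]    # band-k regressor
            E[k, n] = d_bands[k, n] - W[k] @ x
            W[k] += mu * E[k, n] * x         # standard LMS update
    return W, E

rng = np.random.default_rng(1)
u_b = rng.standard_normal((2, 2000))
d_b = np.stack([np.convolve(u_b[k], [0.8, 0.1], 'full')[:2000] for k in range(2)])
W, E = subband_lms(u_b, d_b, n_taps=2, mu=0.05)
```

Because each band runs at 1/M of the input rate with filters of 1/M the length, the per-sample cost drops, which is the computational argument made in the thesis.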

A.14  Uniform Subband RLS Algorithm

function [output, output_error] = subbandRLSsimple2(u_in, d_ideal, ...
    forget_factor, delta, w_start, num_bands)
% Adam Charles
% 2/26/2009
% Subband RLS Filtering.
%
% Filters through a filter bank with M = num_bands, then performs RLS
% filtering on each subband separately. Those results are then put through
% the synthesis filter to get the final output y(n).
%
% For the matrices used, each row indicates the subband - 1 (row 1 is
% subband 0), and the column represents the iteration. The updates are
% essentially round robin, as they start at H_0, work up to H_{M-1}, then
% go back to H_0.
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%% Error Checking
% Uniform Filter Bank can only have M = 2^L outputs
if 2^floor(log2(num_bands)) ~= num_bands
    error('M must be a power of 2 for a uniform tree filter bank!');
end
% Make sure u_in is a vector
if size(u_in, 1) ~= 1 && size(u_in, 2) ~= 1
    error('u_in must be a vector!');
elseif size(u_in, 1) == 1 && size(u_in, 2) ~= 1
    u_in = u_in.';
end
% Make sure d_ideal is a vector
if size(d_ideal, 1) ~= 1 && size(d_ideal, 2) ~= 1
    error('d_ideal must be a vector!');
elseif size(d_ideal, 1) == 1 && size(d_ideal, 2) ~= 1
    d_ideal = d_ideal.';
end
% Check size of w_start
if size(w_start, 1) ~= num_bands && size(w_start, 2) ~= num_bands
    error('w_start must be a vector!');
elseif size(w_start, 1) ~= num_bands && size(w_start, 2) == num_bands
    w_start = w_start.';   % w's as row vectors
elseif isscalar(w_start)
    error('One tap is useless!')
end
if ~isscalar(forget_factor)
    error('forget_factor must be a scalar!');
end
if ~isscalar(delta)
    error('delta must be a scalar!');
end
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%% Prior Calculations and Initializations
% Initialize Tap Weight Matrix
w_mat = w_start;
% Calculate number of 'time steps', i.e. # of times to update all filter banks
num_update_iters = floor(min(length(u_in), length(d_ideal))/num_bands);
% Pad input vector
u_in = [zeros(num_bands*size(w_start, 2) - 1, 1); u_in];
d_hat = zeros(num_update_iters, num_bands);
RLS_error = zeros(num_bands, num_update_iters);
P0 = eye(size(w_mat, 2))/delta;
kal_gain = 0;
pi_vec = 0;
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%% Run Inputs and Outputs through Filter Banks:
[filtbank_dec] = filt_bank_gen(num_bands, 'uniform', 'f', 0);
[filtbank_syn] = filt_bank_gen(num_bands, 'uniform', 's', 0);
for index = 1:num_bands
    % Filter through the kth filter bank filter
    eval(sprintf('u_filt1.H%d = filter(filtbank_dec.H%d, 1, u_in);', index-1, index-1));
    eval(sprintf('d_filt1.H%d = filter(filtbank_dec.H%d, 1, d_ideal);', index-1, index-1));
    % Decimate by M
    eval(sprintf('u_dec.H%d = u_filt1.H%d(1:num_bands:end);', index-1, index-1));
    eval(sprintf('d_dec.H%d = d_filt1.H%d(1:num_bands:end);', index-1, index-1));
end
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%% RLS Filtering
for index = 1:num_update_iters
    for index2 = 1:num_bands
        eval(sprintf('kal_gain = (P0*(u_dec.H%d(index:index+size(w_mat, 2)-1)))/(forget_factor + (u_dec.H%d(index:index+size(w_mat, 2)-1))''*P0*(u_dec.H%d(index:index+size(w_mat, 2)-1)));', index2-1, index2-1, index2-1));
        eval(sprintf('d_hat(index2, index) = (w_mat(index2, :))*(u_dec.H%d(index:index+size(w_mat, 2)-1));', index2-1));
        eval(sprintf('RLS_error(index2, index) = d_dec.H%d(index) - d_hat(index2, index);', index2-1));
        eval(sprintf('w_mat(index2, :) = w_mat(index2, :) + kal_gain.''*conj(RLS_error(index2, index));'));
        eval(sprintf('P0 = (eye(size(w_mat, 2)) - kal_gain*(u_dec.H%d(index:index+size(w_mat, 2)-1))'')*P0/forget_factor;', index2-1));
    end
end
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%% Filter/Interpolate Outputs:
d_out_sum = 0;
d_ideal_sum = 0;
for index = 1:num_bands
    % Interpolate by M: use function
    eval(sprintf('d_syn.H%d = interp_zeros(d_hat(%d, :), num_bands, 2);', index-1, index));
    eval(sprintf('d_ideal_syn.H%d = interp_zeros(d_dec.H%d, num_bands, 1);', index-1, index-1));
    % Filter through the kth synthesis bank filter
    eval(sprintf('d_filt2.H%d = filter(filtbank_syn.H%d, 1, d_syn.H%d);', index-1, index-1, index-1));
    eval(sprintf('d_ideal_filt1.H%d = filter(filtbank_syn.H%d, 1, d_ideal_syn.H%d);', index-1, index-1, index-1));
    eval(sprintf('d_out_sum = d_out_sum + d_filt2.H%d;', index-1));
    eval(sprintf('d_ideal_sum = d_ideal_sum + d_ideal_filt1.H%d;', index-1));
end
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%% Specify Outputs:
output.out = d_out_sum;
output.ideal = d_ideal_sum;
output_error.full = d_ideal_sum(1:length(d_out_sum)).' - d_out_sum;
output_error.subbands = RLS_error;
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
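The `kal_gain`/`P0` lines above implement the standard RLS recursion: a gain vector, an a priori error, a weight update, and a rank-one update of the inverse correlation matrix. Written out in NumPy for a single band (all names and the toy plant are illustrative, not the thesis's test setup):

```python
import numpy as np

def rls(u, d, n_taps, lam=0.99, delta=0.1):
    """Recursive least squares with forgetting factor lam and
    initial inverse correlation P = I/delta."""
    w = np.zeros(n_taps)
    P = np.eye(n_taps) / delta
    u_pad = np.concatenate([np.zeros(n_taps - 1), u])
    e = np.zeros(len(u))
    for n in range(len(u)):
        x = u_pad[n:n + n_taps][::-1]
        k = P @ x / (lam + x @ P @ x)        # gain vector
        e[n] = d[n] - w @ x                  # a priori error
        w = w + k * e[n]                     # weight update
        P = (P - np.outer(k, x @ P)) / lam   # inverse-correlation update
    return w, e

rng = np.random.default_rng(2)
h = np.array([0.7, -0.2])
u = rng.standard_normal(1000)
d = np.convolve(u, h, 'full')[:1000]
w, e = rls(u, d, n_taps=2)
```

The O(n_taps^2) cost of the `P` update per sample is exactly what motivates shortening the filters via subband decomposition.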

A.15
1
2
3
4

Adjustable Non-Uniform Subband LMS Algorithm

function [ output , output_error , fin_dec_vals ] = subbandNULMS_adapt ( u_in
, d_ideal ,... step_size , w0_len , num_bands_max , start_dec_factrs , PSD_mem , thresh_vals )
% Adam Charles

108

5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20

%
%
%
%
%
%

3/21/2009
Adjustable Nonuniform Subband LMS Filtering .

Filters through Filterbank with M = num_bands , then performs LMS filtering on each subband seperately . Those results are then put through % the synthesis filter to get the final outpt y ( n ) .
%
% For the matricies used , each row indicates the subband -1 ( row 1 is
% subband 0) , and the column represents the iteration . The updates are
% essentially round robin , as they start at H_0 , work up to H_ {M -1} , then % go back to H_0 .
%

%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48

% % Error Checking

% Uniform Filter Bank can only have M = 2^ L outputs if 2^ floor ( log2 ( num_bands_max ) ) = num_bands_max error ( ' M must be a power of 2 for a uniform tree filter bank ! ' ) ; end for index_check = 1: length ( start_dec_factrs ) if 2^ floor ( log2 ( start_dec_factrs ( index_check ) ) ) = start_dec_factrs ( index_check ) error ( ' Decimation Factors must be a power of 2 for a uniform tree filter bank ! ' ) ; end end
% Make sure u_in is a vector if size ( u_in , 1) = 1 && size ( u_in , 2) = 1 error ( ' u_in must be a vector ! ' ) ; elseif size ( u_in , 1) == 1 && size ( u_in , 2) u_in = u_in . ' ; end =

1

% Make sure d_ideal is a vector if size ( d_ideal , 1) = 1 && size ( d_ideal , 2) = 1 error ( ' d_ideal must be a vector ! ' ) ; elseif size ( d_ideal , 1) == 1 && size ( d_ideal , 2) d_ideal = d_ideal . ' ; end 109

=

1

49
50
51
52
53
54
55
56
57
58
59
60
61
62

if ¬isscalar ( step_size ) error ( ' step_size must be a scalar ! ' ) ; end if sum (1./ start_dec_factrs ) = 1 error ( ' Not a valid starting point ! ' ) end if floor ( w0_len / num_bands_max ) = w0_len / num_bands_max error ( ' w0_len must be divisable by the max number of bands ! ' ) end %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81

82
83
84
85
86
87
88
89

% % Prior Calculations and Initializations
% Calculate number of ' time steps ' or # times to updaye all filter banks % LMS related cariables num_update_iters = floor ( min ( length ( u_in ) , length ( d_ideal ) ) / num_bands_max ) ; d_hat = zeros ( num_update_iters , num_bands_max ) ;
LMS_error = zeros ( num_bands_max , num_update_iters ) ; dec_factors = start_dec_factrs ; n_per_band = num_bands_max *( dec_factors ) ;
PSDin_est = zeros (1 , 2* num_bands_max ) ;
PSDout_est = zeros (1 , 2* num_bands_max ) ; u_max_eff = num_bands_max * floor ( length ( u_in ) / num_bands_max ) ;
% Feedback related variables
PSD_mem_in = zeros (2* num_bands_max , PSD_mem ) ;
PSD_mem_out = zeros (2* num_bands_max , PSD_mem ) ;
PSD_array_in = zeros ( num_bands_max , floor ( num_update_iters /2) ) ;
PSD_array_out = zeros ( num_bands_max , floor ( num_update_iters /2) ) ;
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% % Set up Filter Banks and Initialize Memory :
[ filtbank_bin_dec ] = filt_bank_gen (2 , ' uniform ' , ' f ' , 0) ;
[ filtbank_bin_syn ] = filt_bank_gen (2 , ' uniform ' , ' s ' , 0) ;
% Each synthesis / analysis filter needs length ( filtbank_bin_dec ) ammount of memory
% the i ^ th level has 2^ i filters in it (2 for 1 st level , 4 for second
...)
% First column cooresponds to the ' bottom ' branch , or lowest spectral band .

110

90
91
92
93
94
95
96
97
98
99
100
101
102
103
104
105
106
107

for index = 1: log2 ( num_bands_max ) eval ( sprintf ( ' analysis_data . level % d = zeros ( floor ( u_max_eff /(2^ index ) ) + w0_len /(2^ index ) , 2^ index ) ; ' , index ) ) ; eval ( sprintf ( ' ideal_analysis_data . level % d = zeros ( floor ( length ( d_ideal ) /(2^ index ) ) , 2^ index ) ; ' , index ) ) ; eval ( sprintf ( ' ideal_synthesis_data . level % d = zeros ( floor ( u_max_eff
/(2^ index ) ) , 2^ index ) ; ' , index ) ) ; eval ( sprintf ( ' synthesis_data . level % d = zeros ( floor ( u_max_eff /(2^ index ) ) , 2^ index ) ; ' , index ) ) ; eval ( sprintf ( ' LMS_error . level % d = zeros ( floor ( u_max_eff /(2^ index ) ) ,
2^ index ) ; ' , index ) ) ; eval ( sprintf ( ' w_array . level % d = zeros ( w0_len /(2^ index ) , 2^ index ) ; ' , index ) ) ; end eval ( sprintf ( ' analysis_data . level0 = [ zeros ( w0_len -1 , 1) ; u_in ]; ' ) ) ; eval ( sprintf ( ' ideal_analysis_da ta . level0 = d_ideal ; ' ) ) ; eval ( sprintf ( ' synthesis_data . level0 = zeros ( u_max_eff , 1) ; ' ) ) ; eval ( sprintf ( ' ideal_synthesis_d ata . level0 = zeros ( u_max_eff , 1) ; ' ) ) ; eval ( sprintf ( ' LMS_error . level0 = zeros ( u_max_eff , 1) ; ' ) ) ; eval ( sprintf ( ' w_array . level0 = zeros ( w0_len , 2^ index ) ; ' ) ) ;

%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

108
109
110
111
112
113
114
115
116
117
118
119
120
121
122
123

% % Get Signal at Each Node in the Analysis Tree disp ( ' Analysis Filter Start ' ) for index = 1: log2 ( num_bands_max ) for index2 = 1: index
% Input Analysis Processing eval ( sprintf ( ' temp_LP = filter ( filtbank_bin_dec . H0 , 1 , analysis_data . level % d (: , index2 ) ) ; ' , index -1) ) ; eval ( sprintf ( ' temp_HP = filter ( filtbank_bin_dec . H1 , 1 , analysis_data . level % d (: , index2 ) ) ; ' , index -1) ) ; eval ( sprintf ( ' analysis_data . level % d (: , (2* index2 -1) ) = temp_LP
(1:2: end ) ; ' , index ) ) ; eval ( sprintf ( ' analysis_data . level % d (: , 2* index2 ) = temp_HP (1:2: end ) ; ' , index ) ) ; clear temp_HP temp_LP
% Ideal Output Analysis Processing eval ( sprintf ( ' temp_DLP = filter ( filtbank_bin_dec . H0 , 1 , ideal_analysis_data . level % d (: , index2 ) ) ; ' , index -1) ) ; eval ( sprintf ( ' temp_DHP = filter ( filtbank_bin_dec . H1 , 1 , ideal_analysis_data . level % d (: , index2 ) ) ; ' , index -1) ) ; eval ( sprintf ( ' ideal_ana lysis_data . level % d (: , (2* index2 -1) ) = temp_DLP (1:2: end ) ; ' , index ) ) ; eval ( sprintf ( ' ideal_ana lysis_data . level % d (: , 2* index2 ) = temp_DHP (1:2: end ) ; ' , index ) ) ;

111

124
125
126
127
128
129

clear temp_DHP temp_DLP end end disp ( ' Analysis Filter End ' )
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

130
131
132
133
134
135
136
137
138
139
140
141
142
143
144
145
146
147
148
149
150
151
152
153
154

155
156

157
158
159

% % Run Algorithm disp ( ' LMS Filter Start ' ) for index = 1: num_update_iters
% Number in dec_factors is decimation factor bands : also number of times % that band is updated per iteration . e . g . num_update_iters /[2 , 4 ,
4] =
% [2 , 1 , 1]... for index2 = 1: length ( dec_factors ) for index3 = 1:( num_bands_max / dec_factors ( index2 ) )
% % LMS Algorithm
% Data into the filter is from
% analysis_data . level [ log2 ( dec_factors ( index2 ) ) ]()
% Ideal Output is from
% ideal_analysis_da ta . level [ log2 ( dec_factors ( index2 ) ) ]
% Output goes into
% synthesis_data . level [ log2 ( dec_factors ( index2 ) ) ]
% Filter is w_array . level [ log2 ( dec_factors ( index2 ) ) ]
% Branch number is 1 + sum (1./ dec_factors (1: index2 ) ) * dec_factors ( index2 ) dec_factors2 = [ inf , dec_factors ]; branch_num = 1 + sum (1./ dec_factors2 (1: index2 ) ) * dec_factors
( index2 ) ; u_pos_back = ( index -1) * num_bands_max / dec_factors ( index2 ) + index3 ;
%
dec_factors ; u_pos = u_pos_back + w0_len /( dec_factors ( index2 ) ) ;
% Evaluate output eval ( sprintf ( ' ideal _synthesis_data . level % d ( u_pos_back , branch_num ) = ideal_analysis_data . level % d ( u_pos_back , branch_num ) ; ' ...
, log2 ( dec_factors ( index2 ) ) , log2 ( dec_factors ( index2 ) ) )
)
eval ( sprintf ( ' synthesis_data . level % d ( u_pos_back , branch_num
) = w_array . level % d (: , branch_num ) ' ' * analysis_data . level
% d (( u_pos_back +1) : u_pos , branch_num ) ; ' ...
, log2 ( dec_factors ( index2 ) ) , log2 ( dec_factors ( index2 ) ) , log2 ( dec_factors ( index2 ) ) ) ) ;
% Get error eval ( sprintf ( ' LMS_error . level % d ( u_pos_back , branch_num ) = ideal_analysis_data . level % d ( u_pos_back , branch_num ) -

112

synthesis_data . level % d ( u_pos_back , branch_num ) ; ' ...
, log2 ( dec_factors ( index2 ) ) , log2 ( dec_factors ( index2 ) ) , log2 ( dec_factors ( index2 ) ) ) ) ;
% Update tap weights : eval ( sprintf ( ' w_array . level % d (: , branch_num ) = w_array . level % d (: , branch_num ) + step_size * analysis_data . level % d
(( u_pos_back +1) : u_pos , branch_num ) * conj ( LMS_error . level % d ( u_pos_back , branch_num ) ) ; ' ...
, log2 ( dec_factors ( index2 ) ) , log2 ( dec_factors ( index2 ) ) , log2 ( dec_factors ( index2 ) ) , log2 ( dec_factors ( index2 )
) ));

160
161
162

163

164
165
166
167
168
169
170
171
172
173
174
175
176
177
178
179
180
181
182
183

%

184
185
186
187
188
189
190
191

%
%
%

192

%

193

%

end end % % Place Feedback HERE
% Update PSD arrays every other full iteration ( sice we need
% 2* num_bands_max ) new samples for new window . if index == 2* floor ( index /2) && ( index + 2) * num_bands_max - 1 ≤ length ( u_in )
% Calculate net FFT ^2 for pwelch - type PSD estimate : temp_PSDin = fftshift ( abs ( fft ( u_in ( index * num_bands_max :( index
+2) * num_bands_max -1) ) ) .^2) ; temp_PSDout = fftshift ( abs ( fft ( d_ideal ( index * num_bands_max :( index +2) * num_bands_max -1) ) ) .^2) ;
% Update PSD estimate memory array
PSD_mem_in = [ temp_PSDin , PSD_mem_in (: , 1:( end -1) ) ];
PSD_mem_out = [ temp_PSDout , PSD_mem_out (: , 1:( end -1) ) ];
% Take mean of past K FFT ^2 to get PSD approximate at time N
PSD_array_in (: , index ) = mean ( PSD_mem_in ( num_bands_max +1:2* num_bands_max , :) , 2) ;
PSD_array_out (: , index ) = mean ( PSD_mem_out ( num_bands_max +1:2* num_bands_max , :) , 2) ;
PSD_array_out (: , index ) ./ PSD_array_in (: , index )
% And now the important part : make a decision for the new
% dec_factors vector !! This is a function of the PSD_INest ,
% PSD_OUTest , current dec_factorsand possible some black magic .
[ dec_factors_new , w_new ] = subband_update ( dec_factors , w_array
,...
PSD_array_in (: , index ) , PSD_array_out (: , index ) , thresh_vals ,... num_bands_max , filtbank_bin_syn . H0 , filtbank_bin_syn . H1 ) ; dec_factors = dec_factors_new ; w_array = w_new ; end if index == 100 dec_factors = [8 8 8 8 4 4]; w_array . level1 (: , 2) = eff_filt ( w_array . level2 (: , 4) . ' , w_array . level2 (: , 3) . ' ...
, filtbank_bin_dec . H0 , filtbank_bin_dec . H1 , filtbank_bin_syn . H0 , filtbank_bin_syn . H1 ) . ' ; end 113

194
195
196
197

end disp ( ' Analysis Filter End ' )
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%% Filter/Interpolate Outputs:
disp('Synthesis Filter Start')
% Rebuild signals: start from the outside in...
for index = log2(num_bands_max):-1:1
    for index2 = 1:index
        % Output synthesis processing
        eval(sprintf('temp_LP = [interp_zeros(synthesis_data.level%d(:, (2*index2-1)), 2, 1); 0];', index));
        eval(sprintf('temp_HP = [interp_zeros(synthesis_data.level%d(:, 2*index2), 2, 1); 0];', index));
        eval(sprintf(['synthesis_data.level%d(:, index2) = synthesis_data.level%d(:, index2)' ...
            ' + filter(filtbank_bin_dec.H0, 1, temp_LP) + filter(filtbank_bin_dec.H1, 1, temp_HP);'], ...
            index-1, index-1));
        clear temp_HP temp_LP
        % Ideal output synthesis processing
        eval(sprintf('temp_DLP = [interp_zeros(ideal_synthesis_data.level%d(:, (2*index2-1)), 2, 1); 0];', index));
        eval(sprintf('temp_DHP = [interp_zeros(ideal_synthesis_data.level%d(:, 2*index2), 2, 1); 0];', index));
        eval(sprintf(['ideal_synthesis_data.level%d(:, index2) = ideal_synthesis_data.level%d(:, index2)' ...
            ' + filter(filtbank_bin_dec.H0, 1, temp_DLP) + filter(filtbank_bin_dec.H1, 1, temp_DHP);'], ...
            index-1, index-1));
        clear temp_DHP temp_DLP
    end
end
disp('Synthesis Filter End')
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
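The synthesis recursion interpolates each pair of half-rate subbands by two and recombines them through the synthesis filter pair, one level at a time. A minimal NumPy sketch of a single two-channel recombination step (the sum/difference filter pair below is purely illustrative, not the thesis filter bank):

```python
import numpy as np

def upsample2(x):
    """Insert a zero after every sample (interpolation by 2)."""
    y = np.zeros(2 * len(x))
    y[::2] = x
    return y

def recombine(low, high, h0, h1):
    """One synthesis step: upsample each half-rate subband by 2 and sum
    the outputs of the low- and high-band synthesis filters."""
    return np.convolve(h0, upsample2(low)) + np.convolve(h1, upsample2(high))

# Trivial (sum/difference) filter pair, for illustration only.
h0 = np.array([0.5, 0.5])
h1 = np.array([0.5, -0.5])
out = recombine(np.ones(4), np.zeros(4), h0, h1)
```

Applying this step repeatedly from the deepest level outward, as the loop above does, rebuilds the full-rate signal from the subband tree.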

%% Specify Outputs:
fin_dec_vals = dec_factors;
output.out = synthesis_data;
output.ideal = ideal_synthesis_data;
output.PSD_INest = PSD_array_in;
output.PSD_OUTest = PSD_array_out;
output_error.full = ideal_synthesis_data.level0(1:length(ideal_synthesis_data.level0)) ...
    - synthesis_data.level0;
output_error.subbands = LMS_error;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

A.16  Adjustable Non-Uniform Subband Update Algorithm

function [dec_factors_new, w_new] = subband_update(dec_factors_old, w_old, ...
    PSD_array_in, PSD_array_out, thresh_vals, max_bands, H0, H1)
%
% Adam Charles
% 3/24/2009
%
% Decision algorithm for updating subband decimation factors
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%
w_new = w_old;
% dec_factors_new = 0;
% PSD_in0 = find(PSD_array_in == 0);
% PSD_array_in(PSD_in0) = 10^(-5);
% Regularized PSD ratio (small offsets avoid division by zero)
PSDratio = (PSD_array_out + 0.0001)./(PSD_array_in + 0.001);

index_low = find(PSDratio < thresh_vals(1));
index_high = find(PSDratio > thresh_vals(2));
power_weights = zeros(length(dec_factors_old), 1);
power_weights(index_low) = -2;
power_weights(index_high) = 1;

branch_test = 0;
df_new_index = 1;
for index = 1:length(dec_factors_old)
    index_start = max_bands*sum(1./dec_factors_old(1:index-1)) + 1;
    band_power(index) = mean(PSDratio(index_start:index_start-1 ...
        + max_bands/(dec_factors_old(index))));
    if branch_test == 0 && index ~= length(dec_factors_old)
        band_test = ceil(dec_factors_old(index)*sum(1./dec_factors_old(1:index))/2) ...
            - ceil(dec_factors_old(index+1)*sum(1./dec_factors_old(1:index+1))/2);
        if dec_factors_old(index) == dec_factors_old(index+1) && band_test == 0
            branch_test = 1;
        else
            if band_power(index) < thresh_vals(1) && dec_factors_old(index) < max_bands
                % Decompose into 2 bands
                dec_factors_new(df_new_index:df_new_index+1) = 2*[1, 1]*dec_factors_old(index);
                df_new_index = df_new_index+2;
            else
                dec_factors_new(df_new_index) = dec_factors_old(index);
                df_new_index = df_new_index+1;
            end
        end
    elseif branch_test == 0 && index == length(dec_factors_old)
        if band_power(index) < thresh_vals(1) && dec_factors_old(index) < max_bands
            % Decompose into 2 bands
            dec_factors_new(df_new_index:df_new_index+1) = 2*[1, 1]*dec_factors_old(index);
            df_new_index = df_new_index+2;
        else
            dec_factors_new(df_new_index) = dec_factors_old(index);
            df_new_index = df_new_index+1;
        end
    elseif branch_test == 1
        if (0.5*(band_power(index) + band_power(index-1)) > thresh_vals(2)) ...
                && (dec_factors_old(index) > 2)
            % Consolidate branch
            dec_factors_new(df_new_index) = 0.5*dec_factors_old(index);
            band_num = dec_factors_old(index)*sum(1./dec_factors_old(1:index));
            eval(sprintf(['w_new.level%d(:, 0.5*(band_num)) = eff_filt(' ...
                'w_old.level%d(:, band_num-1).'', w_old.level%d(:, band_num).'', ' ...
                'H0, H1, H0, H1).'';'], ...
                log2(dec_factors_old(index))-1, log2(dec_factors_old(index)), ...
                log2(dec_factors_old(index))))
            df_new_index = df_new_index + 1;
        elseif (band_power(index-1) < thresh_vals(1)) && (band_power(index) < thresh_vals(1)) ...
                && (dec_factors_old(index-1) < max_bands)
            % Decompose both
            dec_factors_new(df_new_index:df_new_index+1) = 2*[1, 1]*dec_factors_old(index-1);
            dec_factors_new(df_new_index+2:df_new_index+3) = 2*[1, 1]*dec_factors_old(index);
            df_new_index = df_new_index+4;
        elseif (band_power(index-1) < thresh_vals(1)) && (dec_factors_old(index-1) < max_bands)
            % Decompose index-1
            dec_factors_new(df_new_index:df_new_index+1) = 2*[1, 1]*dec_factors_old(index-1);
            dec_factors_new(df_new_index+2) = dec_factors_old(index);
            df_new_index = df_new_index+3;
        elseif (band_power(index) < thresh_vals(1)) && (dec_factors_old(index) < max_bands)
            % Decompose index
            dec_factors_new(df_new_index) = dec_factors_old(index-1);
            dec_factors_new(df_new_index+1:df_new_index+2) = 2*[1, 1]*dec_factors_old(index);
            df_new_index = df_new_index+3;
        else
            % Leave it alone...
            dec_factors_new(df_new_index) = dec_factors_old(index-1);
            dec_factors_new(df_new_index+1) = dec_factors_old(index);
            df_new_index = df_new_index+2;
        end
        branch_test = 0;
    end
end

end
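The core decision rule in `subband_update` is: a band whose output/input PSD ratio falls below the lower threshold is split in two (its decimation factor doubles), while adjacent sibling bands whose mean ratio exceeds the upper threshold are merged back (factor halves). A simplified split-only sketch of that rule, assuming one ratio value per current band (the merge branch and weight bookkeeping of the full algorithm are omitted):

```python
def update_dec_factors(dec_factors, band_ratios, thresh_lo, max_bands):
    """Split any band whose PSD ratio is below thresh_lo by doubling its
    decimation factor, up to the maximum band count. Merging of sibling
    bands, as in the full algorithm, is omitted for brevity."""
    new_factors = []
    for d, r in zip(dec_factors, band_ratios):
        if r < thresh_lo and d < max_bands:
            new_factors.extend([2 * d, 2 * d])  # two half-width bands
        else:
            new_factors.append(d)
    return new_factors

# A band with a low PSD ratio gets split; the other is kept as-is.
print(update_dec_factors([2, 2], [0.01, 0.5], 0.025, 16))  # → [4, 4, 2]
```

A low PSD ratio indicates significant residual error energy in that band, so more frequency resolution (a narrower band) is allocated there.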

A.17  Test Filter Construction Code

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% Low-Pass Filter
ftest_lpspec.fp = 1/4;           % pass-band edge frequency
ftest_lpspec.fst = 1/4 + 1/32;   % stop-band edge just above the pass-band
ftest_lpspec.Ap = 0.1;           % maximum pass-band ripple of 0.1 dB
ftest_lpspec.Ast = 60;           % minimum stop-band attenuation of 60 dB
% generate the actual filter from the specs
ftest_lpspecs = fdesign.lowpass('fp,fst,Ap,Ast', ...
    ftest_lpspec.fp, ftest_lpspec.fst, ftest_lpspec.Ap, ftest_lpspec.Ast);
ftest_lp = design(ftest_lpspecs, 'kaiserwin');
% fvtool(ftest_lp)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%% Band-Pass Filter
ftest_bpspec.fst1 = 1/3;
ftest_bpspec.fp1 = 1/3 + 1/27;
ftest_bpspec.fp2 = 1/3 + 1/17 + 1/4;
ftest_bpspec.fst2 = 1/3 + 1/17 + 1/2 + 1/27;
ftest_bpspec.ast1 = 60;
ftest_bpspec.ap = 0.1;
ftest_bpspec.ast2 = 70;
ftest_bpspecs = fdesign.bandpass('fst1,fp1,fp2,fst2,ast1,ap,ast2', ...
    ftest_bpspec.fst1, ftest_bpspec.fp1, ftest_bpspec.fp2, ftest_bpspec.fst2, ...
    ftest_bpspec.ast1, ftest_bpspec.ap, ftest_bpspec.ast2);
ftest_bp = design(ftest_bpspecs, 'kaiserwin');
% fvtool(ftest_bp)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%% High-Pass Filter
ftest_hpspec.fst = 25/32;        % stop-band edge frequency
ftest_hpspec.fp = 13/16;         % pass-band edge just above the stop-band
ftest_hpspec.Ast = 60;           % minimum stop-band attenuation of 60 dB
ftest_hpspec.Ap = 0.1;           % maximum pass-band ripple of 0.1 dB
% generate the actual filter from the specs
ftest_hpspecs = fdesign.highpass('fst,fp,ast,ap', ...
    ftest_hpspec.fst, ftest_hpspec.fp, ftest_hpspec.Ast, ftest_hpspec.Ap);
ftest_hp = design(ftest_hpspecs, 'kaiserwin');
% fvtool(ftest_hp)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%% Band-Stop Filter
ftest_bsspec.fp1 = 1/3;
ftest_bsspec.fst1 = 1/3 + 1/27;
ftest_bsspec.fst2 = 1/3 + 1/17 + 1/4;
ftest_bsspec.fp2 = 1/3 + 1/17 + 1/2 + 1/27;
ftest_bsspec.ap1 = 0.1;
ftest_bsspec.ast = 60;
ftest_bsspec.ap2 = 0.2;
ftest_bsspecs = fdesign.bandstop('fp1,fst1,fst2,fp2,ap1,ast,ap2', ...
    ftest_bsspec.fp1, ftest_bsspec.fst1, ftest_bsspec.fst2, ftest_bsspec.fp2, ...
    ftest_bsspec.ap1, ftest_bsspec.ast, ftest_bsspec.ap2);
ftest_bs = design(ftest_bsspecs, 'kaiserwin');
% fvtool(ftest_bs)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
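The Kaiser-window designs above have a rough SciPy analogue. A hedged sketch mirroring the low-pass spec, with band edges normalized to Nyquist = 1 as in the MATLAB code (the exact tap count and ripple behavior will differ slightly from `fdesign`/`kaiserwin`):

```python
import numpy as np
from scipy.signal import kaiserord, firwin

# Low-pass spec mirroring ftest_lpspec: pass-band edge 1/4, stop-band
# edge 1/4 + 1/32, 60 dB minimum stop-band attenuation.
fp, fst, ast_db = 1/4, 1/4 + 1/32, 60.0

# kaiserord takes the attenuation in dB and the transition width in
# Nyquist-normalized units, and returns the filter order and Kaiser beta.
numtaps, beta = kaiserord(ast_db, width=fst - fp)

# Place the cutoff in the middle of the transition band.
taps = firwin(numtaps, (fp + fst) / 2, window=("kaiser", beta))
```

With `scale=True` (the default), `firwin` normalizes the taps so the DC gain of a low-pass design is exactly one.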

A.18  Subband Algorithm Testing Code

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% Subband Adaptive Filtering Testing Code
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
clear
clc
close all
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% Choose which filters to try
lms_uniform_opt = 0;
lms_reg_opt = 0;
rls_uniform_opt = 0;
rls_reg_opt = 0;
lms_nonuniform_opt = 1;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% Make testing data
n_max = 32000;
% H = poly(rand(60, 1));
% H = H./norm(H);
% Test Filters
filt_test_create
Hlp = ftest_lp.Numerator;
Hhp = ftest_hp.Numerator;
Hbp = ftest_bp.Numerator;
Hbs = ftest_bs.Numerator;
% fvtool(ftest_lp)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% Make Options
step_size = 0.01;
num_bands = 16;
num_weights = ceil(length(Hbs)/num_bands)*num_bands;
num_trials = 1;
band_type = 'uniform';
w_start = zeros(num_bands, num_weights/num_bands);
w_start_LMS = zeros(1, num_weights);
forget_factor = 0.99;
delta = 0.9;
thresh_vals = [0.1, 0.5]/4; % 10^(12/20)*10.^(-[50;15]/20);
for index2 = 1:log2(num_bands)
    eval(sprintf('output_error_NULMS.subbands.level%d = 0;', index2))
end
output_error_LMS.full = 0;
output_error_LMS.subbands = 0;
output_error_NULMS.full = 0;
LMS_error = 0;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% Run tests
for index = 1:num_trials
    disp(sprintf('Iteration %d', index))
    u_in = 0.6*randn(n_max, 1);
    [sys_noise] = AFinput_noise(length(u_in), [0, 0.01], 5000, [1, 30], [1, 0]);
    d_ideal = [filter(Hbp, 1, u_in(1:n_max/2)); ...
        filter(Hlp, 1, u_in(n_max/2+1:end))] + sys_noise;
    % d_ideal = filter(Hbs, 1, u_in) + sys_noise;
    if lms_uniform_opt
        [output_LMS, output_error_LMS_temp] = subbandLMSsimple2(u_in, ...
            d_ideal, step_size, w_start, num_bands, band_type);
        output_error_LMS.full = output_error_LMS.full + ...
            abs(output_error_LMS_temp.full)/num_trials;
        output_error_LMS.subbands = output_error_LMS.subbands + ...
            abs(output_error_LMS_temp.subbands)/num_trials;
    end
    if lms_reg_opt
        [d_hat_LMS, LMS_error_temp, w_mat] = standardLMS(u_in, d_ideal, ...
            w_start_LMS, step_size, 7, 0);
        LMS_error = LMS_error + abs(LMS_error_temp)/num_trials;
        clear LMS_error_temp
    end
    if rls_uniform_opt
        [output, output_error] = subbandRLSsimple2(u_in, d_ideal, ...
            forget_factor, delta, w_start, num_bands);
        output_error.full = output_error.full + abs(output_error.full)/num_trials;
        for index2 = 1:log2(num_bands)
            eval(sprintf(['output_error.subbands.level%d = output_error.subbands.level%d' ...
                ' + abs(output_error.subbands.level%d)/num_trials;'], ...
                index2, index2, index2))
        end
    end
    if rls_reg_opt
        [d_hat_RLS, Xi, w] = standardRLS(u_in, d_ideal, w_start_LMS, ...
            forget_factor, delta, 8, 0);
    end
    if lms_nonuniform_opt
        w0_len = num_weights;
        num_bands_max = num_bands;
        start_dec_factrs = [2 2];
        PSD_mem = 50;
        [output_NULMS, output_error_NULMS_temp, fin_dec_vals] = ...
            subbandNULMS_adapt(u_in, d_ideal, step_size, ...
            w0_len, num_bands_max, start_dec_factrs, PSD_mem, thresh_vals);
        output_error_NULMS.full = output_error_NULMS.full + ...
            abs(output_error_NULMS_temp.full)/num_trials;
        for index2 = 1:log2(num_bands)
            eval(sprintf(['output_error_NULMS.subbands.level%d = output_error_NULMS.subbands.level%d' ...
                ' + abs(output_error_NULMS_temp.subbands.level%d)/num_trials;'], ...
                index2, index2, index2))
        end
        clear output_error_NULMS_temp
    end
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% Plotting
if rls_uniform_opt && rls_reg_opt
    figure;
    subplot(3,1,1), plot(1:size(output_error.subbands, 2), ...
        abs(Xi(1:size(output_error.subbands, 2))))
    xlabel('Iteration Number, n', 'FontSize', 12, 'FontName', 'Times');
    ylabel('RLS Absolute Error, |e[n]|', 'FontSize', 12, 'FontName', 'Times');
    subplot(3,1,2), plot(1:length(output_error.full), abs(output_error.full.'), ...
        'b', 1:length(Xi), abs(Xi), '--r')
    xlabel('Iteration Number, n', 'FontSize', 12, 'FontName', 'Times');
    ylabel('RLS Absolute Error, |e[n]|', 'FontSize', 12, 'FontName', 'Times');
    legend('Subband RLS', 'Fullband RLS')
    subplot(3,1,3), plot(abs(output_error.subbands.'))
    xlabel('Iteration Number, n', 'FontSize', 12, 'FontName', 'Times');
    ylabel('RLS Absolute Error, |e[n]|', 'FontSize', 12, 'FontName', 'Times');
    legend('Subband 1', 'Subband 2', 'Subband 3', 'Subband 4', ...
        'Subband 5', 'Subband 6', 'Subband 7', 'Subband 8');
end
if lms_uniform_opt && lms_reg_opt
    figure;
    subplot(3,1,1), plot(1:length(output_error_LMS.full), ...
        abs(output_error_LMS.full.'), 'b', 1:length(LMS_error), abs(LMS_error), '--r')
    xlabel('Iteration Number, n', 'FontSize', 12, 'FontName', 'Times');
    ylabel('LMS Absolute Error, |e[n]|', 'FontSize', 12, 'FontName', 'Times');
    legend('Subband LMS', 'Fullband LMS')
    subplot(3,1,2), plot(1:size(output_error_LMS.subbands, 2), ...
        abs(LMS_error(1:size(output_error_LMS.subbands, 2))))
    xlabel('Iteration Number, n', 'FontSize', 12, 'FontName', 'Times');
    ylabel('LMS Absolute Error, |e[n]|', 'FontSize', 12, 'FontName', 'Times');
    subplot(3,1,3), plot(abs(output_error_LMS.subbands.'))
    xlabel('Iteration Number, n', 'FontSize', 12, 'FontName', 'Times');
    ylabel('LMS Absolute Error, |e[n]|', 'FontSize', 12, 'FontName', 'Times');
    legend('Subband 1', 'Subband 2', 'Subband 3', 'Subband 4', ...
        'Subband 5', 'Subband 6', 'Subband 7', 'Subband 8');
end
if lms_nonuniform_opt
    figure;
    subplot(4,1,1), plot(abs(output_error_NULMS.full))
    subplot(4,1,2), plot(abs(output_error_NULMS.subbands.level1))
    subplot(4,1,3), plot(abs(output_error_NULMS.subbands.level2))
    subplot(4,1,4), plot(abs(output_error_NULMS.subbands.level3))
end
if lms_nonuniform_opt % && lms_reg_opt
    figure; hold on
    plot(abs(output_error_NULMS.full))
    plot(abs(output_error_NULMS2.full), '--r')
    xlabel('Iteration Number n', 'FontSize', 12, 'FontName', 'Times')
    ylabel('LMS Absolute Error, |e[n]|', 'FontSize', 12, 'FontName', 'Times')
    legend('Sub-optimal Decomposition', 'Optimal Decomposition')
    hold off

    figure; hold on
    subplot(4, 1, 1), plot(abs(output_error_NULMS.full))
    xlabel('Iteration Number n', 'FontSize', 12, 'FontName', 'Times')
    ylabel('LMS Absolute Error, |e[n]|', 'FontSize', 12, 'FontName', 'Times')
    subplot(4, 1, 2), plot(abs(output_error_NULMS.subbands.level1))
    xlabel('Half Rate Iteration Number n', 'FontSize', 12, 'FontName', 'Times')
    ylabel('LMS Absolute Error, |e[n]|', 'FontSize', 12, 'FontName', 'Times')
    subplot(4, 1, 3), plot(abs(output_error_NULMS.subbands.level2))
    xlabel('Quarter Rate Iteration Number n', 'FontSize', 12, 'FontName', 'Times')
    ylabel('LMS Absolute Error, |e[n]|', 'FontSize', 12, 'FontName', 'Times')
    subplot(4, 1, 4), plot(20*log10(abs(output_error_NULMS.subbands.level3)))
    xlabel('Eighth Rate Iteration Number n', 'FontSize', 12, 'FontName', 'Times')
    ylabel('LMS Absolute Error, |e[n]|', 'FontSize', 12, 'FontName', 'Times')

    figure; hold on
    plot(20*log10(abs(output_error_NULMS.full)))
    % plot(output_NULMS.out.level0)
    % plot(20*log10(abs(output_error_NULMS2.full)), '--r')
    xlabel('Iteration Number n', 'FontSize', 12, 'FontName', 'Times')
    ylabel('LMS Absolute Error, |e[n]|', 'FontSize', 12, 'FontName', 'Times')
    legend('Sub-optimal Decomposition', 'Optimal Decomposition')
    hold off
end

figure;
xPSDvals = num_bands_max*(1:size(output_NULMS.PSD_OUTest, 2));
yPSDvals = pi*((1:size(output_NULMS.PSD_OUTest, 1))-1)/num_bands_max;
surf(xPSDvals, yPSDvals, (output_NULMS.PSD_OUTest+0.001)./(output_NULMS.PSD_INest + 0.01))
xlabel('Iteration Number n', 'FontSize', 12, 'FontName', 'Times')
ylabel('Frequency Band', 'FontSize', 12, 'FontName', 'Times')
zlabel('PSD Ratio Estimate', 'FontSize', 12, 'FontName', 'Times')

Bibliography

[1] A. Kumar, G. K. Singh, and R. S. Anand. Near perfect reconstruction quadrature mirror filter. Proceedings of World Academy of Science, Engineering and Technology, 27:204–207, February 2008.

[2] A. Mahalanobis, S. Song, S. K. Mitra, and M. R. Petraglia. Adaptive FIR filters based on structural subband decomposition for system identification problems. IEEE Transactions on Circuits and Systems, 40(6):375–381, June 1993.

[3] M. A. Aldajani. Adaptive step size sign least mean squares. IEEE ICASSP, pages 996–672, 2004.

[4] F. S. Hallack and M. R. Petraglia. Performance comparison of adaptive algorithms applied to acoustic echo cancelling. IEEE, pages 1147–1150, 2003.

[5] A. Gilloire and M. Vetterli. Adaptive filtering in sub-bands. IEEE, pages 1572–1575, 1988.

[6] A. Gilloire and M. Vetterli. Adaptive filtering in subbands with critical sampling: Analysis, experiments, and application to acoustic echo cancellation. IEEE Transactions on Signal Processing, 40(8):1862–1875, August 1992.

[7] S. Haykin. Adaptive Filter Theory. Prentice-Hall, fourth edition, 2002.

[8] J. D. Grieshbach, T. Bose, and D. M. Etter. Non-uniform filterbank bandwidth allocation for system modeling subband adaptive filters. IEEE, pages 1473–1476, 1999.

[9] D. E. Knuth. The Art of Computer Programming, Volume 2: Seminumerical Algorithms. Addison-Wesley, third edition, 1998.

[10] S. Koike. Adaptive threshold nonlinear algorithm for adaptive filters with robustness against impulse noise. IEEE Transactions on Signal Processing, 45(9):2391–2395, September 1997.

[11] S. Koike. Adaptive forgetting factor recursive least squares adaptive threshold nonlinear algorithm (AFF-RLS-ATNA) for identification of nonstationary systems. IEEE Transactions, 3(3):617–620, 2003.

[12] R. D. Koilpillai and P. P. Vaidyanathan. Cosine-modulated FIR filter banks satisfying perfect reconstruction. IEEE Transactions on Signal Processing, 40(4):770–783, April 1992.

[13] V. Krishnamurthy and S. Singh. Adaptive forgetting factor recursive least squares for blind interference suppression in DS/CDMA systems. IEEE, pages 2469–2472, 2000.

[14] M. R. Petraglia, R. T. B. Vasconcellos, and R. G. Alves. Performance comparison of adaptive subbands and structures applied to acoustic echo cancelling. IEEE, pages 1535–1539, 2001.

[15] M. L. McCloud and D. M. Etter. Subband adaptive filtering with time-varying nonuniform filter banks. IEEE, pages 1953–1956, 1997.

[16] M. L. McCloud and D. M. Etter. A technique for nonuniform subband adaptive filtering with varying bandwidth filter banks. IEEE, pages 1315–1318, 1997.

[17] A. Oakman and P. Naylor. Dynamic structures for non-uniform subband adaptive filtering. IEEE Transactions, pages 3717–3720, 2001.

[18] K. K. Parhi. VLSI Digital Signal Processing Systems: Design and Implementation. John Wiley & Sons, Inc., 1999.

[19] M. R. Petraglia and P. B. Batalheiro. Filter bank design for a subband adaptive filtering structure with critical sampling. IEEE Transactions on Circuits and Systems, 51(6):1194–1202, June 2004.

[20] M. R. Petraglia and P. B. Batalheiro. Nonuniform subband adaptive filtering with critical sampling. IEEE Transactions, pages 565–575, 2008.

[21] M. R. Petraglia and S. K. Mitra. Adaptive FIR filter structure based on the generalized subband decomposition of FIR filters. IEEE Transactions on Circuits and Systems, 40(6):354–362, June 1993.

[22] M. R. Petraglia and S. K. Mitra. Performance analysis of adaptive filter structures based on subband decomposition. IEEE Transactions, 6:60–63, 1993.

[23] R. G. Alves, M. R. Petraglia, J. A. Apolinario Jr., and P. S. R. Diniz. RLS algorithm for a new subband adaptive structure with critical sampling. IEEE Transactions, pages 442–447, 1998.

[24] R. W. Wies, A. Balasubramanian, and J. W. Pierre. Using adaptive step-size least mean squares (ASLMS) for estimating low-frequency electromechanical modes in power systems. IEEE, pages 1–8, 2006.

[25] P. P. Vaidyanathan. Multirate Systems and Filter Banks. Prentice-Hall, 1993.

[26] M. Vetterli. A theory of multirate filter banks. IEEE Transactions on Acoustics, Speech, and Signal Processing, 35(3):356–372, March 1987.

[27] P. D. Welch. The use of fast Fourier transform for the estimation of power spectra: A method based on time averaging over short, modified periodograms. IEEE Transactions on Audio and Electroacoustics, 15(2):70–73, June 1967.

[28] Y. Higa, H. Ochi, and S. Kinjo. A subband adaptive filter with the statistically optimum analysis filter bank. IEEE Transactions on Circuits and Systems, 45(8):1150–1154, August 1998.

Similar Documents

Premium Essay

Wireless

...PENYERAHAN DAN PENILAIAN TUGASAN ASSIGNMENT SUBMISSION AND ASSESSMENT KOD KURSUS /COURSE CODE : CBWT3103 TAJUK KURSUS /COURSE TITLE : INTRODUCTION TO WIRELESS TECHNOLOGY SEMESTER /SEMESTER : JANUARI/JANUARY 2011 _________________________________________________________________________ ARAHAN KEPADA PELAJAR / INSTRUCTIONS TO STUDENTS 1. Tugasan ini mengandungi SATU (1) soalan sahaja yang disediakan dalam bahasa modul bercetak kursus ini. / This assignment contains only ONE (1) question that is set in the language of the printed module for the course. 2. Jawab dalam Bahasa Melayu atau Bahasa Inggeris. / Answer in Malay or English. 3. Muatturunkan templet tugasan versi bahasa yang berkenaan daripada MyVLE untuk penyediaan dan penyerahan tugasan anda. / Download the language version of the assignment template concerned from the MyVLE for preparation and submission of your assignment. 4. (i) Bagi soalan Tugasan yang berasaskan esei / For essay based assignment: Tugasan anda hendaklah antara 2500 hingga 3000 patah perkataan tidak termasuk rujukan. Bilangan perkataan hendaklah ditunjukkan di hujung tugasan anda. Jumlah perkataan hendaklah ditunjukkan di penghujung Tugasan. Tugasan anda hendaklah ditaip dengan menggunakan saiz fon 12 Times New Roman dan langkau baris 1.5. Jangan menyalin soalan dan arahan tugasan dalam jawapan anda. / Your assignment should be between 2500 to 3000 words excluding references. The number of words should be shown at the end of your assignment. Your assignment...

Words: 2663 - Pages: 11

Premium Essay

Wireless Technology

...Wireless Technology BIS/221 Wireless Technology Technology has greatly evolved in the past 10 years. There was a time when a paper back book was more common than a cellular or wireless device. Today, however, wireless device are more and more common and are pretty much a must have. Wireless devices simplify how we perform daily task by being easier to access because we can carry these items with us. A few wireless devices that have made my life easier are my wireless printer, GPS, Wi-Fi, Bluetooth, and a few wireless household products. Wireless technology has also allowed disabled individuals to enjoy activities that they normally cannot enjoy. The use of a wireless printer has made it easier to print documents either from my wireless computer or my cell phones. I no longer have to go into the computer room to complete these task. I honestly no longer need a computer room; because I have a wireless computer and printer I no longer have to be in a designated area to complete homework or perform task on the computer. I can now do those things from anywhere in my home. Wi-Fi has also made in possible to surf the web on my computer from anywhere there is a Wi-Fi connected. The use of Wi-Fi has eliminated the need to use cellular data in turn lowering my cell phone bill. The use of GPS has also made it easier to navigate to unfamiliar places by enabling me to easily get direction either from the GPS on my cell phone or a dedicated GPS device...

Words: 537 - Pages: 3

Premium Essay

Wireless Signals

...Wireless Communications are very important in our daily lives. Practically everywhere we turn we can see someone working away on a laptop, an iPad, or some other form of wireless PDA. This form of communication is usually used for personal and professional reasons. There are four main types of wireless technology in play here. The four types of wireless technology is Wi-Fi, Cellular, Bluetooth and WiMAX (Cox, 2013). Wi-Fi is the most popular form of wireless technology that we come to depend on today. Wi-Fi uses IEEE 802.11 specifications to provide secure network connections at home, in the office and in the public at places like libraries, coffee houses. Many business offices have Wi-Fi set up so that their employees can be more mobile using laptops they can retreat from their desk and in to a conference room for a quiet space to work. Wi-Fi connection within a network presents challenges for any IT group with regard to security as well as architecture of the network. Many wireless Routers and AP’s are designed to provide quick speeds and secure connection for Wi-Fi some more powerful than others. Many PDA devices are being designed to act as a hot spot, providing Wi-Fi access to laptops and other devices. This brings me to speak about Cellular technology. When we think of cellular technology, we mostly think about being on the go and needing to make a phone call with a wireless phone. Cellular signals use connected transmitters or cells as Michael Cox calls them to...

Words: 725 - Pages: 3

Free Essay

Wireless Paper

...J. Chem. Chem. Eng. 5 (2011) 897-902 Remote Control of Fed-Batch Fermentation Systems Eric Moreau3, Floyd Inman, III1, Sunita Singh2, Heather Walters1 and Leonard Holmes1* 1. Biotechnology Research and Training Center, University of North Carolina at Pembroke, Pembroke, NC, USA 2. Central Institute of Agricultural Engineering, Bhopal, Madhya Pradesh, India 3.Université de Picardie Jules Verne, Amiens, France Received: June 14, 2011 / Accepted: July 11, 2011 / Published: October 10, 2011. Abstract: Bioreactor operation requires continuous monitoring of fermentation parameters and real-time control over bioreactor devices. Remote monitoring and control of the bioreactor’s computer via the Internet avoids the necessity of personnel being continually onsite during operation. A two liter Sartorius-stedim Biostat® A Plus fermentation system was networked and interfaced with the commercial software from GoToMyPC® to allow remote control of the fermentation system utilizing the internet. The fermentation vessel was equipped with hardware calibrated for monitoring and controlling culture parameters during experimentations. The uniform resource locator controlled night-vision web camera allowed continuous monitoring of the glass fermentation vessel during the day and at night. The main window screen of the laboratory computer can be securely accessed from any portable device (i.e. laptop) capable of establishing an Internet connection and executing the commercial software from GoToMyPC®...

Words: 2136 - Pages: 9

Premium Essay

Wireless

...WIRELESS COMMUNICATION Contents Page No 1. Dedication…………………………………………………………..01 2. Background…………………………………………………………02 3. Acknowledgement………………………………………………….03 4. Abstract.....................................................................................…….04 5. Introduction………………………………………………….….. 05 6. Introduction to Technology……………………………….……. 06 7. Introduction to Wireless Communication……………….……….06 8. Definition of Wireless………………………………………..……..07 9. Wireless Communication Model……………………………..……07 10. Wireless Communication Technologies……………………..……08 11. Wireless Usage……………………………………………..……….10 12. Wireless Security…………………………………………..……….10 13. Security about……………………………………………….……...11 14. Role of Security…………………………………………………….11 15. Wireless Devices…………………………………………….……...13 16. Wireless Prices……………………………………………….……..14 17. Wireless Devices Availability………………….…………………..14 18. Wireless Devices Manufacturers………………………….………14 19. Wireless Service Companies………………………………………15 20. Conclusion…………………………………………………………..15 21. Reference……………………………………………………………16 Dedication We dedicate this project to our loving parents whose prayers are always with us. Furthermore, We confer this project to all the teachers in our whole educational...

Words: 2185 - Pages: 9

Premium Essay

Wireless Information

...Wireless in AmericA Innovative ■ Competitive ■ World Leader is the most innovative and competitive in the world-—the gold standard that others aspire to emulate. The U.s. wireless industry The industry is providing American wireless users with the best value proposition on the planet. We use our devices to talk more, pay less and have more wireless broadband subscribers than any other developed country. The industry’s competition and innovation have also created a fantastic array of choices for consumers, who can select from several national service providers, and many regional and local carriers. They have the option of prepaid or postpaid service. More than 630 handsets are available in the U.S. market. Mobile applications barely existed just three years ago, yet today, there are more than 500,000 from which to choose. Wireless technology is helping us live and work better than ever before, and is having profound impacts in areas such as healthcare, transportation, energy, education and many more. The U.S. wireless industry leads the way in the widespread deployment of high-speed networks… and even in the most challenging of economic times, continues to outpace its counterparts around the world when it comes to investing in infrastructure. The U.S. wireless industry continues to play a key role in our country’s economic development and enriches all of our lives thanks to our hallmark innovation and competition. specTrUm The Lifeblood of Wireless Wireless service...

Words: 9934 - Pages: 40

Premium Essay

Wireless Communication

...With the drastic increase in new technology this past century, I am sure that many people have pondered the question: “I wonder what new technology is going to be part of our lives in the next 10 years or so?” Wireless communications is the one particular technology that has always fascinated me. The convenience and mass usage of cordless phones, cell phones, wireless networking, GPS or navigation systems have always captured my attention. A few years ago I first heard of the name Bluetooth. Along with the interesting name, a “futuristic” scenario was embossed in my memory whereby a cell phone would communicate with one’s fridge at home and then notify the owner that the milk-supply was running low. This may seem a little far-fetched but it may very well be a reality and a standard in a few decades. Bluetooth’s primary purpose is to enable short-range wireless voice and data communications anywhere in the world. The way Bluetooth actually works by allowing users to connect to a wide range of telecommunication and computer devices without cables, namely mobile phones, portable computers, personal handheld devices, as well as connection to the Internet (bluetooth.com, ‘how it works’). In the following, I will further explain what, how and where Bluetooth works and what the potential use of this new technology is. The Bluetooth name comes from a Scandinavian king Harald Blatand, where “Blatand” is translated to “Blue Tooth” (Miller Michael, page 26). This king was known for uniting...

Words: 1942 - Pages: 8

Premium Essay

Wireless Signals

...Wireless Signals Raquel Barthelmes IT/242 September 30, 2012 Randy Guin Abstract This is a short study of the different types of wireless technologies in common use today. This study will touch on only four types, although there are actually numerous types fulfilling many different uses. Wireless Signals Introduction In the world today, we have begun to rely heavily on various new technologies. Everything from the way our vehicles run to being able to spy remotely on our own houses can be done with the advances made in technology in the last few decades. One of the newest and most frequently used advances today is wireless technology, although that in itself is by no means new. The very first completely wireless system was patented by a man named Marconi in 1897 (JPL’s Website, 1995), and a new future in communication was conceived. There are currently numerous wireless technologies available around the world which enable many tasks to be completed more quickly than ever before. From networking to opening a garage door, wireless technology has invaded the everyday lives of billions of people. Businesses are able to employ people to work remotely from home on a scale never before seen, which brings us to the first type of wireless technology I will outline here: networking. Networking Wireless networking is perhaps the most commonly used type of wireless technology, primarily because every...

Words: 1036 - Pages: 5

Free Essay

Wireless

...Introduction We have been hired to deploy a wireless network for Citizens First National Bank. We are going to design, test and deploy this network. Banks have been slow to adopt this technology because of the high security standards they must follow. Since security has improved and the benefits are so great, more and more companies are starting to deploy wireless networks. In the next couple of pages I will go through the scope of the project along with the goals and requirements. There is some initial testing that we have to get done and set up so auditors will be able to test before we can deploy to all branches. Using auditors helps protect the bank with a paper trail in case an incident ever happened. This would show that we have followed all requirements and made every possible attempt to protect our customers and employees. We will have 60 days to set up and test before we have to purchase equipment. This should provide plenty of time to get a location up and running and have their auditors in to do testing. After initial testing we will then put a plan in place to bring all branches online one at a time while sticking to the budget. Scope and Goals Scope The major driving force of this project is to add a wireless infrastructure to the entire bank that would allow seamless use between all branches, while maintaining the high security standards that the FCC requires to help protect the customers and employees of the bank. The details of the...

Words: 786 - Pages: 4

Free Essay

Wireless

...TERM PAPER Wireless LAN Security: Enabling and Protecting the Enterprise. INSIDE: Wireless LAN Technology; Benefits of Wireless LANs; Security Risks and Technical Challenges; Recommendations. Contents: Executive Summary; Wireless LAN Technology; Benefits of Wireless LANs (Simplified Implementation and Maintenance; Extended Reach; Increased Worker Mobility; Reduced Total Cost of Ownership and Operation); Security Risks and Technical Challenges (“Leaky” Buildings; Unapproved Deployments; Exposure of Wireless Devices...)...

Words: 2559 - Pages: 11

Premium Essay

Wireless Proposal

...Week 4 Learning Team Wireless Proposal BIS/220 October 21, 2013 Wireless Technology Proposal The following proposal is for Party Plates’ possible introduction of wireless technologies into the work environment. We have identified two wireless technologies that would be beneficial for implementation into the organization. The goal would be for these wireless technologies to help streamline our data management and increase our productivity. Within this proposal we will identify the benefits and the risks of implementing these technologies. In order to succeed and grow in today’s ever-changing business environment, it is critical to keep up with changes in new technologies. To be competitive and continue advancing in marketing, we must ensure that our staff has the competitive advantage of being able to utilize the latest technologies and data management tools that are available. Our goal as a company is to be able to meet the ever-changing demands of the customer by equipping our staff with the latest marketing data, communication data, and reporting data. Having these abilities would allow us to research any topic effectively and efficiently. We must also have a platform that allows our internal staff to share sales and marketing information across all of the departments, both in office and out in the field. Customer interaction must be optimized through emails, video conferencing and phone calls. In order to accomplish our goals here at Party Plates we have put together...

Words: 801 - Pages: 4

Premium Essay

Wireless Signals

...Wireless Signals xxxxxxxx IT/242 xxxxxxxxx xxxxxxxxx Wireless Signals There are many different types of wireless technologies accomplishing a wide variety of solutions as we trend toward a wireless world. The electromagnetic spectrum is being exploited like no other time in history. With so many choices of wireless technologies to discuss, it is difficult to narrow them down to four. The four different types of wireless technologies to be discussed are cellular, satellite, Radio Frequency Identification, and Wireless Fidelity. Cellular communications are rapidly changing as more wireless applications are realized. The potential for cellular use has transcended mere phone communication as developers delve into other cellular applications. Cellular technology works on the premise of radio communications through low-power base stations known as cells (Brain, Tyson, & Layton). The world has built an extensive and reliable cellular infrastructure and is now in the midst of significant improvement. With third generation (3G) and fourth generation (4G) cellular communication, it is now possible to have mobile broadband technology. Other applications include WAN backup and alarm system transmission. An example of this is the use of 3G backup devices in all State Farm agencies. This device is connected to the router in lieu of a dial backup. Satellite is another form of wireless technology. This digital signal transmits...

Words: 582 - Pages: 3

Premium Essay

Wireless

...Wireless Security Technical Point-of-View Wireless networking (Wi-Fi) is now widely established and utilized at home, in offices and in public areas such as rail stations and streets. This newsletter provides the technical knowledge of Wi-Fi technologies, relevant threats and countermeasures for building a secure internal Wi-Fi network. For end-user best practices of using Wi-Fi, please refer to another newsletter entitled “Wireless Network, Best Practices for General User”. Wireless Technologies | Classification of Networks Technological advancement in wireless communications has led to the worldwide proliferation of networks. The various kinds of network technologies developed can be classified into the following categories according to their range of coverage: Wireless Wide Area Network (WWAN) WWAN offers the largest coverage. Voice and data can be transferred between mobile phones via messaging apps, web pages and video conferencing. To secure the transfer, encryption and authentication methods are adopted. Examples of WWAN are 4G, 3G and 2G networks. Wireless Metropolitan Area Network (WMAN) A MAN (Metropolitan Area Network) covers an entire city, and a WMAN provides wireless coverage on a similar scale. WiMAX and Wireless MAN are both examples of this kind. Wireless Local Area Network (WLAN) WLAN is an 802.11 wireless network that facilitates access in a corporate environment...
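The coverage-based classification in the excerpt above (WWAN, WMAN, WLAN) can be sketched as a small lookup table. The coverage ranges and the classifier below are illustrative assumptions for the sake of example, not figures taken from the newsletter:

```python
# Rough, illustrative coverage classes for wireless networks.
# The range figures are assumptions for illustration only.
NETWORK_CLASSES = {
    "WWAN": {"coverage_km": 100.0, "examples": ["2G", "3G", "4G"]},
    "WMAN": {"coverage_km": 50.0,  "examples": ["WiMAX", "Wireless MAN"]},
    "WLAN": {"coverage_km": 0.1,   "examples": ["802.11 Wi-Fi"]},
}

def classify_by_range(distance_km):
    """Return the smallest network class whose coverage reaches distance_km."""
    for name in ("WLAN", "WMAN", "WWAN"):  # smallest coverage first
        if distance_km <= NETWORK_CLASSES[name]["coverage_km"]:
            return name
    return "WWAN"  # beyond metropolitan range, only a wide-area network fits

print(classify_by_range(0.05))  # a home or office -> WLAN
print(classify_by_range(10))    # across a city -> WMAN
```

The ordering from smallest to largest coverage mirrors the newsletter's classification: each class subsumes the reach of the one below it.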

Words: 4503 - Pages: 19

Free Essay

Wireless Technology Upgrade

...Wireless technology upgrade Tony Madrid Keller Graduate School of Management of DeVry University Decatur, Georgia NETW562: Wireless Devices & Apps Table of Contents Introduction 1 Strategic business assessment 3 Competition 4 Tradeoff analysis and rationale 5 System selection 5 Customer devices 6 Design a Wireless System 7 System description 9 Efficient Support 9 Quality of Service (QoS) 10 Service description 10 Network Detection and Selection 10 Service Continuity with Seamless Connections 10 Topology Independence 10 Coverage analysis 11 Initial capabilities and limitations 11 Less Complexity, Faster Transmission 13 Enhanced Mobile Gaming 13 Presence 14 Broadband Access in Remote Locations 14 Financial outlay 16 Conclusion 17 Wireless technology upgrade Introduction In telecommunications, 4G (also written 4-G) is a term used to refer to the fourth generation of technologies for mobile telephony. It is the successor to the 2G and 3G technologies; 4G is based entirely on the IP protocol, with a system of systems and a network of networks, which is achieved through the convergence of wired and wireless networks. This technology may be used by wireless modems, smartphones and other mobile devices. Fourth-generation technology gives an ultra-broadband internet experience on mobile devices such as laptops with USB wireless modems and smartphones. Imaginable applications include the...

Words: 4563 - Pages: 19

Free Essay

Updated Wireless

...Updated Wireless At Reading Room Bookstore/ Café Prepared for: Mrs. Karen Dickens, Owner of Reading Room Bookstore/ Café Prepared by: Wendell Brown, Peter Bacon, and Alex McCurdy Owners of Keep Austin Wireless April 25, 2014 Table of Contents Executive Summary 2 Introduction 2 Methods 3  Which 802.11 specification is best for your specific needs? 3  Should we get a router that works at 2.4 GHz, 5 GHz, or both? 3  What security level is best for your business? 4  What speed router should be considered? 4  What else should our router(s) have for current or future considerations? 4  What would happen if something were to go down? 5  How many UPS-powered outlets do we need? 5  How much power do our devices require? 5  How long do we need to use the battery? 5  How many feet of Ethernet cabling do we need? 5 Results 7 Conclusions 8 Decision Matrix 10 Recommendation 11 Assumptions 13 Installation 15 Equipment Parts List 16 Works Cited 17 List of Illustrations Figure 1: Picture of Linksys EA6900 (Linksys Employee, 2014) 7 Figure 2: Picture of ASUS RT-AC66U (Endgadget Employee, 2014) 7 Figure 3: Picture of Trendnet TEW-812DRU (Softpedia Employee, 2014) 7 Figure 4: Decision Matrix 8 Figure 5: High Level Parts Diagram 10 Table 1: Detailed parts list (Bacon, 2014) 10 Executive Summary This report was developed to summarize the newer, better, and more secure wireless network...
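The battery questions the report excerpt raises ("How much power do our devices require?", "How long do we need to use the battery?") come down to a simple runtime estimate: usable battery energy divided by the load. A minimal sketch, where the device wattages, UPS capacity and efficiency figure are made-up assumptions rather than values from the report:

```python
# Estimate UPS runtime with the rough formula:
#   runtime_hours ~= battery_capacity_wh * efficiency / load_watts
# All wattages and the 500 Wh capacity below are illustrative assumptions.

DEVICES_W = {"router": 12, "modem": 10, "switch": 25}  # watts per device

def ups_runtime_hours(capacity_wh, load_w, efficiency=0.9):
    """Hours of runtime for a given load, assuming 90% inverter efficiency."""
    if load_w <= 0:
        raise ValueError("load must be positive")
    return capacity_wh * efficiency / load_w

load = sum(DEVICES_W.values())        # 47 W total
hours = ups_runtime_hours(500, load)  # a hypothetical 500 Wh UPS battery
print(f"{load} W load runs about {hours:.1f} h")
```

Working the estimate in this direction also answers the sizing question in reverse: given a required runtime and the measured load, solve for the battery capacity the UPS must supply.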

Words: 3845 - Pages: 16