Since ancient times there have been many attempts to define intelligence. Aristotle argued that all persons possess similar intellectual faculties and that the differences among them are due to teaching and example. In more recent times, intelligence has been defined as a set of innate cognitive functions (adaptive, imaginative, etc.) arising from a human or animal biological brain. Among these, the capacity for adaptation is the main prerogative present in all definitions of intelligent behavior. From the biological point of view, adaptation is a property that all living organisms possess, and it can be interpreted both as a propensity toward improvement of the species and as a conservative process tending to preserve the species over time. From the psychological point of view, adaptation is synonymous with learning. In this sense, learning is a behavioral function, more or less conscious, of a subject that adapts its attitude as a result of experience: to learn is to adapt.

In intelligent systems, whether biologically inspired or entirely artificial, adaptation and the methods to carry it out are an essential prerogative. In this framework, adaptive filters are defined as information processing systems, analog or digital, capable of autonomously adjusting their parameters in response to external stimuli. In other words, the system learns independently and adapts its parameters to achieve a certain processing goal, such as extracting the useful information from an acquired signal, removing disturbances due to noise or other interfering sources, or, more generally, eliminating redundant information. In support of this view, the British neuroscientist Horace B. Barlow discovered in 1953 that the frog brain has neurons which fire in response to specific visual stimuli, and he concluded that one of the main aims of visual processing is the reduction of redundancy. His works have been milestones in the study of the properties of the biological nervous system. Indeed, his research demonstrates that a main function of machine perception is to eliminate redundant information coming from the receptors.

The applicability of adaptive signal processing methods to the solution of real problems is extensive and represents a paradigm for many strategic applications. Adaptive signal processing methods are used in economic and financial sciences, in engineering and social sciences, in medicine, biology, and neuroscience, and in many other areas of high strategic interest. Adaptive signal processing is also a very active field of study and research that, for a thorough understanding, requires advanced interdisciplinary knowledge.
Objectives of the Text

The aim of this book is to provide advanced theoretical and practical tools for the study and design of circuit structures and robust algorithms for adaptive signal processing in different application scenarios. Examples can be found in multimodal and multimedia communications, the biological and biomedical areas, economic modeling, environmental sciences, acoustics, telecommunications, remote sensing, monitoring, and, in general, the modeling and prediction of complex physical phenomena. In particular, in addition to presenting the fundamental theoretical concepts, the most important adaptive algorithms are introduced, together with tools to evaluate their performance. The reader, in addition to acquiring the basic theory, will be able to design and implement the algorithms and evaluate their performance for specific applications.

The idea of the text is based on years of teaching activity of the author, during the course Algorithms for Adaptive Signal Processing held at the Faculty of Information Engineering of “Sapienza” University of Rome. In preparing the book, particular attention was paid to the first chapters and to the mathematical appendices, which make the text suitable for readers without special prerequisites, other than those common to all first three-year courses of the Information Engineering Faculty and other scientific faculties. Adaptive filters are nonstationary, nonlinear, and time-varying dynamic systems and, at times, to avoid a simplistic approach, the arguments may involve concepts that are difficult to understand. For this reason, many of the subjects are introduced from different points of view and with multiple levels of analysis.

In the literature on this topic, numerous authoritative texts are available. The reasons that led to the writing of this work are linked to a philosophically different vision of intelligent signal processing. In fact, adaptive filtering methods can be introduced starting from different theories. In this work we wanted to avoid an “ideological” approach tied to some specific discipline, and instead to put an emphasis on interdisciplinarity, presenting the most important topics through different paradigms. For example, an argument of central importance, the least mean squares (LMS) algorithm, is presented in three distinct and, to some extent, original ways. In the first, following a more systemic criterion, the LMS is presented by considering an energy approach through the Lyapunov attractor. In the second, with a “classic” statistical approach, it is introduced as the stochastic approximation of the
gradient-descent optimization method. In the third, following a different approach, it is presented by considering simple axiomatic properties of minimal perturbation. Moreover, it should be noted that this philosophy is not only a pedagogical exercise: it is of fundamental importance in more advanced topics and theoretical demonstrations where, following one philosophy rather than another, one very often ends up treading winding roads and dead ends.
Organization and Structure of the Book

The sequence of arguments is presented in a classical way. The first part introduces the basic concepts of optimal linear filtering. Then first- and second-order, online and batch processing techniques are presented. A particular effort has been made to present the arguments with a common formalism, while trying to remain faithful to the original references considered. The entire notation is defined in discrete time, and the algorithms are presented in a way that helps the reader write computer programs for the practical realization of the applications described in the text. The book consists of nine chapters, each containing the references where the reader can independently deepen topics of particular interest, and three mathematical appendices.

Chapter 1 covers preliminary topics of discrete-time signals and circuits and some basic methods of digital signal processing.

Chapter 2 introduces the basic definitions of adaptive filtering theory and discusses the main filter topologies. In addition, the concept of the cost function to be minimized and the main philosophies concerning adaptation methods are introduced. Finally, the main application fields of adaptive signal processing techniques are presented and discussed.

In Chap. 3, the Wiener optimal filtering theory is presented. In particular, the problems of minimizing the mean square error and of determining its optimal value are addressed. The formulation of the normal equations and the optimal Wiener filter in discrete time is introduced. Moreover, the type 1, 2, and 3 multichannel notations and the multi-input-output optimal filter generalization are presented. Corollaries are also discussed, and some applications related to the prediction and estimation of random sequences are presented.

In Chap. 4, adaptation methods for the case in which the input signals are not statistically characterized are addressed. The principle of least squares (LS), which turns the estimation problem into an optimization problem, is introduced. The normal equations in the Yule–Walker formulation are introduced, and the similarities and differences with the Wiener optimal filtering theory are discussed. Moreover, the minimum variance optimal estimators, the normal equations weighting techniques, the regularized LS approach, and the linearly constrained and nonlinear LS techniques are introduced. The algebraic matrix decomposition methods for solving LS systems in the cases of
over- and under-determined equation systems are also introduced and discussed. The technique of singular value decomposition in the solution of LS systems is discussed. The method of the Lyapunov attractor for the iterative LS solution is presented, and the least mean squares and Kaczmarz algorithms, seen as iterative LS solutions, are introduced. Finally, the total least squares (TLS) methodology and the matching pursuit algorithms for underdetermined sparse LS systems are presented and discussed.

Chapter 5 introduces the first-order adaptation algorithms for online adaptive filtering. The methods are presented with a classical statistical approach, and the LMS algorithm with the stochastic gradient paradigm. In addition, methods for the performance evaluation of adaptation algorithms, with particular reference to convergence speed and tracking analysis, are presented and discussed. Some general axiomatic properties of adaptive filters are introduced. Moreover, the methodology of stochastic difference equations, as a general method for evaluating the performance of online adaptation algorithms, is introduced. Finally, variants of the LMS algorithm, some multichannel algorithm applications, and delayed learning algorithms, such as the filtered-x LMS class in its various forms, the filtered-error LMS method, and the adjoint network method, are presented and discussed.

In Chap. 6, the most important second-order algorithms for the solution of the LS equations with recursive methods are introduced. In the first part of the chapter, Newton’s method and its version with time-average correlation estimation, which defines the class of adaptive algorithms known as sequential regression, are briefly exposed. Subsequently, in the context of second-order algorithms, a variant of the NLMS algorithm, called the affine projection algorithm (APA), is presented. Thereafter, the family of algorithms called recursive least squares (RLS) is presented, and their convergence characteristics are studied. In the following, some RLS variants and generalizations, such as the Kalman filter, are presented. Moreover, some criteria to study the performance of adaptive algorithms operating in nonstationary environments are introduced. Finally, a more general adaptation law based on the natural gradient approach, considering sparsity constraints, is briefly introduced.

In Chap. 7, structures and algorithms for the implementation of adaptive filters in batch and online mode, operating in a transformed domain (typically the frequency domain), are introduced. In the first part of the chapter, the block LMS algorithm is introduced. Subsequently, two sections cover the frequency domain constrained algorithms known as frequency domain adaptive filters (FDAF), the unconstrained FDAF, and the partitioned FDAF. In the third section, the transformed domain adaptive algorithms, referred to as transform-domain adaptive filters (TDAF), are presented. The chapter also introduces the multirate methods and the subband adaptive filters (SAFs).

In Chap. 8, forward and backward linear prediction and the issue of order-recursive algorithms are considered. Both of these topics are related to implementation structures with particular robustness and efficiency properties. In connection with this last aspect, the subject of the filter circuit structure and the
adaptation algorithm is introduced, in relation to the problems of noise control, scaling and efficient computation, and the effects due to coefficient quantization.

Chapter 9 introduces the problem of space-time adaptive filtering, in which the signals are acquired by homogeneous sensor arrays arranged in different spatial positions. This issue, known in the literature as array processing (AP), is of fundamental interest in many application fields. In particular, the basic concepts of discrete space-time filtering are introduced. The first part of the chapter introduces the basics of the anechoic and echoic wave propagation models, the sensor directivity functions, the signal model, and the steering vectors of some typical array geometries. The characteristics of the noise field in various application contexts and the array quality indices are also discussed. In the second part of the chapter, methods for conventional beamforming are introduced, and the radiation characteristics and the main design criteria related to the optimization of the quality indices are discussed. Moreover, the broadband beamformer with spectral decomposition and the methods of direct synthesis of the spatial response are introduced and discussed. In the third part of the chapter, statistically optimal static beamforming is introduced. The LS methodology is extended in order to minimize the interference related to the noise field. In addition, the superdirective methods, the related regularized solution techniques, and the post-filtering method are discussed. The broadband minimum variance method (the Frost algorithm) is also presented. In the fourth part, the adaptive determination of the beamformer online, operating in nonstationary signal conditions, is presented. In the final part of the chapter, the issues of time-delay estimation (TDE) and direction of arrival (DOA) estimation, in the case of free-field narrowband signals and in the case of broadband signals in a reverberant environment, are presented.

In addition, in order to make the text as self-contained as possible, there are three appendices, written with a formalism common to all the arguments, that recall for the reader some basic prerequisites necessary for a proper understanding of the topics covered in this book. In Appendix A, some basic concepts and a quick reference of linear algebra are recalled. In Appendix B, the basic concepts of nonlinear programming are briefly introduced. In particular, some fundamental concepts of unconstrained and constrained optimization methods are presented. Finally, in Appendix C some basic concepts on random variables, stochastic processes, and estimation theory are recalled.

By editorial choice, further study and insights, exercises, project proposals, the study of real applications, and a library containing MATLAB (® registered trademark of The MathWorks, Inc.) codes for the main algorithms discussed in this text are collected in a second volume which is currently being written. Additional materials to the text can be found at: http://www.uncini.com/FASP

Rome, Italy
Aurelio Uncini
Acknowledgments
Many colleagues have contributed to the creation of this book by giving useful tips, reading the drafts, or enduring my musings on the subject. I wish to thank my collaborators, Raffaele Parisi and Michele Scarpiniti, of the Department of Information Engineering, Electronics and Telecommunications (DIET) of “Sapienza” University of Rome, and the colleagues from other universities: Stefano Squartini of the Polytechnic University of Marche, Italy; Alberto Carini of the University of Urbino, Italy; Francesco Palmieri of the Second University of Naples, Italy; and Gino Baldi of KPMG.

I would also like to thank all the students and thesis students attending the research laboratory Intelligent Signal Processing & Multimedia Lab (ISPAMM LAB) at the DIET, where many of the algorithms presented in the text have been implemented and compared. A special thanks goes to the PhD students and postdoctoral researchers Danilo Comminiello and Simone Scardapane, who carried out an effective proofreading. A special thanks also to all the authors in the bibliography of each chapter. This book is formed by a mosaic of arguments, where each tile is made up of one atom of knowledge. My original contribution, if my work is successful, lies only in the vision of the whole, i.e., in the picture that emerges from the mosaic of this knowledge.

Finally, a special thanks goes to my wife Silvia and my daughter Claudia, from whom I took away much of my time and who supported me during the writing of the work. The book is dedicated to them.
Abbreviations and Acronyms
∅  Empty set
ℤ  Integer number
ℝ  Real number
ℂ  Complex number
(ℝ,ℂ)  Real or complex number
acf  Autocorrelation function
AD-LMS  Adjoint LMS
AEC  Adaptive echo canceller
AF  Adaptive filter
AIC  Adaptive interference canceller
ALE  Adaptive line enhancement
AML  Approximate maximum likelihood
ANC  Active noise cancellation or control
ANN  Artificial neural network
AP  Array processing
APA  Affine projection algorithm
AR  Autoregressive
ARMA  Autoregressive moving average
ASO  Approximate stochastic optimization
ASR  Automatic speech recognition
AST  Affine scaling transformation
ATF  Acoustic transfer function
AWGN  Additive Gaussian white noise
BF  Beamforming
BFGS  Broyden–Fletcher–Goldfarb–Shanno
BI_ART  Block iterative algebraic reconstruction technique
BIBO  Bounded-input–bounded-output
BLMS  Block least mean squares
BLP  Backward linear prediction
BLUE  Best linear unbiased estimator
BSP  Blind signal processing
BSS  Blind signal separation
ccf  Crosscorrelation function
CC-FDAF  Circular convolution frequency domain adaptive filters
CF  Cost function
CFDAF  Constrained frequency domain adaptive filters
CGA  Conjugate gradient algorithms
CLS  Constrained least squares
CPSD  Cross power spectral density
CQF  Conjugate quadrature filters
CRB  Cramér–Rao bound
CRLS  Conventional RLS
CT  Continuous time
CTFS  Continuous time Fourier series
CTFT  Continuous time Fourier transform
DAM  Direct-averaging method
DCT  Discrete cosine transform
DFS  Discrete Fourier series
DFT  Discrete Fourier transform
DHT  Discrete Hartley transform
DI  Directivity index
DLMS  Delayed LMS
DLS  Data least squares
DMA  Differential microphones array
DOA  Direction of arrivals
DOI  Direction of interest
DSFB  Delay and sum beamforming
DSP  Digital signal processor/processing
DST  Discrete sine transform
DT  Discrete time
DTFT  Discrete time Fourier transform
DWSB  Delay and weighted sum beamforming
ECG  Electrocardiogram
EEG  Electroencephalogram
EGA  Exponentiated gradient algorithms
EMSE  Excess mean square error
ESPRIT  Estimation signal parameters rotational invariance technique
ESR  Error sequential regression
EWRLS  Exponentially weighted RLS
FAEST  Fast a posteriori error sequential technique
FB  Filter bank
FBLMS  Fast block least mean squares
FBLP  Forward–backward linear prediction
FDAF  Frequency domain adaptive filters
FDE  Finite difference equation
FFT  Fast Fourier transform
FIR  Finite impulse response
FKA  Fast Kalman algorithm
FLMS  Fast LMS
FLP  Forward linear prediction
FOCUSS  FOCal Underdetermined System Solver
FOV  Field of view
FRLS  Fast RLS
FSBF  Filter and sum beamforming
FTF  Fast transversal (RLS) filter
FX-LMS  Filtered-x LMS
GCC  Generalized cross-correlation
GP-LCLMS  Gradient projection LCLMS
GSC  Generalized sidelobe canceller
GTLS  Generalized total least squares
ICA  Independent component analysis
IC  Initial conditions
iid  Independent and identically distributed
IIR  Infinite impulse response
IPNLMS  Improved PNLMS
ISI  Inter-symbol interference
KF  Kalman filter
KLD  Kullback–Leibler divergence
KLT  Karhunen–Loève transform
LCLMS  Linearly constrained least mean squares
LCMV  Linearly constrained minimum variance
LD  Look directions
LDA  Levinson–Durbin algorithm
LHA  Linear harmonic array
LMF  Least mean fourth
LMS  Least mean squares
LORETA  LOw-Resolution Electromagnetic Tomography Algorithm
LPC  Linear prediction coding
LS  Least squares
LSE  Least squares error
LSUE  Least squares unbiased estimator
MA  Moving average
MAC  Multiply and accumulate
MAF  Multi-delay adaptive filter
MCA  Minor component analysis
MEFEX  Multiple error filtered-x
MFB  Matched filter beamformer
MIL  Matrix inversion lemma
MIMO  Multiple-input multiple-output
MISO  Multiple-input single-output
ML  Maximum likelihood
MLDE  Maximum-likelihood distortionless estimator
MLP  Multilayer perceptron
MMSE  Minimum mean square error
MNS  Minimum norm solution
MPA  Matching pursuit algorithms
MRA  Main response axis
MSC  Magnitude square coherence
MSC  Multiple sidelobe canceller
MSD  Mean square deviation
MSE  Mean squares error
MUSIC  Multiple signal classification
MVDR  Minimum variance distortionless response
MVU  Minimum variance unbiased
NAPA  Natural APA
NGA  Natural gradient algorithm
NLMS  Normalized least mean squares
NLR  Nonlinear regression
OA-FDAF  Overlap-add frequency domain adaptive filters
ODE  Ordinary difference equation
OS-FDAF  Overlap-save frequency domain adaptive filters
PAPA  Proportional APA
PARCOR  Partial correlation
PBFDAF  Partitioned block frequency domain adaptive filters
PCA  Principal component analysis
PFDABF  Partitioned frequency domain adaptive beamformer
PFDAF  Partitioned frequency domain adaptive filters
PHAT  Phase transform
PNLMS  Proportionate NLMS
PRC  Perfect reconstruction conditions
PSD  Power spectral density
PSK  Phase shift keying
Q.E.D.  Quod erat demonstrandum (this completes the proof)
QAM  Quadrature amplitude modulation
QMF  Quadrature mirror filters
RLS  Recursive least squares
RNN  Recurrent neural network
ROF  Recursive order filter
RTF  Room transfer functions
RV  Random variable
SAF  Subband adaptive filters
SBC  Subband coding
SBD  Subband decomposition
SCOT  Smoothed coherence transform
SDA  Steepest-descent algorithms
SDB  Superdirective beamforming
SDE  Stochastic difference equation
SDS  Spatial directivity spectrum
SE-LMS  Signed-error LMS
SGA  Stochastic-gradient algorithms
SIMO  Single-input multiple-output
SISO  Single-input single-output
SNR  Signal-to-noise ratio
SOI  Source of interest
SP  Stochastic processes
SR-LMS  Signed-regressor LMS
SRP  Steered response power
SSE  Sum of squares errors
SS-LMS  Sign–sign LMS
STT  Short-time transformation
SVD  Singular value decomposition
TBP  Time-bandwidth product
TDAF  Transform-domain adaptive filters
TDE  Time delay estimation
TF  Transfer function
TFR  Transfer function ratio
TLS  Total least squares
UCA  Uniform circular array
UFDAF  Unconstrained frequency domain adaptive filters
ULA  Uniform linear array
VLA  Very large array
VLSI  Very large-scale integration
WEV  Weights error vector
WGN  White Gaussian noise
WTLS  Weighted total least squares
WMNS  Weighted minimum norm solution
WPO  Weighted projection operators
WSB  Weighted sum beamforming
In all real physical situations, in communication processes, and in the widest sense of the term, it is usual to think of signals as variable physical quantities or symbols with which certain information is associated. A signal that carries information is variable and, in general, we are interested in its variation in the time (or other) domain: signal ⟺ function of time or, more generally, signal ⟺ function of several variables. Examples of signals are continuous bounded functions of time, such as the human voice, a sound wave produced by a musical instrument, a signal from a transducer, an image, a video, etc. In these cases we speak of signals defined in the time domain, or of analog or continuous-time (CT) signals. An image is a continuous function of two spatial variables, while a video consists of a continuous bounded time-space function. Examples of one- and two-dimensional real signals are shown in Fig. 1.1.

In the case of one-dimensional signals, from the mathematical point of view, it is convenient to represent this variability with a continuous function of time, denoted xa(t), where the subscript a stands for analog. A signal is defined as analog when it is in close analogy with a real-world physical quantity such as, for example, the voltage or current of an electrical circuit. Analog signals are then, by their nature, usually represented by real, everywhere-continuous functions. Sometimes, as in the modulation processes of telecommunications or in particular real physical situations, signals can be defined in the complex domain. Figure 1.2 shows an example of a complex domain signal, written as

x(t) = xR(t) + j xI(t) = e^{−αt} e^{jωt},

where xR(t) and xI(t) are, respectively, the real and imaginary parts of the signal, ω is defined as the angular frequency (or radian frequency, pulsatance, etc.), and α is defined as the damping coefficient. At other times, as in the case of pulse signals, the boundedness constraint can be removed.
Fig. 1.1 Examples of real analog or continuous-time signals: (a) human voice tract; (b) image of Lena

Fig. 1.2 Example of a signal defined in the complex domain: representation of the damped complex sinusoid x(t) = e^{−αt} e^{jωt}, with real part xR(t) = e^{−αt} cos(ωt) and imaginary part xI(t) = e^{−αt} sin(ωt)
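As a minimal sketch of the signal in Fig. 1.2, the following MATLAB fragment evaluates a damped complex sinusoid on a dense time grid and plots its real and imaginary parts; the numeric values of α and ω are illustrative assumptions, not taken from the text.

    % Damped complex sinusoid x(t) = e^(-alpha*t) * e^(j*omega*t)
    alpha = 0.5;               % damping coefficient (assumed value)
    omega = 2*pi*5;            % angular frequency in rad/s (assumed value)
    t = 0:1e-3:2;              % dense time grid approximating continuous time
    x = exp(-alpha*t) .* exp(1j*omega*t);
    % Real and imaginary parts: damped cosine and damped sine
    plot(t, real(x), t, imag(x));
    xlabel('t'); legend('x_R(t)', 'x_I(t)');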
1.1.1 Discrete-Time Signals
In certain situations it is possible to define a signal, carrying certain information, by a real or complex sequence of numbers. In this case, the signal is limited to a discrete set of values defined at precise time instants. Such a signal is therefore called a discrete-time signal, or sequence, or time series. For the description of discrete-time (DT) signals it is usual to use the form x[n], where the index n ∈ ℤ can represent any physical variable (such as time, distance, etc.) but most frequently is a time index. The square brackets are used precisely to emphasize the discrete nature of the signal that represents the process. DT signals are thus defined by a sequence that can be generated by an algorithm or, as often happens, by a sampling process that transforms, under appropriate assumptions, an analog signal into a sequence. Examples of such signals are the audio wave files (with the extension .wav) commonly found on PCs. In fact, these files are DT signals stored on the hard drive (or memory) with a specific format. Previously acquired through a sound card or generated with appropriate algorithms, these signals can be listened to, viewed, edited, processed, etc. An example of a graphical representation of a sequence is shown in Fig. 1.3.
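As a hedged illustration of the sampling process just mentioned (the signal and the sampling rate are arbitrary assumptions, not an example from the text), the following sketch builds a sequence x[n] = xa(nTs) by sampling an analog-style signal with sampling period Ts:

    % Sampling sketch: x[n] = xa(n*Ts) for an assumed analog signal xa(t)
    fs = 8000;                                  % sampling frequency in Hz (assumed)
    Ts = 1/fs;                                  % sampling period
    n  = 0:99;                                  % discrete-time index
    xa = @(t) exp(-200*t) .* cos(2*pi*440*t);   % assumed CT signal
    x  = xa(n*Ts);                              % the resulting sequence x[n]
    stem(n, x); xlabel('n'); ylabel('x[n]');    % plot as a DT sequence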
Fig. 1.3 Example of a discrete-time signal or sequence, with sample values x[−2] = 1, x[−1] = −1, x[0] = 1.9, x[1] = −1.7, x[2] = 1.8, x[3] = 0.7

1.1.2 Deterministic and Random Sequences
A sequence is said to be deterministic if it is fully predictable, that is, if it is generated by an algorithm which exactly determines its value for each n. In this case the information content carried by the signal is null, because the signal is entirely predictable. A sequence is said to be random (or aleatory or stochastic) if it evolves over time (or in other domains) in ways that are unpredictable or not entirely predictable. The characterization of a random sequence can be carried out by statistical quantities related to the signal, which may present some regularity. Even if not exactly predictable sample by sample, a random signal can be predicted in its average behavior. In other words, the sequence can be described, characterized, and processed by taking into consideration its statistical parameters rather than an explicit equation (Fig. 1.4). For more details on random signal characterization, see Appendix C on stochastic processes.
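A minimal sketch in the spirit of Fig. 1.4 (the specific signals are assumptions, not the ones plotted in the book): a deterministic sinusoidal sequence generated by an explicit rule, next to a random sequence drawn from a white Gaussian generator and characterized only by its statistics.

    % Deterministic sequence: exactly determined by a rule for each n
    n = 0:999;
    x_det = sin(2*pi*0.01*n);       % assumed sinusoidal rule
    % Random sequence: unpredictable sample by sample, but with
    % known statistics (here zero mean and unit variance)
    x_rnd = randn(1, 1000);
    subplot(1,2,1); plot(n, x_det); title('Deterministic signal');
    subplot(1,2,2); plot(n, x_rnd); title('Random signal');
    xlabel('Time Index [n]');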
1.2 Basic Deterministic Sequences
In the study and application of DT signals, it is usual to encounter deterministic signals that are easily generated with simple algorithms. As we shall see in the following chapters, these sequences are useful for the characterization of DT systems [1, 2].
1.2.1 Unitary Impulse
The unitary impulse, also called the DT delta function, is the sequence, shown in Fig. 1.5a, defined as
δ[n] = { 1  for n = 0,
       { 0  for n ≠ 0.        (1.1)

Fig. 1.4 An example of random and deterministic sequences (signal amplitude versus time index n)
Property. An arbitrary sequence x[n] can be represented as a sum of delayed and weighted impulses (sampling property). Therefore we can write

x[n] = Σ_{k=−∞}^{+∞} x[k] δ[n−k].
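The sampling property can be checked numerically; in this small sketch a finite-length sequence stands in for the infinite sum, each sample x[k] weights a delta shifted to position k, and the superposition reproduces x[n] (the sample values are those of Fig. 1.3):

    % Numerical check of x[n] = sum_k x[k]*delta[n-k] on a finite support
    x = [1 -1 1.9 -1.7 1.8 0.7];        % sample values as in Fig. 1.3
    N = numel(x);
    delta = @(n) double(n == 0);        % DT delta function
    y = zeros(1, N);
    for n = 1:N
        for k = 1:N
            y(n) = y(n) + x(k) * delta(n - k);   % delayed, weighted impulses
        end
    end
    disp(isequal(x, y));                % prints 1: superposition recovers x[n]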
1.2.2 Unit Step
The unit step sequence is the sequence (see Fig. 1.5b) defined as

u[n] = { 1  for n ≥ 0,
       { 0  for n < 0.        (1.2)
In addition, it is easy to show that the unit step sequence satisfies the properties

u[n] = Σ_{k=0}^{∞} δ[n−k],        δ[n] = u[n] − u[n−1].
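As a quick numerical sketch of these two relations (with a finite index range standing in for the infinite sum, and u[n] assumed zero before the range):

    % Verify delta[n] = u[n] - u[n-1] and the step as a running sum of deltas
    n = -5:5;                          % index range around the origin
    u = double(n >= 0);                % unit step sequence
    d = double(n == 0);                % unitary impulse
    disp(isequal(d, [u(1), diff(u)])); % prints 1: first difference of u gives delta
    disp(isequal(u, cumsum(d)));       % prints 1: running sum of delta gives u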
1.2.3 Real and Complex Exponential Sequences
The real and complex exponential sequence is defined as

x[n] = A α^n,    A, α ∈ (ℝ, ℂ).        (1.3)

The exponential sequence can take various shapes depending on the actual values of the coefficients α and A. Figure 1.6 shows the trends of real sequences for some values of α and A. In the complex case we have A = |A| e^{jϕ} and α = |α| e^{jω}. Moreover, note that using Euler’s formula the sequence can be rewritten as

x[n] = |A| |α|^n e^{j(ωn+ϕ)} = |A| |α|^n [cos(ωn + ϕ) + j sin(ωn + ϕ)],        (1.4)

where the parameters A, α, ω, and ϕ are defined, respectively, as the amplitude, the damping coefficient, the angular frequency (or radial frequency, pulsatance, ...), and the phase (Fig. 1.7). From the above expression it can be seen that the sequence x[n] has an envelope that is a function of the parameter α, and its shape is

|α| < 1    decreasing with n,
|α| = 1    constant,
|α| > 1    increasing with n.
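The three envelope regimes can be visualized with a short sketch (the numeric values of A and α are arbitrary assumptions):

    % Real exponential sequences x[n] = A*alpha^n for the three regimes
    n = 0:30;
    A = 1;                              % amplitude (assumed)
    for alpha = [0.9 1.0 1.1]           % |alpha| < 1, = 1, > 1
        stem(n, A*alpha.^n); hold on;   % decreasing, constant, increasing
    end
    hold off; xlabel('n'); legend('|\alpha|<1', '|\alpha|=1', '|\alpha|>1');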
Special cases of the expression (1.4), for α = 1, are shown below:

|A| e^{j(ωn+ϕ)}                                          complex sinusoid,
cos(ωn + ϕ) = ( e^{j(ωn+ϕ)} + e^{−j(ωn+ϕ)} ) / 2         real cosine,
sin(ωn + ϕ) = ( e^{j(ωn+ϕ)} − e^{−j(ωn+ϕ)} ) / (2j)      real sinusoid.

1.3 Discrete-Time Signal Representation with Unitary Transformations
Let us consider real or complex domain finite-duration sequences, indicated as

x ∈ (ℝ, ℂ)^{N×1} ≜ [ x[0]  x[1]  ⋯  x[N−1] ]^T.        (1.5)
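In line with the section title, a brief sketch of what this vector representation enables: applying a unitary transformation to the sequence. The use of the orthonormal DFT matrix here is an assumption for illustration, as one classical example of a unitary transform.

    % Represent a finite-duration sequence as a vector and apply a
    % unitary transformation (orthonormal DFT matrix F, with F*F' = I)
    N = 8;
    x = randn(N, 1);                     % x = [x[0] x[1] ... x[N-1]]^T
    k = (0:N-1)';
    F = exp(-2j*pi*(k*k')/N) / sqrt(N);  % orthonormal DFT matrix
    X = F * x;                           % transformed-domain representation
    disp(norm(x)^2 - norm(X)^2);         % ~0: unitary transforms preserve energy
    disp(norm(x - F' * X));              % ~0: perfect reconstruction x = F^H * X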