REVIEW - Adaptive Filtering and Change Detection


Title:

Adaptive Filtering and Change Detection

Author:

Fredrik Gustafsson

ISBN:

Publisher:

Wiley (2000)

Pages:

510pp

Reviewer:

Lawrence Dack

Reviewed:

April 2001

Rating:

★★★☆☆



I imagine this book sits outside the normal bounds of the ACCU book review list, since it is a deeply mathematical treatment of a specialised topic. If your interests lie solely in software development, you can stop reading here; however, if like me you are involved in developing methods to analyse and predict physical or financial systems, keep going. There is a health warning attached though. To get beyond the first few pages you will need at least a graduate-level background in statistical theory, calculus, matrix algebra, signal modelling and classical filter theory (poles and zeroes, etc).

The basic problem tackled in adaptive filtering is that of estimating the current value of a signal when all that is available are measurements of that signal corrupted by noise, and the characteristics of the noise or signal may change with time. Fundamentally the signal is extracted by performing a weighted average of a number of past measurements in order to smooth out the noise. The hard bit is working out the weights to apply, and a large body of literature exists suggesting different methods by which this can be done. Generally the weights start at some initial guess and are adjusted according to some criterion until (for constant signal and noise characteristics) they reach some steady state, at which point the filter should be tracking the signal through the noise. If the signal or noise characteristics change gradually, then the weights will adjust smoothly to cope with the new situation and the signal will continue to be well estimated. However, if the characteristics change suddenly, then the filter will take some time to adapt to the new conditions and in the meantime the quality of the signal estimate will be poor. This problem arises because of a trade-off implied by the averaging process embedded in the filter: the more measurements used in the average, the better the quality of the steady-state estimate, but the slower the response to changes.
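
As a minimal illustration of that trade-off (my own sketch, not code from the book), the single-pole smoother below averages past measurements with a forgetting factor: a value near one gives a long effective memory and a smooth estimate, while a smaller value reacts faster to a sudden jump but lets more noise through. All names and parameter values are illustrative.

    import numpy as np

    def smooth(y, lam):
        """Exponentially weighted average: x_k = lam * x_{k-1} + (1 - lam) * y_k."""
        x = np.empty_like(y, dtype=float)
        x[0] = y[0]
        for k in range(1, len(y)):
            x[k] = lam * x[k - 1] + (1.0 - lam) * y[k]
        return x

    rng = np.random.default_rng(0)
    true_signal = np.concatenate([np.zeros(200), np.ones(200)])  # sudden jump at k = 200
    y = true_signal + 0.3 * rng.standard_normal(400)             # noisy measurements

    slow = smooth(y, lam=0.98)  # long memory: smooth steady-state estimate, but lags after the jump
    fast = smooth(y, lam=0.80)  # short memory: recovers from the jump quickly, but noisier

Plotting slow and fast against true_signal makes the trade-off visible: the long-memory filter hugs the constant sections more tightly but takes far longer to settle after the jump at k = 200.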

Many variations of the basic problem exist. The problem considered above is classified as 'signal estimation'. Sometimes the noisy signal is expected to be a function with unknown parameters that are to be estimated ('parameter estimation'). Again, if the signal is generated by a physical system for which a mathematical model exists, then one can attempt to estimate the internal (and not directly measurable) state of the system ('state estimation'). Each of these cases can be considered a generalisation of the previous one. As a simple illustration of the differences, imagine an analysis of oscillatory vibration measured inside a moving car: signal estimation would attempt to track the shape of the vibration as it varied with time, parameter estimation would attempt to estimate the frequency of the vibration, while state estimation might attempt to deduce which gear the car was in at the time.
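
To make the distinction concrete, here is a rough sketch (mine, not the author's) on a simulated vibration signal: the same noisy oscillation is either smoothed sample by sample (signal estimation) or summarised by a single unknown frequency (parameter estimation); state estimation would additionally require a model of the car itself, which is not attempted here.

    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, 500)
    y = np.sin(2 * np.pi * 40 * t) + 0.5 * rng.standard_normal(t.size)  # noisy 40 Hz vibration

    # Signal estimation: track the waveform itself (here a crude moving average).
    signal_estimate = np.convolve(y, np.ones(5) / 5, mode="same")

    # Parameter estimation: summarise the signal by its one unknown frequency (FFT peak).
    spectrum = np.abs(np.fft.rfft(y))
    freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
    frequency_estimate = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin
    print("estimated vibration frequency (Hz):", frequency_estimate)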

'In the lab' the performance of an adaptive filter is normally measured as a 'cost', which is some function of the difference between the true (and normally unknowable) signal value and the signal estimate the filter provides, where the best ('optimal') filters are those which minimise that cost function on average. There are several plausible cost functions in common use, and so it is sometimes possible to find several different filters all claiming to be optimal solutions to the same class of problem - because the researchers concerned have chosen different cost functions.
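
The following toy comparison (my own, purely illustrative) shows how the choice of cost function can change which estimate looks 'optimal': a squared-error cost penalises one large transient error far more heavily than a small persistent bias, while an absolute-error cost ranks the two the other way round.

    import numpy as np

    def mse(truth, estimate):
        return np.mean((truth - estimate) ** 2)    # quadratic (squared-error) cost

    def mae(truth, estimate):
        return np.mean(np.abs(truth - estimate))   # absolute-error cost

    truth = np.zeros(100)
    est_a = truth + 0.1          # small bias on every sample
    est_b = truth.copy()
    est_b[0] = 3.0               # one large transient error, otherwise perfect

    print("A:", mse(truth, est_a), mae(truth, est_a))  # MSE 0.01, MAE 0.10
    print("B:", mse(truth, est_b), mae(truth, est_b))  # MSE 0.09, MAE 0.03

Under the squared-error cost the persistently biased estimate (A) scores better; under the absolute-error cost the estimate with the single outlier (B) does - so two researchers could legitimately declare different filters 'optimal' for the same problem.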

Many real-world systems exhibit behaviour that can normally be analysed by an adaptive filter, but is subject to intermittent, sudden change - for example, the characteristics of noise or vibration in a car could change suddenly every time the driver changed gear. As stated earlier, sudden change is not a situation in which a 'standard' adaptive filter excels. This book is concerned with the analysis of such systems. It describes methods by which sudden change can be detected - useful in itself for applications involving fault detection - and then extensions to adaptive filter techniques allowing them to cope better with sudden change.

The structure of this book is a little complex. It considers a set of techniques for change detection, including:

  • Stopping rules - pragmatic measures which, without estimating the signal, can indicate when a sudden change has occurred (a short sketch of one such rule follows this list).
  • Likelihood ratios - methods which estimate the relative likelihood of 'change' to 'no change' after each measurement and flag situations where a change is highly likely.
  • Two-filter methods - one filter averaging over a long time history to estimate the signal, the other averaging over a short time history to detect change.
  • Multiple hypothesis methods. These are used when not one, but several changes are expected. Conceptually these evaluate all possible combinations of 'change' and 'no change' against the measurements and report the most likely. To avoid combinatorial explosion, the list of possible combinations is pruned regularly, leaving a few likely hypotheses.
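
As a flavour of the stopping-rule idea, here is a minimal sketch of a one-sided CUSUM test (my own illustration, not code from the book): residuals are accumulated after subtracting a small drift term, the sum is clamped at zero, and an alarm is raised when it crosses a threshold. The drift and threshold values below are arbitrary; tuning them is exactly the kind of practical question the book addresses.

    import numpy as np

    def cusum_alarm(residuals, drift=0.1, threshold=2.0):
        """Return the index of the first CUSUM alarm, or None if no change is flagged."""
        g = 0.0
        for k, r in enumerate(residuals):
            g = max(0.0, g + r - drift)   # accumulate evidence for an upward change
            if g > threshold:
                return k                  # change detected
        return None

    rng = np.random.default_rng(1)
    residuals = 0.1 * rng.standard_normal(300)
    residuals[150:] += 0.5                # the residual mean jumps at sample 150

    print("alarm raised at sample:", cusum_alarm(residuals))

With these arbitrary settings the alarm fires a handful of samples after the jump at 150; lowering the threshold shortens that delay at the cost of more false alarms.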

These techniques are combined with a number of adaptive filter types and applied to each of the three types of estimation (signal, parameter and state) described above. It's not quite a matrix structure - the book does not consider all combinations - but most of the basic ideas get reused a few times in different contexts. I can see the reasons for this approach, but it does give the book a slightly muddled feel.

The theory is illustrated with real-world examples drawn from the author's considerable research experience. These, together with sections providing advice on the practical implementation and tuning of the techniques discussed, set this book apart from many other textbooks covering similar material; the intent is obviously to help the reader solve real problems rather than pass academic exams. That said, the book is still hard going. Much of this is due to the mathematical content, but typographical and grammatical errors are common and these distract from the meaning at times.

This book has acquired a permanent place on my bookshelf because it brings together topics I have previously only seen in research papers and provides hard-won practical advice on how those topics may be tuned to solve real problems.

