Review by
Dr Steve Brooks,
University of Bristol.
The full review can be found in The Statistician, 1998, Vol. 47, p. 541.
This (softback) text is the latest in the Cambridge series in Statistical
and Probabilistic Mathematics. Based upon the authors' experiences in
teaching resampling methods both in undergraduate courses and in
short courses presented around the world, this book provides a very
practical introduction to the theory and application of Bootstrap
methodology.
The authors do not say who their target audience is, but I would say that it is probably aimed at the statistically literate practitioner with little or no previous knowledge of resampling methods. In this regard, the book is equally useful as an accompaniment to an advanced undergraduate or postgraduate course, or as a reference for statistical practitioners who would like to learn about such methods. The book contains an impressive range of examples, each of which is accompanied by simple functions written for S-Plus, so that the examples are easy to recreate. A disk containing all the example code and data sets is provided with the book.
The book starts with a gentle introduction to the basic bootstrap methods. The parametric and nonparametric bootstrap are discussed, together with the jackknife and other subsampling methods. Also, simple bootstrap confidence intervals are introduced, together with error reduction techniques. Chapter 3 extends these basic ideas, talking about semiparametric models, censoring, missing data and the iterative bootstrap.
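The core idea behind the nonparametric bootstrap described here can be sketched in a few lines: resample the observed data with replacement and recompute the statistic on each resample to estimate its bias and standard error. This is a minimal illustration in Python rather than the book's S-Plus; the data, the number of replicates B, and the seed are illustrative choices, not taken from the book.

```python
import random
import statistics

def bootstrap(data, stat, B=2000, seed=1):
    """Nonparametric bootstrap estimates of bias and standard error.

    Draws B resamples of the data (with replacement), recomputes the
    statistic on each, and summarises the resulting replicates.
    """
    random.seed(seed)
    n = len(data)
    reps = [stat([random.choice(data) for _ in range(n)]) for _ in range(B)]
    est = stat(data)                       # statistic on the original sample
    bias = statistics.mean(reps) - est     # bootstrap estimate of bias
    se = statistics.stdev(reps)            # bootstrap estimate of standard error
    return est, bias, se

# Illustrative data only
data = [3.1, 4.7, 2.9, 5.6, 4.4, 3.8, 6.0, 2.5, 4.9, 3.3]
est, bias, se = bootstrap(data, statistics.mean)
print(est, bias, se)
```

The same skeleton works for any statistic (median, correlation, regression coefficient) by passing a different `stat` function, which is one reason the method is so widely applicable.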
Chapter 4 introduces the idea of the Bootstrap test, both parametric and nonparametric, whilst Chapter 5 deals with Bootstrap confidence intervals in more detail than the simple introductory exposition given in Chapter 2. Having described most of the basic methodology, the authors then move on to describe the application of Bootstrap methodology to linear regression problems. This discussion is extended in Chapter 7 to include generalised linear models, survival data and other non-linear models, as well as nonparametric regression problems.
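Of the confidence interval methods surveyed in Chapter 5, the simplest is the percentile interval, which takes the empirical quantiles of the bootstrap replicates as interval endpoints. The sketch below, in Python rather than the book's S-Plus, shows the idea under illustrative assumptions (the data, B, and the 95% level are my own choices, and the book discusses more refined intervals than this one).

```python
import random
import statistics

def percentile_ci(data, stat, B=5000, alpha=0.05, seed=2):
    """Basic percentile bootstrap confidence interval.

    Sorts B bootstrap replicates of the statistic and returns the
    alpha/2 and 1 - alpha/2 empirical quantiles as the interval.
    """
    random.seed(seed)
    n = len(data)
    reps = sorted(stat([random.choice(data) for _ in range(n)])
                  for _ in range(B))
    lo = reps[int((alpha / 2) * B)]        # lower percentile endpoint
    hi = reps[int((1 - alpha / 2) * B) - 1]  # upper percentile endpoint
    return lo, hi

# Illustrative data only
data = [3.1, 4.7, 2.9, 5.6, 4.4, 3.8, 6.0, 2.5, 4.9, 3.3]
lo, hi = percentile_ci(data, statistics.mean)
print(lo, hi)
```

The percentile interval is easy to compute but can undercover in small samples, which is why the book's comparison of the competing methods (basic, studentized, BCa) is valuable.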
Problems of complex dependence, such as those that occur in Time Series, are addressed in Chapter 8, before methods for improving Bootstrap-based calculation in terms of variance reduction are discussed in Chapter 9. Finally, the ideas of the Bayesian Bootstrap and the Bootstrap likelihood are discussed in Chapter 10, under the general heading of semiparametric likelihood inference.
Each chapter ends with a section containing bibliographic notes. These sections are invaluable since, as the authors say, it is not possible to cover everything even in a book as large as this one. Following on from the bibliographic notes comes a section of problems, to test the reader's understanding of the previous chapter, together with a collection of "practicals", based upon the examples and accompanying S-Plus functions. These are a nice addition to the text and add to its usefulness as a course companion. However, no solutions or suggestions for either the problems or practicals are provided.
In all, this is an extremely readable book. The language is sometimes a bit technical,
which may mean that the reader has to work quite hard to follow the text.
However, it is at a much lower level than many other recent books on the subject and I would
have no hesitation in recommending it as the most useful reference available
for people wishing to learn or teach this subject. Certainly, this book is an essential
addition to any library which aims to cover the area of computational statistics,
and would be a very useful addition to any personal library for those who use, or wish
to use, Bootstrap methodology.
Some quotes from the review by
Professor S. T. Buckland,
School of Mathematical and Computational Sciences, University of St. Andrews.
The full review can be found in Biometrics, 1998, Vol. 54, p. 795.
Bootstrap and related computer-intensive methods have become indispensable
tools for most applied biometricians. Until now there has not been a book
that I have been able to recommend enthusiastically to such practitioners.
This book is well-structured, [and] covers the main methodologies without
getting bogged down in unnecessary detail. [...]
The clarity of the text will be appreciated, and many practising statisticians would benefit immensely by reading this book. [...] The authors have succeeded in writing an accessible and readable book that will more than meet the needs of most biometricians.
To summarise, an excellent book which explains difficult concepts clearly and
concisely. I strongly recommend it for users and teachers of bootstrap
methods.
Review by
Dr. James Carpenter,
London School of Hygiene and Tropical Medicine, to appear in
Epidemiology and Infection.
The advent of modern computers has enabled statisticians to make extensive
use of simulation both to check the
assumptions underlying conventional analyses and to
tackle previously intractable problems. Within a frequentist
framework, the last
twenty years have seen the intensive development of
resampling or bootstrap methods for evaluating the
properties of estimators,
hypothesis testing and model selection, for both independent
and dependent data.
This text seeks not only to provide an introduction to bootstrap methods, but also to illustrate their application to a wide range of practical problems. The emphasis on applications is reinforced by the accompanying S-Plus software, which permits readers to work through the examples and practical exercises in the text for themselves and provides an excellent tool-kit for tackling research problems.
Chapters 2 and 3 present the key bootstrap concepts for independent data, including survival, stratified, finite population, missing and hierarchical data. They consider estimation of bias and standard error, together with simple confidence interval techniques. In addition, the accessible discussion of influence values, which enable an increased understanding of and confidence in bootstrap calculations, is a welcome feature.
The remainder of the book considers detailed application of the bootstrap to a variety of problems. Chapter 4 deals with testing and Chapter 5 confidence intervals, including a comparison of the various methods. Chapter 6 extends these ideas in the context of linear and robust regression, and also considers bootstrap procedures for model selection. Appropriate approaches for non-linear regression techniques, including generalised linear models, are described in Chapter 7, while Chapter 8 deals with more complex data structures such as time series and point processes.
Chapter 9 reviews the multifarious techniques available for reducing the number of bootstrap simulations, and illustrates their use with a number of examples. Potentially one of the more difficult chapters, its contents could help make bootstrap analysis of complex problems computationally feasible. A curtailed account of semi-parametric likelihood inference is given in Chapter 10, and Chapter 11 describes the computer code and illustrates its use.
Ample references are provided by each chapter's bibliographic notes and the concluding bibliography; the subject and example indexes are comprehensive and helpful. Hints and references, but no solutions, are provided for the theoretical problems and computer practicals which conclude each chapter.
While simulation can help researchers avoid tedious mathematical calculations based on dubious assumptions, it does not obviate the need for critical evaluation of the statistical techniques available and their suitability for the problem in hand. The authors illustrate these points with real examples, and this, I believe, represents the particular strength of this book. It should help researchers exploit the power of bootstrap techniques while avoiding the pitfalls.
Overall, this is a timely, comprehensive and well presented text on the
bootstrap, which I recommend to statistical practitioners,
researchers and students alike.