Design of Experiments for Reliability Achievement
Part 1 of the Wiley Series in Probability and Statistics
ENABLES READERS TO UNDERSTAND THE METHODS OF EXPERIMENTAL DESIGN AND SUCCESSFULLY CONDUCT LIFE TESTING TO IMPROVE PRODUCT RELIABILITY
This book illustrates how experimental design and life testing can be used to understand product reliability in order to enable reliability improvements. The book is divided into four sections. The first section focuses on statistical distributions and methods for modeling reliability data. The second section provides an overview of design of experiments including response surface methodology and optimal designs. The third section describes regression models for reliability analysis focused on lifetime data. This section provides the methods for how data collected in a designed experiment can be properly analyzed. The final section of the book pulls together all of the prior sections with customized experiments that are uniquely suited for reliability testing. Throughout the text, there is a focus on reliability applications and methods. It addresses both optimal and robust design with censored data.
To aid in reader comprehension, examples and case studies are included throughout the text to illustrate the key factors in designing experiments and emphasize how experiments involving life testing are inherently different. The book provides numerous state-of-the-art exercises and solutions to help readers better understand the real-world applications of experimental design and reliability. The authors utilize R and JMP® software throughout as appropriate, and a supplemental website contains the related data sets.
Written by internationally known experts in the fields of experimental design methodology and reliability data analysis, sample topics covered in the book include:
• An introduction to reliability, lifetime distributions, censoring, and inference for parameters of lifetime distributions
• Design of experiments, optimal design, and robust design
• Lifetime regression, parametric regression models, and the Cox Proportional Hazard Model
• Design strategies for reliability achievement
• Accelerated testing, models for acceleration, and design of experiments for accelerated testing
The text features an accessible approach to reliability for readers with various levels of technical expertise. This book is a key reference for statistical researchers, reliability engineers, quality engineers, and professionals in applied statistics and engineering. It is a comprehensive textbook for upper-undergraduate and graduate-level courses in statistics and engineering.
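As a flavor of the life-testing inference sketched in the bullets above, here is a classic textbook result, not drawn from this book: for exponential lifetimes under right censoring, the maximum likelihood estimate of the failure rate reduces to the number of observed failures divided by the total time on test. The toy data are hypothetical.

```python
# MLE of an exponential failure rate with right-censored life-test data.
# Classic result: lambda_hat = (number of failures) / (total time on test).

def exponential_rate_mle(times, failed):
    """times: time on test per unit (failure or censoring time);
    failed: True if the unit failed, False if it was censored."""
    return sum(failed) / sum(times)

times = [120.0, 340.0, 500.0, 500.0, 75.0]   # hours on test
failed = [True, True, False, False, True]    # False = still running (censored)

rate = exponential_rate_mle(times, failed)
print(round(rate, 5))        # estimated failures per hour
print(round(1.0 / rate, 1))  # estimated mean lifetime in hours
```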
Statistics and Causality
Methods for Applied Empirical Research
Part 2 of the Wiley Series in Probability and Statistics
A one-of-a-kind guide to identifying and applying modern statistical developments in causality.
Written by a group of well-known experts, “Statistics and Causality: Methods for Applied Empirical Research” focuses on the most up-to-date developments in statistical methods relating to causality. Connecting the properties of statistical methods to theories of causality, the book features a summary of the latest developments in methods for the statistical analysis of causality hypotheses.
The book is divided into five accessible and independent parts. The first part introduces the foundations of causal structures and discusses issues associated with standard mechanistic and difference-making theories of causality. The second part features novel generalizations of methods designed to make statements concerning the direction of effects. The third part illustrates advances in Granger-causality testing and related issues. The fourth part focuses on counterfactual approaches and propensity score analysis. Finally, the fifth part presents designs for causal inference with an overview of the research designs commonly used in epidemiology. “Statistics and Causality: Methods for Applied Empirical Research” also includes:
• New statistical methodologies and approaches to causal analysis in the context of the continuing development of philosophical theories
• End-of-chapter bibliographies that provide references for further discussions and additional research topics
• Discussions on the use and applicability of software when appropriate
“Statistics and Causality: Methods for Applied Empirical Research” is an ideal reference for practicing statisticians, applied mathematicians, psychologists, sociologists, logicians, medical professionals, epidemiologists, and educators who want to learn more about new methodologies in causal analysis. The book is also an excellent textbook for graduate-level courses in causality and qualitative logic.
Robust Correlation
Theory and Applications
Part 3 of the Wiley Series in Probability and Statistics
This book presents material on both the analysis of the classical concepts of correlation and the development of their robust versions, and it discusses the related concepts of correlation matrices, partial correlation, canonical correlation, and rank correlations, together with the corresponding robust and non-robust estimation procedures. Every chapter contains a set of examples with simulated and real-life data.
Key features:
• Makes modern and robust correlation methods readily available and understandable to practitioners, specialists, and consultants working in various fields.
• Focuses on implementation of methodology and application of robust correlation with R.
• Introduces the main approaches in robust statistics, such as Huber's minimax approach and Hampel's approach based on influence functions.
• Explores various robust estimates of the correlation coefficient, including the minimax variance and bias estimates as well as the most B- and V-robust estimates.
• Contains applications of robust correlation methods to exploratory data analysis, multivariate statistics, statistics of time series, and to real-life data.
• Includes an accompanying website featuring computer code and datasets
• Features exercises and examples throughout the text using both small and large data sets.
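The motivation behind the robust methods listed above can be seen in a minimal sketch, not taken from the book: a single gross outlier can flip the classical Pearson correlation, while a rank-based (Spearman) correlation resists it. Pure-Python implementations on made-up data.

```python
# Classical vs. rank-based correlation on data with one gross outlier.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def ranks(v):
    # simple ranks, no tie handling (the toy data below have no ties)
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0.0] * len(v)
    for rank, i in enumerate(order):
        r[i] = rank + 1.0
    return r

def spearman(x, y):
    return pearson(ranks(x), ranks(y))

x = list(range(1, 11))            # 1..10
y = list(range(1, 10)) + [-50]    # monotone except one gross outlier

print(round(pearson(x, y), 3))    # negative: the single outlier dominates
print(round(spearman(x, y), 3))   # still positive: ranks bound the outlier's pull
```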
Theoretical and applied statisticians, specialists in multivariate statistics, robust statistics, robust time series analysis, data analysis and signal processing will benefit from this book. Practitioners who use correlation-based methods in their work as well as postgraduate students in statistics will also find this book useful.
Time Series Analysis
Nonstationary and Noninvertible Distribution Theory
Part 4 of the Wiley Series in Probability and Statistics
Reflects the developments and new directions in the field since the publication of the successful first edition and contains a complete set of problems and solutions.
This revised and expanded edition reflects the developments and new directions in the field since the publication of the first edition. In particular, sections on nonstationary panel data analysis and a discussion on the distinction between deterministic and stochastic trends have been added. Three new chapters on long-memory discrete-time and continuous-time processes have also been created, while some chapters have been merged and some sections deleted. The first eleven chapters of the first edition have been compressed into ten chapters, with a chapter on nonstationary panel data added, under Part I: Analysis of Non-fractional Time Series. Chapters 12 to 14 have been newly written under Part II: Analysis of Fractional Time Series. Chapter 12 discusses the basic theory of long-memory processes by introducing ARFIMA models and the fractional Brownian motion (fBm). Chapter 13 is concerned with the computation of distributions of quadratic functionals of the fBm and its ratio. Next, Chapter 14 introduces the fractional Ornstein-Uhlenbeck process and discusses statistical inference for it. Finally, Chapter 15 gives a complete set of solutions to the problems posed at the end of most sections. This new edition features:
• Sections to discuss nonstationary panel data analysis, the problem of differentiating between deterministic and stochastic trends, and nonstationary processes of local deviations from a unit root
• Consideration of the maximum likelihood estimator of the drift parameter, as well as asymptotics as the sampling span increases
• Discussions on not only nonstationary but also noninvertible time series from a theoretical viewpoint
• New topics such as the computation of limiting local powers of panel unit root tests, the derivation of the fractional unit root distribution, and unit root tests under the fBm error
“Time Series Analysis: Nonstationary and Noninvertible Distribution Theory”, Second Edition, is a reference for graduate students in econometrics or time series analysis.
Probability and Conditional Expectation
Fundamentals for the Empirical Sciences
Part 5 of the Wiley Series in Probability and Statistics
“Probability and Conditional Expectation” bridges the gap between books on probability theory and statistics by presenting the probabilistic concepts that are estimated and tested in analysis of variance, regression analysis, factor analysis, structural equation modeling, hierarchical linear models, and the analysis of qualitative data. The authors emphasize the theory of conditional expectation, which is also fundamental to conditional independence and conditional distributions.
“Probability and Conditional Expectation”
• Presents a rigorous and detailed mathematical treatment of probability theory focusing on concepts that are fundamental to understanding what we are estimating in applied statistics.
• Explores the basics of random variables along with extensive coverage of measurable functions and integration.
• Treats conditional expectations extensively, including with respect to a conditional probability measure, and covers the concept of conditional effect functions, which are crucial in the analysis of causal effects.
• Is illustrated throughout with simple examples, numerous exercises and detailed solutions.
• Provides website links to further resources including videos of courses delivered by the authors as well as R code exercises to help illustrate the theory presented throughout the book.
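The central object of the book, the conditional expectation, can be illustrated with a tiny sketch that is not drawn from the book itself: for a discrete joint distribution, E(Y | X = x) is a weighted average, and averaging it over X recovers E(Y) (the law of total expectation). The joint pmf below is invented for illustration.

```python
# E(Y | X = x) computed from a discrete joint pmf.

pmf = {(0, 0): 0.1, (0, 1): 0.3, (1, 0): 0.4, (1, 1): 0.2}  # P(X=x, Y=y)

def p_x(x):
    return sum(p for (xx, _), p in pmf.items() if xx == x)

def cond_exp_y(x):
    """E(Y | X = x) = sum_y y * P(X=x, Y=y) / P(X=x)."""
    return sum(y * p for (xx, y), p in pmf.items() if xx == x) / p_x(x)

print(round(cond_exp_y(0), 4))   # 0.3 / 0.4  -> 0.75
print(round(cond_exp_y(1), 4))   # 0.2 / 0.6  -> 0.3333

# Law of total expectation: E(E(Y | X)) = E(Y)
e_y = sum(y * p for (_, y), p in pmf.items())
assert abs(sum(cond_exp_y(x) * p_x(x) for x in (0, 1)) - e_y) < 1e-12
```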
Theory of Probability
A Critical Introductory Treatment
Part 6 of the Wiley Series in Probability and Statistics
First issued in translation as a two-volume work in 1975, this classic book provides the first complete development of the theory of probability from a subjectivist viewpoint. It proceeds from a detailed discussion of the philosophical and mathematical aspects to a detailed mathematical treatment of probability and statistics.
De Finetti's theory of probability is one of the foundations of Bayesian theory. De Finetti stated that probability is nothing but a subjective analysis of the likelihood that something will happen, and that probability does not exist outside the mind: it is the rate at which a person is willing to bet on something happening. This view is directly opposed to the classicist/frequentist view of the likelihood of a particular outcome of an event, which assumes that the same event could be identically repeated many times over, with the 'probability' of a particular outcome given by the fraction of the time that outcome results from the repeated trials.
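De Finetti's betting-rate definition carries a coherence requirement: rates that violate the probability axioms expose the bettor to a sure loss, the famous "Dutch book" argument. A minimal numerical sketch, with invented betting rates, not a passage from the book:

```python
# De Finetti: the probability of event A is the price p(A) you would pay
# for a ticket worth 1 if A occurs. Coherence requires p(A) + p(not A) = 1;
# otherwise the bettor faces a guaranteed loss regardless of the outcome.

def sure_profit(p_a, p_not_a):
    """Buy one ticket on A and one on not-A at the quoted rates.
    Exactly one ticket pays 1 in either state of the world, so the
    bettor's profit is state-independent: 1 - (p_a + p_not_a)."""
    return 1.0 - (p_a + p_not_a)

print(round(sure_profit(0.6, 0.5), 2))  # -0.1: incoherent rates, sure loss
print(round(sure_profit(0.6, 0.4), 2))  # 0.0: coherent rates, no sure loss
```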
Nonparametric Finance
Part 33 of the Wiley Series in Probability and Statistics
An Introduction to Machine Learning in Finance, With Mathematical Background, Data Visualization, and R.
Nonparametric function estimation is an important part of machine learning, which is becoming increasingly important in quantitative finance. Nonparametric Finance provides graduate students and finance professionals with a foundation in nonparametric function estimation and the underlying mathematics. Combining practical applications, mathematically rigorous presentation, and statistical data analysis into a single volume, this book presents detailed instruction in discrete chapters that allow readers to dip in as needed without reading from beginning to end.
Coverage includes statistical finance, risk management, portfolio management, and securities pricing to provide a practical knowledge base, and an introductory chapter covers basic finance concepts for readers with a strictly mathematical background. Economic significance is emphasized over statistical significance throughout, and R code is provided to help readers reproduce the research, computations, and figures being discussed. Strong graphical content clarifies the methods and demonstrates essential visualization techniques, while deep mathematical and statistical insight backs up practical applications.
Written for the leading edge of finance, Nonparametric Finance:
• Introduces basic statistical finance concepts, including univariate and multivariate data analysis, time series analysis, and prediction
• Provides risk management guidance through volatility prediction, quantiles, and value-at-risk
• Examines portfolio theory, performance measurement, Markowitz portfolios, dynamic portfolio selection, and more
• Discusses fundamental theorems of asset pricing, Black-Scholes pricing and hedging, quadratic pricing and hedging, option portfolios, interest rate derivatives, and other asset pricing principles
• Provides supplementary R code and numerous graphics to reinforce complex content
Nonparametric function estimation has received little attention in the context of risk management and option pricing, despite its useful applications and benefits. This book provides the essential background and practical knowledge needed to take full advantage of these little-used methods and turn them into real-world advantage.
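The value-at-risk bullet above has a fully nonparametric flavor worth sketching: "historical simulation" takes VaR to be an empirical quantile of past losses, with no distributional assumption. The sketch below is illustrative only, with invented returns, and is not code from the book (which uses R).

```python
# Value-at-risk as an empirical quantile of historical losses.

def historical_var(returns, level=0.95):
    """VaR at `level`: an empirical quantile of losses (-returns),
    i.e. a loss threshold exceeded with probability about 1 - level."""
    losses = sorted(-r for r in returns)
    k = min(int(level * len(losses)), len(losses) - 1)  # order-statistic rule
    return losses[k]

returns = [0.01, -0.02, 0.003, 0.007, -0.015, 0.012, -0.03, 0.004,
           0.009, -0.006, 0.011, -0.001, 0.005, -0.022, 0.008,
           -0.004, 0.013, -0.018, 0.002, -0.009]  # 20 hypothetical daily returns
print(historical_var(returns))   # 95% one-day VaR, as a fraction of value
```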
Measuring Agreement
Models, Methods, and Applications
Part 34 of the Wiley Series in Probability and Statistics
Presents statistical methodologies for analyzing common types of data from method comparison experiments and illustrates their applications through detailed case studies.
“Measuring Agreement: Models, Methods, and Applications” features statistical evaluation of agreement between two or more methods of measurement of a variable with a primary focus on continuous data. The authors view the analysis of method comparison data as a two-step procedure where an adequate model for the data is found, and then inferential techniques are applied for appropriate functions of parameters of the model. The presentation is accessible to a wide audience and provides the necessary technical details and references. In addition, the authors present chapter-length explorations of data from paired measurements designs, repeated measurements designs, and multiple methods; data with covariates; and heteroscedastic, longitudinal, and categorical data. The book also:
• Strikes a balance between theory and applications
• Presents parametric as well as nonparametric methodologies
• Provides a concise introduction to Cohen's kappa coefficient and other measures of agreement for binary and categorical data
• Discusses sample size determination for trials on measuring agreement
• Contains real-world case studies and exercises throughout
• Provides a supplemental website containing the related datasets and R code
“Measuring Agreement: Models, Methods, and Applications” is a resource for statisticians and biostatisticians engaged in data analysis, consultancy, and methodological research. It is a reference for clinical chemists, ecologists, biomedical and other scientists who deal with development and validation of measurement methods. This book can also serve as a graduate-level text for students in statistics and biostatistics.
Geostatistical Functional Data Analysis
Part 46 of the Wiley Series in Probability and Statistics
Explore the intersection between geostatistics and functional data analysis with this insightful new reference.
“Geostatistical Functional Data Analysis” presents a unified approach to modelling functional data when spatial and spatio-temporal correlations are present. The editors link together the wide research areas of geostatistics and functional data analysis, giving the reader a new field, geostatistical functional data analysis, that will bring new insights and new open questions to researchers coming from both disciplines. The book provides a complete and up-to-date account of how to deal with functional data that are spatially correlated, and it also includes the most innovative developments in the different open avenues of this field.
Containing contributions from leading experts in the field, this practical guide provides readers with the necessary tools to employ and adapt classic statistical techniques to handle spatial regression. The book also includes:
• A thorough introduction to the spatial kriging methodology when working with functions
• A detailed exposition of more classical statistical techniques adapted to the functional case and extended to handle spatial correlations
• Practical discussions of ANOVA, regression, and clustering methods to explore spatial correlation in a collection of curves sampled in a region
• In-depth explorations of the similarities and differences between spatio-temporal data analysis and functional data analysis
Aimed at mathematicians, statisticians, postgraduate students, and researchers involved in the analysis of functional and spatial data, “Geostatistical Functional Data Analysis” will also prove to be a powerful addition to the libraries of geoscientists, environmental scientists, and economists seeking insightful new knowledge and questions at the interface of geostatistics and functional data analysis.
The Statistical Analysis of Doubly Truncated Data
With Applications in R
by Jacobo De Uña-Álvarez
Part 64 of the Wiley Series in Probability and Statistics
A thorough treatment of the statistical methods used to analyze doubly truncated data
In The Statistical Analysis of Doubly Truncated Data, an expert team of statisticians delivers an up-to-date review of existing methods used to deal with randomly truncated data, with a focus on the challenging problem of random double truncation. The authors comprehensively introduce doubly truncated data before moving on to discussions of the latest developments in the field.
The book offers readers examples with R code, along with real data from astronomy, engineering, and the biomedical sciences, to illustrate and highlight the methods described within. Fully nonparametric and semiparametric estimators are explored and illustrated with real data; linear regression models for doubly truncated responses are developed; and the influence of the bandwidth on the performance of kernel-type estimators, together with guidelines for selecting the smoothing parameter, is examined. R code for reproducing the data examples is also provided. The book also offers:
• A thorough introduction to the existing methods that deal with randomly truncated data
• Comprehensive explorations of linear regression models for doubly truncated responses
• Practical discussions of the influence of bandwidth in the performance of kernel-type estimators and guidelines for the selection of the smoothing parameter
• In-depth examinations of nonparametric and semiparametric estimators
Perfect for statistical professionals with some background in mathematical statistics, biostatisticians, and mathematicians with an interest in survival analysis and epidemiology, The Statistical Analysis of Doubly Truncated Data is also an invaluable addition to the libraries of biomedical scientists and practitioners, as well as postgraduate students studying survival analysis.
Survival Models and Data Analysis
by Regina C. Elandt-Johnson
Part 110 of the Wiley Series in Probability and Statistics
Survival analysis deals with the distribution of lifetimes, essentially the times from an initiating event such as birth or the start of a job to some terminal event such as death or pension. This book, originally published in 1980, surveys and analyzes methods that use survival measurements and concepts, and helps readers apply the appropriate method for a given situation. Four broad sections cover introductions to data, univariate survival function, multiple-failure data, and advanced topics.
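The survival-function estimation this book surveys can be illustrated with the standard Kaplan-Meier estimator, sketched here in pure Python on hypothetical right-censored follow-up data; the sketch is for orientation only and is not taken from the book.

```python
# Kaplan-Meier estimate of the survival function S(t) from
# right-censored lifetimes.

def kaplan_meier(times, event):
    """times: follow-up times; event: True = terminal event observed,
    False = censored. Returns (t, S(t)) at each distinct event time."""
    data = sorted(zip(times, event))
    at_risk = len(data)
    s = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for tt, e in data if tt == t and e)
        removed = sum(1 for tt, _ in data if tt == t)
        if deaths:
            s *= 1.0 - deaths / at_risk
            curve.append((t, s))
        at_risk -= removed
        i += removed
    return curve

times = [2, 3, 3, 5, 8, 8, 12]
event = [True, True, False, True, True, False, False]
for t, s in kaplan_meier(times, event):
    print(t, round(s, 4))
```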
Time Series Analysis With Long Memory in View
Part 215 of the Wiley Series in Probability and Statistics
Provides a simple exposition of the basic time series material, and insights into underlying technical aspects and methods of proof.
Long memory time series are characterized by a strong dependence between distant events. This book introduces readers to the theory and foundations of univariate time series analysis with a focus on long memory and fractional integration, which are embedded into the general framework. It presents the general theory of time series, including some issues that are not treated in other books on time series, such as ergodicity, persistence versus memory, asymptotic properties of the periodogram, and Whittle estimation. Further chapters address the general functional central limit theory, parametric and semiparametric estimation of the long memory parameter, and locally optimal tests.
Intuitive and easy to read, “Time Series Analysis with Long Memory in View” offers chapters that cover: Stationary Processes; Moving Averages and Linear Processes; Frequency Domain Analysis; Differencing and Integration; Fractionally Integrated Processes; Sample Means; Parametric Estimators; Semiparametric Estimators; and Testing, along with further topics. This book:
• Offers beginning-of-chapter examples as well as end-of-chapter technical arguments and proofs
• Contains many new results on long memory processes that have not appeared in existing textbooks
• Takes a basic mathematics (calculus) approach to the topic of time series analysis with long memory
• Contains 25 illustrative figures as well as lists of notations and acronyms
“Time Series Analysis with Long Memory in View” is an ideal text for first year PhD students, researchers, and practitioners in statistics, econometrics, and any application area that uses time series over a long period. It would also benefit researchers, undergraduates, and practitioners in those areas who require a rigorous introduction to time series analysis.
Fundamental Statistical Inference
A Computational Approach
Part 216 of the Wiley Series in Probability and Statistics
A hands-on approach to statistical inference that addresses the latest developments in this ever-growing field.
This clear and accessible book for beginning graduate students offers a practical and detailed approach to the field of statistical inference, providing complete derivations of results, discussions, and MATLAB programs for computation. It emphasizes the relevance of the material, builds intuition, and frames its discussions with a view toward modern statistical inference. In addition to classic subjects associated with mathematical statistics, topics include an intuitive presentation of the (single and double) bootstrap for confidence interval calculations, shrinkage estimation, tail (maximal moment) estimation, and a variety of methods of point estimation besides maximum likelihood, including the use of characteristic functions and indirect inference. Practical examples of all methods are given. Estimation issues associated with discrete mixtures of normal distributions, and their solutions, are developed in detail. Much emphasis throughout is on non-Gaussian distributions, including details on working with the stable Paretian distribution and fast calculation of the noncentral Student's t. An entire chapter is dedicated to optimization, including the development of Hessian-based methods as well as heuristic/genetic algorithms that do not require continuity, with MATLAB codes provided.
The book includes both theory and nontechnical discussions, along with a substantial reference to the literature, with an emphasis on alternative, more modern approaches. The recent literature on the misuse of hypothesis testing and p-values for model selection is discussed, and emphasis is given to alternative model selection methods, though hypothesis testing of distributional assumptions is covered in detail, notably for the normal distribution.
Presented in three parts (Essential Concepts in Statistics; Further Fundamental Concepts in Statistics; and Additional Topics), “Fundamental Statistical Inference: A Computational Approach” offers comprehensive chapters on: Introducing Point and Interval Estimation; Goodness of Fit and Hypothesis Testing; Likelihood; Numerical Optimization; Methods of Point Estimation; Q-Q Plots and Distribution Testing; Unbiased Point Estimation and Bias Reduction; Analytic Interval Estimation; Inference in a Heavy-Tailed Context; The Method of Indirect Inference; and, as an appendix, A Review of Fundamental Concepts in Probability Theory. The appendix keeps the book self-contained and covers some advanced subjects, such as saddlepoint approximations, expected shortfall in finance, calculation with the stable Paretian distribution, and convergence theorems and proofs.
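The single-bootstrap confidence interval mentioned among the book's topics can be sketched in a few lines. This is an illustrative percentile-bootstrap sketch in Python rather than MATLAB (the book's language), on invented data:

```python
# Percentile bootstrap confidence interval for a mean.
import random

def bootstrap_ci(data, stat, b=2000, alpha=0.05, seed=42):
    """Resample with replacement b times; return the empirical
    alpha/2 and 1 - alpha/2 quantiles of the resampled statistic."""
    rng = random.Random(seed)
    n = len(data)
    stats = sorted(stat([rng.choice(data) for _ in range(n)])
                   for _ in range(b))
    lo = stats[int(alpha / 2 * b)]
    hi = stats[min(int((1 - alpha / 2) * b), b - 1)]
    return lo, hi

def mean(xs):
    return sum(xs) / len(xs)

data = [4.1, 5.0, 3.8, 6.2, 4.9, 5.5, 4.4, 5.1, 4.7, 5.8]
lo, hi = bootstrap_ci(data, mean)
print(round(lo, 2), round(hi, 2))   # a 95% interval around the sample mean
```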
Experiments
Planning, Analysis, and Optimization
Part 247 of the Wiley Series in Probability and Statistics
A COMPREHENSIVE REVIEW OF MODERN EXPERIMENTAL DESIGN
“Experiments: Planning, Analysis, and Optimization”, Third Edition provides a complete discussion of modern experimental design for product and process improvement: the design and analysis of experiments and their applications for system optimization, robustness, and treatment comparison. While maintaining the same easy-to-follow style as the previous editions, this book continues to present an integrated system of experimental design and analysis that can be applied across various fields of research, including engineering, medicine, and the physical sciences. New chapters provide modern updates on practical optimal design and on computer experiments, including an explanation of computer simulations as an alternative to physical experiments. Each chapter begins with a real-world example of an experiment, followed by the methods required to design that type of experiment. The chapters conclude with an application of the methods to the experiment, bridging the gap between theory and practice.
The authors modernize accepted methodologies while refining many cutting-edge topics including robust parameter design, analysis of non-normal data, analysis of experiments with complex aliasing, multilevel designs, minimum aberration designs, and orthogonal arrays.
The third edition includes:
• Information on the design and analysis of computer experiments
• A discussion of practical optimal design of experiments
• An introduction to conditional main effect (CME) analysis and definitive screening designs (DSDs)
• New exercise problems
This book includes valuable exercises and problems, allowing the reader to gauge their progress and retention of the book's subject matter as they complete each chapter.
Drawing on examples from their combined years of working with industrial clients, the authors present many cutting-edge topics in a single, easily accessible source. Extensive case studies, including goals, data, and experimental designs, are also included, and the book's data sets can be found on a related FTP site, along with additional supplemental material. Chapter summaries provide a succinct outline of discussed methods, and extensive appendices direct readers to resources for further study.
“Experiments: Planning, Analysis, and Optimization”, Third Edition is an excellent book for design of experiments courses at the upper-undergraduate and graduate levels. It is also a valuable resource for practicing engineers and statisticians.
Theory of Ridge Regression Estimation With Applications
by A. K. Md. Ehsanes Saleh
Part 285 of the Wiley Series in Probability and Statistics
A guide to the systematic analytical results for ridge, LASSO, preliminary test, and Stein-type estimators with applications.
“Theory of Ridge Regression Estimation with Applications” offers a comprehensive guide to the theory and methods of estimation. Ridge regression and LASSO are at the center of all penalty estimators in a range of standard models that are used in many applied statistical analyses. Written by noted experts in the field, the book contains a thorough introduction to penalty and shrinkage estimation and explores the role that ridge, LASSO, and logistic regression play in the computer intensive area of neural network and big data analysis.
Designed to be accessible, the book presents detailed coverage of the basic terminology related to various models, such as the location and simple linear models, and normal and rank theory-based ridge, LASSO, preliminary test, and Stein-type estimators. The authors also include problem sets to enhance learning. This book is a volume in the Wiley Series in Probability and Statistics that provides essential and invaluable reading for all statisticians. This important resource:
• Offers theoretical coverage and computer-intensive applications of the procedures presented
• Contains solutions and alternate methods for prediction accuracy and selecting model procedures
• Is the first book to focus on ridge regression, unifying past research with current methodology
• Uses R throughout the text and includes a companion website containing convenient data sets
Written for graduate students, practitioners, and researchers in various fields of science, “Theory of Ridge Regression Estimation with Applications” is an authoritative guide to the theory and methodology of statistical estimation.
Survey Measurement and Process Quality
Part 324 of the Wiley Series in Probability and Statistics
An in-depth look at current issues, new research findings, and interdisciplinary exchange in survey methodology and processing.
“Survey Measurement and Process Quality” extends the marriage of traditional survey issues and continuous quality improvement further than any other contemporary volume. It documents the current state of the field, reports new research findings, and promotes interdisciplinary exchange in questionnaire design, data collection, data processing, quality assessment, and effects of errors on estimation and analysis.
The book's five sections discuss a broad range of issues and topics in each of five major areas, including
• Questionnaire design: conceptualization, design of rating scales for effective measurement, self-administered questionnaires, and more
• Data collection: new technology, interviewer effects, interview mode, children as respondents
• Post-survey processing and operations: modeling of classification operations, coding based on such systems, editing, integrating processes
• Quality assessment and control: total quality management, developing current best methods, service quality, quality efforts across organizations
• Effects of misclassification on estimation, analysis, and interpretation: misclassification and other measurement errors, new variance estimators that account for measurement error, estimators of nonsampling error components in interview surveys
“Survey Measurement and Process Quality” is an indispensable resource for survey practitioners and managers as well as an excellent supplemental text for undergraduate and graduate courses and special seminars.
Advanced Analysis of Variance
Part 384 of the Wiley Series in Probability and Statistics
Introducing a revolutionary new model for the statistical analysis of experimental data
In this important book, internationally acclaimed statistician Chihiro Hirotsu goes beyond the classical analysis of variance (ANOVA) model to offer a unified theory and advanced techniques for the statistical analysis of experimental data. Dr. Hirotsu introduces the groundbreaking concept of advanced analysis of variance (AANOVA) and explains how the AANOVA approach overcomes the limitations of ANOVA methods, allowing for global reasoning via special methods of simultaneous inference that lead to individual conclusions.
Focusing on normal, binomial, and categorical data, Dr. Hirotsu explores ANOVA theory and practice and reviews current developments in the field. He then introduces three new advanced approaches, namely: testing for equivalence and non-inferiority, simultaneous testing for directional (monotonic or restricted) alternatives and change-point hypotheses, and analyses emerging from categorical data. Using real-world examples, he shows how these three recognizable families of problems have important applications in most practical activities involving experimental data in an array of research areas, including bioequivalence, clinical trials, industrial experiments, pharmaco-statistics, and quality control, to name just a few.
• Written in an expository style which will encourage readers to explore applications for AANOVA techniques in their own research
• Focuses on dealing with real data, providing real-world examples drawn from the fields of statistical quality control, clinical trials, and drug testing
• Describes advanced methods developed and refined by the author over the course of his long career as research engineer and statistician
• Introduces advanced technologies for AANOVA data analysis that build upon the basic ANOVA principles and practices
Introducing a breakthrough approach to statistical analysis which overcomes the limitations of the ANOVA model, Advanced Analysis of Variance is an indispensable resource for researchers and practitioners working in fields within which the statistical analysis of experimental data is a crucial research component.
Chihiro Hirotsu is a Senior Researcher at the Collaborative Research Center, Meisei University, and Professor Emeritus at the University of Tokyo. He is a fellow of the American Statistical Association, an elected member of the International Statistical Institute, and he has been awarded the Japan Statistical Society Prize (2005) and the Ouchi Prize (2006). His work has been published in Biometrika, Biometrics, and Computational Statistics & Data Analysis, among other premier research journals.
An Introduction to Envelopes
Dimension Reduction for Efficient Estimation in Multivariate Statistics
Part 401 of the Wiley Series in Probability and Statistics
Written by the leading expert in the field, this text reviews the major new developments in envelope models and methods
An Introduction to Envelopes provides an overview of the theory and methods of envelopes, a class of procedures for increasing efficiency in multivariate analyses without altering traditional objectives. The author offers a balance between foundations and methodology by integrating illustrative examples that show how envelopes can be used in practice. He discusses how to use envelopes to target selected coefficients and explores predictor envelopes and their connection with partial least squares regression. The book reveals the potential for envelope methodology to improve estimation of a multivariate mean.
The text also includes information on how envelopes can be used in generalized linear models, regressions with a matrix-valued response, and reviews work on sparse and Bayesian response envelopes. In addition, the text explores relationships between envelopes and other dimension reduction methods, including canonical correlations, reduced-rank regression, supervised singular value decomposition, sufficient dimension reduction, principal components, and principal fitted components. This important resource:
• Offers a text written by the leading expert in this field
• Describes groundbreaking work that puts the focus on this burgeoning area of study
• Covers the important new developments in the field and highlights the most important directions
• Discusses the underlying mathematics and linear algebra
• Includes an online companion site with both R and Matlab support
Written for researchers and graduate students in multivariate analysis and dimension reduction, as well as practitioners interested in statistical methodology, “An Introduction to Envelopes” offers the first book on the theory and methods of envelopes.
Game-Theoretic Foundations for Probability and Finance
Part 455 of the Wiley Series in Probability and Statistics
Game-theoretic probability and finance come of age
Glenn Shafer and Vladimir Vovk's Probability and Finance, published in 2001, showed that perfect-information games can be used to define mathematical probability. Based on fifteen years of further research, Game-Theoretic Foundations for Probability and Finance presents a mature view of the foundational role game theory can play. Its account of probability theory opens the way to new methods of prediction and testing and makes many statistical methods more transparent and widely usable. Its contributions to finance theory include purely game-theoretic accounts of Itô's stochastic calculus, the capital asset pricing model, the equity premium, and portfolio theory.
Game-Theoretic Foundations for Probability and Finance is a book of research. It is also a teaching resource. Each chapter is supplemented with carefully designed exercises and notes relating the new theory to its historical context.
Praise from early readers
"Ever since Kolmogorov's Grundbegriffe, the standard mathematical treatment of probability theory has been measure-theoretic. In this ground-breaking work, Shafer and Vovk give a game-theoretic foundation instead. While being just as rigorous, the game-theoretic approach allows for vast and useful generalizations of classical measure-theoretic results, while also giving rise to new, radical ideas for prediction, statistics and mathematical finance without stochastic assumptions. The authors set out their theory in great detail, resulting in what is definitely one of the most important books on the foundations of probability to have appeared in the last few decades." – Peter Grünwald, CWI and University of Leiden
"Shafer and Vovk have thoroughly re-written their 2001 book on the game-theoretic foundations for probability and for finance. They have included an account of the tremendous growth that has occurred since, in the game-theoretic and pathwise approaches to stochastic analysis and in their applications to continuous-time finance. This new book will undoubtedly spur a better understanding of the foundations of these very important fields, and we should all be grateful to its authors." – Ioannis Karatzas, Columbia University
Statistical Intervals
A Guide for Practitioners and Researchers
Part 541 of the Wiley Series in Probability and Statistics
Describes statistical intervals to quantify sampling uncertainty, focusing on key application needs and recently developed methodology in an easy-to-apply format
Statistical intervals provide invaluable tools for quantifying sampling uncertainty. The widely hailed first edition, published in 1991, described the use and construction of the most important statistical intervals. Particular emphasis was given to intervals, such as prediction intervals, tolerance intervals, and confidence intervals on distribution quantiles, that are frequently needed in practice but often neglected in introductory courses.
Vastly improved computer capabilities over the past 25 years have resulted in an explosion of the tools readily available to analysts. This second edition, more than double the size of the first, adds these new methods in an easy-to-apply format. In addition to extensive updating of the original chapters, the second edition includes new chapters on:
• Likelihood-based statistical intervals
• Nonparametric bootstrap intervals
• Parametric bootstrap and other simulation-based intervals
• An introduction to Bayesian intervals
• Bayesian intervals for the popular binomial, Poisson and normal distributions
• Statistical intervals for Bayesian hierarchical models
• Advanced case studies, further illustrating the use of the newly described methods
New technical appendices provide justification of the methods and pathways to extensions and further applications. A webpage directs readers to current readily accessible computer software and other useful information.
Statistical Intervals: A Guide for Practitioners and Researchers, Second Edition is an up-to-date working guide and reference for all who analyze data, allowing them to quantify the uncertainty in their results using statistical intervals.
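To give a concrete flavor of the simplest of these intervals, here is a minimal illustrative sketch (not taken from the book) of a normal-theory prediction interval for a single future observation. For simplicity it uses the large-sample normal quantile in place of the exact t quantile the book would use; the data are hypothetical.

```python
from statistics import NormalDist, mean, stdev

def normal_prediction_interval(sample, coverage=0.95):
    """Approximate two-sided prediction interval for one future
    observation from a normal population. Uses the large-sample
    normal quantile instead of the exact t quantile, so for small
    samples the interval shown here is slightly too narrow."""
    n = len(sample)
    xbar, s = mean(sample), stdev(sample)
    z = NormalDist().inv_cdf((1 + coverage) / 2)
    half = z * s * (1 + 1 / n) ** 0.5   # accounts for both sampling
    return xbar - half, xbar + half     # and future-observation noise

# Hypothetical measurements:
lo, hi = normal_prediction_interval([9.8, 10.2, 10.0, 9.9, 10.1, 10.0])
```

With only six observations, the book's t-based interval would be noticeably wider than this large-sample approximation.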
Applied Bayesian Modelling
Part 595 of the Wiley Series in Probability and Statistics
This book provides an accessible approach to Bayesian computing and data analysis, with an emphasis on the interpretation of real data sets. Following in the tradition of the successful first edition, this book aims to make a wide range of statistical modeling applications accessible using tested code that can be readily adapted to the reader's own applications. The second edition has been thoroughly reworked and updated to take account of advances in the field. A new set of worked examples is included. The novel aspect of the first edition was the coverage of statistical modeling using WinBUGS and OpenBUGS. This feature continues in the new edition along with examples using R to broaden appeal and for completeness of coverage.
The Analysis of Covariance and Alternatives
Statistical Methods for Experiments, Quasi-Experiments, and Single-Case Studies
Part 608 of the Wiley Series in Probability and Statistics
A complete guide to cutting-edge techniques and best practices for applying covariance analysis methods.
The Second Edition of “Analysis of Covariance and Alternatives” sheds new light on its topic, offering in-depth discussions of underlying assumptions, comprehensive interpretations of results, and comparisons of distinct approaches. The book has been extensively revised and updated to feature an in-depth review of prerequisites and the latest developments in the field.
The author begins with a discussion of essential topics relating to experimental design and analysis, including analysis of variance, multiple regression, effect size measures and newly developed methods of communicating statistical results. Subsequent chapters feature newly added methods for the analysis of experiments with ordered treatments, including two parametric and nonparametric monotone analyses as well as approaches based on the robust general linear model and reversed ordinal logistic regression. Four groundbreaking chapters on single-case designs introduce powerful new analyses for simple and complex single-case experiments. This Second Edition also features coverage of advanced methods including:
• Simple and multiple analysis of covariance using both the Fisher approach and the general linear model approach
• Methods to manage assumption departures, including heterogeneous slopes, nonlinear functions, dichotomous dependent variables, and covariates affected by treatments
• Power analysis and the application of covariance analysis to randomized-block designs, two-factor designs, pre-and post-test designs, and multiple dependent variable designs
• Measurement error correction and propensity score methods developed for quasi-experiments, observational studies, and uncontrolled clinical trials
Thoroughly updated to reflect the growing nature of the field, “Analysis of Covariance and Alternatives” is a suitable book for behavioral and medical sciences courses on design of experiments and regression at the upper-undergraduate and graduate levels. It also serves as an authoritative reference work for researchers and academics in the fields of medicine, clinical trials, epidemiology, public health, sociology, and engineering.
Applied Survival Analysis
Regression Modeling of Time-to-Event Data
Part 618 of the Wiley Series in Probability and Statistics
Since publication of the first edition nearly a decade ago, analyses using time-to-event methods have increased considerably in all areas of scientific inquiry, mainly as a result of model-building methods available in modern statistical software packages. However, there has been minimal coverage in the available literature to guide researchers, practitioners, and students who wish to apply these methods to health-related areas of study. “Applied Survival Analysis”, Second Edition provides a comprehensive and up-to-date introduction to regression modeling for time-to-event data in medical, epidemiological, biostatistical, and other health-related research.
This book places a unique emphasis on the practical and contemporary applications of regression modeling rather than the mathematical theory. It offers a clear and accessible presentation of modern modeling techniques supplemented with real-world examples and case studies. Key topics covered include: variable selection, identification of the scale of continuous covariates, the role of interactions in the model, assessment of fit and model assumptions, regression diagnostics, recurrent event models, frailty models, additive models, competing risk models, and missing data.
Features of the Second Edition include:
• Expanded coverage of interactions and the covariate-adjusted survival functions
• The use of the Worcester Heart Attack Study as the main modeling data set for illustrating discussed concepts and techniques
• New discussion of variable selection with multivariable fractional polynomials
• Further exploration of time-varying covariates, complete with examples
• Additional treatment of the exponential, Weibull, and log-logistic parametric regression models
• Increased emphasis on interpreting and using results as well as utilizing multiple imputation methods to analyze data with missing values
• New examples and exercises at the end of each chapter
Analyses throughout the text are performed using Stata® Version 9, and an accompanying FTP site contains the data sets used in the book. “Applied Survival Analysis”, Second Edition is an ideal book for graduate-level courses in biostatistics, statistics, and epidemiologic methods. It also serves as a valuable reference for practitioners and researchers in any health-related field or for professionals in insurance and government.
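As a small taste of the time-to-event methods the book treats (the book itself works in Stata®, not Python), here is an illustrative from-scratch sketch of the Kaplan-Meier survival estimator; the function and the tiny data set are hypothetical examples, not material from the book.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier estimate of the survival function.
    times:  observed times (event or censoring)
    events: 1 if the event occurred, 0 if the observation is censored
    Returns (time, estimated survival) pairs at each event time."""
    pairs = sorted(zip(times, events))
    n_at_risk = len(pairs)
    surv, curve, i = 1.0, [], 0
    while i < len(pairs):
        t = pairs[i][0]
        d = sum(e for tt, e in pairs if tt == t)   # events at time t
        m = sum(1 for tt, _ in pairs if tt == t)   # subjects leaving risk set
        if d > 0:
            surv *= 1 - d / n_at_risk              # multiply in the hazard
            curve.append((t, surv))
        n_at_risk -= m
        i += m
    return curve

# Five hypothetical subjects; 0 marks a censored observation:
km = kaplan_meier([1, 2, 2, 3, 4], [1, 1, 0, 1, 0])
```

Censored subjects leave the risk set without contributing an event, which is exactly the distinction that makes time-to-event analysis different from ordinary regression.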
Bootstrap Methods
A Guide for Practitioners and Researchers
Part 619 of the Wiley Series in Probability and Statistics
A practical and accessible introduction to the bootstrap method-newly revised and updated.
Over the past decade, the application of bootstrap methods to new areas of study has expanded, resulting in theoretical and applied advances across various fields. “Bootstrap Methods”, Second Edition is a highly approachable guide to the multidisciplinary, real-world uses of bootstrapping and is ideal for readers who have a professional interest in its methods, but are without an advanced background in mathematics.
Updated to reflect current techniques and the most up-to-date work on the topic, the Second Edition features:
• The addition of a second, extended bibliography devoted solely to publications from 1999–2007, which is a valuable collection of references on the latest research in the field
• A discussion of the new areas of applicability for bootstrap methods, including use in the pharmaceutical industry for estimating individual and population bioequivalence in clinical trials
• A revised chapter on when and why the bootstrap fails, along with remedies for overcoming these drawbacks
• Added coverage on regression, censored data applications, P-value adjustment, ratio estimators, and missing data
• New examples and illustrations as well as extensive historical notes at the end of each chapter
With a strong focus on application, detailed explanations of methodology, and complete coverage of modern developments in the field, “Bootstrap Methods”, Second Edition is an indispensable reference for applied statisticians, engineers, scientists, clinicians, and other practitioners who regularly use statistical methods in research. It is also suitable as a supplementary text for courses in statistics and resampling methods at the upper-undergraduate and graduate levels.
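To illustrate the basic resampling idea behind the methods the book surveys (a sketch for orientation only, not code from the book), here is a percentile bootstrap confidence interval for a sample mean; the data are hypothetical.

```python
import random
from statistics import mean

def bootstrap_ci(data, stat=mean, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a statistic:
    resample the data with replacement, recompute the statistic on
    each resample, and take the empirical alpha/2 and 1-alpha/2
    quantiles of the resulting distribution."""
    rng = random.Random(seed)
    reps = sorted(stat(rng.choices(data, k=len(data)))
                  for _ in range(n_boot))
    lo = reps[int(n_boot * alpha / 2)]
    hi = reps[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi

sample = [2.1, 1.9, 2.4, 2.0, 2.2, 1.8, 2.3, 2.1, 2.0, 2.2]
lo, hi = bootstrap_ci(sample)
```

The same `stat` argument can be swapped for a median, a ratio, or any other statistic, which is what makes the bootstrap attractive when no closed-form interval exists.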
Generalized, Linear, and Mixed Models
Part 651 of the Wiley Series in Probability and Statistics
An accessible and self-contained introduction to statistical models-now in a modernized new edition
Generalized, Linear, and Mixed Models, Second Edition provides an up-to-date treatment of the essential techniques for developing and applying a wide variety of statistical models. The book presents thorough and unified coverage of the theory behind generalized, linear, and mixed models and highlights their similarities and differences in various construction, application, and computational aspects.
A clear introduction to the basic ideas of fixed effects models, random effects models, and mixed models is maintained throughout, and each chapter illustrates how these models are applicable in a wide array of contexts. In addition, a discussion of general methods for the analysis of such models is presented with an emphasis on the method of maximum likelihood for the estimation of parameters. The authors also provide comprehensive coverage of the latest statistical models for correlated, non-normally distributed data. Thoroughly updated to reflect the latest developments in the field, the Second Edition features:
• A new chapter that covers omitted covariates, incorrect random effects distribution, correlation of covariates and random effects, and robust variance estimation
• A new chapter that treats shared random effects models, latent class models, and properties of models
• A revised chapter on longitudinal data, which now includes a discussion of generalized linear models, modern advances in longitudinal data analysis, and the use of between- and within-covariate decompositions
• Expanded coverage of marginal versus conditional models
• Numerous new and updated examples
With its accessible style and wealth of illustrative exercises, Generalized, Linear, and Mixed Models, Second Edition is an ideal book for courses on generalized linear and mixed models at the upper-undergraduate and beginning-graduate levels. It also serves as a valuable reference for applied statisticians, industrial practitioners, and researchers.
Analysis of Ordinal Categorical Data
Part 656 of the Wiley Series in Probability and Statistics
Statistical science's first coordinated manual of methods for analyzing ordered categorical data, now fully revised and updated, continues to present applications and case studies in fields as diverse as sociology, public health, ecology, marketing, and pharmacy. “Analysis of Ordinal Categorical Data”, Second Edition provides an introduction to basic descriptive and inferential methods for categorical data, giving thorough coverage of new developments and recent methods. Special emphasis is placed on interpretation and application of methods including an integrated comparison of the available strategies for analyzing ordinal data. Practitioners of statistics in government, industry (particularly pharmaceutical), and academia will want this new edition.
Robust Statistics
Part 693 of the Wiley Series in Probability and Statistics
A new edition of the classic, groundbreaking book on robust statistics.
Over twenty-five years after the publication of its predecessor, “Robust Statistics”, Second Edition continues to provide an authoritative and systematic treatment of the topic. This new edition has been thoroughly updated and expanded to reflect the latest advances in the field while also outlining the established theory and applications for building a solid foundation in robust statistics for both the theoretical and the applied statistician.
A comprehensive introduction and discussion on the formal mathematical background behind qualitative and quantitative robustness is provided, and subsequent chapters delve into basic types of scale estimates, asymptotic minimax theory, regression, robust covariance, and robust design. In addition to an extended treatment of robust regression, the Second Edition features four new chapters covering:
• Robust Tests
• Small Sample Asymptotics
• Breakdown Point
• Bayesian Robustness
An expanded treatment of robust regression and pseudo-values is also featured, and concepts, rather than mathematical completeness, are stressed in every discussion. Selected numerical algorithms for computing robust estimates and convergence proofs are provided throughout the book, along with quantitative robustness information for a variety of estimates. A General Remarks section appears at the beginning of each chapter and provides readers with ample motivation for working with the presented methods and techniques.
“Robust Statistics”, Second Edition is an ideal book for graduate-level courses on the topic. It also serves as a valuable reference for researchers and practitioners who wish to study the statistical research associated with robust statistics.
Statistical Rules of Thumb
Part 699 of the Wiley Series in Probability and Statistics
Sensibly organized for quick reference, Statistical Rules of Thumb, Second Edition compiles simple, widely applicable, robust, and elegant rules, each of which captures key statistical concepts. This unique guide to the use of statistics for designing, conducting, and analyzing research studies illustrates real-world statistical applications through examples from fields such as public health and environmental studies. Along with an insightful discussion of the reasoning behind each technique, this easy-to-use handbook also conveys the various possibilities statisticians must consider when designing and conducting a study or analyzing its data.
Each chapter presents clearly defined rules related to inference, covariation, experimental design, consultation, and data representation, and each rule is organized and discussed under five succinct headings: introduction, statement and illustration of the rule, the derivation of the rule, a concluding discussion, and exploration of the concept's extensions. The author also introduces new rules of thumb for topics such as sample size for ratio analysis, absolute and relative risk, ANCOVA cautions, and dichotomization of continuous variables. Additional features of the Second Edition include:
• Additional rules on Bayesian topics
• New chapters on observational studies and Evidence-Based Medicine (EBM)
• Additional emphasis on variation and causation
• Updated material with new references, examples, and sources
A related website, www.vanbelle.org, provides a rich learning environment and contains additional rules, presentations by the author, and a message board where readers can share their own strategies and discoveries. Statistical Rules of Thumb, Second Edition is an ideal supplementary book for courses in experimental design and survey research methods at the upper-undergraduate and graduate levels. It also serves as an indispensable reference for statisticians, researchers, consultants, and scientists who would like to develop an understanding of the statistical foundations of their research efforts.
Handbook of Monte Carlo Methods
Part 706 of the Wiley Series in Probability and Statistics
A comprehensive overview of Monte Carlo simulation that explores the latest topics, techniques, and real-world applications
More and more of today's numerical problems in engineering and finance are solved through Monte Carlo methods. The heightened popularity of these methods and their continuing development makes it important for researchers to have a comprehensive understanding of the Monte Carlo approach. Handbook of Monte Carlo Methods provides the theory, algorithms, and applications that help provide a thorough understanding of the emerging dynamics of this rapidly growing field.
The authors begin with a discussion of fundamentals such as how to generate random numbers on a computer. Subsequent chapters discuss key Monte Carlo topics and methods, including:
• Random variable and stochastic process generation
• Markov chain Monte Carlo, featuring key algorithms such as the Metropolis-Hastings method, the Gibbs sampler, and hit-and-run
• Discrete-event simulation
• Techniques for the statistical analysis of simulation data including the delta method, steady-state estimation, and kernel density estimation
• Variance reduction, including importance sampling, Latin hypercube sampling, and conditional Monte Carlo
• Estimation of derivatives and sensitivity analysis
• Advanced topics including cross-entropy, rare events, kernel density estimation, quasi-Monte Carlo, particle systems, and randomized optimization
The presented theoretical concepts are illustrated with worked examples that use MATLAB®. A related Web site houses the MATLAB® code, allowing readers to work hands-on with the material, and also features the author's own lecture notes on Monte Carlo methods. Detailed appendices provide background material on probability theory, stochastic processes, and mathematical statistics, as well as the key optimization concepts and techniques that are relevant to Monte Carlo simulation.
Handbook of Monte Carlo Methods is an excellent reference for applied statisticians and practitioners working in the fields of engineering and finance who use or would like to learn how to use Monte Carlo in their research. It is also a suitable supplement for courses on Monte Carlo methods and computational statistics at the upper-undergraduate and graduate levels.
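As a minimal illustration of one algorithm from the list above (the book's own examples use MATLAB®; this Python sketch is purely illustrative and not from the book), here is random-walk Metropolis-Hastings sampling from a standard normal target known only up to a normalizing constant.

```python
import math
import random

def metropolis_normal(n_samples=20000, step=1.0, seed=42):
    """Random-walk Metropolis sampler targeting the standard normal
    density, known only up to a constant: p(x) proportional to
    exp(-x^2 / 2). The symmetric proposal makes the Hastings
    correction drop out."""
    rng = random.Random(seed)
    log_p = lambda v: -0.5 * v * v        # log target, constant dropped
    x, out = 0.0, []
    for _ in range(n_samples):
        prop = x + rng.uniform(-step, step)           # symmetric proposal
        if rng.random() < math.exp(min(0.0, log_p(prop) - log_p(x))):
            x = prop                                  # accept the move
        out.append(x)                                 # else keep old x
    return out

draws = metropolis_normal()
```

Because consecutive draws are correlated, the effective sample size is smaller than `n_samples`; the book's chapters on simulation output analysis address exactly this issue.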
Methods of Multivariate Analysis
Part 709 of the Wiley Series in Probability and Statistics
Filled with new and timely content, “Methods of Multivariate Analysis”, Third Edition provides examples and exercises based on more than sixty real data sets from a wide variety of scientific fields. It takes a "methods" approach to the subject, placing an emphasis on how students and practitioners can employ multivariate analysis in real-life situations.
This Third Edition continues to explore the key descriptive and inferential procedures that result from multivariate analysis. Following a brief overview of the topic, the book goes on to review the fundamentals of matrix algebra, sampling from multivariate populations, and the extension of common univariate statistical procedures (including t-tests, analysis of variance, and multiple regression) to analogous multivariate techniques that involve several dependent variables. The latter half of the book describes statistical tools that are uniquely multivariate in nature, including procedures for discriminating among groups, characterizing low-dimensional latent structure in high-dimensional data, identifying clusters in data, and graphically illustrating relationships in low-dimensional space. In addition, the authors explore a wealth of newly added topics, including:
• Confirmatory Factor Analysis
• Classification Trees
• Dynamic Graphics
• Transformations to Normality
• Prediction for Multivariate Multiple Regression
• Kronecker Products and Vec Notation
New exercises have been added throughout the book, allowing readers to test their comprehension of the presented material. Detailed appendices provide partial solutions as well as supplemental tables, and an accompanying FTP site features the book's data sets and related SAS® code.
Requiring only a basic background in statistics, “Methods of Multivariate Analysis”, Third Edition is an excellent book for courses on multivariate analysis and applied statistics at the upper-undergraduate and graduate levels. The book also serves as a valuable reference for both statisticians and researchers across a wide variety of disciplines.
Geostatistics
Modeling Spatial Uncertainty
Part 713 of the Wiley Series in Probability and Statistics
Praise for the First Edition
". . . a readable, comprehensive volume that . . . belongs on the desk, close at hand, of any serious researcher or practitioner." – Mathematical Geosciences
The state of the art in geostatistics
Geostatistical models and techniques such as kriging and stochastic multi-realizations exploit spatial correlations to evaluate natural resources, help optimize their development, and address environmental issues related to air and water quality, soil pollution, and forestry. Geostatistics: Modeling Spatial Uncertainty, Second Edition presents a comprehensive, up-to-date reference on the topic, now featuring the latest developments in the field.
The authors explain both the theory and applications of geostatistics through a unified treatment that emphasizes methodology. Key topics that are the foundation of geostatistics are explored in-depth, including stationary and nonstationary models; linear and nonlinear methods; change of support; multivariate approaches; and conditional simulations. The Second Edition highlights the growing number of applications of geostatistical methods and discusses three key areas of growth in the field:
• New results and methods, including kriging very large datasets; kriging with outliers; nonseparable space-time covariances; multipoint simulations; pluri-Gaussian simulations; gradual deformation; and extreme value geostatistics
• Newly formed connections between geostatistics and other approaches such as radial basis functions, Gaussian Markov random fields, and data assimilation
• New perspectives on topics such as collocated cokriging, kriging with an external drift, discrete Gaussian change-of-support models, and simulation algorithms
Geostatistics, Second Edition is an excellent book for courses on the topic at the graduate level. It also serves as an invaluable reference for earth scientists, mining and petroleum engineers, geophysicists, and environmental statisticians who collect and analyze data in their everyday work.
Latent Class and Latent Transition Analysis
With Applications in the Social, Behavioral, and Health Sciences
Part 718 of the Wiley Series in Probability and Statistics
A modern, comprehensive treatment of latent class and latent transition analysis for categorical data
On a daily basis, researchers in the social, behavioral, and health sciences collect information and fit statistical models to the gathered empirical data with the goal of making significant advances in these fields. In many cases, it can be useful to identify latent, or unobserved, subgroups in a population, where individuals' subgroup membership is inferred from their responses on a set of observed variables. Latent Class and Latent Transition Analysis provides a comprehensive and unified introduction to this topic through one-of-a-kind, step-by-step presentations and coverage of theoretical, technical, and practical issues in categorical latent variable modeling for both cross-sectional and longitudinal data.
The book begins with an introduction to latent class and latent transition analysis for categorical data. Subsequent chapters delve into more in-depth material, featuring:
• A complete treatment of longitudinal latent class models
• Focused coverage of the conceptual underpinnings of interpretation and evaluation of a latent class solution
• Use of parameter restrictions and detection of identification problems
• Advanced topics such as multi-group analysis and the modeling and interpretation of interactions between covariates
The authors present the topic in a style that is accessible yet rigorous. Each method is presented with both a theoretical background and the practical information that is useful for any data analyst. Empirical examples showcase the real-world applications of the discussed concepts and models, and each chapter concludes with a "Points to Remember" section that contains a brief summary of key ideas. All of the analyses in the book are performed using Proc LCA and Proc LTA, the authors' own software packages that can be run within the SAS® environment. A related Web site houses information on these freely available programs and the book's data sets, encouraging readers to reproduce the analyses and also try their own variations.
Latent Class and Latent Transition Analysis is an excellent book for courses on categorical data analysis and latent variable models at the upper-undergraduate and graduate levels. It is also a valuable resource for researchers and practitioners in the social, behavioral, and health sciences who conduct latent class and latent transition analysis in their everyday work.
Random Data
Analysis and Measurement Procedures
Part 729 of the Wiley Series in Probability and Statistics
First published in 1971, “Random Data” served as an authoritative book on the analysis of experimental physical data for engineering and scientific applications. This Fourth Edition features coverage of new developments in random data management and analysis procedures that are applicable to a broad range of applied fields, from the aerospace and automotive industries to oceanographic and biomedical research.
This new edition continues to maintain a balance of classic theory and novel techniques. The authors expand on the treatment of random data analysis theory, including derivations of key relationships in probability and random process theory. The book remains unique in its practical treatment of nonstationary data analysis and nonlinear system analysis, presenting the latest techniques on modern data acquisition, storage, conversion, and qualification of random data prior to its digital analysis. The Fourth Edition also includes:
• A new chapter on frequency domain techniques to model and identify nonlinear systems from measured input/output random data
• New material on the analysis of multiple-input/single-output linear models
• The latest recommended methods for data acquisition and processing of random data
• Important mathematical formulas to design experiments and evaluate results of random data analysis and measurement procedures
• Answers to the problems in each chapter
Comprehensive and self-contained, “Random Data”, Fourth Edition is an indispensable book for courses on random data analysis theory and applications at the upper-undergraduate and graduate levels. It is also an insightful reference for engineers and scientists who use statistical methods to investigate and solve problems with dynamic data.
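For a flavor of the frequency-domain techniques the book covers, a raw periodogram estimate of a signal's power spectral density can be sketched in a few lines (the sine-in-noise data and sampling rate below are illustrative, not from the book):

```python
import numpy as np

# Simulate 10 s of a 5 Hz sine wave buried in Gaussian noise, sampled at 100 Hz.
rng = np.random.default_rng(0)
fs = 100.0                       # sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * rng.standard_normal(t.size)

# Raw periodogram: squared DFT magnitude, scaled by (fs * N).
X = np.fft.rfft(x)
psd = np.abs(X) ** 2 / (fs * x.size)
freqs = np.fft.rfftfreq(x.size, d=1 / fs)

peak_hz = freqs[np.argmax(psd)]  # the estimate should peak near the 5 Hz tone
```

In practice the raw periodogram is smoothed or averaged over segments, which is where the book's treatment of data qualification and processing comes in.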
Statistical Meta-Analysis With Applications
Part 738 of the Wiley Series in Probability and Statistics
An accessible introduction to performing meta-analysis across various areas of research.
The practice of meta-analysis allows researchers to obtain findings from various studies and compile them to verify and form one overall conclusion. “Statistical Meta-Analysis with Applications” presents the necessary statistical methodologies that allow readers to tackle the four main stages of meta-analysis: problem formulation, data collection, data evaluation, and data analysis and interpretation. Combining the authors' expertise on the topic with a wealth of up-to-date information, this book successfully introduces the essential statistical practices for making thorough and accurate discoveries across a wide array of diverse fields, such as business, public health, biostatistics, and environmental studies.
Two main types of statistical analysis serve as the foundation of the methods and techniques: combining tests of effect size and combining estimates of effect size. Additional topics covered include:
• Meta-analysis regression procedures
• Multiple-endpoint and multiple-treatment studies
• The Bayesian approach to meta-analysis
• Publication bias
• Vote counting procedures
• Methods for combining individual tests and combining individual estimates
• Using meta-analysis to analyze binary and ordinal categorical data
Numerous worked-out examples in each chapter provide the reader with a step-by-step understanding of the presented methods. Extensive references are also included, outlining additional sources for further study.
Requiring only a working knowledge of statistics, Statistical Meta-Analysis with Applications is a valuable supplement for courses in biostatistics, business, public health, and social research at the upper-undergraduate and graduate levels. It is also an excellent reference for applied statisticians working in industry, academia, and government.
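As one concrete instance of the "combining estimates of effect size" approach, a fixed-effect, inverse-variance pooled estimate can be sketched as follows (the four study effects and standard errors are made-up numbers for illustration):

```python
import math

# Hypothetical effect estimates and standard errors from four studies.
effects = [0.30, 0.45, 0.20, 0.38]
ses = [0.10, 0.15, 0.08, 0.12]

# Fixed-effect model: weight each study by the inverse of its variance.
weights = [1.0 / se ** 2 for se in ses]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

# 95% confidence interval for the pooled effect.
ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
```

The pooled standard error is smaller than any single study's, which is the basic payoff of meta-analysis; random-effects models add a between-study variance component on top of this.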
Nonparametric Statistical Methods
Part 751 of the Wiley Series in Probability and Statistics
Thoroughly revised and updated, the new edition of Nonparametric Statistical Methods includes additional modern topics and procedures, more practical data sets, and new problems from real-life situations. The book continues to emphasize the importance of nonparametric methods as a significant branch of modern statistics and equips readers with the conceptual and technical skills necessary to select and apply the appropriate procedures for any given situation.
Written by leading statisticians, Nonparametric Statistical Methods, Third Edition provides readers with crucial nonparametric techniques in a variety of settings, emphasizing the assumptions underlying the methods. The book provides an extensive array of examples that clearly illustrate how to use nonparametric approaches for handling one- or two-sample location and dispersion problems, dichotomous data, and one-way and two-way layout problems. In addition, the Third Edition features:
• The use of the freely available R software to aid in computation and simulation, including many new R programs written explicitly for this new edition
• New chapters that address density estimation, wavelets, smoothing, ranked set sampling, and Bayesian nonparametrics
• Problems that illustrate examples from agricultural science, astronomy, biology, criminology, education, engineering, environmental science, geology, home economics, medicine, oceanography, physics, psychology, sociology, and space science
Nonparametric Statistical Methods, Third Edition is an excellent reference for applied statisticians and practitioners who seek a review of nonparametric methods and their relevant applications. The book is also an ideal textbook for upper-undergraduate and first-year graduate courses in applied nonparametric statistics.
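As a small taste of the two-sample location methods the book covers, the Wilcoxon rank-sum test with its large-sample normal approximation can be sketched as follows (the toy samples are illustrative; the tie correction to the variance is omitted):

```python
import math

def midranks(values):
    """Ranks of values (1-based), with ties given the average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j) / 2 + 1
        i = j + 1
    return ranks

def rank_sum_test(x, y):
    """Wilcoxon rank-sum statistic for x, with a two-sided normal-approximation p-value."""
    n, m = len(x), len(y)
    ranks = midranks(list(x) + list(y))
    w = sum(ranks[:n])                       # rank sum of the first sample
    mean_w = n * (n + m + 1) / 2
    var_w = n * m * (n + m + 1) / 12
    z = (w - mean_w) / math.sqrt(var_w)
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return w, p

w_shift, p_shift = rank_sum_test([1, 2, 3, 4, 5], [6, 7, 8, 9, 10])  # clear shift
w_mixed, p_mixed = rank_sum_test([1, 3, 5], [2, 4, 6])               # interleaved
```

The test uses only the ordering of the observations, not their values, which is what makes it distribution-free; for small samples the exact null distribution replaces the normal approximation.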
Design and Analysis of Experiments, Volume 3
Special Designs and Applications
Part 810 of the Wiley Series in Probability and Statistics
Provides timely applications, modifications, and extensions of experimental designs for a variety of disciplines
Design and Analysis of Experiments, Volume 3: Special Designs and Applications continues building upon the philosophical foundations of experimental design by providing important, modern applications of experimental design to the many fields that utilize them. The book also presents optimal and efficient designs for practice and covers key topics in current statistical research.
Featuring contributions from leading researchers and academics, the book demonstrates how the presented concepts are used across various fields from genetics and medicinal and pharmaceutical research to manufacturing, engineering, and national security. Each chapter includes an introduction followed by the historical background as well as in-depth procedures that aid in the construction and analysis of the discussed designs. Topical coverage includes:
• Genetic cross experiments, microarray experiments, and variety trials
• Clinical trials, group-sequential designs, and adaptive designs
• Fractional factorial and search, choice, and optimal designs for generalized linear models
• Computer experiments with applications to homeland security
• Robust parameter designs and split-plot type response surface designs
• Analysis of directional data experiments
Throughout the book, illustrative and numerical examples utilize SAS®, JMP®, and R software programs to demonstrate the discussed techniques. Related data sets and software applications are available on the book's related FTP site.
Design and Analysis of Experiments, Volume 3 is an ideal textbook for graduate courses in experimental design and also serves as a practical, hands-on reference for statisticians and researchers across a wide array of subject areas, including biological sciences, engineering, medicine, and business.
Optimal Learning
Part 841 of the Wiley Series in Probability and Statistics
Learn the science of collecting information to make effective decisions.
Everyday decisions are made without the benefit of accurate information. “Optimal Learning” develops the needed principles for gathering information to make decisions, especially when collecting information is time-consuming and expensive. Designed for readers with an elementary background in probability and statistics, the book presents effective and practical policies illustrated in a wide range of applications, from energy, homeland security, and transportation to engineering, health, and business.
This book covers the fundamental dimensions of a learning problem and presents a simple method for testing and comparing policies for learning. Special attention is given to the knowledge gradient policy and its use with a wide range of belief models, including lookup table and parametric belief models, for both online and offline problems. Three sections develop ideas with increasing levels of sophistication:
• Fundamentals explores fundamental topics, including adaptive learning, ranking and selection, the knowledge gradient, and bandit problems
• Extensions and Applications features coverage of linear belief models, subset selection models, scalar function optimization, optimal bidding, and stopping problems
• Advanced Topics explores complex methods including simulation optimization, active learning in mathematical programming, and optimal continuous measurements
Each chapter identifies a specific learning problem, presents the related, practical algorithms for implementation, and concludes with numerous exercises. A related website features additional applications and downloadable software, including MATLAB and the Optimal Learning Calculator, a spreadsheet-based package that provides an introduction to learning and a variety of policies for learning.
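The knowledge gradient policy highlighted above can be sketched for independent normal beliefs, where each alternative's KG score is the expected one-step improvement in the best posterior mean (a simplified sketch following the standard formula; the prior means, standard deviations, and noise level below are illustrative):

```python
import math

def norm_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def knowledge_gradient(mu, sigma, noise_sd):
    """KG score per alternative under independent normal beliefs."""
    kg = []
    for i in range(len(mu)):
        # Std. dev. of the change in alternative i's posterior mean
        # after one noisy measurement of it.
        s_tilde = sigma[i] ** 2 / math.sqrt(sigma[i] ** 2 + noise_sd ** 2)
        best_other = max(mu[j] for j in range(len(mu)) if j != i)
        zeta = -abs(mu[i] - best_other) / s_tilde
        kg.append(s_tilde * (zeta * norm_cdf(zeta) + norm_pdf(zeta)))
    return kg

# Three alternatives: similar means, but the third is far more uncertain.
mu = [1.0, 1.1, 0.9]
sigma = [0.1, 0.1, 1.0]
kg = knowledge_gradient(mu, sigma, noise_sd=0.5)
choice = max(range(len(kg)), key=lambda i: kg[i])
```

With these numbers the policy measures the third, most uncertain alternative despite its lower mean, which is exactly the exploration value that the knowledge gradient quantifies.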
Approximate Dynamic Programming
Solving the Curses of Dimensionality
Part 842 of the Wiley Series in Probability and Statistics
This new edition focuses on modeling and computation for complex classes of approximate dynamic programming problems.
Understanding approximate dynamic programming (ADP) is vital in order to develop practical and high-quality solutions to complex industrial problems, particularly when those problems involve making decisions in the presence of uncertainty. Approximate Dynamic Programming, Second Edition uniquely integrates four distinct disciplines (Markov decision processes, mathematical programming, simulation, and statistics) to demonstrate how to successfully approach, model, and solve a wide range of real-life problems using ADP.
The book continues to bridge the gap between computer science, simulation, and operations research and now adopts the notation and vocabulary of reinforcement learning as well as stochastic search and simulation optimization. The author outlines the essential algorithms that serve as a starting point in the design of practical solutions for real problems. The three curses of dimensionality that impact complex problems are introduced and detailed coverage of implementation challenges is provided. The Second Edition also features:
• A new chapter describing four fundamental classes of policies for working with diverse stochastic optimization problems: myopic policies, look-ahead policies, policy function approximations, and policies based on value function approximations
• A new chapter on policy search that brings together stochastic search and simulation optimization concepts and introduces a new class of optimal learning strategies
• Updated coverage of the exploration-exploitation problem in ADP, now including a recently developed method for doing active learning in the presence of a physical state, using the concept of the knowledge gradient
• A new sequence of chapters describing statistical methods for approximating value functions, estimating the value of a fixed policy, and value function approximation while searching for optimal policies
The presented coverage of ADP emphasizes models and algorithms, focusing on related applications and computation while also discussing the theoretical side of the topic that explores proofs of convergence and rate of convergence. A related website features an ongoing discussion of the evolving fields of approximate dynamic programming and reinforcement learning, along with additional readings, software, and datasets.
Requiring only a basic understanding of statistics and probability, “Approximate Dynamic Programming”, Second Edition is an excellent book for industrial engineering and operations research courses at the upper-undergraduate and graduate levels. It also serves as a valuable reference for researchers and professionals who utilize dynamic programming, stochastic programming, and control theory to solve problems in their everyday work.
An Elementary Introduction to Statistical Learning Theory
Part 853 of the Wiley Series in Probability and Statistics
A thought-provoking look at statistical learning theory and its role in understanding human learning and inductive reasoning
A joint endeavor from leading researchers in the fields of philosophy and electrical engineering, An Elementary Introduction to Statistical Learning Theory is a comprehensive and accessible primer on the rapidly evolving fields of statistical pattern recognition and statistical learning theory. Explaining these areas at a level and in a way that is not often found in other books on the topic, the authors present the basic theory behind contemporary machine learning and uniquely utilize its foundations as a framework for philosophical thinking about inductive inference.
Promoting the fundamental goal of statistical learning, knowing what is achievable and what is not, this book demonstrates the value of a systematic methodology when used along with the needed techniques for evaluating the performance of a learning system. First, an introduction to machine learning is presented that includes brief discussions of applications such as image recognition, speech recognition, medical diagnostics, and statistical arbitrage. To enhance accessibility, two chapters on relevant aspects of probability theory are provided. Subsequent chapters feature coverage of topics such as the pattern recognition problem, optimal Bayes decision rule, the nearest neighbor rule, kernel rules, neural networks, support vector machines, and boosting.
Appendices throughout the book explore the relationship between the discussed material and related topics from mathematics, philosophy, psychology, and statistics, drawing insightful connections between problems in these areas and statistical learning theory. All chapters conclude with a summary section, a set of practice questions, and a reference section that supplies historical notes and additional resources for further study.
An Elementary Introduction to Statistical Learning Theory is an excellent book for courses on statistical learning theory, pattern recognition, and machine learning at the upper-undergraduate and graduate levels. It also serves as an introductory reference for researchers and practitioners in the fields of engineering, computer science, philosophy, and cognitive science who would like to further their knowledge of the topic.
Statistical Analysis of Profile Monitoring
Part 865 of the Wiley Series in Probability and Statistics
A one-of-a-kind presentation of the major achievements in statistical profile monitoring methods.
Statistical profile monitoring is an area of statistical quality control that is growing in significance for researchers and practitioners, specifically because of its range of applicability across various service and manufacturing settings. Comprised of contributions from renowned academicians and practitioners in the field, “Statistical Analysis of Profile Monitoring” presents the latest state-of-the-art research on the use of control charts to monitor process and product quality profiles. The book presents comprehensive coverage of profile monitoring definitions, techniques, models, and application examples, particularly in various areas of engineering and statistics.
The book begins with an introduction to the concept of profile monitoring and its applications in practice. Subsequent chapters explore the fundamental concepts, methods, and issues related to statistical profile monitoring, with topics of coverage including:
• Simple and multiple linear profiles
• Binary response profiles
• Parametric and nonparametric nonlinear profiles
• Multivariate linear profile monitoring
• Statistical process control for geometric specifications
• Correlation and autocorrelation in profiles
• Nonparametric profile monitoring
Throughout the book, more than two dozen real-world case studies highlight the discussed topics along with innovative examples and applications of profile monitoring. “Statistical Analysis of Profile Monitoring” is an excellent book for courses on statistical quality control at the graduate level. It also serves as a valuable reference for quality engineers, researchers and anyone who works in monitoring and improving statistical processes.
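For a flavor of the simple linear profile methods listed above, slope estimates from successive profiles can be monitored on a Shewhart-type chart (a minimal sketch; the simulated profiles and 3-sigma limits are illustrative, not a specific method from the book):

```python
import random

rng = random.Random(7)
xs = [1.0, 2.0, 3.0, 4.0, 5.0]          # fixed design points per profile

def fit_slope(ys):
    """Least-squares slope of ys regressed on the fixed xs."""
    xbar = sum(xs) / len(xs)
    ybar = sum(ys) / len(ys)
    num = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    den = sum((x - xbar) ** 2 for x in xs)
    return num / den

# Phase I: 20 in-control profiles with true model y = 1 + 2x + noise.
slopes = [fit_slope([1 + 2 * x + rng.gauss(0, 0.2) for x in xs])
          for _ in range(20)]

mean_b = sum(slopes) / len(slopes)
sd_b = (sum((b - mean_b) ** 2 for b in slopes) / (len(slopes) - 1)) ** 0.5
lcl, ucl = mean_b - 3 * sd_b, mean_b + 3 * sd_b   # 3-sigma control limits

# A new profile with a shifted slope (2.8 instead of 2) should signal.
out_b = fit_slope([1 + 2.8 * x + rng.gauss(0, 0.2) for x in xs])
signal = out_b > ucl or out_b < lcl
```

A full profile-monitoring scheme would chart the intercept and residual variance alongside the slope; this sketch shows only the core idea of reducing each profile to monitored regression parameters.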
Data Analysis
What Can Be Learned From the Past 50 Years
Part 874 of the Wiley Series in Probability and Statistics
This book explores many provocative questions concerning the fundamentals of data analysis, drawing on the time-tested experience of one of the gurus of the subject. Why should one study data analysis? How should it be taught? What techniques work best, and for whom? How valid are the results? How much data should be tested? Which machine languages should be used, if any? Peter J. Huber's tools in this volume are an emphasis on apprenticeship, through hands-on case studies, and on anecdotes, through real-life applications. The concern is not with specific statistical techniques but with questions of strategy: when to use which technique. Central to the discussion is an understanding of the significance of massive (or robust) data sets, the implementation of languages, and the use of models, each sprinkled with an ample number of examples and case studies. Personal practices, various pitfalls, and existing controversies are presented where applicable. The book serves as an excellent philosophical and historical companion to any present-day text in data analysis, robust statistics, data mining, statistical learning, or computational statistics.
Bias and Causation
Models and Judgment for Valid Comparisons
Part 885 of the Wiley Series in Probability and Statistics
A one-of-a-kind resource on identifying and dealing with bias in statistical research on causal effects
Do cell phones cause cancer? Can a new curriculum increase student achievement? Determining the real causes of such problems, and how powerful their effects may be, is a central issue in research across various fields of study. Some researchers are highly skeptical of drawing causal conclusions except in tightly controlled randomized experiments, while others discount the threats posed by different sources of bias, even in less rigorous observational studies. Bias and Causation presents a complete treatment of the subject, organizing and clarifying the diverse types of biases into a conceptual framework. The book treats various sources of bias in comparative studies, both randomized and observational, and offers guidance on how they should be addressed by researchers.
Utilizing a relatively simple mathematical approach, the author develops a theory of bias that outlines the essential nature of the problem and identifies the various sources of bias that are encountered in modern research. The book begins with an introduction to the study of causal inference and the related concepts and terminology. Next, an overview is provided of the methodological issues at the core of the difficulties posed by bias. Subsequent chapters explain the concepts of selection bias, confounding, intermediate causal factors, and information bias along with the distortion of a causal effect that can result when the exposure and/or the outcome is measured with error. The book concludes with a new classification of twenty general sources of bias and practical advice on how mathematical modeling and expert judgment can be combined to achieve the most credible causal conclusions.
Throughout the book, examples from the fields of medicine, public policy, and education are incorporated into the presentation of various topics. In addition, six detailed case studies illustrate concrete examples of the significance of biases in everyday research.
Requiring only a basic understanding of statistics and probability theory, Bias and Causation is an excellent supplement for courses on research methods and applied statistics at the upper-undergraduate and graduate level. It is also a valuable reference for practicing researchers and methodologists in various fields of study who work with statistical data.
Dirichlet and Related Distributions
Theory, Methods and Applications
Part 888 of the Wiley Series in Probability and Statistics
The Dirichlet distribution appears in many areas of application, which include modelling of compositional data, Bayesian analysis, statistical genetics, and nonparametric inference. This book provides a comprehensive review of the Dirichlet distribution and two extended versions, the Grouped Dirichlet Distribution (GDD) and the Nested Dirichlet Distribution (NDD), arising from likelihood and Bayesian analysis of incomplete categorical data and survey data with non-response.
The theoretical properties and applications are also reviewed in detail for other related distributions, such as the inverted Dirichlet, Dirichlet-multinomial, truncated Dirichlet, generalized Dirichlet, hyper-Dirichlet, scaled Dirichlet, and mixed Dirichlet distributions, as well as the Liouville and generalized Liouville distributions.
Key Features:
• Presents many of the results and applications that are scattered throughout the literature in one single volume.
• Looks at the most recent results, such as the survival and characteristic functions of uniform distributions over the hyperplane and simplex, the distribution of linear functions of Dirichlet components, and estimation via the expectation-maximization gradient algorithm and its applications.
• Illustrates in detail the likelihood and Bayesian analyses of incomplete categorical data using the GDD, the NDD, and the generalized Dirichlet distribution, through the EM algorithm and the data augmentation structure.
• Presents a systematic exposition of the Dirichlet-multinomial distribution for multinomial data with extra variation that cannot be handled by the multinomial distribution.
• Features S-Plus/R code along with practical examples illustrating the methods.
Practitioners and researchers working in areas such as medical science, biological science and social science will benefit from this book.
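One standard construction of the Dirichlet distribution, normalizing independent Gamma draws, can be sketched as follows (the concentration parameters are illustrative):

```python
import random

rng = random.Random(42)

def sample_dirichlet(alphas):
    """One draw from Dirichlet(alphas): normalize independent Gamma variates."""
    gammas = [rng.gammavariate(a, 1.0) for a in alphas]
    total = sum(gammas)
    return [g / total for g in gammas]

x = sample_dirichlet([2.0, 3.0, 5.0])   # a point on the probability simplex
```

Each draw is a vector of positive components summing to one, which is why the Dirichlet is the natural model for compositional data and the conjugate prior for multinomial probabilities.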
Mixtures
Estimation and Applications
Part 896 of the Wiley Series in Probability and Statistics
This book uses the EM (expectation-maximization) algorithm to simultaneously estimate the missing data and unknown parameters associated with a data set. The parameters describe the component distributions of the mixture; the distributions may be continuous or discrete.
The editors provide a complete account of the applications, mathematical structure, and statistical analysis of finite mixture distributions, along with MCMC computational methods and a range of detailed discussions covering applications of the methods, in chapters written by leading experts on the subject. The applications are drawn from scientific disciplines including biostatistics, computer science, ecology, and finance. This area of statistics is important to a range of disciplines, and its methodology attracts interest from researchers in the fields in which it can be applied.
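The EM recipe described above can be sketched for the simplest case, a two-component univariate Gaussian mixture with known unit variances: the E-step computes each point's responsibility (the "missing" component label), and the M-step re-estimates the weight and means (the simulated data and initialization are illustrative):

```python
import math
import random

rng = random.Random(1)

# Simulate data from a mixture: 40% N(-2, 1) and 60% N(3, 1).
data = [rng.gauss(-2, 1) if rng.random() < 0.4 else rng.gauss(3, 1)
        for _ in range(500)]

def em_two_gaussians(xs, iters=50):
    """EM for a 2-component Gaussian mixture with known unit variances."""
    w, mu1, mu2 = 0.5, min(xs), max(xs)   # crude initialization
    for _ in range(iters):
        # E-step: responsibility of component 1 for each point.
        resp = []
        for x in xs:
            p1 = w * math.exp(-0.5 * (x - mu1) ** 2)
            p2 = (1 - w) * math.exp(-0.5 * (x - mu2) ** 2)
            resp.append(p1 / (p1 + p2))
        # M-step: update weight and means from the responsibilities.
        n1 = sum(resp)
        w = n1 / len(xs)
        mu1 = sum(r * x for r, x in zip(resp, xs)) / n1
        mu2 = sum((1 - r) * x for r, x in zip(resp, xs)) / (len(xs) - n1)
    return w, mu1, mu2

w, mu1, mu2 = em_two_gaussians(data)
```

The full machinery in the book generalizes this in every direction: unknown variances, more components, multivariate and discrete components, and MCMC alternatives to EM.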
Statistical Inference for Fractional Diffusion Processes
Part 901 of the Wiley Series in Probability and Statistics
Stochastic processes are widely used for model building in the social, physical, engineering and life sciences as well as in financial economics. In model building, statistical inference for stochastic processes is of great importance from both a theoretical and an applications point of view.
This book deals with Fractional Diffusion Processes and statistical inference for such stochastic processes. The main focus of the book is to consider parametric and nonparametric inference problems for fractional diffusion processes when a complete path of the process over a finite interval is observable.
Key features:
• Introduces self-similar processes, fractional Brownian motion and stochastic integration with respect to fractional Brownian motion.
• Provides a comprehensive review of statistical inference for processes driven by fractional Brownian motion for modelling long range dependence.
• Presents a study of parametric and nonparametric inference problems for the fractional diffusion process.
• Discusses the fractional Brownian sheet and infinite dimensional fractional Brownian motion.
• Includes recent results and developments in the area of statistical inference of fractional diffusion processes.
Researchers and students working on the statistics of fractional diffusion processes and applied mathematicians and statisticians involved in stochastic process modelling will benefit from this book.
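Fractional Brownian motion, the driving process throughout the book, can be simulated on a time grid directly from its covariance function via a Cholesky factorization (a standard sketch using NumPy; the grid size and Hurst parameter below are illustrative):

```python
import numpy as np

def fbm_sample(n, H, T=1.0, seed=0):
    """Sample fractional Brownian motion with Hurst index H at n grid points,
    via a Cholesky factorization of its covariance matrix
    cov(s, t) = (s^{2H} + t^{2H} - |t - s|^{2H}) / 2."""
    t = np.linspace(T / n, T, n)                 # strictly positive times
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s ** (2 * H) + u ** (2 * H) - np.abs(s - u) ** (2 * H))
    L = np.linalg.cholesky(cov)
    z = np.random.default_rng(seed).standard_normal(n)
    return t, L @ z

t, path = fbm_sample(64, H=0.7)   # H > 0.5 gives positively correlated increments
```

H = 1/2 recovers standard Brownian motion; H > 1/2 produces the long-range dependence that motivates the inference problems the book studies.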
Latent Variable Models and Factor Analysis
A Unified Approach
Part 904 of the Wiley Series in Probability and Statistics
“Latent Variable Models and Factor Analysis” provides a comprehensive and unified approach to factor analysis and latent variable modeling from a statistical perspective. This book presents a general framework to enable the derivation of the commonly used models, along with updated numerical examples. The nature and interpretation of latent variables are also introduced, along with related techniques for investigating dependency.
This book:
• Provides a unified approach showing how such apparently diverse methods as Latent Class Analysis and Factor Analysis are actually members of the same family.
• Presents new material on ordered manifest variables, MCMC methods, non-linear models as well as a new chapter on related techniques for investigating dependency.
• Includes new sections on structural equation models (SEM) and Markov Chain Monte Carlo methods for parameter estimation, along with new illustrative examples.
• Looks at recent developments on goodness-of-fit test statistics and on non-linear models and models with mixed latent variables, both categorical and continuous.
No prior acquaintance with latent variable modelling is presupposed, but a broad understanding of statistical theory will make it easier to see the approach in its proper perspective. Applied statisticians, psychometricians, medical statisticians, biostatisticians, economists and social science researchers will benefit from this book.
Multilevel Statistical Models
Part 922 of the Wiley Series in Probability and Statistics
Throughout the social, medical and other sciences the importance of understanding complex hierarchical data structures is well understood. Multilevel modelling is now the accepted statistical technique for handling such data and is widely available in computer software packages. A thorough understanding of these techniques is therefore important for all those working in these areas. This new edition of “Multilevel Statistical Models” brings these techniques together, starting from basic ideas and illustrating how more complex models are derived. Bayesian methodology using MCMC has been extended along with new material on smoothing models, multivariate responses, missing data, latent normal transformations for discrete responses, structural equation modeling and survival models.
Key Features:
• Provides a clear introduction and a comprehensive account of multilevel models.
• New methodological developments and applications are explored.
• Written by a leading expert in the field of multilevel methodology.
• Illustrated throughout with real-life examples, explaining theoretical concepts.
This book is suitable as a comprehensive text for postgraduate courses, as well as a general reference guide. Applied statisticians in the social sciences, economics, biological and medical disciplines will find this book beneficial.
Bayesian Networks
An Introduction
Part 924 of the Wiley Series in Probability and Statistics
“Bayesian Networks: An Introduction” provides a self-contained introduction to the theory and applications of Bayesian networks, a topic of interest and importance for statisticians, computer scientists and those involved in modelling complex data sets. The material has been extensively tested in classroom teaching and assumes a basic knowledge of probability, statistics and mathematics. All notions are carefully explained and feature exercises throughout.
Features include:
• An introduction to the Dirichlet distribution, exponential families, and their applications.
• A detailed description of learning algorithms and conditional Gaussian distributions using junction tree methods.
• A discussion of Pearl's intervention calculus, with an introduction to the notions of "see" and "do" conditioning.
• All concepts are clearly defined and illustrated with examples and exercises. Solutions are provided online.
This book will prove a valuable resource for postgraduate students of statistics, computer engineering, mathematics, data mining, artificial intelligence, and biology.
Researchers and users of comparable modelling or statistical techniques such as neural networks will also find this book of interest.