During the past half-century, exponential families have attained a position at the center of parametric statistical inference. Theoretical advances have been matched, and more than matched, in the world of applications, where logistic regression by itself has become the go-to methodology in medical statistics, computer-based prediction algorithms, and the social sciences. This book is based on a one-semester graduate course for first-year Ph.D. and advanced master's students. After presenting the basic structure of univariate and multivariate exponential families, their application to generalized linear models, including logistic and Poisson regression, is described in detail, emphasizing geometrical ideas, computational practice, and the analogy with ordinary linear regression. Connections are made with a variety of current statistical methodologies: missing data, survival analysis and proportional hazards, false discovery rates, bootstrapping, and empirical Bayes analysis.
This textbook offers a compact introductory course on Malliavin calculus, an active and powerful area of research. It covers recent applications, including density formulas, regularity of probability laws, central and non-central limit theorems for Gaussian functionals, convergence of densities and non-central limit theorems for the local time of Brownian motion. The book also includes a self-contained presentation of Brownian motion and stochastic calculus, as well as Lévy processes and stochastic calculus for jump processes. Accessible to non-experts, the book can be used by graduate students and researchers to develop their mastery of the core techniques necessary for further study.
Based on a starter course for beginning graduate students, Core Statistics provides concise coverage of the fundamentals of inference for parametric statistical models, including both theory and practical numerical computation. The book considers both frequentist maximum likelihood and Bayesian stochastic simulation while focusing on general methods applicable to a wide range of models and emphasizing the common questions addressed by the two approaches. This compact package serves as a lively introduction to the theory and tools that a beginning graduate student needs in order to make the transition to serious statistical analysis: inference; modeling; computation, including some numerics; and the R language. Aimed also at any quantitative scientist who uses statistical methods, this book will deepen readers' understanding of why and when methods work and explain how to develop suitable methods for non-standard situations, such as in ecology, big data and genomics.
In a surprising sequence of developments, the longest increasing subsequence problem, originally mentioned as merely a curious example in a 1961 paper, has proven to have deep connections to many seemingly unrelated branches of mathematics, such as random permutations, random matrices, Young tableaux, and the corner growth model. The detailed and playful study of these connections makes this book suitable as a starting point for a wider exploration of elegant mathematical ideas that are of interest to every mathematician and to many computer scientists, physicists and statisticians. The specific topics covered are the Vershik–Kerov–Logan–Shepp limit shape theorem, the Baik–Deift–Johansson theorem, the Tracy–Widom distribution, and the corner growth process. This exciting body of work, encompassing important advances in probability and combinatorics over the last forty years, is made accessible to a general graduate-level audience for the first time in a highly polished presentation.
Applications of queueing network models have multiplied in the last generation, including scheduling of large manufacturing systems, control of patient flow in health systems, load balancing in cloud computing, and matching in ride sharing. These problems are too large and complex for exact solution, but their scale allows approximation. This book is the first comprehensive treatment of fluid scaling, diffusion scaling, and many-server scaling in a single text presented at a level suitable for graduate students. Fluid scaling is used to verify stability, in particular treating max weight policies, and to study optimal control of transient queueing networks. Diffusion scaling is used to control systems in balanced heavy traffic, by solving for optimal scheduling, admission control, and routing in Brownian networks. Many-server scaling is studied in the quality- and efficiency-driven Halfin–Whitt regime and applied to load balancing in the supermarket model and to bipartite matching.
This introduction to some of the principal models in the theory of disordered systems leads the reader through the basics, to the very edge of contemporary research, with the minimum of technical fuss. Topics covered include random walk, percolation, self-avoiding walk, interacting particle systems, uniform spanning tree, random graphs, as well as the Ising, Potts, and random-cluster models for ferromagnetism, and the Lorentz model for motion in a random medium. Schramm–Löwner evolutions (SLE) arise in various contexts. The choice of topics is strongly motivated by modern applications and focuses on areas that merit further research. Special features include a simple account of Smirnov's proof of Cardy's formula for critical percolation, and a fairly full account of the theory of influence and sharp-thresholds. Accessible to a wide audience of mathematicians and physicists, this book can be used as a graduate course text. Each chapter ends with a range of exercises.
This is a graduate-level introduction to the theory of Boolean functions, an exciting area lying on the border of probability theory, discrete mathematics, analysis, and theoretical computer science. Certain functions are highly sensitive to noise; this can be seen via Fourier analysis on the hypercube. The key model analyzed in depth is critical percolation on the hexagonal lattice. For this model, the critical exponents, previously determined using the now-famous Schramm–Loewner evolution, appear here in the study of sensitivity behavior. Even for this relatively simple model, beyond the Fourier-analytic set-up, there are three crucially important but distinct approaches: hypercontractivity of operators, connections to randomized algorithms, and viewing the spectrum as a random Cantor set. This book assumes a basic background in probability theory and integration theory. Each chapter ends with exercises, some straightforward, some challenging.
Communication networks underpin our modern world, and provide fascinating and challenging examples of large-scale stochastic systems. Randomness arises in communication systems at many levels: for example, the initiation and termination times of calls in a telephone network, or the statistical structure of the arrival streams of packets at routers in the Internet. How can routing, flow control and connection acceptance algorithms be designed to work well in uncertain and random environments? This compact introduction illustrates how stochastic models can be used to shed light on important issues in the design and control of communication networks. It will appeal to readers with a mathematical background wishing to understand this important area of application, and to those with an engineering background who want to grasp the underlying mathematical theory. Each chapter ends with exercises and suggestions for further reading.
The Poisson process, a core object in modern probability, enjoys a richer theory than is sometimes appreciated. This volume develops the theory in the setting of a general abstract measure space, establishing basic results and properties as well as certain advanced topics in the stochastic analysis of the Poisson process. Also discussed are applications and related topics in stochastic geometry, including stationary point processes, the Boolean model, the Gilbert graph, stable allocations, and hyperplane processes. Comprehensive, rigorous, and self-contained, this text is ideal for graduate courses or for self-study, with a substantial number of exercises for each chapter. Mathematical prerequisites, mainly a sound knowledge of measure-theoretic probability, are kept in the background, but are reviewed comprehensively in the appendix. The authors are well-known researchers in probability theory, especially stochastic geometry.
Filtering and smoothing methods are used to produce an accurate estimate of the state of a time-varying system based on multiple observational inputs (data). Interest in these methods has exploded in recent years, with numerous applications emerging in fields such as navigation, aerospace engineering, telecommunications and medicine. This compact, informal introduction for graduate students and advanced undergraduates presents the current state-of-the-art filtering and smoothing methods in a unified Bayesian framework. Readers learn what non-linear Kalman filters and particle filters are, how they are related, and their relative advantages and disadvantages. They also discover how state-of-the-art Bayesian parameter estimation methods can be combined with state-of-the-art filtering and smoothing algorithms. The book's practical and algorithmic approach assumes only modest mathematical prerequisites, and it includes Matlab examples and numerous end-of-chapter exercises.
This compact course is written for the mathematically literate reader who wants to learn to analyze data in a principled fashion. The language of mathematics enables clear exposition that can go quite deep, quite quickly, and naturally supports an axiomatic and inductive approach to data analysis. Starting with a good grounding in probability, the reader moves to statistical inference via topics of great practical importance – simulation and sampling, as well as experimental design and data collection – that are typically displaced from introductory accounts. The core of the book then covers both standard methods and such advanced topics as multiple testing, meta-analysis, and causal inference.